populate branch
git-svn-id: https://svn.apache.org/repos/asf/hadoop/hbase/trunk_on_hadoop-0.18.3@774704 13f79535-47bb-0310-9956-ffa450edef68
diff --git a/CHANGES.txt b/CHANGES.txt
new file mode 100644
index 0000000..c57edd9
--- /dev/null
+++ b/CHANGES.txt
@@ -0,0 +1,1561 @@
+HBase Change Log
+Release 0.20.0 - Unreleased
+ INCOMPATIBLE CHANGES
+ HBASE-1147 Modify the scripts to use Zookeeper
+ HBASE-1144 Store the ROOT region location in Zookeeper
+ (Nitay Joffe via Stack)
+ HBASE-1146 Replace the HRS leases with Zookeeper
+ HBASE-61 Create an HBase-specific MapFile implementation
+ (Ryan Rawson via Stack)
+ HBASE-1145 Ensure that there is only 1 Master with Zookeeper (Removes
+ hbase.master) (Nitay Joffe via Stack)
+ HBASE-1289 Remove "hbase.fully.distributed" option and update docs
+ (Nitay Joffe via Stack)
+ HBASE-1234 Change HBase StoreKey format
+ HBASE-1348 Move 0.20.0 targeted TRUNK to 0.20.0 hadoop
+ (Ryan Rawson and Stack)
+ HBASE-1342 Add to filesystem info needed to rebuild .META.
+ HBASE-1361 Disable bloom filters
+ HBASE-1367 Get rid of Thrift exception 'NotFound'
+ HBASE-1381 Remove onelab and bloom filters files from hbase
+ HBASE-1411 Remove HLogEdit.
+
+ BUG FIXES
+ HBASE-1140 "ant clean test" fails (Nitay Joffe via Stack)
+ HBASE-1129 Master won't go down; stuck joined on rootScanner
+ HBASE-1136 HashFunction inadvertently destroys some randomness
+ (Jonathan Ellis via Stack)
+ HBASE-1138 Test that readers opened after a sync can see all data up to the
+ sync (temporary until HADOOP-4379 is resolved)
+ HBASE-1121 Cluster confused about where -ROOT- is
+ HBASE-1148 Always flush HLog on root or meta region updates
+ HBASE-1181 src/saveVersion.sh bails on non-standard Bourne shells
+ (e.g. dash) (K M via Jean-Daniel Cryans)
+ HBASE-1175 HBA administrative tools do not work when specifying region
+ name (Jonathan Gray via Andrew Purtell)
+ HBASE-1190 TableInputFormatBase with row filters scan too far (Dave
+ Latham via Andrew Purtell)
+ HBASE-1198 OOME in IPC server does not trigger abort behavior
+ HBASE-1209 Make port displayed the same as is used in URL for RegionServer
+ table in UI (Lars George via Stack)
+ HBASE-1217 add new compression and hfile blocksize to HColumnDescriptor
+ HBASE-859 HStoreKey needs a reworking
+ HBASE-1211 NPE in retries exhausted exception
+ HBASE-1233 Transactional fixes: Overly conservative scan read-set,
+ potential CME (Clint Morgan via Stack)
+ HBASE-1239 The REST interface does not correctly clear the character
+ buffer each iteration (Brian Beggs via Stack)
+ HBASE-1185 Wrong request/sec reported in the gui
+ HBASE-1245 hfile meta block handling bugs (Ryan Rawson via Stack)
+ HBASE-1238 Under upload, region servers are unable
+ to compact when loaded with hundreds of regions
+ HBASE-1247 checkAndSave doesn't Write Ahead Log
+ HBASE-1243 oldlogfile.dat is screwed, so is its region
+ HBASE-1169 When a shutdown is requested, stop scanning META regions
+ immediately
+ HBASE-1251 HConnectionManager.getConnection(HBaseConfiguration) returns
+ same HConnection for different HBaseConfigurations
+ HBASE-1157, HBASE-1156 If we do not take start code as a part of region
+ server recovery, we could inadvertently try to reassign regions
+ assigned to a restarted server with a different start code;
+ Improve lease handling
+ HBASE-1267 binary keys broken in trunk (again) -- part 2 and 3
+ (Ryan Rawson via Stack)
+ HBASE-1268 ZooKeeper config parsing can break HBase startup
+ (Nitay Joffe via Stack)
+ HBASE-1270 Fix TestInfoServers (Nitay Joffe via Stack)
+ HBASE-1277 HStoreKey: Wrong comparator logic (Evgeny Ryabitskiy)
+ HBASE-1275 TestTable.testCreateTable broken (Ryan Rawson via Stack)
+ HBASE-1274 TestMergeTable is broken in Hudson (Nitay Joffe via Stack)
+ HBASE-1283 thrift's package description needs updating for start/stop
+ procedure (Rong-en Fan via Stack)
+ HBASE-1284 drop table drops all disabled tables
+ HBASE-1290 table.jsp either 500s out or doesn't list the regions (Ryan
+ Rawson via Andrew Purtell)
+ HBASE-1293 hfile doesn't recycle decompressors (Ryan Rawson via Andrew
+ Purtell)
+ HBASE-1150 HMsg carries safemode flag; remove (Nitay Joffe via Stack)
+ HBASE-1232 zookeeper client wont reconnect if there is a problem (Nitay
+ Joffe via Andrew Purtell)
+ HBASE-1303 Secondary index configuration prevents HBase from starting
+ (Ken Weiner via Stack)
+ HBASE-1298 master.jsp & table.jsp do not URI Encode table or region
+ names in links (Lars George via Stack)
+ HBASE-1310 Off by one error in Bytes.vintToBytes
+ HBASE-1202 getRow does not always work when specifying number of versions
+ HBASE-1324 hbase-1234 broke testget2 unit test (and broke the build)
+ HBASE-1321 hbase-1234 broke TestCompaction; fix and reenable
+ HBASE-1330 binary keys broken on trunk (Ryan Rawson via Stack)
+ HBASE-1332 regionserver carrying .META. starts sucking all cpu, drives load
+ up - infinite loop? (Ryan Rawson via Stack)
+ HBASE-1334 .META. region running into hfile errors (Ryan Rawson via Stack)
+ HBASE-1338 lost use of compaction.dir; we were compacting into live store
+ subdirectory
+ HBASE-1058 Prevent runaway compactions
+ HBASE-1292 php thrift's getRow() would throw an exception if the row does
+ not exist (Rong-en Fan via Stack)
+ HBASE-1340 Fix new javadoc warnings (Evgeny Ryabitskiy via Stack)
+ HBASE-1287 Partitioner class not used in TableMapReduceUtil.initTableReduceJob()
+ (Lars George and Billy Pearson via Stack)
+ HBASE-1320 hbase-1234 broke filter tests
+ HBASE-1355 [performance] Cache family maxversions; we were calculating on
+ each access
+ HBASE-1358 Bug in reading from Memcache method (read only from snapshot)
+ (Evgeny Ryabitskiy via Stack)
+ HBASE-1322 hbase-1234 broke TestAtomicIncrement; fix and reenable
+ (Evgeny Ryabitskiy and Ryan Rawson via Stack)
+ HBASE-1347 HTable.incrementColumnValue does not take negative 'amount'
+ (Evgeny Ryabitskiy via Stack)
+ HBASE-1365 Typo in TableInputFormatBase.setInputColums (Jon Gray via Stack)
+ HBASE-1279 Fix the way hostnames and IPs are handled
+ HBASE-1368 HBASE-1279 broke the build
+ HBASE-1264 Wrong return values of comparators for ColumnValueFilter
+ (Thomas Schneider via Andrew Purtell)
+ HBASE-1374 NPE out of ZooKeeperWrapper.loadZooKeeperConfig
+ HBASE-1336 Splitting up the compare of family+column into 2 different
+ compare
+ HBASE-1377 RS address is null in master web UI
+ HBASE-1344 WARN IllegalStateException: Cannot set a region as open if it
+ has not been pending
+ HBASE-1386 NPE in housekeeping
+ HBASE-1396 Remove unused sequencefile and mapfile config. from
+ hbase-default.xml
+ HBASE-1398 TableOperation doesn't format keys for meta scan properly
+ (Ryan Rawson via Stack)
+ HBASE-1399 Can't drop tables since HBASE-1398 (Ryan Rawson via Andrew
+ Purtell)
+ HBASE-1311 ZooKeeperWrapper: Failed to set watcher on ZNode /hbase/master
+ (Nitay Joffe via Stack)
+ HBASE-1391 NPE in TableInputFormatBase$TableRecordReader.restart if zoo.cfg
+ is wrong or missing on task trackers
+ HBASE-1323 hbase-1234 broke TestThriftServer; fix and reenable
+
+ IMPROVEMENTS
+ HBASE-1089 Add count of regions on filesystem to master UI; add percentage
+ online as difference between what's open and what's on filesystem
+ (Samuel Guo via Stack)
+ HBASE-1130 PrefixRowFilter (Michael Gottesman via Stack)
+ HBASE-1139 Update Clover in build.xml
+ HBASE-876 There are a large number of Java warnings in HBase; part 1,
+ part 2, part 3, part 4, part 5, part 6, part 7 and part 8
+ (Evgeny Ryabitskiy via Stack)
+ HBASE-896 Update jruby from 1.1.2 to 1.1.6
+ HBASE-1031 Add the Zookeeper jar
+ HBASE-1142 Cleanup thrift server; remove Text and profuse DEBUG messaging
+ (Tim Sell via Stack)
+ HBASE-1064 HBase REST xml/json improvements (Brian Beggs working off
+ initial Michael Gottesman work via Stack)
+ HBASE-5121 Fix shell usage for format.width
+ HBASE-845 HCM.isTableEnabled doesn't really tell if it is, or not
+ HBASE-903 [shell] Can't set table descriptor attributes when I alter a
+ table
+ HBASE-1166 saveVersion.sh doesn't work with git (Nitay Joffe via Stack)
+ HBASE-1167 JSP doesn't work in a git checkout (Nitay Joffe via Andrew
+ Purtell)
+ HBASE-1178 Add shutdown command to shell
+ HBASE-1184 HColumnDescriptor is too restrictive with family names
+ (Toby White via Andrew Purtell)
+ HBASE-1180 Add missing import statements to SampleUploader and remove
+ unnecessary @Overrides (Ryan Smith via Andrew Purtell)
+ HBASE-1191 ZooKeeper ensureParentExists calls fail
+ on absolute path (Nitay Joffe via Jean-Daniel Cryans)
+ HBASE-1187 After disabling/enabling a table, the regions seem to
+ be assigned to only 1-2 region servers
+ HBASE-1210 Allow truncation of output for scan and get commands in shell
+ (Lars George via Stack)
+ HBASE-1221 When using ant -projecthelp to build HBase not all the important
+ options show up (Erik Holstad via Stack)
+ HBASE-1189 Changing the map type used internally for HbaseMapWritable
+ (Erik Holstad via Stack)
+ HBASE-1188 Memory size of Java Objects - Make cacheable objects implement
+ HeapSize (Erik Holstad via Stack)
+ HBASE-1230 Document installation of HBase on Windows
+ HBASE-1241 HBase additions to ZooKeeper part 1 (Nitay Joffe via JD)
+ HBASE-1231 Today, going from a RowResult to a BatchUpdate requires some
+ data processing even though they are pretty much the same thing
+ (Erik Holstad via Stack)
+ HBASE-1240 Would be nice if RowResult could be comparable
+ (Erik Holstad via Stack)
+ HBASE-803 Atomic increment operations (Ryan Rawson and Jon Gray via Stack)
+ Part 1 and part 2 -- fix for a crash.
+ HBASE-1252 Make atomic increment perform a binary increment
+ (Jonathan Gray via Stack)
+ HBASE-1258,1259 ganglia metrics for 'requests' is confusing
+ (Ryan Rawson via Stack)
+ HBASE-1265 HLogEdit static constants should be final (Nitay Joffe via
+ Stack)
+ HBASE-1244 ZooKeeperWrapper constants cleanup (Nitay Joffe via Stack)
+ HBASE-1262 Eclipse warnings, including performance related things like
+ synthetic accessors (Nitay Joffe via Stack)
+ HBASE-1273 ZooKeeper WARN spits out lots of useless messages
+ (Nitay Joffe via Stack)
+ HBASE-1285 Forcing compactions should be available via thrift
+ (Tim Sell via Stack)
+ HBASE-1186 Memory-aware Maps with LRU eviction for cell cache
+ (Jonathan Gray via Andrew Purtell)
+ HBASE-1205 RegionServers should find new master when a new master comes up
+ (Nitay Joffe via Andrew Purtell)
+ HBASE-1309 HFile rejects key in Memcache with empty value
+ HBASE-1331 Lower the default scanner caching value
+ HBASE-1235 Add table enabled status to shell and UI
+ (Lars George via Stack)
+ HBASE-1333 RowCounter updates
+ HBASE-1195 If HBase directory exists but version file is inexistent, still
+ proceed with bootstrapping (Evgeny Ryabitskiy via Stack)
+ HBASE-1301 HTable.getRow() returns null if the row does not exist
+ (Rong-en Fan via Stack)
+ HBASE-1176 Javadocs in HBA should be clear about which functions are
+ asynchronous and which are synchronous
+ (Evgeny Ryabitskiy via Stack)
+ HBASE-1260 Bytes utility class changes: remove usage of ByteBuffer and
+ provide additional ByteBuffer primitives (Jon Gray via Stack)
+ HBASE-1183 New MR splitting algorithm and other new features need a way to
+ split a key range in N chunks (Jon Gray via Stack)
+ HBASE-1350 New method in HTable.java to return start and end keys for
+ regions in a table (Vimal Mathew via Stack)
+ HBASE-1271 Allow multiple tests to run on one machine
+ (Evgeny Ryabitskiy via Stack)
+ HBASE-1112 we will lose data if the table name happens to be the logs' dir
+ name (Samuel Guo via Stack)
+ HBASE-889 The current Thrift API does not allow a new scanner to be
+ created without supplying a column list unlike the other APIs.
+ (Tim Sell via Stack)
+ HBASE-1341 HTable pooler
+ HBASE-1379 re-enable LZO using hadoop-gpl-compression library
+ (Ryan Rawson via Stack)
+ HBASE-1383 hbase shell needs to warn on deleting multi-region table
+ HBASE-1286 Thrift should support next(nbRow) like functionality
+ (Alex Newman via Stack)
+ HBASE-1392 change how we build/configure lzocodec (Ryan Rawson via Stack)
+ HBASE-1397 Better distribution in the PerformanceEvaluation MapReduce
+ when rows run to the Billions
+ HBASE-1393 Narrow synchronization in HLog
+ HBASE-1404 minor edit of regionserver logging messages
+ HBASE-1405 Threads.shutdown has unnecessary branch
+ HBASE-1407 Changing internal structure of ImmutableBytesWritable
+ constructor (Erik Holstad via Stack)
+ HBASE-1345 Remove distributed mode from MiniZooKeeper (Nitay Joffe via
+ Stack)
+ HBASE-1414 Add server status logging chore to ServerManager
+ HBASE-1379 Make KeyValue implement Writable
+ (Erik Holstad and Jon Gray via Stack)
+ HBASE-1380 Make KeyValue implement HeapSize
+ (Erik Holstad and Jon Gray via Stack)
+ HBASE-1413 Fall back to filesystem block size default if HLog blocksize is
+ not specified
+ HBASE-1417 Cleanup disorientating RPC message
+
+ OPTIMIZATIONS
+ HBASE-1412 Change values for delete column and column family in KeyValue
+
+Release 0.19.0 - 01/21/2009
+ INCOMPATIBLE CHANGES
+ HBASE-885 TableMap and TableReduce should be interfaces
+ (Doğacan Güney via Stack)
+ HBASE-905 Remove V5 migration classes from 0.19.0 (Jean-Daniel Cryans via
+ Jim Kellerman)
+ HBASE-852 Cannot scan all families in a row with a LIMIT, STARTROW, etc.
+ (Izaak Rubin via Stack)
+ HBASE-953 Enable BLOCKCACHE by default [WAS -> Reevaluate HBASE-288 block
+ caching work....?] -- Update your hbase-default.xml file!
+ HBASE-636 java6 as a requirement
+ HBASE-994 IPC interfaces with different versions can cause problems
+ HBASE-1028 If key does not exist, return null in getRow rather than an
+ empty RowResult
+ HBASE-1134 OOME in HMaster when HBaseRPC is older than 0.19
+
+ BUG FIXES
+ HBASE-891 HRS.validateValuesLength throws IOE, gets caught in the retries
+ HBASE-892 Cell iteration is broken (Doğacan Güney via Jim Kellerman)
+ HBASE-898 RowResult.containsKey(String) doesn't work
+ (Doğacan Güney via Jim Kellerman)
+ HBASE-906 [shell] Truncates output
+ HBASE-912 PE is broken when other tables exist
+ HBASE-853 [shell] Cannot describe meta tables (Izaak Rubin via Stack)
+ HBASE-844 Can't pass script to hbase shell
+ HBASE-837 Add unit tests for ThriftServer.HBaseHandler (Izaak Rubin via Stack)
+ HBASE-913 Classes using log4j directly
+ HBASE-914 MSG_REPORT_CLOSE has a byte array for a message
+ HBASE-918 Region balancing during startup makes cluster unstable
+ HBASE-921 region close and open processed out of order; makes for
+ disagreement between master and regionserver on region state
+ HBASE-925 HRS NPE on way out if no master to connect to
+ HBASE-928 NPE throwing RetriesExhaustedException
+ HBASE-924 Update hadoop in lib on 0.18 hbase branch to 0.18.1
+ HBASE-929 Clarify that ttl in HColumnDescriptor is seconds
+ HBASE-930 RegionServer stuck: HLog: Could not append. Requesting close of
+ log java.io.IOException: Could not get block locations. Aborting...
+ HBASE-926 If no master, regionservers should hang out rather than fail on
+ connection and shut themselves down
+ HBASE-919 Master and Region Server need to provide root region location if
+ they are using HTable
+ With J-D's one line patch, test cases now appear to work and
+ PerformanceEvaluation works as before.
+ HBASE-939 NPE in HStoreKey
+ HBASE-945 Be consistent in use of qualified/unqualified mapfile paths
+ HBASE-946 Row with 55k deletes times out scanner lease
+ HBASE-950 HTable.commit no longer works with existing RowLocks though it's still in API
+ HBASE-952 Deadlock in HRegion.batchUpdate
+ HBASE-954 Don't reassign root region until ProcessServerShutdown has split
+ the former region server's log
+ HBASE-957 PerformanceEvaluation tests if table exists by comparing descriptors
+ HBASE-728, HBASE-956, HBASE-955 Address thread naming, which threads are
+ Chores, vs Threads, make HLog manager the write ahead log and
+ not extend it to provide optional HLog sync operations.
+ HBASE-970 Update the copy/rename scripts to go against changed API
+ HBASE-966 HBASE-748 misses some writes
+ HBASE-971 Fix the failing tests on Hudson
+ HBASE-973 [doc] In getting started, make it clear that hbase needs to
+ create its directory in hdfs
+ HBASE-963 Fix the retries in HTable.flushCommit
+ HBASE-969 Won't when storefile > 2G.
+ HBASE-976 HADOOP 0.19.0 RC0 is broke; replace with HEAD of branch-0.19
+ HBASE-977 Arcane HStoreKey comparator bug
+ HBASE-979 REST web app is not started automatically
+ HBASE-980 Undo core of HBASE-975, caching of start and end row
+ HBASE-982 Deleting a column in MapReduce fails (Doğacan Güney via Stack)
+ HBASE-984 Fix javadoc warnings
+ HBASE-985 Fix javadoc warnings
+ HBASE-951 Either shut down master or let it finish cleanup
+ HBASE-964 Startup stuck "waiting for root region"
+ HBASE-964, HBASE-678 provide for safe-mode without locking up HBase "waiting
+ for root region"
+ HBASE-990 NoSuchElementException in flushSomeRegions; took two attempts.
+ HBASE-602 HBase Crash when network card has an IPv6 address
+ HBASE-996 Migration script to up the versions in catalog tables
+ HBASE-991 Update the mapred package document examples so they work with
+ TRUNK/0.19.0.
+ HBASE-1003 If cell exceeds TTL but not VERSIONs, will not be removed during
+ major compaction
+ HBASE-1005 Regex and string comparison operators for ColumnValueFilter
+ HBASE-910 Scanner misses columns / rows when the scanner is obtained
+ during a memcache flush
+ HBASE-1009 Master stuck in loop wanting to assign but regions are closing
+ HBASE-1016 Fix example in javadoc overview
+ HBASE-1021 hbase metrics FileContext not working
+ HBASE-1023 Check global flusher
+ HBASE-1036 HBASE-1028 broke Thrift
+ HBASE-1037 Some test cases failing on Windows/Cygwin but not UNIX/Linux
+ HBASE-1041 Migration throwing NPE
+ HBASE-1042 OOME but we don't abort; two part commit.
+ HBASE-927 We don't recover if HRS hosting -ROOT-/.META. goes down
+ HBASE-1029 REST wiki documentation incorrect
+ (Sishen Freecity via Stack)
+ HBASE-1043 Removing @Override attributes where they are no longer needed.
+ (Ryan Smith via Jim Kellerman)
+ HBASE-927 We don't recover if HRS hosting -ROOT-/.META. goes down -
+ (fix bug in createTable which caused tests to fail)
+ HBASE-1039 Compaction fails if bloomfilters are enabled
+ HBASE-1027 Make global flusher check work with percentages rather than
+ hard code memory sizes
+ HBASE-1000 Sleeper.sleep does not go back to sleep when interrupted
+ and no stop flag given.
+ HBASE-900 Regionserver memory leak causing OOME during relatively
+ modest bulk importing; part 1 and part 2
+ HBASE-1054 Index NPE on scanning (Clint Morgan via Andrew Purtell)
+ HBASE-1052 Stopping a HRegionServer with unflushed cache causes data loss
+ from org.apache.hadoop.hbase.DroppedSnapshotException
+ HBASE-1059 ConcurrentModificationException in notifyChangedReadersObservers
+ HBASE-1063 "File separator problem on Windows" (Max Lehn via Stack)
+ HBASE-1068 TestCompaction broken on hudson
+ HBASE-1067 TestRegionRebalancing broken by running of hdfs shutdown thread
+ HBASE-1070 Up default index interval in TRUNK and branch
+ HBASE-1045 Hangup by regionserver causes write to fail
+ HBASE-1079 Dumb NPE in ServerCallable hides the RetriesExhausted exception
+ HBASE-782 The DELETE key in the hbase shell deletes the wrong character
+ (Tim Sell via Stack)
+ HBASE-543, HBASE-1046, HBASE-1051 A region's state is kept in several places
+ in the master opening the possibility for race conditions
+ HBASE-1087 DFS failures did not shutdown regionserver
+ HBASE-1072 Change Thread.join on exit to a timed Thread.join
+ HBASE-1098 IllegalStateException: Cannot set a region to be closed if it
+ was not already marked as closing
+ HBASE-1100 HBASE-1062 broke TestForceSplit
+ HBASE-1191 shell tools -> close_region does not work for regions that did
+ not deploy properly on startup
+ HBASE-1093 NPE in HStore#compact
+ HBASE-1097 SequenceFile.Reader keeps around buffer whose size is that of
+ largest item read -> results in lots of dead heap
+ HBASE-1107 NPE in HStoreScanner.updateReaders
+ HBASE-1083 Will keep scheduling major compactions if last time one ran, we
+ didn't.
+ HBASE-1101 NPE in HConnectionManager$TableServers.processBatchOfRows
+ HBASE-1099 Regions assigned while master is splitting logs of recently
+ crashed server; regionserver tries to execute incomplete log
+ HBASE-1104, HBASE-1098, HBASE-1096: Doubly-assigned regions redux,
+ IllegalStateException: Cannot set a region to be closed if it was
+ not already marked as closing, Does not recover if HRS carrying
+ -ROOT- goes down
+ HBASE-1114 Weird NPEs compacting
+ HBASE-1116 generated web.xml and svn don't play nice together
+ HBASE-1119 ArrayOutOfBoundsException in HStore.compact
+ HBASE-1121 Cluster confused about where -ROOT- is
+ HBASE-1125 IllegalStateException: Cannot set a region to be closed if it was
+ not already marked as pending close
+ HBASE-1124 Balancer kicks in way too early
+ HBASE-1127 OOME running randomRead PE
+ HBASE-1132 Can't append to HLog, can't roll log, infinite cycle (another
+ spin on HBASE-930)
+
+ IMPROVEMENTS
+ HBASE-901 Add a limit to key length, check key and value length on client side
+ HBASE-890 Alter table operation and also related changes in REST interface
+ (Sishen Freecity via Stack)
+ HBASE-894 [shell] Should be able to copy-paste table description to create
+ new table (Sishen Freecity via Stack)
+ HBASE-886, HBASE-895 Sort the tables in the web UI, [shell] 'list' command
+ should emit a sorted list of tables (Krzysztof Szlapinski via Stack)
+ HBASE-884 Double and float converters for Bytes class
+ (Doğacan Güney via Stack)
+ HBASE-908 Add approximate counting to CountingBloomFilter
+ (Andrzej Bialecki via Stack)
+ HBASE-920 Make region balancing sloppier
+ HBASE-902 Add force compaction and force split operations to UI and Admin
+ HBASE-942 Add convenience methods to RowFilterSet
+ (Clint Morgan via Stack)
+ HBASE-943 to ColumnValueFilter: add filterIfColumnMissing property, add
+ SubString operator (Clint Morgan via Stack)
+ HBASE-937 Thrift getRow does not support specifying columns
+ (Doğacan Güney via Stack)
+ HBASE-959 Be able to get multiple RowResult at one time from client side
+ (Sishen Freecity via Stack)
+ HBASE-936 REST Interface: enable get number of rows from scanner interface
+ (Sishen Freecity via Stack)
+ HBASE-960 REST interface: more generic column family configure and also
+ get Rows using offset and limit (Sishen Freecity via Stack)
+ HBASE-817 Hbase/Shell Truncate
+ HBASE-949 Add an HBase Manual
+ HBASE-839 Update hadoop libs in hbase; move hbase TRUNK on to an hadoop
+ 0.19.0 RC
+ HBASE-785 Remove InfoServer, use HADOOP-3824 StatusHttpServer
+ instead (requires hadoop 0.19)
+ HBASE-81 When a scanner lease times out, throw a more "user friendly" exception
+ HBASE-978 Remove BloomFilterDescriptor. It is no longer used.
+ HBASE-975 Improve MapFile performance for start and end key
+ HBASE-961 Delete multiple columns by regular expression
+ (Samuel Guo via Stack)
+ HBASE-722 Shutdown and Compactions
+ HBASE-983 Declare Perl namespace in Hbase.thrift
+ HBASE-987 We need a Hbase Partitioner for TableMapReduceUtil.initTableReduceJob
+ MR Jobs (Billy Pearson via Stack)
+ HBASE-993 Turn off logging of every catalog table row entry on every scan
+ HBASE-992 Up the versions kept by catalog tables; currently 1. Make it 10?
+ HBASE-998 Narrow getClosestRowBefore by passing column family
+ HBASE-999 Up versions on historian and keep history of deleted regions for a
+ while rather than delete immediately
+ HBASE-938 Major compaction period is not checked periodically
+ HBASE-947 [Optimization] Major compaction should remove deletes as well as
+ the deleted cell
+ HBASE-675 Report correct server hosting a table split for assignment
+ to MR Jobs
+ HBASE-927 We don't recover if HRS hosting -ROOT-/.META. goes down
+ HBASE-1013 Add debugging around commit log cleanup
+ HBASE-972 Update hbase trunk to use released hadoop 0.19.0
+ HBASE-1022 Add storefile index size to hbase metrics
+ HBASE-1026 Tests in mapred are failing
+ HBASE-1020 Regionserver OOME handler should dump vital stats
+ HBASE-1018 Regionservers should report detailed health to master
+ HBASE-1034 Remove useless TestToString unit test
+ HBASE-1030 Bit of polish on HBASE-1018
+ HBASE-847 new API: HTable.getRow with numVersion specified
+ (Doğacan Güney via Stack)
+ HBASE-1048 HLog: Found 0 logs to remove out of total 1450; oldest
+ outstanding seqnum is 162297053 from region -ROOT-,,0
+ HBASE-1055 Better vm stats on startup
+ HBASE-1065 Minor logging improvements in the master
+ HBASE-1053 bring recent rpc changes down from hadoop
+ HBASE-1056 [migration] enable blockcaching on .META. table
+ HBASE-1069 Show whether HRegion major compacts or not in INFO level
+ HBASE-1066 Master should support close/open/reassignment/enable/disable
+ operations on individual regions
+ HBASE-1062 Compactions at (re)start on a large table can overwhelm DFS
+ HBASE-1102 boolean HTable.exists()
+ HBASE-1105 Remove duplicated code in HCM, add javadoc to RegionState, etc.
+ HBASE-1106 Expose getClosestRowBefore in HTable
+ (Michael Gottesman via Stack)
+ HBASE-1082 Administrative functions for table/region maintenance
+ HBASE-1090 Atomic Check And Save in HTable (Michael Gottesman via Stack)
+ HBASE-1137 Add note on xceivers count to overview documentation
+
+ NEW FEATURES
+ HBASE-875 Use MurmurHash instead of JenkinsHash [in bloomfilters]
+ (Andrzej Bialecki via Stack)
+ HBASE-625 Metrics support for cluster load history: emissions and graphs
+ HBASE-883 Secondary indexes (Clint Morgan via Andrew Purtell)
+ HBASE-728 Support for HLog appends
+
+ OPTIMIZATIONS
+ HBASE-748 Add an efficient way to batch update many rows
+ HBASE-887 Fix a hotspot in scanners
+ HBASE-967 [Optimization] Cache cell maximum length (HCD.getMaxValueLength);
+ it's used checking batch size
+ HBASE-940 Make the TableOutputFormat batching-aware
+ HBASE-576 Investigate IPC performance
+
+Release 0.18.0 - September 21st, 2008
+
+ INCOMPATIBLE CHANGES
+ HBASE-697 Thrift idl needs update/edit to match new 0.2 API (and to fix bugs)
+ (Tim Sell via Stack)
+ HBASE-822 Update thrift README and HBase.thrift to use thrift 20080411
+ Updated all other languages examples (only python went in)
+
+ BUG FIXES
+ HBASE-881 Fixed bug when Master tries to reassign split or offline regions
+ from a dead server
+ HBASE-860 Fixed Bug in IndexTableReduce where it concerns writing lucene
+ index fields.
+ HBASE-805 Remove unnecessary getRow overloads in HRS (Jonathan Gray via
+ Jim Kellerman) (Fix whitespace diffs in HRegionServer)
+ HBASE-811 HTD is not fully copyable (Andrew Purtell via Jim Kellerman)
+ HBASE-729 Client region/metadata cache should have a public method for
+ invalidating entries (Andrew Purtell via Stack)
+ HBASE-819 Remove DOS-style ^M carriage returns from all code where found
+ (Jonathan Gray via Jim Kellerman)
+ HBASE-818 Deadlock running 'flushSomeRegions' (Andrew Purtell via Stack)
+ HBASE-820 Need mainline to flush when 'Blocking updates' goes up.
+ (Jean-Daniel Cryans via Stack)
+ HBASE-821 UnknownScanner happens too often (Jean-Daniel Cryans via Stack)
+ HBASE-813 Add a row counter in the new shell (Jean-Daniel Cryans via Stack)
+ HBASE-824 Bug in Hlog we print array of bytes for region name
+ (Billy Pearson via Stack)
+ HBASE-825 Master logs showing byte [] in place of string in logging
+ (Billy Pearson via Stack)
+ HBASE-808,809 MAX_VERSIONS not respected, and Deleteall doesn't and inserts
+ after delete don't work as expected
+ (Jean-Daniel Cryans via Stack)
+ HBASE-831 committing BatchUpdate with no row should complain
+ (Andrew Purtell via Jim Kellerman)
+ HBASE-833 Doing an insert with an unknown family throws a NPE in HRS
+ HBASE-810 Prevent temporary deadlocks when, during a scan with write
+ operations, the region splits (Jean-Daniel Cryans via Jim
+ Kellerman)
+ HBASE-843 Deleting and recreating a table in a single process does not work
+ (Jonathan Gray via Jim Kellerman)
+ HBASE-849 Speed improvement in JenkinsHash (Andrzej Bialecki via Stack)
+ HBASE-552 Bloom filter bugs (Andrzej Bialecki via Jim Kellerman)
+ HBASE-762 deleteFamily takes timestamp, should only take row and family.
+ Javadoc describes both cases but only implements the timestamp
+ case. (Jean-Daniel Cryans via Jim Kellerman)
+ HBASE-768 This message 'java.io.IOException: Install 0.1.x of hbase and run
+ its migration first' is useless (Jean-Daniel Cryans via Jim
+ Kellerman)
+ HBASE-826 Delete table followed by recreation results in honked table
+ HBASE-834 'Major' compactions and upper bound on files we compact at any
+ one time (Billy Pearson via Stack)
+ HBASE-836 Update thrift examples to work with changed IDL (HBASE-697)
+ (Toby White via Stack)
+ HBASE-854 hbase-841 broke build on hudson? - makes sure that proxies are
+ closed. (Andrew Purtell via Jim Kellerman)
+ HBASE-855 compaction can return fewer versions than we should in some cases
+ (Billy Pearson via Stack)
+ HBASE-832 Problem with row keys beginning with characters < than ',' and
+ the region location cache
+ HBASE-864 Deadlock in regionserver
+ HBASE-865 Fix javadoc warnings (Rong-En Fan via Jim Kellerman)
+ HBASE-872 Getting exceptions in shell when creating/disabling tables
+ HBASE-868 Incrementing binary rows cause strange behavior once table
+ splits (Jonathan Gray via Stack)
+ HBASE-877 HCM is unable to find table with multiple regions which contains
+ binary (Jonathan Gray via Stack)
+
+ IMPROVEMENTS
+ HBASE-801 When a table hasn't been disabled, the shell could respond in a
+ "user friendly" way.
+ HBASE-816 TableMap should survive USE (Andrew Purtell via Stack)
+ HBASE-812 Compaction needs little better skip algo (Daniel Leffel via Stack)
+ HBASE-806 Change HbaseMapWritable and RowResult to implement SortedMap
+ instead of Map (Jonathan Gray via Stack)
+ HBASE-795 More Table operation in TableHandler for REST interface: part 1
+ (Sishen Freecity via Stack)
+ HBASE-795 More Table operation in TableHandler for REST interface: part 2
+ (Sishen Freecity via Stack)
+ HBASE-830 Debugging HCM.locateRegionInMeta is painful
+ HBASE-784 Base hbase-0.3.0 on hadoop-0.18
+ HBASE-841 Consolidate multiple overloaded methods in HRegionInterface,
+ HRegionServer (Jean-Daniel Cryans via Jim Kellerman)
+ HBASE-840 More options on the row query in REST interface
+ (Sishen Freecity via Stack)
+ HBASE-874 deleting a table kills client rpc; no subsequent communication if
+ shell or thrift server, etc. (Jonathan Gray via Jim Kellerman)
+ HBASE-871 Major compaction periodicity should be specifyable at the column
+ family level, not cluster wide (Jonathan Gray via Stack)
+ HBASE-465 Fix javadoc for all public declarations
+ HBASE-882 The BatchUpdate class provides, put(col, cell) and delete(col)
+ but no get() (Ryan Smith via Stack and Jim Kellerman)
+
+ NEW FEATURES
+ HBASE-787 Postgresql to HBase table replication example (Tim Sell via Stack)
+ HBASE-798 Provide Client API to explicitly lock and unlock rows (Jonathan
+ Gray via Jim Kellerman)
+ HBASE-798 Add missing classes: UnknownRowLockException and RowLock which
+ were present in previous versions of the patches for this issue,
+ but not in the version that was committed. Also fix a number of
+ compilation problems that were introduced by patch.
+ HBASE-669 MultiRegion transactions with Optimistic Concurrency Control
+ (Clint Morgan via Stack)
+ HBASE-842 Remove methods that have Text as a parameter and were deprecated
+ in 0.2.1 (Jean-Daniel Cryans via Jim Kellerman)
+
+ OPTIMIZATIONS
+
+Release 0.2.0 - August 8, 2008.
+
+ INCOMPATIBLE CHANGES
+ HBASE-584 Names in the filter interface are confusing (Clint Morgan via
+ Jim Kellerman) (API change for filters)
+ HBASE-601 Just remove deprecated methods in HTable; 0.2 is not backward
+ compatible anyways
+ HBASE-82 Row keys should be array of bytes
+ HBASE-76 Purge servers of Text (Done as part of HBASE-82 commit).
+ HBASE-487 Replace hql w/ a hbase-friendly jirb or jython shell
+ Part 1: purge of hql and added raw jirb in its place.
+ HBASE-521 Improve client scanner interface
+ HBASE-288 Add in-memory caching of data. Required update of hadoop to
+ 0.17.0-dev.2008-02-07_12-01-58. (Tom White via Stack)
+ HBASE-696 Make bloomfilter true/false and self-sizing
+ HBASE-720 clean up inconsistencies around deletes (Izaak Rubin via Stack)
+ HBASE-796 Deprecates Text methods from HTable
+ (Michael Gottesman via Stack)
+
+ BUG FIXES
+ HBASE-574 HBase does not load hadoop native libs (Rong-En Fan via Stack)
+ HBASE-598 Logging, no .log file; all goes into .out
+ HBASE-622 Remove StaticTestEnvironment and put a log4j.properties in src/test
+ HBASE-624 Master will shut down if number of active region servers is zero
+ even if shutdown was not requested
+ HBASE-629 Split reports incorrect elapsed time
+ HBASE-623 Migration script for hbase-82
+ HBASE-630 Default hbase.rootdir is garbage
+ HBASE-589 Remove references to deprecated methods in Hadoop once
+ hadoop-0.17.0 is released
+ HBASE-638 Purge \r from src
+ HBASE-644 DroppedSnapshotException but RegionServer doesn't restart
+ HBASE-641 Improve master split logging
+ HBASE-642 Splitting log in a hostile environment -- bad hdfs -- we drop
+ write-ahead-log edits
+ HBASE-646 EOFException opening HStoreFile info file (spin on HBASE-645 and 550)
+ HBASE-648 If mapfile index is empty, run repair
+ HBASE-640 TestMigrate failing on hudson
+ HBASE-651 Table.commit should throw NoSuchColumnFamilyException if column
+ family doesn't exist
+ HBASE-649 API polluted with default and protected access data members and methods
+ HBASE-650 Add String versions of get, scanner, put in HTable
+ HBASE-656 Do not retry exceptions such as unknown scanner or illegal argument
+ HBASE-659 HLog#cacheFlushLock not cleared; hangs a region
+ HBASE-663 Incorrect sequence number for cache flush
+ HBASE-655 Need programmatic way to add column family: need programmatic way
+ to enable/disable table
+ HBASE-654 API HTable.getMetadata().addFamily shouldn't be exposed to user
+ HBASE-666 UnmodifyableHRegionInfo gives the wrong encoded name
+ HBASE-668 HBASE-533 broke build
+ HBASE-670 Historian deadlocks if regionserver is at global memory boundary
+ and is hosting .META.
+ HBASE-665 Server side scanner doesn't honor stop row
+ HBASE-662 UI in table.jsp gives META locations, not the table's regions
+ location (Jean-Daniel Cryans via Stack)
+ HBASE-676 Bytes.getInt returns a long (Clint Morgan via Stack)
+ HBASE-680 Config parameter hbase.io.index.interval should be
+ hbase.index.interval, according to HBaseMapFile.HbaseWriter
+ (LN via Stack)
+ HBASE-682 Unnecessary iteration in HMemcache.internalGet? got much better
+ reading performance after breaking it (LN via Stack)
+ HBASE-686 MemcacheScanner didn't return the first row (if it exists),
+ because HScannerInterface's output was incorrect (LN via Jim Kellerman)
+ HBASE-691 get* and getScanner are different in how they treat column parameter
+ HBASE-694 HStore.rowAtOrBeforeFromMapFile() fails to locate the row if # of mapfiles >= 2
+ (Rong-En Fan via Bryan)
+ HBASE-652 dropping table fails silently if table isn't disabled
+ HBASE-683 can not get svn revision # at build time if locale is not english
+ (Rong-En Fan via Stack)
+ HBASE-699 Fix TestMigrate up on Hudson
+ HBASE-615 Region balancer oscillates during cluster startup
+ HBASE-613 Timestamp-anchored scanning fails to find all records
+ HBASE-681 NPE in Memcache
+ HBASE-701 Showing bytes in log when should be String
+ HBASE-702 deleteall doesn't
+ HBASE-704 update new shell docs and commands on help menu
+ HBASE-709 Deadlock while rolling WAL-log while finishing flush
+ HBASE-710 If clocks are way off, then we can have daughter split come
+ before rather than after its parent in .META.
+ HBASE-714 Showing bytes in log when should be string (2)
+ HBASE-627 Disable table doesn't work reliably
+ HBASE-716 TestGet2.testGetClosestBefore fails with hadoop-0.17.1
+ HBASE-715 Base HBase 0.2 on Hadoop 0.17.1
+ HBASE-718 hbase shell help info
+ HBASE-717 alter table broke with new shell returns InvalidColumnNameException
+ HBASE-573 HBase does not read hadoop-*.xml for dfs configuration after
+ moving out hadoop/contrib
+ HBASE-11 Unexpected exits corrupt DFS
+ HBASE-12 When hbase regionserver restarts, it says "impossible state for
+ createLease()"
+ HBASE-575 master dies with stack overflow error if rootdir isn't qualified
+ HBASE-582 HBase 554 forgot to clear results on each iteration caused by a filter
+ (Clint Morgan via Stack)
+ HBASE-532 Odd interaction between HRegion.get, HRegion.deleteAll and compactions
+ HBASE-10 HRegionServer hangs upon exit due to DFSClient Exception
+ HBASE-595 RowFilterInterface.rowProcessed() is called *before* the final
+ filtering decision is made (Clint Morgan via Stack)
+ HBASE-586 HRegion runs HStore memcache snapshotting -- fix it so only HStore
+ knows about workings of memcache
+ HBASE-588 Still a 'hole' in scanners, even after HBASE-532
+ HBASE-604 Don't allow CLASSPATH from environment to pollute the hbase CLASSPATH
+ HBASE-608 HRegionServer::getThisIP() checks hadoop config var for dns interface name
+ (Jim R. Wilson via Stack)
+ HBASE-609 Master doesn't see regionserver edits because of clock skew
+ HBASE-607 MultiRegionTable.makeMultiRegionTable is not deterministic enough
+ for regression tests
+ HBASE-405 TIF and TOF use log4j directly rather than apache commons-logging
+ HBASE-618 We always compact if 2 files, regardless of the compaction threshold setting
+ HBASE-619 Fix 'logs' link in UI
+ HBASE-478 offlining of table does not run reliably
+ HBASE-453 undeclared throwable exception from HTable.get
+ HBASE-620 testmergetool failing in branch and trunk since hbase-618 went in
+ HBASE-550 EOF trying to read reconstruction log stops region deployment
+ HBASE-551 Master stuck splitting server logs in shutdown loop; on each
+ iteration, edits are aggregated up into the millions
+ HBASE-505 Region assignments should never time out so long as the region
+ server reports that it is processing the open request
+ HBASE-561 HBase package does not include LICENSE.txt nor build.xml
+ HBASE-563 TestRowFilterAfterWrite erroneously sets master address to
+ 0.0.0.0:60100 rather than relying on conf
+ HBASE-507 Use Callable pattern to sleep between retries
+ HBASE-564 Don't do a cache flush if there are zero entries in the cache.
+ HBASE-554 filters generate StackOverflowException
+ HBASE-567 Reused BatchUpdate instances accumulate BatchOperations
+ HBASE-577 NPE getting scanner
+ HBASE-19 CountingBloomFilter can overflow its storage
+ (Stu Hood and Bryan Duxbury via Stack)
+ HBASE-28 thrift put/mutateRow methods need to throw IllegalArgument
+ exceptions (Dave Simpson via Bryan Duxbury via Stack)
+ HBASE-2 hlog numbers should wrap around when they reach 999
+ (Bryan Duxbury via Stack)
+ HBASE-421 TestRegionServerExit broken
+ HBASE-426 hbase can't find remote filesystem
+ HBASE-437 Clear Command should use system.out (Edward Yoon via Stack)
+ HBASE-434, HBASE-435 TestTableIndex and TestTableMapReduce failed in Hudson builds
+ HBASE-446 Fully qualified hbase.rootdir doesn't work
+ HBASE-438 XMLOutputter state should be initialized. (Edward Yoon via Stack)
+ HBASE-8 Delete table does not remove the table directory in the FS
+ HBASE-428 Under continuous upload of rows, WrongRegionExceptions are thrown
+ that reach the client even after retries
+ HBASE-460 TestMigrate broken when HBase moved to subproject
+ HBASE-462 Update migration tool
+ HBASE-473 When a table is deleted, master sends multiple close messages to
+ the region server
+ HBASE-490 Doubly-assigned .META.; master uses one and clients another
+ HBASE-492 hbase TRUNK does not build against hadoop TRUNK
+ HBASE-496 impossible state for createLease writes 400k lines in about 15mins
+ HBASE-472 Passing on edits, we dump all to log
+ HBASE-495 No server address listed in .META.
+ HBASE-433 HBASE-251 Region server should delete restore log after successful
+ restore, Stuck replaying the edits of crashed machine.
+ HBASE-27 hregioninfo cell empty in meta table
+ HBASE-501 Empty region server address in info:server entry and a
+ startcode of -1 in .META.
+ HBASE-516 HStoreFile.finalKey does not update the final key if it is not
+ the top region of a split region
+ HBASE-525 HTable.getRow(Text) does not work (Clint Morgan via Bryan Duxbury)
+ HBASE-524 Problems with getFull
+ HBASE-528 table 'does not exist' when it does
+ HBASE-531 Merge tool won't merge two overlapping regions (port HBASE-483 to
+ trunk)
+ HBASE-537 Wait for hdfs to exit safe mode
+ HBASE-476 RegexpRowFilter behaves incorrectly when there are multiple store
+ files (Clint Morgan via Jim Kellerman)
+ HBASE-527 RegexpRowFilter does not work when there are columns from
+ multiple families (Clint Morgan via Jim Kellerman)
+ HBASE-534 Double-assignment at SPLIT-time
+ HBASE-712 midKey found compacting is the first, not necessarily the optimal
+ HBASE-719 Find out why users have network problems in HBase and not in Hadoop
+ and HConnectionManager (Jean-Daniel Cryans via Stack)
+ HBASE-703 Invalid regions listed by regionserver.jsp (Izaak Rubin via Stack)
+ HBASE-674 Memcache size unreliable
+ HBASE-726 Unit tests won't run because of a typo (Sebastien Rainville via Stack)
+ HBASE-727 Client caught in an infinite loop when trying to connect to cached
+ server locations (Izaak Rubin via Stack)
+ HBASE-732 shell formatting error with the describe command
+ (Izaak Rubin via Stack)
+ HBASE-731 delete, deletefc in HBase shell do not work correctly
+ (Izaak Rubin via Stack)
+ HBASE-734 scan '.META.', {LIMIT => 10} crashes (Izaak Rubin via Stack)
+ HBASE-736 Should have HTable.deleteAll(String row) and HTable.deleteAll(Text row)
+ (Jean-Daniel Cryans via Stack)
+ HBASE-740 ThriftServer getting table names incorrectly (Tim Sell via Stack)
+ HBASE-742 Rename getMetainfo in HTable as getTableDescriptor
+ HBASE-739 HBaseAdmin.createTable() using old HTableDescription doesn't work
+ (Izaak Rubin via Stack)
+ HBASE-744 BloomFilter serialization/deserialization broken
+ HBASE-742 Column length limit is not enforced (Jean-Daniel Cryans via Stack)
+ HBASE-737 Scanner: every cell in a row has the same timestamp
+ HBASE-700 hbase.io.index.interval needs to be configurable in column family
+ (Andrew Purtell via Stack)
+ HBASE-62 Allow user to add arbitrary key/value pairs to table and column
+ descriptors (Andrew Purtell via Stack)
+ HBASE-34 Set memcache flush size per column (Andrew Purtell via Stack)
+ HBASE-42 Set region split size on table creation (Andrew Purtell via Stack)
+ HBASE-43 Add a read-only attribute to columns (Andrew Purtell via Stack)
+ HBASE-424 Should be able to enable/disable .META. table
+ HBASE-679 Regionserver addresses are still not right in the new tables page
+ HBASE-758 Throwing IOE read-only when should be throwing NSRE
+ HBASE-743 bin/hbase migrate upgrade fails when redo logs exists
+ HBASE-754 The JRuby shell documentation is wrong in "get" and "put"
+ (Jean-Daniel Cryans via Stack)
+ HBASE-756 In HBase shell, the put command doesn't process the timestamp
+ (Jean-Daniel Cryans via Stack)
+ HBASE-757 REST mangles table names (Sishen via Stack)
+ HBASE-706 On OOME, regionserver sticks around and doesn't go down with cluster
+ (Jean-Daniel Cryans via Stack)
+ HBASE-759 TestMetaUtils failing on hudson
+ HBASE-761 IOE: Stream closed exception all over logs
+ HBASE-763 ClassCastException from RowResult.get(String)
+ (Andrew Purtell via Stack)
+ HBASE-764 The name of column request has padding zero using REST interface
+ (Sishen Freecity via Stack)
+ HBASE-750 NPE caused by StoreFileScanner.updateReaders
+ HBASE-769 TestMasterAdmin fails throwing RegionOfflineException when we're
+ expecting IllegalStateException
+ HBASE-766 FileNotFoundException trying to load HStoreFile 'data'
+ HBASE-770 Update HBaseRPC to match hadoop 0.17 RPC
+ HBASE-780 Can't scan '.META.' from new shell
+ HBASE-771 Names legal in 0.1 are not in 0.2; breaks migration
+ HBASE-788 Div by zero in Master.jsp (Clint Morgan via Jim Kellerman)
+ HBASE-791 RowCount doesn't work (Jean-Daniel Cryans via Stack)
+ HBASE-751 dfs exception and regionserver stuck during heavy write load
+ HBASE-793 HTable.getStartKeys() ignores table names when matching columns
+ (Andrew Purtell and Dru Jensen via Stack)
+ HBASE-790 During import, single region blocks requests for >10 minutes,
+ thread dumps, throws out pending requests, and continues
+ (Jonathan Gray via Stack)
+
+ IMPROVEMENTS
+ HBASE-559 MR example job to count table rows
+ HBASE-596 DemoClient.py (Ivan Begtin via Stack)
+ HBASE-581 Allow adding filters to TableInputFormat (At same time, ensure TIF
+ is subclassable) (David Alves via Stack)
+ HBASE-603 When an exception bubbles out of getRegionServerWithRetries, wrap
+ the exception with a RetriesExhaustedException
+ HBASE-600 Filters have excessive DEBUG logging
+ HBASE-611 regionserver should do basic health check before reporting
+ alls-well to the master
+ HBASE-614 Retiring regions is not used; exploit or remove
+ HBASE-538 Improve exceptions that come out on client-side
+ HBASE-569 DemoClient.php (Jim R. Wilson via Stack)
+ HBASE-522 Where new Text(string) might be used in client side method calls,
+ add an overload that takes String (Done as part of HBASE-82)
+ HBASE-570 Remove HQL unit test (Done as part of HBASE-82 commit).
+ HBASE-626 Use Visitor pattern in MetaRegion to reduce code clones in HTable
+ and HConnectionManager (Jean-Daniel Cryans via Stack)
+ HBASE-621 Make MAX_VERSIONS work like TTL: In scans and gets, check
+ MAX_VERSIONs setting and return that many only rather than wait on
+ compaction (Jean-Daniel Cryans via Stack)
+ HBASE-504 Allow HMsg's carry a payload: e.g. exception that happened over
+ on the remote side.
+ HBASE-583 RangeRowFilter/ColumnValueFilter to allow choice of rows based on
+ a (lexicographic) comparison to column's values
+ (Clint Morgan via Stack)
+ HBASE-579 Add hadoop 0.17.x
+ HBASE-660 [Migration] addColumn/deleteColumn functionality in MetaUtils
+ HBASE-632 HTable.getMetadata is very inefficient
+ HBASE-671 New UI page displaying all regions in a table should be sorted
+ HBASE-672 Sort regions in the regionserver UI
+ HBASE-677 Make HTable, HRegion, HRegionServer, HStore, and HColumnDescriptor
+ subclassable (Clint Morgan via Stack)
+ HBASE-682 Regularize toString
+ HBASE-469 Streamline HStore startup and compactions
+ HBASE-544 Purge startUpdate from internal code and test cases
+ HBASE-557 HTable.getRow() should receive RowResult objects
+ HBASE-452 "region offline" should throw IOException, not IllegalStateException
+ HBASE-541 Update hadoop jars.
+ HBASE-523 package-level javadoc should have example client
+ HBASE-415 Rewrite leases to use DelayedBlockingQueue instead of polling
+ HBASE-35 Make BatchUpdate public in the API
+ HBASE-409 Add build path to svn:ignore list (Edward Yoon via Stack)
+ HBASE-408 Add .classpath and .project to svn:ignore list
+ (Edward Yoon via Stack)
+ HBASE-410 Speed up the test suite (make test timeout 5 instead of 15 mins).
+ HBASE-281 Shell should allow deletions in .META. and -ROOT- tables
+ (Edward Yoon & Bryan Duxbury via Stack)
+ HBASE-56 Unnecessary HQLClient Object creation in a shell loop
+ (Edward Yoon via Stack)
+ HBASE-3 rest server: configure number of threads for jetty
+ (Bryan Duxbury via Stack)
+ HBASE-416 Add apache-style logging to REST server and add setting log
+ level, etc.
+ HBASE-406 Remove HTable and HConnection close methods
+ (Bryan Duxbury via Stack)
+ HBASE-418 Move HMaster and related classes into master package
+ (Bryan Duxbury via Stack)
+ HBASE-410 Speed up the test suite - Apparently test timeout was too
+ aggressive for Hudson. TestLogRolling timed out even though it
+ was operating properly. Change test timeout to 10 minutes.
+ HBASE-436 website: http://hadoop.apache.org/hbase
+ HBASE-417 Factor TableOperation and subclasses into separate files from
+ HMaster (Bryan Duxbury via Stack)
+ HBASE-440 Add optional log roll interval so that log files are garbage
+ collected
+ HBASE-407 Keep HRegionLocation information in LRU structure
+ HBASE-444 hbase is very slow at determining table is not present
+ HBASE-438 XMLOutputter state should be initialized.
+ HBASE-414 Move client classes into client package
+ HBASE-79 When HBase needs to be migrated, it should display a message on
+ stdout, not just in the logs
+ HBASE-461 Simplify leases.
+ HBASE-419 Move RegionServer and related classes into regionserver package
+ HBASE-457 Factor Master into Master, RegionManager, and ServerManager
+ HBASE-464 HBASE-419 introduced javadoc errors
+ HBASE-468 Move HStoreKey back to o.a.h.h
+ HBASE-442 Move internal classes out of HRegionServer
+ HBASE-466 Move HMasterInterface, HRegionInterface, and
+ HMasterRegionInterface into o.a.h.h.ipc
+ HBASE-479 Speed up TestLogRolling
+ HBASE-480 Tool to manually merge two regions
+ HBASE-477 Add support for an HBASE_CLASSPATH
+ HBASE-443 Move internal classes out of HStore
+ HBASE-515 At least double default timeouts between regionserver and master
+ HBASE-529 RegionServer needs to recover if datanode goes down
+ HBASE-456 Clearly state which ports need to be opened in order to run HBase
+ HBASE-536 Remove MiniDFS startup from MiniHBaseCluster
+ HBASE-521 Improve client scanner interface
+ HBASE-562 Move Exceptions to subpackages (Jean-Daniel Cryans via Stack)
+ HBASE-631 HTable.getRow() for only a column family
+ (Jean-Daniel Cryans via Stack)
+ HBASE-731 Add a meta refresh tag to the Web ui for master and region server
+ (Jean-Daniel Cryans via Stack)
+ HBASE-735 hbase shell doesn't trap CTRL-C signal (Jean-Daniel Cryans via Stack)
+ HBASE-730 On startup, rinse STARTCODE and SERVER from .META.
+ (Jean-Daniel Cryans via Stack)
+ HBASE-738 overview.html in need of updating (Izaak Rubin via Stack)
+ HBASE-745 scaling of one regionserver, improving memory and cpu usage (partial)
+ (LN via Stack)
+ HBASE-746 Batching row mutations via thrift (Tim Sell via Stack)
+ HBASE-772 Up default lease period from 60 to 120 seconds
+ HBASE-779 Test changing hbase.hregion.memcache.block.multiplier to 2
+ HBASE-783 For single row, single family retrieval, getRow() works half
+ as fast as getScanner().next() (Jean-Daniel Cryans via Stack)
+ HBASE-789 add clover coverage report targets (Rong-en Fan via Stack)
+
+ NEW FEATURES
+ HBASE-47 Option to set TTL for columns in hbase
+ (Andrew Purtell via Bryan Duxbury and Stack)
+ HBASE-23 UI listing regions should be sorted by address and show additional
+ region state (Jean-Daniel Cryans via Stack)
+ HBASE-639 Add HBaseAdmin.getTableDescriptor function
+ HBASE-533 Region Historian
+ HBASE-487 Replace hql w/ a hbase-friendly jirb or jython shell
+ HBASE-548 Tool to online single region
+ HBASE-71 Master should rebalance region assignments periodically
+ HBASE-512 Add configuration for global aggregate memcache size
+ HBASE-40 Add a method of getting multiple (but not all) cells for a row
+ at once
+ HBASE-506 When an exception has to escape ServerCallable due to exhausted
+ retries, show all the exceptions that lead to this situation
+ HBASE-747 Add a simple way to do batch updates of many rows (Jean-Daniel
+ Cryans via JimK)
+ HBASE-733 Enhance Cell so that it can contain multiple values at multiple
+ timestamps
+ HBASE-511 Do exponential backoff in clients on NSRE, WRE, ISE, etc.
+ (Andrew Purtell via Jim Kellerman)
+
+ OPTIMIZATIONS
+ HBASE-430 Performance: Scanners and getRow return maps with duplicate data
+
+Release 0.1.3 - 07/25/2008
+
+ BUG FIXES
+ HBASE-644 DroppedSnapshotException but RegionServer doesn't restart
+ HBASE-645 EOFException opening region (HBASE-550 redux)
+ HBASE-641 Improve master split logging
+ HBASE-642 Splitting log in a hostile environment -- bad hdfs -- we drop
+ write-ahead-log edits
+ HBASE-646 EOFException opening HStoreFile info file (spin on HBASE-645 and 550)
+ HBASE-648 If mapfile index is empty, run repair
+ HBASE-659 HLog#cacheFlushLock not cleared; hangs a region
+ HBASE-663 Incorrect sequence number for cache flush
+ HBASE-652 Dropping table fails silently if table isn't disabled
+ HBASE-674 Memcache size unreliable
+ HBASE-665 server side scanner doesn't honor stop row
+ HBASE-681 NPE in Memcache (Clint Morgan via Jim Kellerman)
+ HBASE-680 config parameter hbase.io.index.interval should be
+ hbase.index.interval, according to HBaseMapFile.HbaseWriter
+ (LN via Stack)
+ HBASE-684 unnecessary iteration in HMemcache.internalGet? got much better
+ reading performance after breaking it (LN via Stack)
+ HBASE-686 MemcacheScanner didn't return the first row (if it exists),
+ because HScannerInterface's output was incorrect (LN via Jim Kellerman)
+ HBASE-613 Timestamp-anchored scanning fails to find all records
+ HBASE-709 Deadlock while rolling WAL-log while finishing flush
+ HBASE-707 High-load import of data into single table/family never triggers split
+ HBASE-710 If clocks are way off, then we can have daughter split come
+ before rather than after its parent in .META.
+
+Release 0.1.2 - 05/13/2008
+
+ BUG FIXES
+ HBASE-577 NPE getting scanner
+ HBASE-574 HBase does not load hadoop native libs (Rong-En Fan via Stack).
+ HBASE-11 Unexpected exits corrupt DFS - best we can do until we have at
+ least a subset of HADOOP-1700
+ HBASE-573 HBase does not read hadoop-*.xml for dfs configuration after
+ moving out hadoop/contrib
+ HBASE-12 when hbase regionserver restarts, it says "impossible state for
+ createLease()"
+ HBASE-575 master dies with stack overflow error if rootdir isn't qualified
+ HBASE-500 Regionserver stuck on exit
+ HBASE-582 HBase 554 forgot to clear results on each iteration caused by a filter
+ (Clint Morgan via Stack)
+ HBASE-532 Odd interaction between HRegion.get, HRegion.deleteAll and compactions
+ HBASE-590 HBase migration tool does not get correct FileSystem or root
+ directory if configuration is not correct
+ HBASE-595 RowFilterInterface.rowProcessed() is called *before* the final
+ filtering decision is made (Clint Morgan via Stack)
+ HBASE-586 HRegion runs HStore memcache snapshotting -- fix it so only HStore
+ knows about workings of memcache
+ HBASE-572 Backport HBASE-512 to 0.1 branch
+ HBASE-588 Still a 'hole' in scanners, even after HBASE-532
+ HBASE-604 Don't allow CLASSPATH from environment to pollute the hbase CLASSPATH
+ HBASE-608 HRegionServer::getThisIP() checks hadoop config var for dns interface name
+ (Jim R. Wilson via Stack)
+ HBASE-609 Master doesn't see regionserver edits because of clock skew
+ HBASE-607 MultiRegionTable.makeMultiRegionTable is not deterministic enough
+ for regression tests
+ HBASE-478 offlining of table does not run reliably
+ HBASE-618 We always compact if 2 files, regardless of the compaction threshold setting
+ HBASE-619 Fix 'logs' link in UI
+ HBASE-620 testmergetool failing in branch and trunk since hbase-618 went in
+
+ IMPROVEMENTS
+ HBASE-559 MR example job to count table rows
+ HBASE-578 Upgrade branch to 0.16.3 hadoop.
+ HBASE-596 DemoClient.py (Ivan Begtin via Stack)
+
+
+Release 0.1.1 - 04/11/2008
+
+ BUG FIXES
+ HBASE-550 EOF trying to read reconstruction log stops region deployment
+ HBASE-551 Master stuck splitting server logs in shutdown loop; on each
+ iteration, edits are aggregated up into the millions
+ HBASE-505 Region assignments should never time out so long as the region
+ server reports that it is processing the open request
+ HBASE-552 Fix bloom filter bugs (Andrzej Bialecki via Jim Kellerman)
+ HBASE-507 Add sleep between retries
+ HBASE-555 Only one Worker in HRS; on startup, if assigned tens of regions,
+ havoc of reassignments because open processing is done in series
+ HBASE-547 UI shows hadoop version, not hbase version
+ HBASE-561 HBase package does not include LICENSE.txt nor build.xml
+ HBASE-556 Add 0.16.2 to hbase branch -- if it works
+ HBASE-563 TestRowFilterAfterWrite erroneously sets master address to
+ 0.0.0.0:60100 rather than relying on conf
+ HBASE-554 filters generate StackOverflowException (Clint Morgan via
+ Jim Kellerman)
+ HBASE-567 Reused BatchUpdate instances accumulate BatchOperations
+
+ NEW FEATURES
+ HBASE-548 Tool to online single region
+
+Release 0.1.0
+
+ INCOMPATIBLE CHANGES
+ HADOOP-2750 Deprecated methods startBatchUpdate, commitBatch, abortBatch,
+ and renewLease have been removed from HTable (Bryan Duxbury via
+ Jim Kellerman)
+ HADOOP-2786 Move hbase out of hadoop core
+ HBASE-403 Fix build after move of hbase in svn
+ HBASE-494 Up IPC version on 0.1 branch so we cannot mistakenly connect
+ with a hbase from 0.16.0
+
+ NEW FEATURES
+ HBASE-506 When an exception has to escape ServerCallable due to exhausted retries,
+ show all the exceptions that lead to this situation
+
+ OPTIMIZATIONS
+
+ BUG FIXES
+ HADOOP-2731 Under load, regions become extremely large and eventually cause
+ region servers to become unresponsive
+ HADOOP-2693 NPE in getClosestRowBefore (Bryan Duxbury & Stack)
+ HADOOP-2599 Some minor improvements to changes in HADOOP-2443
+ (Bryan Duxbury & Stack)
+ HADOOP-2773 Master marks region offline when it is recovering from a region
+ server death
+ HBASE-425 Fix doc. so it accommodates new hbase untethered context
+ HBASE-421 TestRegionServerExit broken
+ HBASE-426 hbase can't find remote filesystem
+ HBASE-446 Fully qualified hbase.rootdir doesn't work
+ HBASE-428 Under continuous upload of rows, WrongRegionExceptions are
+ thrown that reach the client even after retries
+ HBASE-490 Doubly-assigned .META.; master uses one and clients another
+ HBASE-496 impossible state for createLease writes 400k lines in about 15mins
+ HBASE-472 Passing on edits, we dump all to log
+ HBASE-79 When HBase needs to be migrated, it should display a message on
+ stdout, not just in the logs
+ HBASE-495 No server address listed in .META.
+ HBASE-433 HBASE-251 Region server should delete restore log after successful
+ restore, Stuck replaying the edits of crashed machine.
+ HBASE-27 hregioninfo cell empty in meta table
+ HBASE-501 Empty region server address in info:server entry and a
+ startcode of -1 in .META.
+ HBASE-516 HStoreFile.finalKey does not update the final key if it is not
+ the top region of a split region
+ HBASE-524 Problems with getFull
+ HBASE-514 table 'does not exist' when it does
+ HBASE-537 Wait for hdfs to exit safe mode
+ HBASE-534 Double-assignment at SPLIT-time
+
+ IMPROVEMENTS
+ HADOOP-2555 Refactor the HTable#get and HTable#getRow methods to avoid
+ repetition of retry-on-failure logic (thanks to Peter Dolan and
+ Bryan Duxbury)
+ HBASE-281 Shell should allow deletions in .META. and -ROOT- tables
+ HBASE-480 Tool to manually merge two regions
+ HBASE-477 Add support for an HBASE_CLASSPATH
+ HBASE-515 At least double default timeouts between regionserver and master
+ HBASE-482 package-level javadoc should have example client or at least
+ point at the FAQ
+ HBASE-497 RegionServer needs to recover if datanode goes down
+ HBASE-456 Clearly state which ports need to be opened in order to run HBase
+ HBASE-483 Merge tool won't merge two overlapping regions
+ HBASE-476 RegexpRowFilter behaves incorrectly when there are multiple store
+ files (Clint Morgan via Jim Kellerman)
+ HBASE-527 RegexpRowFilter does not work when there are columns from
+ multiple families (Clint Morgan via Jim Kellerman)
+
+Release 0.16.0
+
+ 2008/02/04 HBase is now a subproject of Hadoop. The first HBase release as
+ a subproject will be release 0.1.0 which will be equivalent to
+ the version of HBase included in Hadoop 0.16.0. In order to
+ accomplish this, the HBase portion of HBASE-288 (formerly
+ HADOOP-1398) has been backed out. Once 0.1.0 is frozen (depending
+ mostly on changes to infrastructure due to becoming a sub project
+ instead of a contrib project), this patch will re-appear on HBase
+ trunk.
+
+ INCOMPATIBLE CHANGES
+ HADOOP-2056 A table with row keys containing colon fails to split regions
+ HADOOP-2079 Fix generated HLog, HRegion names
+ HADOOP-2495 Minor performance improvements: Slim-down BatchOperation, etc.
+ HADOOP-2506 Remove the algebra package
+ HADOOP-2519 Performance improvements: Customized RPC serialization
+ HADOOP-2478 Restructure how HBase lays out files in the file system (phase 1)
+ (test input data)
+ HADOOP-2478 Restructure how HBase lays out files in the file system (phase 2)
+ Includes migration tool org.apache.hadoop.hbase.util.Migrate
+ HADOOP-2558 org.onelab.filter.BloomFilter class uses 8X the memory it should
+ be using
+
+ NEW FEATURES
+ HADOOP-2061 Add new Base64 dialects
+ HADOOP-2084 Add a LocalHBaseCluster
+ HADOOP-2068 RESTful interface (Bryan Duxbury via Stack)
+ HADOOP-2316 Run REST servlet outside of master
+ (Bryan Duxbury & Stack)
+ HADOOP-1550 No means of deleting a 'row' (Bryan Duxbury via Stack)
+ HADOOP-2384 Delete all members of a column family on a specific row
+ (Bryan Duxbury via Stack)
+ HADOOP-2395 Implement "ALTER TABLE ... CHANGE column" operation
+ (Bryan Duxbury via Stack)
+ HADOOP-2240 Truncate for hbase (Edward Yoon via Stack)
+ HADOOP-2389 Provide multiple language bindings for HBase (Thrift)
+ (David Simpson via Stack)
+
+ OPTIMIZATIONS
+ HADOOP-2479 Save on number of Text object creations
+ HADOOP-2485 Make mapfile index interval configurable (Set default to 32
+ instead of 128)
+ HADOOP-2553 Don't make Long objects calculating hbase type hash codes
+ HADOOP-2377 Holding open MapFile.Readers is expensive, so use less of them
+ HADOOP-2407 Keeping MapFile.Reader open is expensive: Part 2
+ HADOOP-2533 Performance: Scanning, just creating MapWritable in next
+ consumes >20% CPU
+ HADOOP-2443 Keep lazy cache of regions in client rather than an
+ 'authoritative' list (Bryan Duxbury via Stack)
+ HADOOP-2600 Performance: HStore.getRowKeyAtOrBefore should use
+ MapFile.Reader#getClosest (before)
+ (Bryan Duxbury via Stack)
+
+ BUG FIXES
+ HADOOP-2059 In tests, exceptions in min dfs shutdown should not fail test
+ (e.g. nightly #272)
+ HADOOP-2064 TestSplit assertion and NPE failures (Patch build #952 and #953)
+ HADOOP-2124 Use of `hostname` does not work on Cygwin in some cases
+ HADOOP-2083 TestTableIndex failed in #970 and #956
+ HADOOP-2109 Fixed race condition in processing server lease timeout.
+ HADOOP-2137 hql.jsp : The character 0x19 is not valid
+ HADOOP-2109 Fix another race condition in processing dead servers,
+ Fix error onlining meta regions: was using region name and not
+ startKey as key for map.put. Change TestRegionServerExit to
+ always kill the region server for the META region. This makes
+ the test more deterministic and getting META reassigned was
+ problematic.
+ HADOOP-2155 Method expecting HBaseConfiguration throws NPE when given Configuration
+ HADOOP-2156 BufferUnderflowException for un-named HTableDescriptors
+ HADOOP-2161 getRow() is orders of magnitudes slower than get(), even on rows
+ with one column (Clint Morgan and Stack)
+ HADOOP-2040 Hudson hangs AFTER test has finished
+ HADOOP-2274 Excess synchronization introduced by HADOOP-2139 negatively
+ impacts performance
+ HADOOP-2196 Fix how hbase sits in hadoop 'package' product
+ HADOOP-2276 Address regression caused by HADOOP-2274, fix HADOOP-2173 (When
+ the master times out a region servers lease, the region server
+ may not restart)
+ HADOOP-2253 getRow can return HBASE::DELETEVAL cells
+ (Bryan Duxbury via Stack)
+ HADOOP-2295 Fix assigning a region to multiple servers
+ HADOOP-2234 TableInputFormat erroneously aggregates map values
+ HADOOP-2308 null regioninfo breaks meta scanner
+ HADOOP-2304 Abbreviated symbol parsing error of dir path in jar command
+ (Edward Yoon via Stack)
+ HADOOP-2320 Committed TestGet2 is mangled (breaks build).
+ HADOOP-2322 getRow(row, TS) client interface not properly connected
+ HADOOP-2309 ConcurrentModificationException doing get of all region start keys
+ HADOOP-2321 TestScanner2 does not release resources which sometimes cause the
+ test to time out
+ HADOOP-2315 REST servlet doesn't treat / characters in row key correctly
+ (Bryan Duxbury via Stack)
+ HADOOP-2332 Meta table data selection in Hbase Shell
+ (Edward Yoon via Stack)
+ HADOOP-2347 REST servlet not thread safe but run in a threaded manner
+ (Bryan Duxbury via Stack)
+ HADOOP-2365 Result of HashFunction.hash() contains all identical values
+ HADOOP-2362 Leaking hdfs file handle on region split
+ HADOOP-2338 Fix NullPointerException in master server.
+ HADOOP-2380 REST servlet throws NPE when any value node has an empty string
+ (Bryan Duxbury via Stack)
+ HADOOP-2350 Scanner api returns null row names, or skips row names if
+ different column families do not have entries for some rows
+ HADOOP-2283 AlreadyBeingCreatedException (Was: Stuck replay of failed
+ regionserver edits)
+ HADOOP-2392 TestRegionServerExit has new failure mode since HADOOP-2338
+ HADOOP-2324 Fix assertion failures in TestTableMapReduce
+ HADOOP-2396 NPE in HMaster.cancelLease
+ HADOOP-2397 The only time that a meta scanner should try to recover a log is
+ when the master is starting
+ HADOOP-2417 Fix critical shutdown problem introduced by HADOOP-2338
+ HADOOP-2418 Fix assertion failures in TestTableMapReduce, TestTableIndex,
+ and TestTableJoinMapReduce
+ HADOOP-2414 Fix ArrayIndexOutOfBoundsException in bloom filters.
+ HADOOP-2430 Master will not shut down if there are no active region servers
+ HADOOP-2199 Add tools for going from hregion filename to region name in logs
+ HADOOP-2441 Fix build failures in TestHBaseCluster
+ HADOOP-2451 End key is incorrectly assigned in many region splits
+ HADOOP-2455 Error in Help-string of CREATE command (Edward Yoon via Stack)
+ HADOOP-2465 When split parent regions are cleaned up, not all the columns are
+ deleted
+ HADOOP-2468 TestRegionServerExit failed in Hadoop-Nightly #338
+ HADOOP-2467 scanner truncates resultset when > 1 column families
+ HADOOP-2503 REST Insert / Select encoding issue (Bryan Duxbury via Stack)
+ HADOOP-2505 formatter classes missing apache license
+ HADOOP-2504 REST servlet method for deleting a scanner was not properly
+ mapped (Bryan Duxbury via Stack)
+ HADOOP-2507 REST servlet does not properly base64 row keys and column names
+ (Bryan Duxbury via Stack)
+ HADOOP-2530 Missing type in new hbase custom RPC serializer
+ HADOOP-2490 Failure in nightly #346 (Added debugging of hudson failures).
+ HADOOP-2558 fixes for build up on hudson (part 1, part 2, part 3, part 4)
+ HADOOP-2500 Unreadable region kills region servers
+ HADOOP-2579 Initializing a new HTable object against a nonexistent table
+ throws a NoServerForRegionException instead of a
+ TableNotFoundException when a different table has been created
+ previously (Bryan Duxbury via Stack)
+ HADOOP-2587 Splits blocked by compactions cause region to be offline for
+ duration of compaction.
+ HADOOP-2592 Scanning, a region can let out a row that it's not supposed
+ to have
+ HADOOP-2493 hbase will split on row when the start and end row is the
+ same, causing data loss (Bryan Duxbury via Stack)
+ HADOOP-2629 Shell digests garbage without complaint
+ HADOOP-2619 Compaction errors after a region splits
+ HADOOP-2621 Memcache flush flushing every 60 secs without considering
+ the max memcache size
+ HADOOP-2584 Web UI displays an IOException instead of the Tables
+ HADOOP-2650 Remove Writables.clone and use WritableUtils.clone from
+ hadoop instead
+ HADOOP-2668 Documentation and improved logging so fact that hbase now
+ requires migration comes as less of a surprise
+ HADOOP-2686 Removed tables stick around in .META.
+ HADOOP-2688 IllegalArgumentException processing a shutdown stops
+ server going down and results in millions of lines of output
+ HADOOP-2706 HBase Shell crash
+ HADOOP-2712 under load, regions won't split
+ HADOOP-2675 Options not passed to rest/thrift
+ HADOOP-2722 Prevent unintentional thread exit in region server and master
+ HADOOP-2718 Copy Constructor HBaseConfiguration(Configuration) will override
+ hbase configurations if argument is not an instance of
+ HBaseConfiguration.
+ HADOOP-2753 Back out 2718; programmatic config works but hbase*xml conf
+ is overridden
+ HADOOP-2718 Copy Constructor HBaseConfiguration(Configuration) will override
+ hbase configurations if argument is not an instance of
+ HBaseConfiguration (Put it back again).
+ HADOOP-2631 2443 breaks HTable.getStartKeys when there is more than one
+ table or the table you are enumerating isn't the first table
+ Delete empty file: src/contrib/hbase/src/java/org/apache/hadoop/hbase/mapred/
+ TableOutputCollector.java per Nigel Daley
+
+ IMPROVEMENTS
+ HADOOP-2401 Add convenience put method that takes writable
+ (Johan Oskarsson via Stack)
+ HADOOP-2074 Simple switch to enable DEBUG level-logging in hbase
+ HADOOP-2088 Make hbase runnable in $HADOOP_HOME/build(/contrib/hbase)
+ HADOOP-2126 Use Bob Jenkins' hash for bloom filters
+ HADOOP-2157 Make Scanners implement Iterable
+ HADOOP-2176 Htable.deleteAll documentation is ambiguous
+ HADOOP-2139 (phase 1) Increase parallelism in region servers.
+ HADOOP-2267 [Hbase Shell] Change the prompt's title from 'hbase' to 'hql'.
+ (Edward Yoon via Stack)
+ HADOOP-2139 (phase 2) Make region server more event driven
+ HADOOP-2289 Useless efforts of looking for the non-existent table in select
+ command.
+ (Edward Yoon via Stack)
+ HADOOP-2257 Show a total of all requests and regions on the web ui
+ (Paul Saab via Stack)
+ HADOOP-2261 HTable.abort no longer throws exception if there is no active update.
+ HADOOP-2287 Make hbase unit tests take less time to complete.
+ HADOOP-2262 Retry n times instead of n**2 times.
+ HADOOP-1608 Relational Algebra Operators
+ (Edward Yoon via Stack)
+ HADOOP-2198 HTable should have method to return table metadata
+ HADOOP-2296 hbase shell: phantom columns show up from select command
+ HADOOP-2297 System.exit() Handling in hbase shell jar command
+ (Edward Yoon via Stack)
+ HADOOP-2224 Add HTable.getRow(ROW, ts)
+ (Bryan Duxbury via Stack)
+ HADOOP-2339 Delete command with no WHERE clause
+ (Edward Yoon via Stack)
+ HADOOP-2299 Support inclusive scans (Bryan Duxbury via Stack)
+ HADOOP-2333 Client side retries happen at the wrong level
+ HADOOP-2357 Compaction cleanup; less deleting + prevent possible file leaks
+ HADOOP-2392 TestRegionServerExit has new failure mode since HADOOP-2338
+ HADOOP-2370 Allow column families with an unlimited number of versions
+ (Edward Yoon via Stack)
+ HADOOP-2047 Add an '--master=X' and '--html' command-line parameters to shell
+ (Edward Yoon via Stack)
+ HADOOP-2351 If select command returns no result, it doesn't need to show the
+ header information (Edward Yoon via Stack)
+ HADOOP-2285 Add being able to shutdown regionservers (Dennis Kubes via Stack)
+ HADOOP-2458 HStoreFile.writeSplitInfo should just call
+ HStoreFile.Reference.write
+ HADOOP-2471 Add reading/writing MapFile to PerformanceEvaluation suite
+ HADOOP-2522 Separate MapFile benchmark from PerformanceEvaluation
+ (Tom White via Stack)
+ HADOOP-2502 Insert/Select timestamp, Timestamp data type in HQL
+ (Edward Yoon via Stack)
+ HADOOP-2450 Show version (and svn revision) in hbase web ui
+ HADOOP-2472 Range selection using filter (Edward Yoon via Stack)
+ HADOOP-2548 Make TableMap and TableReduce generic
+ (Frederik Hedberg via Stack)
+ HADOOP-2557 Shell count function (Edward Yoon via Stack)
+ HADOOP-2589 Change class/package name from Shell to hql
+ (Edward Yoon via Stack)
+ HADOOP-2545 hbase rest server should be started with hbase-daemon.sh
+ HADOOP-2525 Same 2 lines repeated 11 million times in HMaster log upon
+ HMaster shutdown
+ HADOOP-2616 hbase not splitting when the total size of region reaches max
+ region size * 1.5
+ HADOOP-2643 Make migration tool smarter.
+
+Release 0.15.1
+Branch 0.15
+
+ INCOMPATIBLE CHANGES
+ HADOOP-1931 Hbase scripts take --ARG=ARG_VALUE when they should be like hadoop
+ and do --ARG ARG_VALUE
+
+ NEW FEATURES
+ HADOOP-1768 FS command using Hadoop FsShell operations
+ (Edward Yoon via Stack)
+ HADOOP-1784 Delete: Fix scanners and gets so they work properly in presence
+ of deletes. Added a deleteAll to remove all cells equal to or
+ older than passed timestamp. Fixed compaction so deleted cells
+ do not make it out into compacted output. Ensure also that
+ versions > column max are dropped compacting.
+ HADOOP-1720 Addition of HQL (Hbase Query Language) support in Hbase Shell.
+ The old shell syntax has been replaced by HQL, a small SQL-like
+ set of operators for creating, altering, and dropping tables, and
+ for inserting, deleting, and selecting data in hbase.
+ (Inchul Song and Edward Yoon via Stack)
+ HADOOP-1913 Build a Lucene index on an HBase table
+ (Ning Li via Stack)
+ HADOOP-1957 Web UI with report on cluster state and basic browsing of tables
+
+ OPTIMIZATIONS
+
+ BUG FIXES
+ HADOOP-1527 Region server won't start because logdir exists
+ HADOOP-1723 If master asks region server to shut down, by-pass return of
+ shutdown message
+ HADOOP-1729 Recent renaming of META tables breaks hbase shell
+ HADOOP-1730 unexpected null value causes META scanner to exit (silently)
+ HADOOP-1747 On a cluster, on restart, regions multiply assigned
+ HADOOP-1776 Fix for sporadic compaction failures closing and moving
+ compaction result
+ HADOOP-1780 Regions are still being doubly assigned
+ HADOOP-1797 Fix NPEs in MetaScanner constructor
+ HADOOP-1799 Incorrect classpath in binary version of Hadoop
+ HADOOP-1805 Region server hang on exit
+ HADOOP-1785 TableInputFormat.TableRecordReader.next has a bug
+ (Ning Li via Stack)
+ HADOOP-1800 output should default to utf8 encoding
+ HADOOP-1801 When hdfs is yanked out from under hbase, hbase should go down gracefully
+ HADOOP-1813 OOME makes zombie of region server
+ HADOOP-1814 TestCleanRegionServerExit fails too often on Hudson
+ HADOOP-1820 Regionserver creates hlogs without bound
+ (reverted 2007/09/25) (Fixed 2007/09/30)
+ HADOOP-1821 Replace all String.getBytes() with String.getBytes("UTF-8")
+ HADOOP-1832 listTables() returns duplicate tables
+ HADOOP-1834 Scanners ignore timestamp passed on creation
+ HADOOP-1847 Many HBase tests do not fail well.
+ HADOOP-1847 Many HBase tests do not fail well. (phase 2)
+ HADOOP-1870 Once file system failure has been detected, don't check it again
+ and get on with shutting down the hbase cluster.
+ HADOOP-1888 NullPointerException in HMemcacheScanner (reprise)
+ HADOOP-1903 Possible data loss if Exception happens between snapshot and
+ flush to disk.
+ HADOOP-1920 Wrapper scripts broken when hadoop in one location and hbase in
+ another
+ HADOOP-1923, HADOOP-1924 a) tests fail sporadically because set up and tear
+ down is inconsistent b) TestDFSAbort failed in nightly #242
+ HADOOP-1929 Add hbase-default.xml to hbase jar
+ HADOOP-1941 StopRowFilter throws NPE when passed null row
+ HADOOP-1966 Make HBase unit tests more reliable in the Hudson environment.
+ HADOOP-1975 HBase tests failing with java.lang.NumberFormatException
+ HADOOP-1990 Regression test instability affects nightly and patch builds
+ HADOOP-1996 TestHStoreFile fails on windows if run multiple times
+ HADOOP-1937 When the master times out a region server's lease, it is too
+ aggressive in reclaiming the server's log.
+ HADOOP-2004 webapp hql formatting bugs
+ HADOOP-2011 Make hbase daemon scripts take args in same order as hadoop
+ daemon scripts
+ HADOOP-2017 TestRegionServerAbort failure in patch build #903 and
+ nightly #266
+ HADOOP-2029 TestLogRolling fails too often in patch and nightlies
+ HADOOP-2038 TestCleanRegionExit failed in patch build #927
+
+ IMPROVEMENTS
+ HADOOP-1737 Make HColumnDescriptor data members publicly settable
+ HADOOP-1746 Clean up findbugs warnings
+ HADOOP-1757 Bloomfilters: single argument constructor, use enum for bloom
+ filter types
+ HADOOP-1760 Use new MapWritable and SortedMapWritable classes from
+ org.apache.hadoop.io
+ HADOOP-1793 (Phase 1) Remove TestHClient (Phase2) remove HClient.
+ HADOOP-1794 Remove deprecated APIs
+ HADOOP-1802 Startup scripts should wait until hdfs has cleared 'safe mode'
+ HADOOP-1833 bin/stop_hbase.sh returns before it completes
+ (Izaak Rubin via Stack)
+ HADOOP-1835 Updated Documentation for HBase setup/installation
+ (Izaak Rubin via Stack)
+ HADOOP-1868 Make default configuration more responsive
+ HADOOP-1884 Remove useless debugging log messages from hbase.mapred
+ HADOOP-1856 Add Jar command to hbase shell using Hadoop RunJar util
+ (Edward Yoon via Stack)
+ HADOOP-1928 Have master pass the regionserver the filesystem to use
+ HADOOP-1789 Output formatting
+ HADOOP-1960 If a region server cannot talk to the master before its lease
+ times out, it should shut itself down
+ HADOOP-2035 Add logo to webapps
+
+
+Below is the list of changes before 2007-08-18
+
+ 1. HADOOP-1384. HBase omnibus patch. (jimk, Vuk Ercegovac, and Michael Stack)
+ 2. HADOOP-1402. Fix javadoc warnings in hbase contrib. (Michael Stack)
+ 3. HADOOP-1404. HBase command-line shutdown failing (Michael Stack)
+ 4. HADOOP-1397. Replace custom hbase locking with
+ java.util.concurrent.locks.ReentrantLock (Michael Stack)
+ 5. HADOOP-1403. HBase reliability - make master and region server more fault
+ tolerant.
+ 6. HADOOP-1418. HBase miscellaneous: unit test for HClient, client to do
+ 'Performance Evaluation', etc.
+ 7. HADOOP-1420, HADOOP-1423. Findbugs changes, remove reference to removed
+ class HLocking.
+ 8. HADOOP-1424. TestHBaseCluster fails with IllegalMonitorStateException. Fix
+ regression introduced by HADOOP-1397.
+ 9. HADOOP-1426. Make hbase scripts executable + add test classes to CLASSPATH.
+ 10. HADOOP-1430. HBase shutdown leaves regionservers up.
+ 11. HADOOP-1392. Part1: includes create/delete table; enable/disable table;
+ add/remove column.
+ 12. HADOOP-1392. Part2: includes table compaction by merging adjacent regions
+ that have shrunk in size.
+ 13. HADOOP-1445 Support updates across region splits and compactions
+ 14. HADOOP-1460 On shutdown IOException with complaint 'Cannot cancel lease
+ that is not held'
+ 15. HADOOP-1421 Failover detection, split log files.
+ For the files modified, also clean up javadoc, class, field and method
+ visibility (HADOOP-1466)
+ 16. HADOOP-1479 Fix NPE in HStore#get if store file only has keys < passed key.
+ 17. HADOOP-1476 Distributed version of 'Performance Evaluation' script
+ 18. HADOOP-1469 Asynchronous table creation
+ 19. HADOOP-1415 Integrate BSD licensed bloom filter implementation.
+ 20. HADOOP-1465 Add cluster stop/start scripts for hbase
+ 21. HADOOP-1415 Provide configurable per-column bloom filters - part 2.
+ 22. HADOOP-1498. Replace boxed types with primitives in many places.
+ 23. HADOOP-1509. Made methods/inner classes in HRegionServer and HClient protected
+ instead of private for easier extension. Also made HRegion and HRegionInfo public too.
+ Added an hbase-default.xml property for specifying what HRegionInterface extension to use
+ for proxy server connection. (James Kennedy via Jim Kellerman)
+ 24. HADOOP-1534. [hbase] Memcache scanner fails if start key not present
+ 25. HADOOP-1537. Catch exceptions in testCleanRegionServerExit so we can see
+ what is failing.
+ 26. HADOOP-1543 [hbase] Add HClient.tableExists
+ 27. HADOOP-1519 [hbase] map/reduce interface for HBase. (Vuk Ercegovac and
+ Jim Kellerman)
+ 28. HADOOP-1523 Hung region server waiting on write locks
+ 29. HADOOP-1560 NPE in MiniHBaseCluster on Windows
+ 30. HADOOP-1531 Add RowFilter to HRegion.HScanner
+ Adds a row filtering interface and two implementations: A page scanner,
+ and a regex row/column-data matcher. (James Kennedy via Stack)
+ 31. HADOOP-1566 Key-making utility
+ 32. HADOOP-1415 Provide configurable per-column bloom filters.
+ HADOOP-1466 Clean up visibility and javadoc issues in HBase.
+ 33. HADOOP-1538 Provide capability for client specified time stamps in HBase
+ HADOOP-1466 Clean up visibility and javadoc issues in HBase.
+ 34. HADOOP-1589 Exception handling in HBase is broken over client server connections
+ 35. HADOOP-1375 a simple parser for hbase (Edward Yoon via Stack)
+ 36. HADOOP-1600 Update license in HBase code
+ 37. HADOOP-1589 Exception handling in HBase is broken over client server
+ 38. HADOOP-1574 Concurrent creates of a table named 'X' all succeed
+ 39. HADOOP-1581 Un-openable tablename bug
+ 40. HADOOP-1607 [shell] Clear screen command (Edward Yoon via Stack)
+ 41. HADOOP-1614 [hbase] HClient does not protect itself from simultaneous updates
+ 42. HADOOP-1468 Add HBase batch update to reduce RPC overhead
+ 43. HADOOP-1616 Sporadic TestTable failures
+ 44. HADOOP-1615 Replacing thread notification-based queue with
+ java.util.concurrent.BlockingQueue in HMaster, HRegionServer
+ 45. HADOOP-1606 Updated implementation of RowFilterSet, RowFilterInterface
+ (Izaak Rubin via Stack)
+ 46. HADOOP-1579 Add new WhileMatchRowFilter and StopRowFilter filters
+ (Izaak Rubin via Stack)
+ 47. HADOOP-1637 Fix to HScanner to Support Filters, Add Filter Tests to
+ TestScanner2 (Izaak Rubin via Stack)
+ 48. HADOOP-1516 HClient fails to readjust when ROOT or META redeployed on new
+ region server
+ 49. HADOOP-1646 RegionServer OOME's under sustained, substantial loading by
+ 10 concurrent clients
+ 50. HADOOP-1468 Add HBase batch update to reduce RPC overhead (restrict batches
+ to a single row at a time)
+ 51. HADOOP-1528 HClient for multiple tables (phase 1) (James Kennedy & JimK)
+ 52. HADOOP-1528 HClient for multiple tables (phase 2) all HBase client side code
+ (except TestHClient and HBaseShell) have been converted to use the new client
+ side objects (HTable/HBaseAdmin/HConnection) instead of HClient.
+ 53. HADOOP-1528 HClient for multiple tables - expose close table function
+ 54. HADOOP-1466 Clean up warnings, visibility and javadoc issues in HBase.
+ 55. HADOOP-1662 Make region splits faster
+ 56. HADOOP-1678 On region split, master should designate which host should
+ serve daughter splits. Phase 1: Master balances load for new regions and
+ when a region server fails.
+ 57. HADOOP-1678 On region split, master should designate which host should
+ serve daughter splits. Phase 2: Master assigns children of split region
+ instead of HRegionServer serving both children.
+ 58. HADOOP-1710 All updates should be batch updates
+ 59. HADOOP-1711 HTable API should use interfaces instead of concrete classes as
+ method parameters and return values
+ 60. HADOOP-1644 Compactions should not block updates
+ 61. HADOOP-1672 HBase Shell should use new client classes
+ (Edward Yoon via Stack).
+ 62. HADOOP-1709 Make HRegionInterface more like that of HTable
+ HADOOP-1725 Client find of table regions should not include offlined, split parents
diff --git a/LICENSE.txt b/LICENSE.txt
new file mode 100644
index 0000000..d645695
--- /dev/null
+++ b/LICENSE.txt
@@ -0,0 +1,202 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/NOTICE.txt b/NOTICE.txt
new file mode 100644
index 0000000..4fb7d74
--- /dev/null
+++ b/NOTICE.txt
@@ -0,0 +1,44 @@
+This product includes software developed by The Apache Software
+Foundation (http://www.apache.org/).
+
+In addition, this product includes software developed by:
+
+
+European Commission project OneLab (http://www.one-lab.org)
+
+
+Facebook, Inc. (http://developers.facebook.com/thrift/ -- Page includes the Thrift Software License)
+
+
+JUnit (http://www.junit.org/)
+
+
+Michael Gottesman developed AgileJSON. Its source code is here:
+
+ http://github.com/gottesmm/agile-json-2.0/tree/master
+
+It has this license at the head of each source file:
+
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal
+ * in the Software without restriction, including without limitation the
+ * rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all
+ * copies or substantial portions of the Software.
+ *
+ * The Software shall be used for Good, not Evil.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
diff --git a/README.txt b/README.txt
new file mode 100644
index 0000000..c968312
--- /dev/null
+++ b/README.txt
@@ -0,0 +1 @@
+See the docs directory or http://hbase.org
diff --git a/bin/Formatter.rb b/bin/Formatter.rb
new file mode 100644
index 0000000..a793290
--- /dev/null
+++ b/bin/Formatter.rb
@@ -0,0 +1,137 @@
+# Results formatter
+module Formatter
+ # Base abstract class for results formatting.
+ class Formatter
+ # Takes an output stream and a print width.
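+    # A minimal usage sketch (illustrative only; it mirrors the self-test at
+    # the bottom of this file):
+    #
+    #   formatter = Formatter::Console.new(STDOUT)
+    #   formatter.header(['KEY', 'VALUE'])
+    #   formatter.row(['k1', 'v1'])
+    #   formatter.footer(Time.now)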
+ def initialize(o, w = 100)
+ raise TypeError.new("Type %s of parameter %s is not IO" % [o.class, o]) \
+ unless o.instance_of? IO
+ @out = o
+ @maxWidth = w
+ @rowCount = 0
+ end
+
+ attr_reader :rowCount
+
+ def header(args = [], widths = [])
+ row(args, false, widths) if args.length > 0
+ @rowCount = 0
+ end
+
+ # Output a row.
+ # Inset is whether or not to offset row by a space.
+ def row(args = [], inset = true, widths = [])
+ if not args or args.length == 0
+ # Print out nothing
+ return
+ end
+ if args.class == String
+ output(@maxWidth, args)
+ puts
+ return
+ end
+ # TODO: Look at the type. Is it RowResult?
+ if args.length == 1
+ splits = split(@maxWidth, dump(args[0]))
+ for l in splits
+ output(@maxWidth, l)
+ puts
+ end
+ elsif args.length == 2
+ col1width = (not widths or widths.length == 0) ? @maxWidth / 4 : @maxWidth * widths[0] / 100
+ col2width = (not widths or widths.length < 2) ? @maxWidth - col1width - 2 : @maxWidth * widths[1] / 100 - 2
+ splits1 = split(col1width, dump(args[0]))
+ splits2 = split(col2width, dump(args[1]))
+ biggest = (splits2.length > splits1.length)? splits2.length: splits1.length
+ index = 0
+ while index < biggest
+ if inset
+ # Inset by one space if inset is set.
+ @out.print(" ")
+ end
+ output(col1width, splits1[index])
+ if not inset
+ # Add extra space so second column lines up w/ second column output
+ @out.print(" ")
+ end
+ @out.print(" ")
+ output(col2width, splits2[index])
+ index += 1
+ puts
+ end
+ else
+ # Print a space to set off multi-column rows
+        @out.print ' '
+ first = true
+ for e in args
+ @out.print " " unless first
+ first = false
+ @out.print e
+ end
+ puts
+ end
+ @rowCount += 1
+ end
+
+ def split(width, str)
+ result = []
+ index = 0
+ while index < str.length do
+ result << str.slice(index, width)
+ index += width
+ end
+ result
+ end
+
+ def dump(str)
+      # Fixnums have no 'dump' method; return their String form rather than
+      # nil so callers can still split and print the value.
+      if str.instance_of? Fixnum
+        return str.to_s
+      end
+ # Remove double-quotes added by 'dump'.
+ return str.dump[1..-2]
+ end
+
+ def output(width, str)
+ # Make up a spec for printf
+ spec = "%%-%ds" % width
+ @out.printf(spec, str)
+ end
+
+ def footer(startTime = nil, rowCount = nil)
+ if not rowCount
+ rowCount = @rowCount
+ end
+ if not startTime
+ return
+ end
+ # Only output elapsed time and row count if startTime passed
+ @out.puts("%d row(s) in %.4f seconds" % [rowCount, Time.now - startTime])
+ end
+ end
+
+
+ class Console < Formatter
+ end
+
+ class XHTMLFormatter < Formatter
+ # http://www.germane-software.com/software/rexml/doc/classes/REXML/Document.html
+ # http://www.crummy.com/writing/RubyCookbook/test_results/75942.html
+ end
+
+ class JSON < Formatter
+ end
+
+ # Do a bit of testing.
+ if $0 == __FILE__
+ formatter = Console.new(STDOUT)
+ now = Time.now
+ formatter.header(['a', 'b'])
+ formatter.row(['a', 'b'])
+ formatter.row(['xxxxxxxxx xxxxxxxxxxx xxxxxxxxxxx xxxxxxxxxxxx xxxxxxxxx xxxxxxxxxxxx xxxxxxxxxxxxxxx xxxxxxxxx xxxxxxxxxxxxxx'])
+ formatter.row(['yyyyyy yyyyyy yyyyy yyy', 'xxxxxxxxx xxxxxxxxxxx xxxxxxxxxxx xxxxxxxxxxxx xxxxxxxxx xxxxxxxxxxxx xxxxxxxxxxxxxxx xxxxxxxxx xxxxxxxxxxxxxx xxx xx x xx xxx xx xx xx x xx x x xxx x x xxx x x xx x x x x x x xx '])
+ formatter.row(["NAME => 'table1', FAMILIES => [{NAME => 'fam2', VERSIONS => 3, COMPRESSION => 'NONE', IN_MEMORY => false, BLOCKCACHE => false, LENGTH => 2147483647, TTL => FOREVER, BLOOMFILTER => NONE}, {NAME => 'fam1', VERSIONS => 3, COMPRESSION => 'NONE', IN_MEMORY => false, BLOCKCACHE => false, LENGTH => 2147483647, TTL => FOREVER, BLOOMFILTER => NONE}]"])
+ formatter.footer(now)
+ end
+end
+
+
diff --git a/bin/HBase.rb b/bin/HBase.rb
new file mode 100644
index 0000000..c78b072
--- /dev/null
+++ b/bin/HBase.rb
@@ -0,0 +1,556 @@
+# HBase ruby classes.
+# Has wrapper classes for org.apache.hadoop.hbase.client.HBaseAdmin
+# and for org.apache.hadoop.hbase.client.HTable. Classes take
+# Formatters on construction and output any results using
+# Formatter methods. These classes are only really for use by
+# the hirb.rb HBase Shell script; they don't make much sense elsewhere.
+# For example, the exists method on Admin class prints to the formatter
+# whether the table exists and returns nil regardless.
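+#
+# A minimal usage sketch (illustrative only; the table, row, and column names
+# are placeholders; it mirrors the self-test at the bottom of this file):
+#
+#   formatter = Formatter::Console.new(STDOUT)
+#   configuration = HBaseConfiguration.new()
+#   admin = HBase::Admin.new(configuration, formatter)
+#   admin.create('mytable', [{HBase::NAME => 'family', HBase::VERSIONS => 3}])
+#   table = HBase::Table.new(configuration, 'mytable', formatter)
+#   table.put('row1', 'family:qualifier', 'a value')
+#   table.get('row1', {HBase::COLUMN => 'family:qualifier'})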
+include Java
+include_class('java.lang.Integer') {|package,name| "J#{name}" }
+include_class('java.lang.Long') {|package,name| "J#{name}" }
+include_class('java.lang.Boolean') {|package,name| "J#{name}" }
+
+import org.apache.hadoop.hbase.client.HBaseAdmin
+import org.apache.hadoop.hbase.client.HTable
+import org.apache.hadoop.hbase.HConstants
+import org.apache.hadoop.hbase.io.BatchUpdate
+import org.apache.hadoop.hbase.io.RowResult
+import org.apache.hadoop.hbase.io.Cell
+import org.apache.hadoop.hbase.io.hfile.Compression
+import org.apache.hadoop.hbase.HBaseConfiguration
+import org.apache.hadoop.hbase.HColumnDescriptor
+import org.apache.hadoop.hbase.HTableDescriptor
+import org.apache.hadoop.hbase.util.Bytes
+import org.apache.hadoop.hbase.util.Writables
+import org.apache.hadoop.hbase.HRegionInfo
+
+module HBase
+ COLUMN = "COLUMN"
+ COLUMNS = "COLUMNS"
+ TIMESTAMP = "TIMESTAMP"
+ NAME = HConstants::NAME
+ VERSIONS = HConstants::VERSIONS
+ IN_MEMORY = HConstants::IN_MEMORY
+ STOPROW = "STOPROW"
+ STARTROW = "STARTROW"
+ ENDROW = STOPROW
+ LIMIT = "LIMIT"
+ METHOD = "METHOD"
+  MAXLENGTH = "MAXLENGTH"
+  # Keys used by Admin#alter's 'table_att' branch. They are defined here
+  # because nothing else in this file defines them; the values are assumed
+  # to follow the same string-key convention as the constants above.
+  MAX_FILESIZE = "MAX_FILESIZE"
+  READONLY = "READONLY"
+  MEMCACHE_FLUSHSIZE = "MEMCACHE_FLUSHSIZE"
+
+ # Wrapper for org.apache.hadoop.hbase.client.HBaseAdmin
+ class Admin
+ def initialize(configuration, formatter)
+ @admin = HBaseAdmin.new(configuration)
+ @formatter = formatter
+ end
+
+ def list
+ now = Time.now
+ @formatter.header()
+ for t in @admin.listTables()
+ @formatter.row([t.getNameAsString()])
+ end
+ @formatter.footer(now)
+ end
+
+ def describe(tableName)
+ now = Time.now
+ @formatter.header(["DESCRIPTION", "ENABLED"], [64])
+ found = false
+ tables = @admin.listTables().to_a
+ tables.push(HTableDescriptor::META_TABLEDESC, HTableDescriptor::ROOT_TABLEDESC)
+ for t in tables
+ if t.getNameAsString() == tableName
+ @formatter.row([t.to_s, "%s" % [@admin.isTableEnabled(tableName)]], true, [64])
+ found = true
+ end
+ end
+ if not found
+ raise ArgumentError.new("Failed to find table named " + tableName)
+ end
+ @formatter.footer(now)
+ end
+
+ def exists(tableName)
+ now = Time.now
+ @formatter.header()
+ e = @admin.tableExists(tableName)
+ @formatter.row([e.to_s])
+ @formatter.footer(now)
+ end
+
+ def flush(tableNameOrRegionName)
+ now = Time.now
+ @formatter.header()
+ @admin.flush(tableNameOrRegionName)
+ @formatter.footer(now)
+ end
+
+ def compact(tableNameOrRegionName)
+ now = Time.now
+ @formatter.header()
+ @admin.compact(tableNameOrRegionName)
+ @formatter.footer(now)
+ end
+
+ def major_compact(tableNameOrRegionName)
+ now = Time.now
+ @formatter.header()
+ @admin.majorCompact(tableNameOrRegionName)
+ @formatter.footer(now)
+ end
+
+ def split(tableNameOrRegionName)
+ now = Time.now
+ @formatter.header()
+ @admin.split(tableNameOrRegionName)
+ @formatter.footer(now)
+ end
+
+ def enable(tableName)
+ # TODO: Need an isEnabled method
+ now = Time.now
+ @admin.enableTable(tableName)
+ @formatter.header()
+ @formatter.footer(now)
+ end
+
+ def disable(tableName)
+ # TODO: Need an isDisabled method
+ now = Time.now
+ @admin.disableTable(tableName)
+ @formatter.header()
+ @formatter.footer(now)
+ end
+
+ def enable_region(regionName)
+ online(regionName, false)
+ end
+
+ def disable_region(regionName)
+ online(regionName, true)
+ end
+
+ def online(regionName, onOrOff)
+ now = Time.now
+ meta = HTable.new(HConstants::META_TABLE_NAME)
+ bytes = Bytes.toBytes(regionName)
+ hriBytes = meta.get(bytes, HConstants::COL_REGIONINFO).getValue()
+ hri = Writables.getWritable(hriBytes, HRegionInfo.new());
+ hri.setOffline(onOrOff)
+ p hri
+ bu = BatchUpdate.new(bytes)
+ bu.put(HConstants::COL_REGIONINFO, Writables.getBytes(hri))
+ meta.commit(bu);
+ @formatter.header()
+ @formatter.footer(now)
+ end
+
+ def drop(tableName)
+ now = Time.now
+ @formatter.header()
+ if @admin.isTableEnabled(tableName)
+ raise IOError.new("Table " + tableName + " is enabled. Disable it first")
+ else
+ @admin.deleteTable(tableName)
+ end
+ @formatter.footer(now)
+ end
+
+ def truncate(tableName)
+ now = Time.now
+ @formatter.header()
+ hTable = HTable.new(tableName)
+ tableDescription = hTable.getTableDescriptor()
+ puts 'Truncating ' + tableName + '; it may take a while'
+ puts 'Disabling table...'
+ disable(tableName)
+ puts 'Dropping table...'
+ drop(tableName)
+ puts 'Creating table...'
+ @admin.createTable(tableDescription)
+ @formatter.footer(now)
+ end
+
+ # Pass tablename and an array of Hashes
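+    # e.g. (illustrative; table and family names are placeholders):
+    #   create('mytable', [{NAME => 'family1', VERSIONS => 5}, {NAME => 'family2'}])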
+ def create(tableName, args)
+ now = Time.now
+ # Pass table name and an array of Hashes. Later, test the last
+      # array to see if it's table options rather than a column family spec.
+ raise TypeError.new("Table name must be of type String") \
+ unless tableName.instance_of? String
+ # For now presume all the rest of the args are column family
+ # hash specifications. TODO: Add table options handling.
+ htd = HTableDescriptor.new(tableName)
+ for arg in args
+ if arg.instance_of? String
+ htd.addFamily(HColumnDescriptor.new(makeColumnName(arg)))
+ else
+ raise TypeError.new(arg.class.to_s + " of " + arg.to_s + " is not of Hash type") \
+ unless arg.instance_of? Hash
+ htd.addFamily(hcd(arg))
+ end
+ end
+ @admin.createTable(htd)
+ @formatter.header()
+ @formatter.footer(now)
+ end
+
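+    # Alter a table: pass the table name and a Hash describing the change,
+    # e.g. (illustrative; table and family names are placeholders):
+    #   alter('mytable', {NAME => 'family1', VERSIONS => 1})      # add/modify a family
+    #   alter('mytable', {METHOD => 'delete', NAME => 'family1'}) # remove a family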
+ def alter(tableName, args)
+ now = Time.now
+ raise TypeError.new("Table name must be of type String") \
+ unless tableName.instance_of? String
+ htd = @admin.getTableDescriptor(tableName.to_java_bytes)
+ method = args.delete(METHOD)
+ if method == "delete"
+ @admin.deleteColumn(tableName, makeColumnName(args[NAME]))
+ elsif method == "table_att"
+ args[MAX_FILESIZE]? htd.setMaxFileSize(JLong.valueOf(args[MAX_FILESIZE])) :
+ htd.setMaxFileSize(HTableDescriptor::DEFAULT_MAX_FILESIZE);
+ args[READONLY]? htd.setReadOnly(JBoolean.valueOf(args[READONLY])) :
+ htd.setReadOnly(HTableDescriptor::DEFAULT_READONLY);
+ args[MEMCACHE_FLUSHSIZE]?
+ htd.setMemcacheFlushSize(JLong.valueOf(args[MEMCACHE_FLUSHSIZE])) :
+ htd.setMemcacheFlushSize(HTableDescriptor::DEFAULT_MEMCACHE_FLUSH_SIZE);
+ @admin.modifyTable(tableName.to_java_bytes, htd)
+ else
+ descriptor = hcd(args)
+ if (htd.hasFamily(descriptor.getNameAsString().to_java_bytes))
+ @admin.modifyColumn(tableName, descriptor.getNameAsString(),
+ descriptor);
+ else
+ @admin.addColumn(tableName, descriptor);
+ end
+ end
+ @formatter.header()
+ @formatter.footer(now)
+ end
+
+ def close_region(regionName, server)
+ now = Time.now
+ s = nil
+ s = [server].to_java if server
+ @admin.closeRegion(regionName, s)
+ @formatter.header()
+ @formatter.footer(now)
+ end
+
+ # Make a legal column name of the passed String
+ # Check string ends in colon. If not, add it.
+ def makeColumnName(arg)
+ index = arg.index(':')
+ if not index
+        # Add a colon. If already a colon, it's in the right place,
+ # or an exception will come up out of the addFamily
+ arg << ':'
+ end
+ arg
+ end
+
+ def shutdown()
+ @admin.shutdown()
+ end
+
+ def hcd(arg)
+ # Return a new HColumnDescriptor made of passed args
+ # TODO: This is brittle code.
+ # Here is current HCD constructor:
+ # public HColumnDescriptor(final byte [] familyName, final int maxVersions,
+ # final String compression, final boolean inMemory,
+ # final boolean blockCacheEnabled, final int blocksize,
+ # final int maxValueLength,
+ # final int timeToLive, final boolean bloomFilter) {
+ name = arg[NAME]
+      raise ArgumentError.new("Column family #{arg.inspect} must have a name") \
+ unless name
+ name = makeColumnName(name)
+ # TODO: What encoding are Strings in jruby?
+ return HColumnDescriptor.new(name.to_java_bytes,
+ # JRuby uses longs for ints. Need to convert. Also constants are String
+ arg[VERSIONS]? JInteger.new(arg[VERSIONS]): HColumnDescriptor::DEFAULT_VERSIONS,
+ arg[HColumnDescriptor::COMPRESSION]? arg[HColumnDescriptor::COMPRESSION]: HColumnDescriptor::DEFAULT_COMPRESSION,
+ arg[IN_MEMORY]? JBoolean.valueOf(arg[IN_MEMORY]): HColumnDescriptor::DEFAULT_IN_MEMORY,
+ arg[HColumnDescriptor::BLOCKCACHE]? JBoolean.valueOf(arg[HColumnDescriptor::BLOCKCACHE]): HColumnDescriptor::DEFAULT_BLOCKCACHE,
+ arg[HColumnDescriptor::BLOCKSIZE]? JInteger.valueOf(arg[HColumnDescriptor::BLOCKSIZE]): HColumnDescriptor::DEFAULT_BLOCKSIZE,
+ arg[HColumnDescriptor::LENGTH]? JInteger.new(arg[HColumnDescriptor::LENGTH]): HColumnDescriptor::DEFAULT_LENGTH,
+ arg[HColumnDescriptor::TTL]? JInteger.new(arg[HColumnDescriptor::TTL]): HColumnDescriptor::DEFAULT_TTL,
+ arg[HColumnDescriptor::BLOOMFILTER]? JBoolean.valueOf(arg[HColumnDescriptor::BLOOMFILTER]): HColumnDescriptor::DEFAULT_BLOOMFILTER)
+ end
+ end
+
+ # Wrapper for org.apache.hadoop.hbase.client.HTable
+ class Table
+ def initialize(configuration, tableName, formatter)
+ @table = HTable.new(configuration, tableName)
+ @formatter = formatter
+ end
+
+ # Delete a cell
+ def delete(row, column, timestamp = HConstants::LATEST_TIMESTAMP)
+ now = Time.now
+ bu = BatchUpdate.new(row, timestamp)
+ bu.delete(column)
+ @table.commit(bu)
+ @formatter.header()
+ @formatter.footer(now)
+ end
+
+ def deleteall(row, column = nil, timestamp = HConstants::LATEST_TIMESTAMP)
+ now = Time.now
+ @table.deleteAll(row, column, timestamp)
+ @formatter.header()
+ @formatter.footer(now)
+ end
+
+ def getAllColumns
+ htd = @table.getTableDescriptor()
+ result = []
+ for f in htd.getFamilies()
+ n = f.getNameAsString()
+ n << ':'
+ result << n
+ end
+ result
+ end
+
+ def scan(args = {})
+ now = Time.now
+ limit = -1
+ maxlength = -1
+ if args != nil and args.length > 0
+ limit = args["LIMIT"] || -1
+ maxlength = args["MAXLENGTH"] || -1
+ filter = args["FILTER"] || nil
+ startrow = args["STARTROW"] || ""
+ stoprow = args["STOPROW"] || nil
+ timestamp = args["TIMESTAMP"] || HConstants::LATEST_TIMESTAMP
+ columns = args["COLUMNS"] || getAllColumns()
+
+ if columns.class == String
+ columns = [columns]
+ elsif columns.class != Array
+ raise ArgumentError.new("COLUMNS must be specified as a String or an Array")
+ end
+ cs = columns.to_java(java.lang.String)
+
+ if stoprow
+ s = @table.getScanner(cs, startrow, stoprow, timestamp)
+ else
+ s = @table.getScanner(cs, startrow, timestamp, filter)
+ end
+ else
+ columns = getAllColumns()
+ s = @table.getScanner(columns.to_java(java.lang.String))
+ end
+ count = 0
+ @formatter.header(["ROW", "COLUMN+CELL"])
+ i = s.iterator()
+ while i.hasNext()
+ r = i.next()
+ row = String.from_java_bytes r.getRow()
+ for k, v in r
+ column = String.from_java_bytes k
+ cell = toString(column, v, maxlength)
+ @formatter.row([row, "column=%s, %s" % [column, cell]])
+ end
+ count += 1
+ if limit != -1 and count >= limit
+ break
+ end
+ end
+ @formatter.footer(now)
+ end
+
+ def put(row, column, value, timestamp = nil)
+ now = Time.now
+ bu = nil
+ if timestamp
+ bu = BatchUpdate.new(row, timestamp)
+ else
+ bu = BatchUpdate.new(row)
+ end
+ bu.put(column, value.to_java_bytes)
+ @table.commit(bu)
+ @formatter.header()
+ @formatter.footer(now)
+ end
+
+ def isMetaTable()
+ tn = @table.getTableName()
+ return Bytes.equals(tn, HConstants::META_TABLE_NAME) ||
+ Bytes.equals(tn, HConstants::ROOT_TABLE_NAME)
+ end
+
+ # Make a String of the passed cell.
+ # Intercept cells whose format we know such as the info:regioninfo in .META.
+ def toString(column, cell, maxlength)
+ if isMetaTable()
+ if column == 'info:regioninfo'
+ hri = Writables.getHRegionInfoOrNull(cell.getValue())
+ return "timestamp=%d, value=%s" % [cell.getTimestamp(), hri.toString()]
+ elsif column == 'info:serverstartcode'
+ return "timestamp=%d, value=%s" % [cell.getTimestamp(), \
+ Bytes.toLong(cell.getValue())]
+ end
+ end
+      # Truncate the printed value to maxlength if one was given.
+      val = cell.toString()
+      maxlength != -1 ? val[0, maxlength] : val
+ end
+
+ # Get from table
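+    # Options are passed as a Hash, e.g. (illustrative; names are placeholders):
+    #   get('row1', {COLUMN => 'family:qualifier', VERSIONS => 3})
+    #   get('row1', {TIMESTAMP => 1234567890000})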
+ def get(row, args = {})
+ now = Time.now
+ result = nil
+ if args == nil or args.length == 0 or (args.length == 1 and args[MAXLENGTH] != nil)
+ result = @table.getRow(row.to_java_bytes)
+ else
+ # Its a hash.
+ columns = args[COLUMN]
+ if columns == nil
+ # Maybe they used the COLUMNS key
+ columns = args[COLUMNS]
+ end
+ if columns == nil
+ # May have passed TIMESTAMP and row only; wants all columns from ts.
+ ts = args[TIMESTAMP]
+ if not ts
+          raise ArgumentError.new("Failed parse of #{args.inspect}, #{args.class}")
+ end
+ result = @table.getRow(row.to_java_bytes, ts)
+ else
+ # Columns are non-nil
+ if columns.class == String
+ # Single column
+ result = @table.get(row, columns,
+ args[TIMESTAMP]? args[TIMESTAMP]: HConstants::LATEST_TIMESTAMP,
+ args[VERSIONS]? args[VERSIONS]: 1)
+ elsif columns.class == Array
+ result = @table.getRow(row, columns.to_java(:string),
+ args[TIMESTAMP]? args[TIMESTAMP]: HConstants::LATEST_TIMESTAMP)
+ else
+          raise ArgumentError.new("Failed parse column argument type #{args.inspect}, #{args.class}")
+ end
+ end
+ end
+ # Print out results. Result can be Cell or RowResult.
+ maxlength = args[MAXLENGTH] || -1
+ h = nil
+ if result.instance_of? RowResult
+ h = String.from_java_bytes result.getRow()
+ @formatter.header(["COLUMN", "CELL"])
+ if result
+ for k, v in result
+ column = String.from_java_bytes k
+ @formatter.row([column, toString(column, v, maxlength)])
+ end
+ end
+ else
+ # Presume Cells
+ @formatter.header()
+ if result
+ for c in result
+ @formatter.row([toString(nil, c, maxlength)])
+ end
+ end
+ end
+ @formatter.footer(now)
+ end
+
+ def count(interval = 1000)
+ now = Time.now
+ columns = getAllColumns()
+ cs = columns.to_java(java.lang.String)
+ s = @table.getScanner(cs)
+ count = 0
+ i = s.iterator()
+ @formatter.header()
+ while i.hasNext()
+ r = i.next()
+ count += 1
+ if count % interval == 0
+ @formatter.row(["Current count: " + count.to_s + ", row: " + \
+ (String.from_java_bytes r.getRow())])
+ end
+ end
+ @formatter.footer(now, count)
+ end
+
+ end
+
+ # Testing. To run this test, there needs to be an hbase cluster up and
+ # running. Then do: ${HBASE_HOME}/bin/hbase org.jruby.Main bin/HBase.rb
+ if $0 == __FILE__
+ # Add this directory to LOAD_PATH; presumption is that Formatter module
+ # sits beside this one. Then load it up.
+ $LOAD_PATH.unshift File.dirname($PROGRAM_NAME)
+ require 'Formatter'
+ # Make a console formatter
+ formatter = Formatter::Console.new(STDOUT)
+ # Now add in java and hbase classes
+ configuration = HBaseConfiguration.new()
+ admin = Admin.new(configuration, formatter)
+ # Drop the old table. If it does not exist, we get an exception. Catch it
+ # and continue.
+ TESTTABLE = "HBase_rb_testtable"
+ begin
+ admin.disable(TESTTABLE)
+ admin.drop(TESTTABLE)
+ rescue org.apache.hadoop.hbase.TableNotFoundException
+ # Just suppress not found exception
+ end
+ admin.create(TESTTABLE, [{NAME => 'x', VERSIONS => 5}])
+ # Presume it exists. If it doesn't, next items will fail.
+ table = Table.new(configuration, TESTTABLE, formatter)
+ for i in 1..10
+ table.put('x%d' % i, 'x:%d' % i, 'x%d' % i)
+ end
+ table.get('x1', {COLUMN => 'x:1'})
+ if formatter.rowCount() != 1
+ raise IOError.new("Failed first put")
+ end
+ table.scan(['x:'])
+ if formatter.rowCount() != 10
+ raise IOError.new("Failed scan of expected 10 rows")
+ end
+ # Verify that limit works.
+ table.scan(['x:'], {LIMIT => 3})
+ if formatter.rowCount() != 3
+ raise IOError.new("Failed scan of expected 3 rows")
+ end
+ # Should only be two rows if we start at 8 (Row x10 sorts beside x1).
+ table.scan(['x:'], {STARTROW => 'x8', LIMIT => 3})
+ if formatter.rowCount() != 2
+ raise IOError.new("Failed scan of expected 2 rows")
+ end
+ # Scan between two rows
+ table.scan(['x:'], {STARTROW => 'x5', ENDROW => 'x8'})
+ if formatter.rowCount() != 3
+ raise IOError.new("Failed endrow test")
+ end
+ # Verify that delete works
+ table.delete('x1', 'x:1');
+ table.scan(['x:1'])
+ scan1 = formatter.rowCount()
+ table.scan(['x:'])
+ scan2 = formatter.rowCount()
+ if scan1 != 0 or scan2 != 9
+ raise IOError.new("Failed delete test")
+ end
+ # Verify that deleteall works
+ table.put('x2', 'x:1', 'x:1')
+ table.deleteall('x2')
+ table.scan(['x:2'])
+ scan1 = formatter.rowCount()
+ table.scan(['x:'])
+ scan2 = formatter.rowCount()
+ if scan1 != 0 or scan2 != 8
+ raise IOError.new("Failed deleteall test")
+ end
+ admin.disable(TESTTABLE)
+ admin.drop(TESTTABLE)
+ end
+end
diff --git a/bin/copy_table.rb b/bin/copy_table.rb
new file mode 100644
index 0000000..9bbc452
--- /dev/null
+++ b/bin/copy_table.rb
@@ -0,0 +1,149 @@
+# Script that copies a table in hbase. As written, it will not work for the
+# rare case where there is more than one region in the .META. table. It
+# updates the hbase .META. and copies the directories in the filesystem.
+# HBase MUST be shutdown when you run this script.
+#
+# To see usage for this script, run:
+#
+# ${HBASE_HOME}/bin/hbase org.jruby.Main copy_table.rb
+#
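+# A typical invocation might look like the following (the table names here
+# are purely illustrative):
+#
+#   ${HBASE_HOME}/bin/hbase org.jruby.Main copy_table.rb mytable mytable_copy
+#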
+include Java
+import org.apache.hadoop.hbase.util.MetaUtils
+import org.apache.hadoop.hbase.util.FSUtils
+import org.apache.hadoop.hbase.util.Bytes
+import org.apache.hadoop.hbase.util.Writables
+import org.apache.hadoop.hbase.HConstants
+import org.apache.hadoop.hbase.HBaseConfiguration
+import org.apache.hadoop.hbase.HStoreKey
+import org.apache.hadoop.hbase.HRegionInfo
+import org.apache.hadoop.hbase.HTableDescriptor
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable
+import org.apache.hadoop.hbase.regionserver.HLogEdit
+import org.apache.hadoop.hbase.regionserver.HRegion
+import org.apache.hadoop.fs.Path
+import org.apache.hadoop.fs.FileSystem
+import org.apache.hadoop.fs.FileUtil
+import org.apache.commons.logging.Log
+import org.apache.commons.logging.LogFactory
+import java.util.TreeMap
+
+# Name of this script
+NAME = "copy_table"
+
+# Print usage for this script
+def usage
+ puts 'Usage: %s.rb <OLD_NAME> <NEW_NAME>' % NAME
+ exit!
+end
+
+# Passed 'dir' exists and is a directory else exception
+def isDirExists(fs, dir)
+ raise IOError.new("Does not exit: " + dir.toString()) unless fs.exists(dir)
+ raise IOError.new("Not a directory: " + dir.toString()) unless fs.isDirectory(dir)
+end
+
+# Returns true if the region belongs to passed table
+def isTableRegion(tableName, hri)
+ return Bytes.equals(hri.getTableDesc().getName(), tableName)
+end
+
+# Create new HRI based off passed 'oldHRI'
+def createHRI(tableName, oldHRI)
+ htd = oldHRI.getTableDesc()
+ newHtd = HTableDescriptor.new(tableName)
+ for family in htd.getFamilies()
+ newHtd.addFamily(family)
+ end
+ return HRegionInfo.new(newHtd, oldHRI.getStartKey(), oldHRI.getEndKey(),
+ oldHRI.isSplit())
+end
+
+# Check arguments
+if ARGV.size != 2
+ usage
+end
+
+# Check good table names were passed.
+oldTableName = HTableDescriptor.isLegalTableName(ARGV[0].to_java_bytes)
+newTableName = HTableDescriptor.isLegalTableName(ARGV[1].to_java_bytes)
+
+# Get configuration to use.
+c = HBaseConfiguration.new()
+
+# Set hadoop filesystem configuration using the hbase.rootdir.
+# Otherwise, we'll always use localhost though the hbase.rootdir
+# might be pointing at hdfs location.
+c.set("fs.default.name", c.get(HConstants::HBASE_DIR))
+fs = FileSystem.get(c)
+
+# If the new table directory does not exist, create it. Keep going if it
+# already exists because we may be rerunning the script after an earlier
+# failure.
+rootdir = FSUtils.getRootDir(c)
+oldTableDir = Path.new(rootdir, Path.new(Bytes.toString(oldTableName)))
+isDirExists(fs, oldTableDir)
+newTableDir = Path.new(rootdir, Bytes.toString(newTableName))
+if !fs.exists(newTableDir)
+ fs.mkdirs(newTableDir)
+end
+
+# Get a logger and a metautils instance.
+LOG = LogFactory.getLog(NAME)
+utils = MetaUtils.new(c)
+
+# Start. Get all meta rows.
+begin
+ # Get list of all .META. regions that contain old table name
+ metas = utils.getMETARows(oldTableName)
+ index = 0
+ for meta in metas
+ # For each row we find, copy its region from the old to the new table.
+ # Need to update the encoded name in the hri as we copy.
+ # After the copy, add a .META. entry for the new region.
+ LOG.info("Scanning " + meta.getRegionNameAsString())
+ metaRegion = utils.getMetaRegion(meta)
+ scanner = metaRegion.getScanner(HConstants::COL_REGIONINFO_ARRAY, oldTableName,
+ HConstants::LATEST_TIMESTAMP, nil)
+ begin
+ key = HStoreKey.new()
+ value = TreeMap.new(Bytes.BYTES_COMPARATOR)
+ while scanner.next(key, value)
+ index = index + 1
+ keyStr = key.toString()
+ oldHRI = Writables.getHRegionInfo(value.get(HConstants::COL_REGIONINFO))
+ if !oldHRI
+ raise IOError.new(index.to_s + " HRegionInfo is null for " + keyStr)
+ end
+ unless isTableRegion(oldTableName, oldHRI)
+ # If here, we have moved past the regions of our table. Break.
+ break
+ end
+ oldRDir = Path.new(oldTableDir, Path.new(oldHRI.getEncodedName().to_s))
+ if !fs.exists(oldRDir)
+ LOG.warn(oldRDir.toString() + " does not exist -- region " +
+ oldHRI.getRegionNameAsString())
+ else
+ # Now make a new HRegionInfo to add to .META. for the new region.
+ newHRI = createHRI(newTableName, oldHRI)
+ newRDir = Path.new(newTableDir, Path.new(newHRI.getEncodedName().to_s))
+ # Move the region in filesystem
+ LOG.info("Copying " + oldRDir.toString() + " as " + newRDir.toString())
+ FileUtil.copy(fs, oldRDir, fs, newRDir, false, true, c)
+ # Create 'new' region
+ newR = HRegion.new(rootdir, utils.getLog(), fs, c, newHRI, nil)
+ # Add new row. NOTE: Presumption is that only one .META. region. If not,
+ # need to do the work to figure proper region to add this new region to.
+ LOG.info("Adding to meta: " + newR.toString())
+ HRegion.addRegionToMETA(metaRegion, newR)
+ LOG.info("Done copying: " + Bytes.toString(key.getRow()))
+ end
+ # Need to clear value else we keep appending values.
+ value.clear()
+ end
+ ensure
+ scanner.close()
+ end
+ end
+ensure
+ utils.shutdown()
+end
diff --git a/bin/hbase b/bin/hbase
new file mode 100755
index 0000000..e73e8c7
--- /dev/null
+++ b/bin/hbase
@@ -0,0 +1,219 @@
+#! /usr/bin/env bash
+#
+#/**
+# * Copyright 2007 The Apache Software Foundation
+# *
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements. See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership. The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License. You may obtain a copy of the License at
+# *
+# * http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+#
+# The hbase command script. Based on the hadoop command script putting
+# in hbase classes, libs and configurations ahead of hadoop's.
+#
+# TODO: Narrow the amount of duplicated code.
+#
+# Environment Variables:
+#
+# JAVA_HOME The java implementation to use. Required.
+#
+# HBASE_CLASSPATH Extra Java CLASSPATH entries.
+#
+# HBASE_HEAPSIZE The maximum amount of heap to use, in MB.
+# Default is 1000.
+#
+# HBASE_OPTS Extra Java runtime options.
+#
+# HBASE_CONF_DIR Alternate conf dir. Default is ${HBASE_HOME}/conf.
+#
+# HBASE_ROOT_LOGGER The root appender. Default is INFO,console
+#
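+# For example, a sketch of overriding the heap size for a single shell
+# session might look like the following (the value is illustrative):
+#
+#   HBASE_HEAPSIZE=2000 ${HBASE_HOME}/bin/hbase shell
+#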
+bin=`dirname "$0"`
+bin=`cd "$bin"; pwd`
+
+# This will set HBASE_HOME, etc.
+. "$bin"/hbase-config.sh
+
+cygwin=false
+case "`uname`" in
+CYGWIN*) cygwin=true;;
+esac
+
+# if no args specified, show usage
+if [ $# = 0 ]; then
+ echo "Usage: hbase <command>"
+ echo "where <command> is one of:"
+ echo " shell run the HBase shell"
+ echo " master run an HBase HMaster node"
+ echo " regionserver run an HBase HRegionServer node"
+ echo " rest run an HBase REST server"
+ echo " thrift run an HBase Thrift server"
+ echo " zookeeper run a Zookeeper server"
+ echo " migrate upgrade an hbase.rootdir"
+ echo " or"
+ echo " CLASSNAME run the class named CLASSNAME"
+ echo "Most commands print help when invoked w/o parameters."
+ exit 1
+fi
+
+# get arguments
+COMMAND=$1
+shift
+
+# Source the hbase-env.sh. Will have JAVA_HOME defined.
+if [ -f "${HBASE_CONF_DIR}/hbase-env.sh" ]; then
+ . "${HBASE_CONF_DIR}/hbase-env.sh"
+fi
+
+# some Java parameters
+if [ "$JAVA_HOME" != "" ]; then
+ #echo "run java in $JAVA_HOME"
+ JAVA_HOME=$JAVA_HOME
+fi
+
+if [ "$JAVA_HOME" = "" ]; then
+ echo "Error: JAVA_HOME is not set."
+ exit 1
+fi
+
+JAVA=$JAVA_HOME/bin/java
+JAVA_HEAP_MAX=-Xmx1000m
+
+# check envvars which might override default args
+if [ "$HBASE_HEAPSIZE" != "" ]; then
+ #echo "run with heapsize $HBASE_HEAPSIZE"
+ JAVA_HEAP_MAX="-Xmx""$HBASE_HEAPSIZE""m"
+ #echo $JAVA_HEAP_MAX
+fi
+
+# so that filenames w/ spaces are handled correctly in loops below
+IFS=
+
+# CLASSPATH initially contains $HBASE_CONF_DIR
+CLASSPATH="${HBASE_CONF_DIR}"
+
+
+CLASSPATH=${CLASSPATH}:$JAVA_HOME/lib/tools.jar
+
+# for developers, add hbase classes to CLASSPATH
+if [ -d "$HBASE_HOME/build/classes" ]; then
+ CLASSPATH=${CLASSPATH}:$HBASE_HOME/build/classes
+fi
+if [ -d "$HBASE_HOME/build/test" ]; then
+ CLASSPATH=${CLASSPATH}:$HBASE_HOME/build/test
+fi
+if [ -d "$HBASE_HOME/build/webapps" ]; then
+ CLASSPATH=${CLASSPATH}:$HBASE_HOME/build
+fi
+
+# for releases, add hbase & webapps to CLASSPATH
+for f in $HBASE_HOME/hbase*.jar; do
+ if [ -f $f ]; then
+ CLASSPATH=${CLASSPATH}:$f;
+ fi
+done
+if [ -d "$HBASE_HOME/webapps" ]; then
+ CLASSPATH=${CLASSPATH}:$HBASE_HOME
+fi
+
+# Add libs to CLASSPATH
+for f in $HBASE_HOME/lib/*.jar; do
+ CLASSPATH=${CLASSPATH}:$f;
+done
+
+for f in $HBASE_HOME/lib/jsp-2.1/*.jar; do
+ CLASSPATH=${CLASSPATH}:$f;
+done
+
+# add user-specified CLASSPATH last
+if [ "$HBASE_CLASSPATH" != "" ]; then
+ CLASSPATH=${CLASSPATH}:${HBASE_CLASSPATH}
+fi
+
+# default log directory & file
+if [ "$HBASE_LOG_DIR" = "" ]; then
+ HBASE_LOG_DIR="$HBASE_HOME/logs"
+fi
+if [ "$HBASE_LOGFILE" = "" ]; then
+ HBASE_LOGFILE='hbase.log'
+fi
+
+# cygwin path translation
+if $cygwin; then
+ CLASSPATH=`cygpath -p -w "$CLASSPATH"`
+ HBASE_HOME=`cygpath -d "$HBASE_HOME"`
+ HBASE_LOG_DIR=`cygpath -d "$HBASE_LOG_DIR"`
+fi
+# setup 'java.library.path' for native-hadoop code if necessary
+JAVA_LIBRARY_PATH=''
+if [ -d "${HBASE_HOME}/build/native" -o -d "${HBASE_HOME}/lib/native" ]; then
+ JAVA_PLATFORM=`CLASSPATH=${CLASSPATH} ${JAVA} org.apache.hadoop.util.PlatformName | sed -e "s/ /_/g"`
+
+ if [ -d "$HBASE_HOME/build/native" ]; then
+ JAVA_LIBRARY_PATH=${HBASE_HOME}/build/native/${JAVA_PLATFORM}/lib
+ fi
+
+ if [ -d "${HBASE_HOME}/lib/native" ]; then
+ if [ "x$JAVA_LIBRARY_PATH" != "x" ]; then
+ JAVA_LIBRARY_PATH=${JAVA_LIBRARY_PATH}:${HBASE_HOME}/lib/native/${JAVA_PLATFORM}
+ else
+ JAVA_LIBRARY_PATH=${HBASE_HOME}/lib/native/${JAVA_PLATFORM}
+ fi
+ fi
+fi
+
+# cygwin path translation
+if $cygwin; then
+ JAVA_LIBRARY_PATH=`cygpath -p "$JAVA_LIBRARY_PATH"`
+fi
+
+# restore ordinary behaviour
+unset IFS
+
+# figure out which class to run
+if [ "$COMMAND" = "shell" ] ; then
+ CLASS="org.jruby.Main ${HBASE_HOME}/bin/hirb.rb"
+elif [ "$COMMAND" = "master" ] ; then
+ CLASS='org.apache.hadoop.hbase.master.HMaster'
+elif [ "$COMMAND" = "regionserver" ] ; then
+ CLASS='org.apache.hadoop.hbase.regionserver.HRegionServer'
+elif [ "$COMMAND" = "rest" ] ; then
+ CLASS='org.apache.hadoop.hbase.rest.Dispatcher'
+elif [ "$COMMAND" = "thrift" ] ; then
+ CLASS='org.apache.hadoop.hbase.thrift.ThriftServer'
+elif [ "$COMMAND" = "migrate" ] ; then
+ CLASS='org.apache.hadoop.hbase.util.Migrate'
+elif [ "$COMMAND" = "zookeeper" ] ; then
+ CLASS='org.apache.hadoop.hbase.zookeeper.HQuorumPeer'
+else
+ CLASS=$COMMAND
+fi
+
+# Have the JVM dump heap if we run out of memory. Files land in the launch
+# directory and are named like the following: java_pid21612.hprof. Apparently
+# it doesn't 'cost' anything to have this flag enabled. It's a 1.6-only flag. See:
+# http://blogs.sun.com/alanb/entry/outofmemoryerror_looks_a_bit_better
+HBASE_OPTS="$HBASE_OPTS -XX:+HeapDumpOnOutOfMemoryError"
+HBASE_OPTS="$HBASE_OPTS -Dhbase.log.dir=$HBASE_LOG_DIR"
+HBASE_OPTS="$HBASE_OPTS -Dhbase.log.file=$HBASE_LOGFILE"
+HBASE_OPTS="$HBASE_OPTS -Dhbase.home.dir=$HBASE_HOME"
+HBASE_OPTS="$HBASE_OPTS -Dhbase.id.str=$HBASE_IDENT_STRING"
+HBASE_OPTS="$HBASE_OPTS -Dhbase.root.logger=${HBASE_ROOT_LOGGER:-INFO,console}"
+if [ "x$JAVA_LIBRARY_PATH" != "x" ]; then
+ HBASE_OPTS="$HBASE_OPTS -Djava.library.path=$JAVA_LIBRARY_PATH"
+fi
+
+# run it
+exec "$JAVA" $JAVA_HEAP_MAX $HBASE_OPTS -classpath "$CLASSPATH" $CLASS "$@"
diff --git a/bin/hbase-config.sh b/bin/hbase-config.sh
new file mode 100644
index 0000000..fad8039
--- /dev/null
+++ b/bin/hbase-config.sh
@@ -0,0 +1,73 @@
+#
+#/**
+# * Copyright 2007 The Apache Software Foundation
+# *
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements. See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership. The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License. You may obtain a copy of the License at
+# *
+# * http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+
+# included in all the hbase scripts with source command
+# should not be executable directly
+# also should not be passed any arguments, since we need original $*
+# Modelled after $HADOOP_HOME/bin/hadoop-env.sh.
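+#
+# For example, the other bin scripts pull it in like this (see bin/hbase):
+#
+#   . "$bin"/hbase-config.sh
+#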
+
+# resolve links - $0 may be a softlink
+
+this="$0"
+while [ -h "$this" ]; do
+ ls=`ls -ld "$this"`
+ link=`expr "$ls" : '.*-> \(.*\)$'`
+ if expr "$link" : '.*/.*' > /dev/null; then
+ this="$link"
+ else
+ this=`dirname "$this"`/"$link"
+ fi
+done
+
+# convert relative path to absolute path
+bin=`dirname "$this"`
+script=`basename "$this"`
+bin=`cd "$bin"; pwd`
+this="$bin/$script"
+
+# the root of the hbase installation
+export HBASE_HOME=`dirname "$this"`/..
+
+# Check to see if the conf dir or hbase home are given as optional arguments
+while [ $# -gt 1 ]
+do
+ if [ "--config" = "$1" ]
+ then
+ shift
+ confdir=$1
+ shift
+ HBASE_CONF_DIR=$confdir
+ elif [ "--hosts" = "$1" ]
+ then
+ shift
+ hosts=$1
+ shift
+ HBASE_REGIONSERVERS=$hosts
+ else
+ # Presume we are at end of options and break
+ break
+ fi
+done
+
+# Allow alternate hbase conf dir location.
+HBASE_CONF_DIR="${HBASE_CONF_DIR:-$HBASE_HOME/conf}"
+# List of hbase regions servers.
+HBASE_REGIONSERVERS="${HBASE_REGIONSERVERS:-$HBASE_CONF_DIR/regionservers}"
diff --git a/bin/hbase-daemon.sh b/bin/hbase-daemon.sh
new file mode 100755
index 0000000..06b371e
--- /dev/null
+++ b/bin/hbase-daemon.sh
@@ -0,0 +1,168 @@
+#!/usr/bin/env bash
+#
+#/**
+# * Copyright 2007 The Apache Software Foundation
+# *
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements. See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership. The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License. You may obtain a copy of the License at
+# *
+# * http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+#
+# Runs an hbase command as a daemon.
+#
+# Environment Variables
+#
+# HBASE_CONF_DIR Alternate hbase conf dir. Default is ${HBASE_HOME}/conf.
+# HBASE_LOG_DIR Where log files are stored. ${HBASE_HOME}/logs by default.
+# HBASE_PID_DIR Where pid files are stored. /tmp by default.
+# HBASE_IDENT_STRING A string representing this instance of hbase. $USER by default
+# HBASE_NICENESS The scheduling priority for daemons. Defaults to 0.
+#
+# Modelled after $HADOOP_HOME/bin/hadoop-daemon.sh
+
+usage="Usage: hbase-daemon.sh [--config <conf-dir>]\
+ (start|stop) <hbase-command> \
+ <args...>"
+
+# if no args specified, show usage
+if [ $# -le 1 ]; then
+ echo $usage
+ exit 1
+fi
+
+bin=`dirname "$0"`
+bin=`cd "$bin"; pwd`
+
+. "$bin"/hbase-config.sh
+
+# get arguments
+startStop=$1
+shift
+
+command=$1
+shift
+
+hbase_rotate_log ()
+{
+ log=$1;
+ num=5;
+ if [ -n "$2" ]; then
+ num=$2
+ fi
+ if [ -f "$log" ]; then # rotate logs
+ while [ $num -gt 1 ]; do
+ prev=`expr $num - 1`
+ [ -f "$log.$prev" ] && mv "$log.$prev" "$log.$num"
+ num=$prev
+ done
+ mv "$log" "$log.$num";
+ fi
+}
+
+if [ -f "${HBASE_CONF_DIR}/hbase-env.sh" ]; then
+ . "${HBASE_CONF_DIR}/hbase-env.sh"
+fi
+
+# get log directory
+if [ "$HBASE_LOG_DIR" = "" ]; then
+ export HBASE_LOG_DIR="$HBASE_HOME/logs"
+fi
+mkdir -p "$HBASE_LOG_DIR"
+
+if [ "$HBASE_PID_DIR" = "" ]; then
+ HBASE_PID_DIR=/tmp
+fi
+
+if [ "$HBASE_IDENT_STRING" = "" ]; then
+ export HBASE_IDENT_STRING="$USER"
+fi
+
+# Some variables
+# Work out java location so can print version into log.
+if [ "$JAVA_HOME" != "" ]; then
+ #echo "run java in $JAVA_HOME"
+ JAVA_HOME=$JAVA_HOME
+fi
+if [ "$JAVA_HOME" = "" ]; then
+ echo "Error: JAVA_HOME is not set."
+ exit 1
+fi
+JAVA=$JAVA_HOME/bin/java
+export HBASE_LOGFILE=hbase-$HBASE_IDENT_STRING-$command-$HOSTNAME.log
+export HBASE_ROOT_LOGGER="INFO,DRFA"
+logout=$HBASE_LOG_DIR/hbase-$HBASE_IDENT_STRING-$command-$HOSTNAME.out
+loglog="${HBASE_LOG_DIR}/${HBASE_LOGFILE}"
+pid=$HBASE_PID_DIR/hbase-$HBASE_IDENT_STRING-$command.pid
+
+# Set default scheduling priority
+if [ "$HBASE_NICENESS" = "" ]; then
+ export HBASE_NICENESS=0
+fi
+
+case $startStop in
+
+ (start)
+ mkdir -p "$HBASE_PID_DIR"
+ if [ -f $pid ]; then
+ if kill -0 `cat $pid` > /dev/null 2>&1; then
+ echo $command running as process `cat $pid`. Stop it first.
+ exit 1
+ fi
+ fi
+
+ hbase_rotate_log $logout
+ echo starting $command, logging to $logout
+ # Add to the command log file vital stats on our environment.
+ echo "`date` Starting $command on `hostname`" >> $loglog
+ echo "ulimit -n `ulimit -n`" >> $loglog 2>&1
+ nohup nice -n $HBASE_NICENESS "$HBASE_HOME"/bin/hbase \
+ --config "${HBASE_CONF_DIR}" \
+ $command $startStop "$@" > "$logout" 2>&1 < /dev/null &
+ echo $! > $pid
+ sleep 1; head "$logout"
+ ;;
+
+ (stop)
+ if [ -f $pid ]; then
+ if kill -0 `cat $pid` > /dev/null 2>&1; then
+ echo -n stopping $command
+ echo "`date` Stopping $command" >> $loglog
+ if [ "$command" = "master" ]; then
+ nohup nice -n $HBASE_NICENESS "$HBASE_HOME"/bin/hbase \
+ --config "${HBASE_CONF_DIR}" \
+ $command $startStop "$@" > "$logout" 2>&1 < /dev/null &
+ else
+ echo "`date` Killing $command" >> $loglog
+ kill `cat $pid` > /dev/null 2>&1
+ fi
+ while kill -0 `cat $pid` > /dev/null 2>&1; do
+ echo -n "."
+ sleep 1;
+ done
+ echo
+ else
+ echo no $command to stop
+ fi
+ else
+ echo no $command to stop
+ fi
+ ;;
+
+ (*)
+ echo $usage
+ exit 1
+ ;;
+
+esac
diff --git a/bin/hbase-daemons.sh b/bin/hbase-daemons.sh
new file mode 100755
index 0000000..166af33
--- /dev/null
+++ b/bin/hbase-daemons.sh
@@ -0,0 +1,42 @@
+#!/usr/bin/env bash
+#
+#/**
+# * Copyright 2007 The Apache Software Foundation
+# *
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements. See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership. The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License. You may obtain a copy of the License at
+# *
+# * http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+#
+# Run a hbase command on all slave hosts.
+# Modelled after $HADOOP_HOME/bin/hadoop-daemons.sh
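+#
+# For example, a sketch of starting regionservers on all hosts listed in the
+# regionservers file (paths are illustrative):
+#
+#   bin/hbase-daemons.sh --config ./conf --hosts ./conf/regionservers start regionserver
+#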
+
+usage="Usage: hbase-daemons.sh [--config <hbase-confdir>] \
+ [--hosts regionserversfile] [start|stop] command args..."
+
+# if no args specified, show usage
+if [ $# -le 1 ]; then
+ echo $usage
+ exit 1
+fi
+
+bin=`dirname "$0"`
+bin=`cd "$bin"; pwd`
+
+. $bin/hbase-config.sh
+
+exec "$bin/regionservers.sh" --config "${HBASE_CONF_DIR}" \
+ cd "${HBASE_HOME}" \; \
+ "$bin/hbase-daemon.sh" --config "${HBASE_CONF_DIR}" "$@"
diff --git a/bin/hbase-zookeeper.sh b/bin/hbase-zookeeper.sh
new file mode 100755
index 0000000..859faab
--- /dev/null
+++ b/bin/hbase-zookeeper.sh
@@ -0,0 +1,42 @@
+#!/usr/bin/env bash
+#
+#/**
+# * Copyright 2009 The Apache Software Foundation
+# *
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements. See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership. The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License. You may obtain a copy of the License at
+# *
+# * http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+#
+# Run an hbase command on the zookeeper host(s).
+# Modelled after $HADOOP_HOME/bin/hadoop-daemons.sh
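+#
+# For example, start-hbase.sh drives it roughly like this:
+#
+#   bin/hbase-zookeeper.sh --config "${HBASE_CONF_DIR}" start zookeeper
+#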
+
+usage="Usage: hbase-daemons.sh [--config <hbase-confdir>] \
+ [start|stop] command args..."
+
+# if no args specified, show usage
+if [ $# -le 1 ]; then
+ echo $usage
+ exit 1
+fi
+
+bin=`dirname "$0"`
+bin=`cd "$bin"; pwd`
+
+. $bin/hbase-config.sh
+
+exec "$bin/zookeeper.sh" --config "${HBASE_CONF_DIR}" \
+ cd "${HBASE_HOME}" \; \
+ "$bin/hbase-daemon.sh" --config "${HBASE_CONF_DIR}" "$@"
diff --git a/bin/hirb.rb b/bin/hirb.rb
new file mode 100644
index 0000000..73fcde4
--- /dev/null
+++ b/bin/hirb.rb
@@ -0,0 +1,444 @@
+# File passed to org.jruby.Main by bin/hbase. Pollutes jirb with hbase imports
+# and hbase commands, outputs a banner that tells the user where to find help
+# and the shell version, and then loads up a custom hirb.
+
+# TODO: Add 'debug' support (client-side logs show in shell). Add it as
+# command-line option and as command.
+# TODO: Interrupt a table creation or a connection to a bad master. Currently
+# has to time out. Below we've turned down the retries for rpc and hbase but
+# it can still be annoying (and there seem to be times when we'll retry
+# forever regardless).
+# TODO: Add support for listing and manipulating catalog tables, etc.
+# TODO: Encoding; need to know how to go from ruby String to UTF-8 bytes
+
+# Run the java magic include and import basic HBase types that will help ease
+# hbase hacking.
+include Java
+
+# Some goodies for hirb. Should these be left up to the user's discretion?
+require 'irb/completion'
+
+# Add the $HBASE_HOME/bin directory, the location of this script, to the ruby
+# load path so I can load up my HBase ruby modules
+$LOAD_PATH.unshift File.dirname($PROGRAM_NAME)
+# Require formatter and hbase
+require 'Formatter'
+require 'HBase'
+
+# See if there are args for this shell. If any, read and then strip from ARGV
+# so they don't go through to irb. Output shell 'usage' if user types '--help'
+cmdline_help = <<HERE # HERE document output as shell usage
+HBase Shell command-line options:
+ format Formatter for outputting results: console | html. Default: console
+ format-width Width of table outputs. Default: 110 characters.
+ master HBase master shell should connect to: e.g --master=example:60000
+HERE
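+# For example, a shell session with a wider console might be started with
+# something like the following (values are illustrative):
+#
+#   ${HBASE_HOME}/bin/hbase shell --format-width=120
+#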
+master = nil
+found = []
+format = 'console'
+format_width = 110
+for arg in ARGV
+ if arg =~ /^--master=(.+)/i
+ master = $1
+ found.push(arg)
+ elsif arg =~ /^--format=(.+)/i
+ format = $1
+ if format =~ /^html$/i
+ raise NoMethodError.new("Not yet implemented")
+ elsif format =~ /^console$/i
+ # This is default
+ else
+ raise ArgumentError.new("Unsupported format " + arg)
+ end
+ found.push(arg)
+ elsif arg =~ /^--format-width=(.+)/i
+ format_width = $1.to_i
+ found.push(arg)
+ elsif arg == '-h' || arg == '--help'
+ puts cmdline_help
+ exit
+ else
+ # Presume it's a script and try running it. Will go on to run the shell unless
+ # script calls 'exit' or 'exit 0' or 'exit errcode'.
+ load(arg)
+ end
+end
+for arg in found
+ ARGV.delete(arg)
+end
+# Presume console format.
+@formatter = Formatter::Console.new(STDOUT, format_width)
+# TODO, etc. @formatter = Formatter::XHTML.new(STDOUT)
+
+# Set up the HBase module. Create a configuration. If a master was given, set it.
+# Turn down retries in hbase and ipc. A human doesn't want to wait on N retries.
+@configuration = org.apache.hadoop.hbase.HBaseConfiguration.new()
+@configuration.set("hbase.master", master) if master
+@configuration.setInt("hbase.client.retries.number", 5)
+@configuration.setInt("ipc.client.connect.max.retries", 3)
+
+# Do lazy create of admin because if we are pointed at bad master, it will hang
+# shell on startup trying to connect.
+@admin = nil
+
+# Promote hbase constants to be constants of this module so they can
+# be used bare as keys in 'create', 'alter', etc. To see constants
+# in IRB, type 'Object.constants'. Don't promote defaults because that
+# flattens all types to String, which can be confusing.
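+#
+# For example, after promotion a shell user can write a bare constant such as
+# VERSIONS in a dictionary (the table and family names below are illustrative):
+#
+#   create 't1', {NAME => 'f1', VERSIONS => 5}
+#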
+def promoteConstants(constants)
+ # The constants to import are all in uppercase
+ for c in constants
+ if c == c.upcase
+ eval("%s = \"%s\"" % [c, c]) unless c =~ /DEFAULT_.*/
+ end
+ end
+end
+promoteConstants(org.apache.hadoop.hbase.HColumnDescriptor.constants)
+promoteConstants(org.apache.hadoop.hbase.HTableDescriptor.constants)
+promoteConstants(HBase.constants)
+
+# Start of the hbase shell commands.
+
+# General shell methods
+
+def tools
+ # Help for hbase shell surgery tools
+ h = <<HERE
+HBASE SURGERY TOOLS:
+ close_region Close a single region. Optionally specify regionserver.
+ Examples:
+
+ hbase> close_region 'REGIONNAME'
+ hbase> close_region 'REGIONNAME', 'REGIONSERVER_IP:PORT'
+
+ compact Compact all regions in passed table or pass a region row
+ to compact an individual region
+
+ disable_region Disable a single region
+
+ enable_region Enable a single region. For example:
+
+ hbase> enable_region 'REGIONNAME'
+
+ flush Flush all regions in passed table or pass a region row to
+ flush an individual region. For example:
+
+ hbase> flush 'TABLENAME'
+ hbase> flush 'REGIONNAME'
+
+ major_compact Run major compaction on passed table or pass a region row
+ to major compact an individual region
+
+ split Split table or pass a region row to split individual region
+
+The above commands are for experts only, as misuse can damage an install
+HERE
+ puts h
+end
+
+def help
+ # Output help. Help used to be a dictionary of name to short and long
+ # descriptions emitted using Formatters, but it was awkward getting it to
+ # show nicely on console; instead we use a HERE document. This means we can
+ # only output help on the console, but that is not an issue at the moment.
+ # TODO: Add help to the commands themselves rather than keep it distinct
+ h = <<HERE
+HBASE SHELL COMMANDS:
+ alter Alter column family schema; pass table name and a dictionary
+ specifying new column family schema. Dictionaries are described
+ below in the GENERAL NOTES section. Dictionary must include name
+ of column family to alter. For example,
+
+ To change or add the 'f1' column family in table 't1' from defaults
+ to instead keep a maximum of 5 cell VERSIONS, do:
+ hbase> alter 't1', {NAME => 'f1', VERSIONS => 5}
+
+ To delete the 'f1' column family in table 't1', do:
+ hbase> alter 't1', {NAME => 'f1', METHOD => 'delete'}
+
+ You can also change table-scope attributes like MAX_FILESIZE,
+ MEMCACHE_FLUSHSIZE and READONLY.
+
+ For example, to change the max size of a family to 128MB, do:
+ hbase> alter 't1', {METHOD => 'table_att', MAX_FILESIZE => '134217728'}
+
+ count Count the number of rows in a table. This operation may take a LONG
+ time (Run '$HADOOP_HOME/bin/hadoop jar hbase.jar rowcount' to run a
+ counting mapreduce job). Current count is shown every 1000 rows by
+ default. Count interval may be optionally specified. Examples:
+
+ hbase> count 't1'
+ hbase> count 't1', 100000
+
+ create Create table; pass table name, a dictionary of specifications per
+ column family, and optionally a dictionary of table configuration.
+ Dictionaries are described below in the GENERAL NOTES section.
+ Examples:
+
+ hbase> create 't1', {NAME => 'f1', VERSIONS => 5}
+ hbase> create 't1', {NAME => 'f1'}, {NAME => 'f2'}, {NAME => 'f3'}
+ hbase> # The above in shorthand would be the following:
+ hbase> create 't1', 'f1', 'f2', 'f3'
+ hbase> create 't1', {NAME => 'f1', VERSIONS => 1, TTL => 2592000, \\
+ BLOCKCACHE => true}
+
+ describe Describe the named table: e.g. "hbase> describe 't1'"
+
+ delete Put a delete cell value at specified table/row/column and optionally
+ timestamp coordinates. Deletes must match the deleted cell's
+ coordinates exactly. When scanning, a delete cell suppresses older
+ versions. Takes arguments like the 'put' command described below
+
+ deleteall Delete all cells in a given row; pass a table name, row, and optionally
+ a column and timestamp
+
+ disable Disable the named table: e.g. "hbase> disable 't1'"
+
+ drop Drop the named table. Table must first be disabled. If table has
+ more than one region, run a major compaction on .META.:
+
+ hbase> major_compact ".META."
+
+ enable Enable the named table
+
+ exists Does the named table exist? e.g. "hbase> exists 't1'"
+
+ exit Type "hbase> exit" to leave the HBase Shell
+
+ get Get row or cell contents; pass table name, row, and optionally
+ a dictionary of column(s), timestamp and versions. Examples:
+
+ hbase> get 't1', 'r1'
+ hbase> get 't1', 'r1', {COLUMN => 'c1'}
+ hbase> get 't1', 'r1', {COLUMN => ['c1', 'c2', 'c3']}
+ hbase> get 't1', 'r1', {COLUMN => 'c1', TIMESTAMP => ts1}
+ hbase> get 't1', 'r1', {COLUMN => 'c1', TIMESTAMP => ts1, \\
+ VERSIONS => 4}
+
+ list List all tables in hbase
+
+ put Put a cell 'value' at specified table/row/column and optionally
+ timestamp coordinates. To put a cell value into table 't1' at
+ row 'r1' under column 'c1' marked with the time 'ts1', do:
+
+ hbase> put 't1', 'r1', 'c1', 'value', ts1
+
+ tools Listing of hbase surgery tools
+
+ scan Scan a table; pass table name and optionally a dictionary of scanner
+ specifications. Scanner specifications may include one or more of
+ the following: LIMIT, STARTROW, STOPROW, TIMESTAMP, or COLUMNS. If
+ no columns are specified, all columns will be scanned. To scan all
+ members of a column family, leave the qualifier empty as in
+ 'col_family:'. Examples:
+
+ hbase> scan '.META.'
+ hbase> scan '.META.', {COLUMNS => 'info:regioninfo'}
+ hbase> scan 't1', {COLUMNS => ['c1', 'c2'], LIMIT => 10, \\
+ STARTROW => 'xyz'}
+
+ shutdown Shut down the cluster.
+
+ truncate Disables, drops and recreates the specified table.
+
+ version Output this HBase version
+
+GENERAL NOTES:
+Quote all names in the hbase shell, such as table and column names. Don't
+forget that commas delimit command parameters. Type <RETURN> after entering a
+command to run it. Dictionaries of configuration used in the creation and
+alteration of tables are ruby Hashes. They look like this:
+
+ {'key1' => 'value1', 'key2' => 'value2', ...}
+
+They are opened and closed with curly braces. Key/values are delimited by
+the '=>' character combination. Usually keys are predefined constants such as
+NAME, VERSIONS, COMPRESSION, etc. Constants do not need to be quoted. Type
+'Object.constants' to see a (messy) list of all constants in the environment.
+
+This HBase shell is the JRuby IRB with the above HBase-specific commands added.
+For more on the HBase Shell, see http://wiki.apache.org/hadoop/Hbase/Shell
+HERE
+ puts h
+end
+
+def version
+ # Output version.
+ puts "Version: #{org.apache.hadoop.hbase.util.VersionInfo.getVersion()},\
+ r#{org.apache.hadoop.hbase.util.VersionInfo.getRevision()},\
+ #{org.apache.hadoop.hbase.util.VersionInfo.getDate()}"
+end
+
+def shutdown
+ admin().shutdown()
+end
+
+# DDL
+
+def admin()
+ @admin = HBase::Admin.new(@configuration, @formatter) unless @admin
+ @admin
+end
+
+def table(table)
+ # Create new one each time
+ HBase::Table.new(@configuration, table, @formatter)
+end
+
+def create(table, *args)
+ admin().create(table, args)
+end
+
+def drop(table)
+ admin().drop(table)
+end
+
+def alter(table, args)
+ admin().alter(table, args)
+end
+
+# Administration
+
+def list
+ admin().list()
+end
+
+def describe(table)
+ admin().describe(table)
+end
+
+def enable(table)
+ admin().enable(table)
+end
+
+def disable(table)
+ admin().disable(table)
+end
+
+def enable_region(regionName)
+ admin().enable_region(regionName)
+end
+
+def disable_region(regionName)
+ admin().disable_region(regionName)
+end
+
+def exists(table)
+ admin().exists(table)
+end
+
+def truncate(table)
+ admin().truncate(table)
+end
+
+def close_region(regionName, server = nil)
+ admin().close_region(regionName, server)
+end
+
+# CRUD
+
+def get(table, row, args = {})
+ table(table).get(row, args)
+end
+
+def put(table, row, column, value, timestamp = nil)
+ table(table).put(row, column, value, timestamp)
+end
+
+def scan(table, args = {})
+ table(table).scan(args)
+end
+
+def delete(table, row, column,
+ timestamp = org.apache.hadoop.hbase.HConstants::LATEST_TIMESTAMP)
+ table(table).delete(row, column, timestamp)
+end
+
+def deleteall(table, row, column = nil,
+ timestamp = org.apache.hadoop.hbase.HConstants::LATEST_TIMESTAMP)
+ table(table).deleteall(row, column, timestamp)
+end
+
+def count(table, interval = 1000)
+ table(table).count(interval)
+end
+
+def flush(tableNameOrRegionName)
+ admin().flush(tableNameOrRegionName)
+end
+
+def compact(tableNameOrRegionName)
+ admin().compact(tableNameOrRegionName)
+end
+
+def major_compact(tableNameOrRegionName)
+ admin().major_compact(tableNameOrRegionName)
+end
+
+def split(tableNameOrRegionName)
+ admin().split(tableNameOrRegionName)
+end
+
+# Output a banner message that tells users where to go for help
+puts <<HERE
+HBase Shell; enter 'help<RETURN>' for list of supported commands.
+HERE
+version
+
+require "irb"
+
+module IRB
+ # Subclass of IRB so can intercept methods
+ class HIRB < Irb
+ def initialize
+ # This is ugly. Our 'help' method above provokes the following message
+ # on irb construction: 'irb: warn: can't alias help from irb_help.'
+ # Below, we reset the output so it's pointed at /dev/null during irb
+ # construction just so this message does not come out after we emit
+ # the banner. Other attempts at playing with the hash of methods
+ # down in IRB didn't seem to work. I think the worst thing that can
+ # happen is the shell exiting because of failed IRB construction with
+ # no error (though we're not blanking STDERR)
+ begin
+ f = File.open("/dev/null", "w")
+ $stdout = f
+ super
+ ensure
+ f.close()
+ $stdout = STDOUT
+ end
+ end
+
+ def output_value
+ # Suppress output if last_value is 'nil'
+ # Otherwise, when user types help, get ugly 'nil'
+ # after all output.
+ if @context.last_value != nil
+ super
+ end
+ end
+ end
+
+ def IRB.start(ap_path = nil)
+ $0 = File::basename(ap_path, ".rb") if ap_path
+
+ IRB.setup(ap_path)
+ @CONF[:IRB_NAME] = 'hbase'
+ @CONF[:AP_NAME] = 'hbase'
+
+ if @CONF[:SCRIPT]
+ hirb = HIRB.new(nil, @CONF[:SCRIPT])
+ else
+ hirb = HIRB.new
+ end
+
+ @CONF[:IRB_RC].call(hirb.context) if @CONF[:IRB_RC]
+ @CONF[:MAIN_CONTEXT] = hirb.context
+
+ catch(:IRB_EXIT) do
+ hirb.eval_input
+ end
+ end
+end
+
+IRB.start
diff --git a/bin/regionservers.sh b/bin/regionservers.sh
new file mode 100755
index 0000000..b88ad45
--- /dev/null
+++ b/bin/regionservers.sh
@@ -0,0 +1,74 @@
+#!/usr/bin/env bash
+#
+#/**
+# * Copyright 2007 The Apache Software Foundation
+# *
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements. See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership. The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License. You may obtain a copy of the License at
+# *
+# * http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+#
+# Run a shell command on all regionserver hosts.
+#
+# Environment Variables
+#
+# HBASE_REGIONSERVERS File naming remote hosts.
+# Default is ${HBASE_CONF_DIR}/regionservers
+# HBASE_CONF_DIR Alternate hbase conf dir. Default is ${HBASE_HOME}/conf.
+# HBASE_SLAVE_SLEEP Seconds to sleep between spawning remote commands.
+# HBASE_SSH_OPTS Options passed to ssh when running remote commands.
+#
+# Modelled after $HADOOP_HOME/bin/slaves.sh.
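+#
+# For example, a quick check across all regionserver hosts might look like
+# this (the remote command is illustrative):
+#
+#   bin/regionservers.sh --config ./conf uptime
+#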
+
+usage="Usage: regionservers [--config <hbase-confdir>] command..."
+
+# if no args specified, show usage
+if [ $# -le 0 ]; then
+ echo $usage
+ exit 1
+fi
+
+bin=`dirname "$0"`
+bin=`cd "$bin"; pwd`
+
+. "$bin"/hbase-config.sh
+
+# If the regionservers file is specified in the command line,
+# then it takes precedence over the definition in
+# hbase-env.sh. Save it here.
+HOSTLIST=$HBASE_REGIONSERVERS
+
+if [ -f "${HBASE_CONF_DIR}/hbase-env.sh" ]; then
+ . "${HBASE_CONF_DIR}/hbase-env.sh"
+fi
+
+if [ "$HOSTLIST" = "" ]; then
+ if [ "$HBASE_REGIONSERVERS" = "" ]; then
+ export HOSTLIST="${HBASE_CONF_DIR}/regionservers"
+ else
+ export HOSTLIST="${HBASE_REGIONSERVERS}"
+ fi
+fi
+
+for regionserver in `cat "$HOSTLIST"`; do
+ ssh $HBASE_SSH_OPTS $regionserver $"${@// /\\ }" \
+ 2>&1 | sed "s/^/$regionserver: /" &
+ if [ "$HBASE_SLAVE_SLEEP" != "" ]; then
+ sleep $HBASE_SLAVE_SLEEP
+ fi
+done
+
+wait
diff --git a/bin/rename_table.rb b/bin/rename_table.rb
new file mode 100644
index 0000000..df468aa
--- /dev/null
+++ b/bin/rename_table.rb
@@ -0,0 +1,154 @@
+# Script that renames a table in hbase. As written, it will not work for the
+# rare case where there is more than one region in the .META. table. It
+# updates the hbase .META. and moves the directories in the filesystem.
+# HBase MUST be shutdown when you run this script. On successful rename, it
+# DOES NOT remove the old directory from the filesystem, for fear that this
+# script could remove the original table on error.
+#
+# To see usage for this script, run:
+#
+# ${HBASE_HOME}/bin/hbase org.jruby.Main rename_table.rb
+#
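+# A typical invocation might look like the following (the table names here
+# are purely illustrative):
+#
+#   ${HBASE_HOME}/bin/hbase org.jruby.Main rename_table.rb oldtable newtable
+#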
+include Java
+import org.apache.hadoop.hbase.util.MetaUtils
+import org.apache.hadoop.hbase.util.FSUtils
+import org.apache.hadoop.hbase.util.Bytes
+import org.apache.hadoop.hbase.util.Writables
+import org.apache.hadoop.hbase.HConstants
+import org.apache.hadoop.hbase.HBaseConfiguration
+import org.apache.hadoop.hbase.HStoreKey
+import org.apache.hadoop.hbase.HRegionInfo
+import org.apache.hadoop.hbase.HTableDescriptor
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable
+import org.apache.hadoop.hbase.regionserver.HLogEdit
+import org.apache.hadoop.hbase.regionserver.HRegion
+import org.apache.hadoop.fs.Path
+import org.apache.hadoop.fs.FileSystem
+import org.apache.commons.logging.Log
+import org.apache.commons.logging.LogFactory
+import java.util.TreeMap
+
+# Name of this script
+NAME = "rename_table"
+
+# Print usage for this script
+def usage
+ puts 'Usage: %s.rb <OLD_NAME> <NEW_NAME>' % NAME
+ exit!
+end
+
+# Passed 'dir' exists and is a directory else exception
+def isDirExists(fs, dir)
+ raise IOError.new("Does not exit: " + dir.toString()) unless fs.exists(dir)
+ raise IOError.new("Not a directory: " + dir.toString()) unless fs.isDirectory(dir)
+end
+
+# Returns true if the region belongs to passed table
+def isTableRegion(tableName, hri)
+ return Bytes.equals(hri.getTableDesc().getName(), tableName)
+end
+
+# Create new HRI based off passed 'oldHRI'
+def createHRI(tableName, oldHRI)
+ htd = oldHRI.getTableDesc()
+ newHtd = HTableDescriptor.new(tableName)
+ for family in htd.getFamilies()
+ newHtd.addFamily(family)
+ end
+ return HRegionInfo.new(newHtd, oldHRI.getStartKey(), oldHRI.getEndKey(),
+ oldHRI.isSplit())
+end
+
+# Check arguments
+if ARGV.size != 2
+ usage
+end
+
+# Check good table names were passed.
+oldTableName = HTableDescriptor.isLegalTableName(ARGV[0].to_java_bytes)
+newTableName = HTableDescriptor.isLegalTableName(ARGV[1].to_java_bytes)
+
+# Get configuration to use.
+c = HBaseConfiguration.new()
+
+# Set hadoop filesystem configuration using the hbase.rootdir.
+# Otherwise, we'll always use localhost though the hbase.rootdir
+# might be pointing at hdfs location.
+c.set("fs.default.name", c.get(HConstants::HBASE_DIR))
+fs = FileSystem.get(c)
+
+# If the new table directory does not exist, create it. Keep going if it
+# already exists because we may be rerunning the script after an earlier
+# failure.
+rootdir = FSUtils.getRootDir(c)
+oldTableDir = Path.new(rootdir, Path.new(Bytes.toString(oldTableName)))
+isDirExists(fs, oldTableDir)
+newTableDir = Path.new(rootdir, Bytes.toString(newTableName))
+if !fs.exists(newTableDir)
+ fs.mkdirs(newTableDir)
+end
+
+# Get a logger and a metautils instance.
+LOG = LogFactory.getLog(NAME)
+utils = MetaUtils.new(c)
+
+# Start. Get all meta rows.
+begin
+ # Get list of all .META. regions that contain old table name
+ metas = utils.getMETARows(oldTableName)
+ index = 0
+ for meta in metas
+ # For each row we find, move its region from old to new table.
+ # Need to update the encoded name in the hri as we move.
+ # After move, delete old entry and create a new.
+ LOG.info("Scanning " + meta.getRegionNameAsString())
+ metaRegion = utils.getMetaRegion(meta)
+ scanner = metaRegion.getScanner(HConstants::COL_REGIONINFO_ARRAY, oldTableName,
+ HConstants::LATEST_TIMESTAMP, nil)
+ begin
+ key = HStoreKey.new()
+ value = TreeMap.new(Bytes.BYTES_COMPARATOR)
+ while scanner.next(key, value)
+ index = index + 1
+ keyStr = key.toString()
+ oldHRI = Writables.getHRegionInfo(value.get(HConstants::COL_REGIONINFO))
+ if !oldHRI
+ raise IOError.new(index.to_s + " HRegionInfo is null for " + keyStr)
+ end
+ unless isTableRegion(oldTableName, oldHRI)
+ # If here, we have moved past the regions of our table. Break.
+ break
+ end
+ oldRDir = Path.new(oldTableDir, Path.new(oldHRI.getEncodedName().to_s))
+ if !fs.exists(oldRDir)
+ LOG.warn(oldRDir.toString() + " does not exist -- region " +
+ oldHRI.getRegionNameAsString())
+ else
+ # Now make a new HRegionInfo to add to .META. for the new region.
+ newHRI = createHRI(newTableName, oldHRI)
+ newRDir = Path.new(newTableDir, Path.new(newHRI.getEncodedName().to_s))
+ # Move the region in filesystem
+ LOG.info("Renaming " + oldRDir.toString() + " as " + newRDir.toString())
+ fs.rename(oldRDir, newRDir)
+ # Removing old region from meta
+ LOG.info("Removing " + Bytes.toString(key.getRow()) + " from .META.")
+ metaRegion.deleteAll(key.getRow(), HConstants::LATEST_TIMESTAMP)
+ # Create 'new' region
+ newR = HRegion.new(rootdir, utils.getLog(), fs, c, newHRI, nil)
+ # Add new row. NOTE: Presumption is that only one .META. region. If not,
+ # need to do the work to figure proper region to add this new region to.
+ LOG.info("Adding to meta: " + newR.toString())
+ HRegion.addRegionToMETA(metaRegion, newR)
+ LOG.info("Done moving: " + Bytes.toString(key.getRow()))
+ end
+ # Need to clear value else we keep appending values.
+ value.clear()
+ end
+ ensure
+ scanner.close()
+ end
+ end
+ LOG.info("Renamed table -- manually delete " + oldTableDir.toString());
+ensure
+ utils.shutdown()
+end
diff --git a/bin/start-hbase.sh b/bin/start-hbase.sh
new file mode 100755
index 0000000..324fa64
--- /dev/null
+++ b/bin/start-hbase.sh
@@ -0,0 +1,45 @@
+#!/usr/bin/env bash
+#
+#/**
+# * Copyright 2007 The Apache Software Foundation
+# *
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements. See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership. The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License. You may obtain a copy of the License at
+# *
+# * http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+
+# Modelled after $HADOOP_HOME/bin/start-hbase.sh.
+
+# Start hadoop hbase daemons.
+# Run this on master node.
+usage="Usage: start-hbase.sh"
+
+bin=`dirname "$0"`
+bin=`cd "$bin"; pwd`
+
+. "$bin"/hbase-config.sh
+
+# start hbase daemons
+# TODO: PUT BACK !!! "${HADOOP_HOME}"/bin/hadoop dfsadmin -safemode wait
+errCode=$?
+if [ $errCode -ne 0 ]
+then
+ exit $errCode
+fi
+"$bin"/hbase-zookeeper.sh --config "${HBASE_CONF_DIR}" \
+ start zookeeper
+"$bin"/hbase-daemon.sh --config "${HBASE_CONF_DIR}" start master
+"$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" \
+ --hosts "${HBASE_REGIONSERVERS}" start regionserver
diff --git a/bin/stop-hbase.sh b/bin/stop-hbase.sh
new file mode 100755
index 0000000..071ae55
--- /dev/null
+++ b/bin/stop-hbase.sh
@@ -0,0 +1,34 @@
+#!/usr/bin/env bash
+#
+#/**
+# * Copyright 2007 The Apache Software Foundation
+# *
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements. See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership. The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License. You may obtain a copy of the License at
+# *
+# * http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+
+# Modelled after $HADOOP_HOME/bin/stop-hbase.sh.
+
+# Stop hadoop hbase daemons. Run this on master node.
+
+bin=`dirname "$0"`
+bin=`cd "$bin"; pwd`
+
+. "$bin"/hbase-config.sh
+
+"$bin"/hbase-daemon.sh --config "${HBASE_CONF_DIR}" stop master
+"$bin"/hbase-zookeeper.sh --config "${HBASE_CONF_DIR}" \
+ stop zookeeper
diff --git a/bin/zookeeper.sh b/bin/zookeeper.sh
new file mode 100755
index 0000000..583f2bd
--- /dev/null
+++ b/bin/zookeeper.sh
@@ -0,0 +1,62 @@
+#!/usr/bin/env bash
+#
+#/**
+# * Copyright 2009 The Apache Software Foundation
+# *
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements. See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership. The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License. You may obtain a copy of the License at
+# *
+# * http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+#
+# Run a shell command on the zookeeper host(s).
+#
+# Environment Variables
+#
+# HBASE_CONF_DIR Alternate hbase conf dir. Default is ${HBASE_HOME}/conf.
+# HBASE_MANAGES_ZK Whether hbase manages its own zookeeper. Default is true.
+# HBASE_SLAVE_SLEEP Seconds to sleep between spawning remote commands.
+# HBASE_SSH_OPTS Options passed to ssh when running remote commands.
+#
+# Modelled after $HADOOP_HOME/bin/slaves.sh.
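+#
+# For example, hbase-zookeeper.sh drives it roughly like this:
+#
+#   bin/zookeeper.sh --config "${HBASE_CONF_DIR}" cd "${HBASE_HOME}" \; \
+#     bin/hbase-daemon.sh --config "${HBASE_CONF_DIR}" start zookeeper
+#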
+
+usage="Usage: zookeeper [--config <hbase-confdir>] command..."
+
+# if no args specified, show usage
+if [ $# -le 0 ]; then
+ echo $usage
+ exit 1
+fi
+
+bin=`dirname "$0"`
+bin=`cd "$bin"; pwd`
+
+. "$bin"/hbase-config.sh
+
+if [ -f "${HBASE_CONF_DIR}/hbase-env.sh" ]; then
+ . "${HBASE_CONF_DIR}/hbase-env.sh"
+fi
+
+if [ "$HBASE_MANAGES_ZK" = "" ]; then
+ HBASE_MANAGES_ZK=true
+fi
+
+if [ "$HBASE_MANAGES_ZK" = "true" ]; then
+ ssh $HBASE_SSH_OPTS 127.0.0.1 $"${@// /\\ }" 2>&1 | sed "s/^/localhost: /" &
+ if [ "$HBASE_SLAVE_SLEEP" != "" ]; then
+ sleep $HBASE_SLAVE_SLEEP
+ fi
+fi
+
+wait
diff --git a/build.xml b/build.xml
new file mode 100644
index 0000000..7febacd
--- /dev/null
+++ b/build.xml
@@ -0,0 +1,466 @@
+<?xml version="1.0"?>
+
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<project name="hbase" default="jar">
+ <property name="version" value="0.20.0-dev-0.18.3"/>
+ <property name="Name" value="HBase"/>
+ <property name="final.name" value="hbase-${version}"/>
+ <property name="year" value="2008"/>
+
+ <!-- Load all the default properties, and any the user wants -->
+ <!-- to contribute (without having to type -D or edit this file -->
+ <property file="${user.home}/${name}.build.properties" />
+ <property file="${basedir}/build.properties" />
+
+ <property name="src.dir" location="${basedir}/src/java"/>
+ <property name="src.test" location="${basedir}/src/test"/>
+ <property name="src.testdata" location="${basedir}/src/testdata"/>
+ <property name="src.examples" location="${basedir}/src/examples"/>
+ <property name="src.webapps" location="${basedir}/src/webapps"/>
+
+ <property name="lib.dir" value="${basedir}/lib"/>
+ <property name="conf.dir" value="${basedir}/conf"/>
+ <property name="docs.dir" value="${basedir}/docs"/>
+ <property name="docs.src" value="${basedir}/src/docs"/>
+
+ <property name="test.output" value="no"/>
+ <property name="test.timeout" value="600000"/>
+
+ <property name="build.dir" location="${basedir}/build"/>
+ <property name="build.bin" location="${build.dir}/bin"/>
+ <property name="build.conf" location="${build.dir}/conf"/>
+ <property name="build.webapps" location="${build.dir}/webapps"/>
+ <property name="build.lib" location="${build.dir}/lib"/>
+ <property name="build.classes" location="${build.dir}/classes"/>
+ <property name="build.test" location="${build.dir}/test"/>
+ <property name="build.examples" location="${build.dir}/examples"/>
+ <property name="build.docs" value="${build.dir}/docs"/>
+ <property name="build.javadoc" value="${build.docs}/api"/>
+ <property name="build.encoding" value="ISO-8859-1"/>
+ <property name="build.src" value="${build.dir}/src"/>
+ <property name="generated.webapps.src" value="${build.src}"/>
+
+ <property name="test.build.dir" value="${build.dir}/test"/>
+ <property name="test.log.dir" value="${test.build.dir}/logs"/>
+ <property name="test.junit.output.format" value="plain"/>
+
+ <property name="dist.dir" value="${build.dir}/${final.name}"/>
+
+ <property name="javac.deprecation" value="off"/>
+ <property name="javac.debug" value="on"/>
+ <property name="javac.version" value="1.6"/>
+
+ <property name="clover.db.dir" location="${build.dir}/test/clover/db"/>
+ <property name="clover.report.dir" location="${build.dir}/test/clover/reports"/>
+
+
+ <property name="javadoc.link.java"
+ value="http://java.sun.com/javase/6/docs/api/"/>
+ <property name="javadoc.packages" value="org.apache.hadoop.hbase.*"/>
+ <property name="jarfile" value="${build.dir}/${final.name}.jar" />
+
+ <property name="clover.jar" location="${clover.home}/lib/clover.jar"/>
+ <available property="clover.present" file="${clover.jar}"/>
+
+ <!-- check if clover reports should be generated -->
+ <condition property="clover.enabled">
+ <and>
+ <isset property="run.clover"/>
+ <isset property="clover.present"/>
+ </and>
+ </condition>
+
+ <!--We need to have the hadoop jars ride in front of the hbase classes or we
+ get the below exceptions:
+
+ [junit] java.io.FileNotFoundException: file:/Users/stack/Documents/checkouts/hbase/trunk/build/webapps/dfs
+
+ When we move off 0.16.0 hadoop, fix HttpStatusServer
+ -->
+ <fileset id="lib.jars" dir="${basedir}" includes="lib/*.jar"/>
+ <path id="classpath">
+ <fileset refid="lib.jars"/>
+ <fileset dir="${lib.dir}/jetty-ext/">
+ <include name="*jar" />
+ </fileset>
+ <pathelement location="${build.classes}"/>
+ <pathelement location="${conf.dir}"/>
+ </path>
+
+ <target name="init">
+ <mkdir dir="${build.dir}"/>
+ <mkdir dir="${build.classes}"/>
+ <mkdir dir="${build.test}"/>
+ <mkdir dir="${build.examples}"/>
+
+ <!--Copy webapps over to build dir. Exclude jsp and generated-src java
+ classes -->
+ <mkdir dir="${build.webapps}"/>
+ <copy todir="${build.webapps}">
+ <fileset dir="${src.webapps}">
+ <exclude name="**/*.jsp" />
+ <exclude name="**/.*" />
+ <exclude name="**/*~" />
+ </fileset>
+ </copy>
+ <!--Copy bin, lib, and conf. too-->
+ <mkdir dir="${build.lib}"/>
+ <copy todir="${build.lib}">
+ <fileset dir="${lib.dir}" />
+ </copy>
+ <mkdir dir="${build.conf}"/>
+ <copy todir="${build.conf}">
+ <fileset dir="${basedir}/conf" />
+ </copy>
+ <mkdir dir="${build.bin}"/>
+ <copy todir="${build.bin}">
+ <fileset dir="${basedir}/bin" />
+ </copy>
+ <chmod perm="ugo+x" type="file">
+ <fileset dir="${build.bin}" />
+ </chmod>
+ <exec executable="sh">
+ <arg line="src/saveVersion.sh ${version}"/>
+ </exec>
+ </target>
+
+ <target name="compile" depends="clover,init,jspc">
+    <!--Compile what's under src and the generated java classes made from jsp-->
+ <javac
+ encoding="${build.encoding}"
+ srcdir="${src.dir};${build.src}"
+ includes="**/*.java"
+ destdir="${build.classes}"
+ debug="${javac.debug}"
+ target="${javac.version}"
+ source="${javac.version}"
+ deprecation="${javac.deprecation}">
+ <classpath refid="classpath"/>
+ </javac>
+ </target>
+
+ <target name="jar" depends="compile" description="Build jar">
+ <!--Copy over any properties under src-->
+ <copy todir="${build.classes}">
+ <fileset dir="${src.dir}">
+ <include name="**/*.properties" />
+ </fileset>
+ </copy>
+ <jar jarfile="${jarfile}"
+ basedir="${build.classes}" >
+ <fileset file="${basedir}/conf/hbase-default.xml"/>
+ <zipfileset dir="${build.webapps}" prefix="webapps"/>
+ <manifest>
+ <attribute name="Main-Class" value="org/apache/hadoop/hbase/mapred/Driver" />
+ </manifest>
+ </jar>
+ </target>
+
+ <!--Conditionally generate the jsp java pages.
+ We do it once per ant invocation. See hbase-593.
+ -->
+ <target name="jspc" depends="init" unless="jspc.not.required">
+ <path id="jspc.classpath">
+ <fileset dir="${basedir}/lib/">
+ <include name="servlet-api*jar" />
+ <include name="commons-logging*jar" />
+ <include name="jetty-*jar" />
+ <include name="jetty-ext/*jar" />
+ </fileset>
+ </path>
+ <taskdef classname="org.apache.jasper.JspC" name="jspcompiler" >
+ <classpath refid="jspc.classpath"/>
+ </taskdef>
+ <mkdir dir="${build.webapps}/master/WEB-INF"/>
+ <jspcompiler
+ uriroot="${src.webapps}/master"
+ outputdir="${generated.webapps.src}"
+ package="org.apache.hadoop.hbase.generated.master"
+ webxml="${build.webapps}/master/WEB-INF/web.xml">
+ </jspcompiler>
+ <mkdir dir="${build.webapps}/regionserver/WEB-INF"/>
+ <jspcompiler
+ uriroot="${src.webapps}/regionserver"
+ outputdir="${generated.webapps.src}"
+ package="org.apache.hadoop.hbase.generated.regionserver"
+ webxml="${build.webapps}/regionserver/WEB-INF/web.xml">
+ </jspcompiler>
+ <property name="jspc.not.required" value="true" />
+    <echo message="Setting the jspc.not.required property. JSP pages are generated once per ant session only." />
+ </target>
+
+ <target name="clover" depends="clover.setup, clover.info" description="Instrument the Unit tests using Clover. To use, specify -Dclover.home=<base of clover installation> -Drun.clover=true on the command line."/>
+
+ <target name="clover.setup" if="clover.enabled">
+ <taskdef resource="cloverlib.xml" classpath="${clover.jar}"/>
+ <mkdir dir="${clover.db.dir}"/>
+ <clover-setup initString="${clover.db.dir}/hbase_coverage.db">
+ <fileset dir="src" includes="java/**/*"/>
+ </clover-setup>
+ </target>
+
+ <target name="clover.info" unless="clover.present">
+ <echo>
+ Clover not found. Code coverage reports disabled.
+ </echo>
+ </target>
+
+ <target name="clover.check">
+ <fail unless="clover.present">
+ ##################################################################
+ Clover not found.
+ Please specify -Dclover.home=<base of clover installation>
+ on the command line.
+ ##################################################################
+ </fail>
+ </target>
+
+ <target name="generate-clover-reports" depends="clover.check, clover">
+ <mkdir dir="${clover.report.dir}"/>
+ <clover-report>
+ <current outfile="${clover.report.dir}" title="${final.name}">
+ <format type="html"/>
+ </current>
+ </clover-report>
+ <clover-report>
+ <current outfile="${clover.report.dir}/clover.xml" title="${final.name}">
+ <format type="xml"/>
+ </current>
+ </clover-report>
+ </target>
+
+ <!-- ================================================================== -->
+ <!-- Package -->
+ <!-- ================================================================== -->
+ <target name="package" depends="jar,javadoc,compile-test"
+ description="Build distribution">
+ <mkdir dir="${dist.dir}"/>
+ <copy todir="${dist.dir}" includeEmptyDirs="false" flatten="true">
+ <fileset dir="${build.dir}">
+ <include name="${final.name}.jar" />
+ <include name="${final.name}-test.jar" />
+ </fileset>
+ </copy>
+ <mkdir dir="${dist.dir}/webapps"/>
+ <copy todir="${dist.dir}/webapps">
+ <fileset dir="${build.webapps}" />
+ </copy>
+ <mkdir dir="${dist.dir}/lib"/>
+ <copy todir="${dist.dir}/lib">
+ <fileset dir="${build.lib}" />
+ </copy>
+ <mkdir dir="${dist.dir}/conf" />
+ <copy todir="${dist.dir}/conf">
+ <fileset dir="${build.conf}" />
+ </copy>
+ <mkdir dir="${dist.dir}/bin" />
+ <copy todir="${dist.dir}/bin">
+ <fileset dir="${build.bin}" />
+ </copy>
+ <chmod perm="ugo+x" type="file">
+ <fileset dir="${dist.dir}/bin" />
+ </chmod>
+ <mkdir dir="${dist.dir}/docs" />
+ <copy todir="${dist.dir}/docs">
+ <fileset dir="${docs.dir}" />
+ <fileset dir="${build.docs}"/>
+ </copy>
+ <copy todir="${dist.dir}">
+ <fileset dir=".">
+ <include name="*.txt" />
+ <include name="build.xml" />
+ </fileset>
+ </copy>
+ <mkdir dir="${dist.dir}/src" />
+ <copy todir="${dist.dir}/src" includeEmptyDirs="true">
+ <fileset dir="src" excludes="**/*.template **/docs/build/**/*"/>
+ </copy>
+ </target>
+
+ <!-- ================================================================== -->
+ <!-- Make release tarball -->
+ <!-- ================================================================== -->
+ <macrodef name="macro_tar" description="Worker Macro for tar">
+ <attribute name="param.destfile"/>
+ <element name="param.listofitems"/>
+ <sequential>
+ <tar compression="gzip" longfile="gnu"
+ destfile="@{param.destfile}">
+ <param.listofitems/>
+ </tar>
+ </sequential>
+ </macrodef>
+ <target name="tar" depends="package" description="Make release tarball">
+ <macro_tar param.destfile="${build.dir}/${final.name}.tar.gz">
+ <param.listofitems>
+ <tarfileset dir="${build.dir}" mode="664">
+ <exclude name="${final.name}/bin/*" />
+ <include name="${final.name}/**" />
+ </tarfileset>
+ <tarfileset dir="${build.dir}" mode="755">
+ <include name="${final.name}/bin/*" />
+ </tarfileset>
+ </param.listofitems>
+ </macro_tar>
+ </target>
+
+ <target name="binary" depends="package" description="Make tarball without source and documentation">
+ <macro_tar param.destfile="${build.dir}/${final.name}-bin.tar.gz">
+ <param.listofitems>
+ <tarfileset dir="${build.dir}" mode="664">
+ <exclude name="${final.name}/bin/*" />
+ <exclude name="${final.name}/src/**" />
+ <exclude name="${final.name}/docs/**" />
+ <include name="${final.name}/**" />
+ </tarfileset>
+ <tarfileset dir="${build.dir}" mode="755">
+ <include name="${final.name}/bin/*" />
+ </tarfileset>
+ </param.listofitems>
+ </macro_tar>
+ </target>
+
+ <!-- ================================================================== -->
+ <!-- Doc -->
+ <!-- ================================================================== -->
+ <target name="docs" depends="forrest.check" description="Generate forrest-based documentation. To use, specify -Dforrest.home=<base of Apache Forrest installation> on the command line." if="forrest.home">
+ <exec dir="${docs.src}" executable="${forrest.home}/bin/forrest" failonerror="true" >
+ <env key="JAVA_HOME" value="${java5.home}"/>
+ </exec>
+ <copy todir="${docs.dir}">
+ <fileset dir="${docs.src}/build/site/" />
+ </copy>
+ <style basedir="${conf.dir}" destdir="${docs.dir}"
+ includes="hadoop-default.xml" style="conf/configuration.xsl"/>
+ </target>
+
+ <target name="forrest.check" unless="forrest.home" depends="java5.check">
+ <fail message="'forrest.home' is not defined. Please pass -Dforrest.home=<base of Apache Forrest installation> to Ant on the command-line." />
+ </target>
+
+ <target name="java5.check" unless="java5.home">
+ <fail message="'java5.home' is not defined. Forrest requires Java 5. Please pass -Djava5.home=<base of Java 5 distribution> to Ant on the command-line." />
+ </target>
+
+ <!-- Javadoc -->
+ <target name="javadoc" description="Generate javadoc">
+ <mkdir dir="${build.javadoc}"/>
+ <javadoc
+ overview="${src.dir}/overview.html"
+ packagenames="org.apache.hadoop.hbase.*"
+ destdir="${build.javadoc}"
+ author="true"
+ version="true"
+ use="true"
+ windowtitle="${Name} ${version} API"
+ doctitle="${Name} ${version} API"
+ bottom="Copyright &copy; ${year} The Apache Software Foundation"
+ >
+ <packageset dir="${src.dir}">
+ <include name="org/apache/**"/>
+ <exclude name="org/onelab/**"/>
+ </packageset>
+ <link href="${javadoc.link.java}"/>
+ <classpath >
+ <path refid="classpath" />
+ <pathelement path="${java.class.path}"/>
+ </classpath>
+ </javadoc>
+ </target>
+
+ <!-- ================================================================== -->
+ <!-- Run unit tests -->
+ <!-- ================================================================== -->
+ <path id="test.classpath">
+ <!-- ============ * * * * * N O T E * * * * * ============
+ ${src.test} *must* come before rest of class path. Otherwise
+ the test hbase-site.xml will not be found.
+ ============ * * * * * N O T E * * * * * ============ -->
+ <pathelement location="${src.test}"/>
+ <pathelement location="${build.test}" />
+ <path refid="classpath"/>
+ <pathelement location="${build.dir}"/>
+ <pathelement path="${clover.jar}"/>
+ </path>
+
+  <!--'compile-test' used to depend on 'compile' but removed it. Hudson doesn't like
+    redoing init and jspc at this stage of the game; i.e. the prereqs
+    for compile. TODO: Investigate why. For now, test will fail
+    if not preceded by a manual 'jar' or 'compile' invocation -->
+ <target name="compile-test" depends="compile" description="Build test jar">
+ <javac encoding="${build.encoding}"
+ srcdir="${src.test}"
+ includes="**/*.java"
+ destdir="${build.test}"
+ debug="${javac.debug}"
+ target="${javac.version}"
+ source="${javac.version}"
+ deprecation="${javac.deprecation}">
+ <classpath refid="test.classpath"/>
+ </javac>
+ <jar jarfile="${build.dir}/${final.name}-test.jar" >
+ <fileset dir="${build.test}" includes="org/**" />
+ <fileset dir="${build.classes}" />
+ <fileset dir="${src.test}" includes="**/*.properties" />
+ <manifest>
+ <attribute name="Main-Class"
+ value="org/apache/hadoop/hbase/PerformanceEvaluation"/>
+ </manifest>
+ </jar>
+ </target>
+
+ <target name="test" depends="compile-test"
+ description="Build test jar and run tests">
+ <delete dir="${test.log.dir}"/>
+ <mkdir dir="${test.log.dir}"/>
+ <junit
+ printsummary="yes" showoutput="${test.output}"
+ haltonfailure="no" fork="yes" maxmemory="512m"
+ errorProperty="tests.failed" failureProperty="tests.failed"
+ timeout="${test.timeout}">
+
+ <sysproperty key="test.build.data" value="${build.test}/data"/>
+ <sysproperty key="build.test" value="${build.test}"/>
+ <sysproperty key="src.testdata" value="${src.testdata}"/>
+ <sysproperty key="contrib.name" value="${name}"/>
+
+ <sysproperty key="user.dir" value="${build.test}/data"/>
+ <sysproperty key="fs.default.name" value="${fs.default.name}"/>
+ <sysproperty key="hadoop.test.localoutputfile" value="${hadoop.test.localoutputfile}"/>
+ <sysproperty key="test.log.dir" value="${hadoop.log.dir}"/>
+ <classpath refid="test.classpath"/>
+ <formatter type="${test.junit.output.format}" />
+ <batchtest todir="${build.test}" unless="testcase">
+ <fileset dir="${src.test}"
+ includes="**/Test*.java" excludes="**/${test.exclude}.java" />
+ </batchtest>
+ <batchtest todir="${build.test}" if="testcase">
+ <fileset dir="${src.test}" includes="**/${testcase}.java"/>
+ </batchtest>
+ </junit>
+ <fail if="tests.failed">Tests failed!</fail>
+ </target>
+
+ <!-- ================================================================== -->
+ <!-- Clean. Delete the build files, and their directories -->
+ <!-- ================================================================== -->
+ <target name="clean" description="Clean all old builds">
+ <delete dir="${build.dir}"/>
+ </target>
+</project>
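For orientation, a few typical invocations of the build file above. This is only a sketch: it assumes Ant and a 1.6 JDK are on the PATH, the test class name is illustrative, and the clover.home and forrest.home paths used by the optional targets are hypothetical local installs.

  # Compile sources and build the hbase jar
  ant clean jar

  # Build the test jar and run the unit tests; -Dtestcase narrows the run to one class
  ant test
  ant -Dtestcase=TestSomething test

  # Package a full release tarball, or a binary-only tarball (no src/ or docs/)
  ant tar
  ant binary

  # Optional: Clover-instrumented build plus HTML and XML coverage reports
  ant -Dclover.home=/opt/clover -Drun.clover=true generate-clover-reports

  # Optional: Forrest-based documentation (needs Apache Forrest and a Java 5 home)
  ant -Dforrest.home=/opt/forrest -Djava5.home=/opt/jdk1.5 docs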
diff --git a/conf/hadoop-metrics.properties b/conf/hadoop-metrics.properties
new file mode 100644
index 0000000..aeadbea
--- /dev/null
+++ b/conf/hadoop-metrics.properties
@@ -0,0 +1,44 @@
+# See http://wiki.apache.org/hadoop/GangliaMetrics
+# And, yes, this file is named hadoop-metrics.properties rather than
+# hbase-metrics.properties because we're leveraging the hadoop metrics
+# package and hadoop-metrics.properties is a hardcoded name, at least
+# for the moment.
+
+# Configuration of the "hbase" context for null
+hbase.class=org.apache.hadoop.metrics.spi.NullContext
+
+# Configuration of the "hbase" context for file
+# hbase.class=org.apache.hadoop.hbase.metrics.file.TimeStampingFileContext
+# hbase.period=10
+# hbase.fileName=/tmp/metrics_hbase.log
+
+# Configuration of the "hbase" context for ganglia
+# hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext
+# hbase.period=10
+# hbase.servers=GMETADHOST_IP:8649
+
+# Configuration of the "jvm" context for null
+jvm.class=org.apache.hadoop.metrics.spi.NullContext
+
+# Configuration of the "jvm" context for file
+# jvm.class=org.apache.hadoop.hbase.metrics.file.TimeStampingFileContext
+# jvm.period=10
+# jvm.fileName=/tmp/metrics_jvm.log
+
+# Configuration of the "jvm" context for ganglia
+# jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext
+# jvm.period=10
+# jvm.servers=GMETADHOST_IP:8649
+
+# Configuration of the "rpc" context for null
+rpc.class=org.apache.hadoop.metrics.spi.NullContext
+
+# Configuration of the "rpc" context for file
+# rpc.class=org.apache.hadoop.hbase.metrics.file.TimeStampingFileContext
+# rpc.period=10
+# rpc.fileName=/tmp/metrics_rpc.log
+
+# Configuration of the "rpc" context for ganglia
+# rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext
+# rpc.period=10
+# rpc.servers=GMETADHOST_IP:8649
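To emit metrics instead of discarding them with the NullContext, each context above is pointed at one of the plugins shown in the commented examples. A minimal sketch for the ganglia case, assuming a hypothetical gmetad endpoint of 10.0.0.5:8649, HBASE_HOME standing in for the install directory, and the edit being applied on every master and region server host (the daemons read this file at startup, so they need a restart afterwards):

  cd "$HBASE_HOME/conf"
  # Swap the NullContext for the ganglia plugin in all three contexts
  sed -i.bak \
    -e 's|^hbase.class=.*|hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext|' \
    -e 's|^jvm.class=.*|jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext|' \
    -e 's|^rpc.class=.*|rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext|' \
    hadoop-metrics.properties
  # Add the polling period and gmetad endpoint for each context
  printf '%s\n' \
    'hbase.period=10' 'hbase.servers=10.0.0.5:8649' \
    'jvm.period=10' 'jvm.servers=10.0.0.5:8649' \
    'rpc.period=10' 'rpc.servers=10.0.0.5:8649' >> hadoop-metrics.properties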
diff --git a/conf/hbase-default.xml b/conf/hbase-default.xml
new file mode 100644
index 0000000..20f4e99
--- /dev/null
+++ b/conf/hbase-default.xml
@@ -0,0 +1,409 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<configuration>
+ <property>
+ <name>hbase.rootdir</name>
+ <value>file:///tmp/hbase-${user.name}/hbase</value>
+ <description>The directory shared by region servers.
+ Should be fully-qualified to include the filesystem to use.
+ E.g: hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR
+ </description>
+ </property>
+ <property>
+ <name>hbase.master.hostname</name>
+ <value>local</value>
+ <description>The host that the HBase master runs at.
+ A value of 'local' runs the master and regionserver in a single process.
+ </description>
+ </property>
+ <property>
+ <name>hbase.master.port</name>
+ <value>60000</value>
+    <description>The port the master should bind to.</description>
+ </property>
+ <property>
+ <name>hbase.tmp.dir</name>
+ <value>/tmp/hbase-${user.name}</value>
+ <description>Temporary directory on the local filesystem.</description>
+ </property>
+ <property>
+ <name>hbase.master.info.port</name>
+ <value>60010</value>
+    <description>The port for the hbase master web UI.
+ Set to -1 if you do not want the info server to run.
+ </description>
+ </property>
+ <property>
+ <name>hbase.master.info.bindAddress</name>
+ <value>0.0.0.0</value>
+ <description>The address for the hbase master web UI
+ </description>
+ </property>
+ <property>
+ <name>hbase.client.write.buffer</name>
+ <value>2097152</value>
+ <description>Size of the write buffer in bytes. A bigger buffer takes more
+    memory -- on both the client and server side, since the server instantiates
+    the passed write buffer to process it -- but reduces the number of RPCs.
+    For an estimate of server-side memory used, evaluate
+ hbase.client.write.buffer * hbase.regionserver.handler.count
+ </description>
+ </property>
+ <property>
+ <name>hbase.master.meta.thread.rescanfrequency</name>
+ <value>60000</value>
+ <description>How long the HMaster sleeps (in milliseconds) between scans of
+ the root and meta tables.
+ </description>
+ </property>
+ <property>
+ <name>hbase.master.lease.period</name>
+ <value>120000</value>
+ <description>HMaster server lease period in milliseconds. Default is
+    120 seconds. Region servers must report in within this period or else
+    they are considered dead. On a loaded cluster, you may need to increase
+    this period.</description>
+ </property>
+ <property>
+ <name>hbase.regionserver</name>
+ <value>0.0.0.0:60020</value>
+    <description>The host and port an HBase region server runs at.
+ </description>
+ </property>
+ <property>
+ <name>hbase.regionserver.dns.interface</name>
+ <value>default</value>
+ <description>Name of the network interface which a regionserver
+    should use to determine its "real" IP address. This lookup
+ prevents strings like "localhost" and "127.0.0.1" from being
+ reported back to the master.
+ </description>
+ </property>
+ <property>
+ <name>hbase.regionserver.info.port</name>
+ <value>60030</value>
+    <description>The port for the hbase regionserver web UI.
+ Set to -1 if you do not want the info server to run.
+ </description>
+ </property>
+ <property>
+ <name>hbase.regionserver.info.port.auto</name>
+ <value>false</value>
+ <description>Info server auto port bind. Enables automatic port
+ search if hbase.regionserver.info.port is already in use.
+ Useful for testing, turned off by default.
+ </description>
+ </property>
+ <property>
+ <name>hbase.regionserver.info.bindAddress</name>
+ <value>0.0.0.0</value>
+ <description>The address for the hbase regionserver web UI
+ </description>
+ </property>
+ <property>
+ <name>hbase.regionserver.class</name>
+ <value>org.apache.hadoop.hbase.ipc.HRegionInterface</value>
+ <description>An interface that is assignable to HRegionInterface. Used in HClient for
+    opening a proxy to a remote region server.
+ </description>
+ </property>
+ <property>
+ <name>hbase.client.pause</name>
+ <value>2000</value>
+    <description>General client pause value. Used mostly as the time to wait
+    before retrying a failed get, region lookup, etc.</description>
+ </property>
+ <property>
+ <name>hbase.client.retries.number</name>
+ <value>10</value>
+ <description>Maximum retries. Used as maximum for all retryable
+ operations such as fetching of the root region from root region
+ server, getting a cell's value, starting a row update, etc.
+ Default: 10.
+ </description>
+ </property>
+ <property>
+ <name>hbase.client.scanner.caching</name>
+ <value>1</value>
+ <description>Number of rows that will be fetched when calling next
+ on a scanner if it is not served from memory. Higher caching values
+    will enable faster scanners but will eat up more memory, and some
+    calls of next may take longer when the cache is empty.
+ </description>
+ </property>
+ <property>
+ <name>hbase.regionserver.lease.period</name>
+ <value>60000</value>
+ <description>HRegion server lease period in milliseconds. Default is
+ 60 seconds. Clients must report in within this period else they are
+ considered dead.</description>
+ </property>
+ <property>
+ <name>hbase.regionserver.handler.count</name>
+ <value>10</value>
+    <description>Count of RPC Server instances spun up on RegionServers.
+    The same property is used by the HMaster for the count of master handlers.
+ Default is 10.
+ </description>
+ </property>
+ <property>
+ <name>hbase.regionserver.msginterval</name>
+ <value>3000</value>
+ <description>Interval between messages from the RegionServer to HMaster
+ in milliseconds. Default is 3 seconds.
+ </description>
+ </property>
+ <property>
+ <name>hbase.regionserver.maxlogentries</name>
+ <value>100000</value>
+ <description>Rotate the HRegion HLogs when count of entries exceeds this
+ value. Default: 100,000. Value is checked by a thread that runs every
+ hbase.server.thread.wakefrequency.
+ </description>
+ </property>
+ <property>
+ <name>hbase.regionserver.flushlogentries</name>
+ <value>100</value>
+ <description>Sync the HLog to the HDFS when it has accumulated this many
+ entries. Default 100. Value is checked on every HLog.append
+ </description>
+ </property>
+ <property>
+ <name>hbase.regionserver.optionallogflushinterval</name>
+ <value>10000</value>
+ <description>Sync the HLog to the HDFS after this interval if it has not
+ accumulated enough entries to trigger a sync. Default 10 seconds. Units:
+ milliseconds.
+ </description>
+ </property>
+ <property>
+ <name>hbase.regionserver.thread.splitcompactcheckfrequency</name>
+ <value>20000</value>
+ <description>How often a region server runs the split/compaction check.
+ </description>
+ </property>
+ <property>
+ <name>hbase.regionserver.nbreservationblocks</name>
+ <value>4</value>
+    <description>The number of reservation blocks which are used to keep
+    region servers from becoming unstable after an OOME.
+ </description>
+ </property>
+ <property>
+ <name>hbase.regionserver.dns.interface</name>
+ <value>default</value>
+ <description>The name of the Network Interface from which a region server
+ should report its IP address.
+ </description>
+ </property>
+ <property>
+ <name>hbase.regionserver.dns.nameserver</name>
+ <value>default</value>
+ <description>The host name or IP address of the name server (DNS)
+ which a region server should use to determine the host name used by the
+ master for communication and display purposes.
+ </description>
+ </property>
+ <property>
+ <name>hbase.regionserver.globalMemcache.upperLimit</name>
+ <value>0.4</value>
+ <description>Maximum size of all memcaches in a region server before new
+ updates are blocked and flushes are forced. Defaults to 40% of heap.
+ </description>
+ </property>
+ <property>
+ <name>hbase.regionserver.globalMemcache.lowerLimit</name>
+ <value>0.25</value>
+    <description>When memcaches are being forced to flush to make room in
+    memory, keep flushing until we hit this mark. Defaults to 25% of heap.
+    Setting this value equal to hbase.regionserver.globalMemcache.upperLimit
+    causes the minimum possible flushing to occur when updates are blocked
+    due to memcache limiting.
+ </description>
+ </property>
+ <property>
+ <name>hbase.hbasemaster.maxregionopen</name>
+ <value>120000</value>
+    <description>Period to wait for a region to open. If the region server
+    takes longer than this interval, the region is assigned to a new region server.
+ </description>
+ </property>
+ <property>
+ <name>hbase.regions.percheckin</name>
+ <value>10</value>
+ <description>Maximum number of regions that can be assigned in a single go
+ to a region server.
+ </description>
+ </property>
+ <property>
+ <name>hbase.server.thread.wakefrequency</name>
+ <value>10000</value>
+ <description>Time to sleep in between searches for work (in milliseconds).
+ Used as sleep interval by service threads such as META scanner and log roller.
+ </description>
+ </property>
+ <property>
+ <name>hbase.hregion.memcache.flush.size</name>
+ <value>67108864</value>
+ <description>
+ A HRegion memcache will be flushed to disk if size of the memcache
+ exceeds this number of bytes. Value is checked by a thread that runs
+ every hbase.server.thread.wakefrequency.
+ </description>
+ </property>
+ <property>
+ <name>hbase.hregion.memcache.block.multiplier</name>
+ <value>2</value>
+ <description>
+    Block updates if the memcache reaches hbase.hregion.memcache.block.multiplier
+    times hbase.hregion.memcache.flush.size bytes. Useful for preventing
+    runaway memcache growth during spikes in update traffic. Without an
+    upper bound, the memcache fills such that when it flushes, the
+    resultant flush files take a long time to compact or split, or,
+    worse, we OOME.
+ </description>
+ </property>
+ <property>
+ <name>hbase.hregion.max.filesize</name>
+ <value>268435456</value>
+ <description>
+    Maximum HStoreFile size. If any one of a column family's HStoreFiles has
+ grown to exceed this value, the hosting HRegion is split in two.
+ Default: 256M.
+ </description>
+ </property>
+ <property>
+ <name>hbase.hstore.compactionThreshold</name>
+ <value>3</value>
+ <description>
+ If more than this number of HStoreFiles in any one HStore
+ (one HStoreFile is written per flush of memcache) then a compaction
+    is run to rewrite all HStoreFiles as one. Larger numbers
+ put off compaction but when it runs, it takes longer to complete.
+ During a compaction, updates cannot be flushed to disk. Long
+ compactions require memory sufficient to carry the logging of
+ all updates across the duration of the compaction.
+
+    If too large, clients time out during compaction.
+ </description>
+ </property>
+ <property>
+ <name>hbase.hstore.compaction.max</name>
+ <value>10</value>
+ <description>Max number of HStoreFiles to compact per 'minor' compaction.
+ </description>
+ </property>
+ <property>
+ <name>hbase.hregion.majorcompaction</name>
+ <value>86400000</value>
+    <description>The time (in milliseconds) between 'major' compactions of all
+ HStoreFiles in a region. Default: 1 day.
+ </description>
+ </property>
+ <property>
+ <name>hbase.regions.slop</name>
+ <value>0.1</value>
+    <description>Rebalance if a region server has more than average + (average * slop) regions.
+ Default is 10% slop.
+ </description>
+ </property>
+ <property>
+ <name>hfile.min.blocksize.size</name>
+ <value>65536</value>
+ <description>Minimum store file block size. The smaller you make this, the
+    bigger your index and the less data you fetch on a random access. Set the size
+    down if you have small cells and want faster random access to individual cells.
+ </description>
+ </property>
+ <property>
+ <name>hbase.hstore.blockCache.blockSize</name>
+ <value>16384</value>
+ <description>The size of each block in the block cache.
+ Enable blockcaching on a per column family basis; see the BLOCKCACHE setting
+    in HColumnDescriptor. Blocks are kept in a java Soft Reference cache, so they
+    are let go when memory pressure is high. Block caching is not enabled by default.
+ </description>
+ </property>
+ <property>
+ <name>hbase.hash.type</name>
+ <value>murmur</value>
+ <description>The hashing algorithm for use in HashFunction. Two values are
+ supported now: murmur (MurmurHash) and jenkins (JenkinsHash).
+ </description>
+ </property>
+ <property>
+ <name>zookeeper.session.timeout</name>
+ <value>10000</value>
+ <description>ZooKeeper session timeout. This option is not used by HBase
+    directly; it is for the internals of ZooKeeper. HBase merely passes it in
+    whenever a connection is established to ZooKeeper. It is used by ZooKeeper
+    for heartbeats. In milliseconds.
+ </description>
+ </property>
+ <property>
+ <name>zookeeper.retries</name>
+ <value>5</value>
+ <description>How many times to retry connections to ZooKeeper. Used for
+ reading/writing root region location, checking/writing out of safe mode.
+ Used together with ${zookeeper.pause} in an exponential backoff fashion
+ when making queries to ZooKeeper.
+ </description>
+ </property>
+ <property>
+ <name>zookeeper.pause</name>
+ <value>2000</value>
+ <description>Sleep time between retries to ZooKeeper. In milliseconds. Used
+ together with ${zookeeper.retries} in an exponential backoff fashion when
+ making queries to ZooKeeper.
+ </description>
+ </property>
+ <property>
+ <name>zookeeper.znode.parent</name>
+ <value>/hbase</value>
+ <description>Root ZNode for HBase in ZooKeeper. All of HBase's ZooKeeper
+ files that are configured with a relative path will go under this node.
+    By default, all of HBase's ZooKeeper file paths are configured with a
+ relative path, so they will all go under this directory unless changed.
+ </description>
+ </property>
+ <property>
+ <name>zookeeper.znode.rootserver</name>
+ <value>root-region-server</value>
+ <description>Path to ZNode holding root region location. This is written by
+ the master and read by clients and region servers. If a relative path is
+ given, the parent folder will be ${zookeeper.znode.parent}. By default,
+ this means the root location is stored at /hbase/root-region-server.
+ </description>
+ </property>
+ <property>
+ <name>zookeeper.znode.safemode</name>
+ <value>safe-mode</value>
+ <description>Path to ephemeral ZNode signifying cluster is out of safe mode.
+ This is created by the master when scanning is done. Clients wait for this
+ node before querying the cluster. If a relative path is given, the parent
+ folder will be ${zookeeper.znode.parent}. By default, this means the safe
+ mode flag is stored at /hbase/safe-mode.
+ </description>
+ </property>
+</configuration>
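The defaults above ride inside the hbase jar (the build bundles conf/hbase-default.xml into it); per-site overrides go in conf/hbase-site.xml, and any property left out falls back to its default. A minimal sketch of a site file for a distributed setup, with hypothetical HDFS and master hostnames and HBASE_HOME standing in for the install directory:

  # Write a minimal conf/hbase-site.xml overriding just two defaults
  printf '%s\n' \
    '<?xml version="1.0"?>' \
    '<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>' \
    '<configuration>' \
    '  <property>' \
    '    <name>hbase.rootdir</name>' \
    '    <value>hdfs://namenode.example.com:9000/hbase</value>' \
    '  </property>' \
    '  <property>' \
    '    <name>hbase.master.hostname</name>' \
    '    <value>master.example.com</value>' \
    '  </property>' \
    '</configuration>' > "$HBASE_HOME/conf/hbase-site.xml"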
diff --git a/conf/hbase-env.sh b/conf/hbase-env.sh
new file mode 100644
index 0000000..64fa4c5
--- /dev/null
+++ b/conf/hbase-env.sh
@@ -0,0 +1,60 @@
+#
+#/**
+# * Copyright 2007 The Apache Software Foundation
+# *
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements. See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership. The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License. You may obtain a copy of the License at
+# *
+# * http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+
+# Set environment variables here.
+
+# The java implementation to use. Java 1.6 required.
+# export JAVA_HOME=/usr/java/jdk1.6.0/
+
+# Extra Java CLASSPATH elements. Optional.
+# export HBASE_CLASSPATH=
+
+# The maximum amount of heap to use, in MB. Default is 1000.
+# export HBASE_HEAPSIZE=1000
+
+# Extra Java runtime options. Empty by default.
+# export HBASE_OPTS=-server
+
+# File naming hosts on which HRegionServers will run. $HBASE_HOME/conf/regionservers by default.
+# export HBASE_REGIONSERVERS=${HBASE_HOME}/conf/regionservers
+
+# Extra ssh options. Empty by default.
+# export HBASE_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HBASE_CONF_DIR"
+
+# Where log files are stored. $HBASE_HOME/logs by default.
+# export HBASE_LOG_DIR=${HBASE_HOME}/logs
+
+# A string representing this instance of hbase. $USER by default.
+# export HBASE_IDENT_STRING=$USER
+
+# The scheduling priority for daemon processes. See 'man nice'.
+# export HBASE_NICENESS=10
+
+# The directory where pid files are stored. /tmp by default.
+# export HBASE_PID_DIR=/var/hadoop/pids
+
+# Seconds to sleep between slave commands. Unset by default. This
+# can be useful in large clusters, where, e.g., slave rsyncs can
+# otherwise arrive faster than the master can service them.
+# export HBASE_SLAVE_SLEEP=0.1
+
+# Tell HBase whether it should manage its own instance of ZooKeeper or not.
+# export HBASE_MANAGES_ZK=true
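As a concrete starting point, the commented settings above are usually uncommented and adapted per deployment. A sketch with hypothetical paths and a 2 GB heap, appended to conf/hbase-env.sh:

  # Site-specific environment; every path here is hypothetical
  export JAVA_HOME=/usr/lib/jvm/java-6-sun
  export HBASE_HEAPSIZE=2000
  export HBASE_OPTS=-server
  export HBASE_LOG_DIR=/var/log/hbase
  export HBASE_PID_DIR=/var/run/hbase
  # Have HBase manage its own ZooKeeper quorum (see conf/zoo.cfg below)
  export HBASE_MANAGES_ZK=true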
diff --git a/conf/hbase-site.xml b/conf/hbase-site.xml
new file mode 100644
index 0000000..dbccd1d
--- /dev/null
+++ b/conf/hbase-site.xml
@@ -0,0 +1,25 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<configuration>
+</configuration>
diff --git a/conf/log4j.properties b/conf/log4j.properties
new file mode 100644
index 0000000..ed8fab8
--- /dev/null
+++ b/conf/log4j.properties
@@ -0,0 +1,46 @@
+# Define some default values that can be overridden by system properties
+hbase.root.logger=INFO,console
+hbase.log.dir=.
+hbase.log.file=hbase.log
+
+# Define the root logger to the system property "hbase.root.logger".
+log4j.rootLogger=${hbase.root.logger}
+
+# Logging Threshold
+log4j.threshhold=ALL
+
+#
+# Daily Rolling File Appender
+#
+log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
+log4j.appender.DRFA.File=${hbase.log.dir}/${hbase.log.file}
+
+# Rollover at midnight
+log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
+
+# 30-day backup
+#log4j.appender.DRFA.MaxBackupIndex=30
+log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
+
+# Pattern format: Date LogLevel LoggerName LogMessage
+log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
+
+# Debugging Pattern format
+#log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
+
+
+#
+# console
+# Add "console" to rootlogger above if you want to use this
+#
+log4j.appender.console=org.apache.log4j.ConsoleAppender
+log4j.appender.console.target=System.err
+log4j.appender.console.layout=org.apache.log4j.PatternLayout
+log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
+
+# Custom Logging levels
+
+log4j.logger.org.apache.zookeeper=ERROR
+#log4j.logger.org.apache.hadoop.fs.FSNamesystem=DEBUG
+#log4j.logger.org.apache.hadoop.hbase=DEBUG
+#log4j.logger.org.apache.hadoop.dfs=DEBUG
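When chasing a problem, the commented DEBUG loggers above are the usual knob. A sketch that enables DEBUG for the HBase classes by uncommenting the shipped line; the HBASE_ROOT_LOGGER override at the end is an assumption about the launcher scripts, which are not part of this file:

  # Uncomment the shipped DEBUG logger for HBase classes
  sed -i.bak \
    's|^#log4j.logger.org.apache.hadoop.hbase=DEBUG|log4j.logger.org.apache.hadoop.hbase=DEBUG|' \
    "$HBASE_HOME/conf/log4j.properties"

  # Assumed: the bin scripts pass this env var through as the hbase.root.logger property
  export HBASE_ROOT_LOGGER=DEBUG,console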
diff --git a/conf/regionservers b/conf/regionservers
new file mode 100644
index 0000000..2fbb50c
--- /dev/null
+++ b/conf/regionservers
@@ -0,0 +1 @@
+localhost
diff --git a/conf/zoo.cfg b/conf/zoo.cfg
new file mode 100644
index 0000000..63db592
--- /dev/null
+++ b/conf/zoo.cfg
@@ -0,0 +1,14 @@
+# The number of milliseconds of each tick
+tickTime=2000
+# The number of ticks that the initial
+# synchronization phase can take
+initLimit=10
+# The number of ticks that can pass between
+# sending a request and getting an acknowledgement
+syncLimit=5
+# the directory where the snapshot is stored.
+dataDir=${hbase.tmp.dir}/zookeeper
+# the port at which the clients will connect
+clientPort=2181
+
+server.0=${hbase.master.hostname}:2888:3888
\ No newline at end of file
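The two files above describe a single-node setup: regionservers lists only localhost and zoo.cfg names a single server.0. For a small distributed cluster both typically list real hosts; a sketch with hypothetical hostnames follows (each ZooKeeper host additionally needs a myid file in its dataDir):

  # conf/regionservers -- one region server host per line
  printf '%s\n' rs1.example.com rs2.example.com rs3.example.com \
    > "$HBASE_HOME/conf/regionservers"

  # conf/zoo.cfg -- replace the single server.0 entry with a three-node ensemble
  sed -i.bak '/^server\.0=/d' "$HBASE_HOME/conf/zoo.cfg"
  printf '%s\n' \
    'server.0=zk1.example.com:2888:3888' \
    'server.1=zk2.example.com:2888:3888' \
    'server.2=zk3.example.com:2888:3888' >> "$HBASE_HOME/conf/zoo.cfg"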
diff --git a/docs/broken-links.xml b/docs/broken-links.xml
new file mode 100644
index 0000000..f95aa9b
--- /dev/null
+++ b/docs/broken-links.xml
@@ -0,0 +1,2 @@
+<broken-links>
+</broken-links>
diff --git a/docs/images/built-with-forrest-button.png b/docs/images/built-with-forrest-button.png
new file mode 100644
index 0000000..4a787ab
--- /dev/null
+++ b/docs/images/built-with-forrest-button.png
Binary files differ
diff --git a/docs/images/favicon.ico b/docs/images/favicon.ico
new file mode 100644
index 0000000..161bcf7
--- /dev/null
+++ b/docs/images/favicon.ico
Binary files differ
diff --git a/docs/images/hadoop-logo.jpg b/docs/images/hadoop-logo.jpg
new file mode 100644
index 0000000..809525d
--- /dev/null
+++ b/docs/images/hadoop-logo.jpg
Binary files differ
diff --git a/docs/images/hbase_logo_med.gif b/docs/images/hbase_logo_med.gif
new file mode 100644
index 0000000..36d3e3c
--- /dev/null
+++ b/docs/images/hbase_logo_med.gif
Binary files differ
diff --git a/docs/images/hbase_small.gif b/docs/images/hbase_small.gif
new file mode 100644
index 0000000..3275765
--- /dev/null
+++ b/docs/images/hbase_small.gif
Binary files differ
diff --git a/docs/images/instruction_arrow.png b/docs/images/instruction_arrow.png
new file mode 100644
index 0000000..0fbc724
--- /dev/null
+++ b/docs/images/instruction_arrow.png
Binary files differ
diff --git a/docs/index.html b/docs/index.html
new file mode 100644
index 0000000..15796e5
--- /dev/null
+++ b/docs/index.html
@@ -0,0 +1,204 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
+<html>
+<head>
+<META http-equiv="Content-Type" content="text/html; charset=UTF-8">
+<meta content="Apache Forrest" name="Generator">
+<meta name="Forrest-version" content="0.8">
+<meta name="Forrest-skin-name" content="pelt">
+<title>HBase Documentation</title>
+<link type="text/css" href="skin/basic.css" rel="stylesheet">
+<link media="screen" type="text/css" href="skin/screen.css" rel="stylesheet">
+<link media="print" type="text/css" href="skin/print.css" rel="stylesheet">
+<link type="text/css" href="skin/profile.css" rel="stylesheet">
+<script src="skin/getBlank.js" language="javascript" type="text/javascript"></script><script src="skin/getMenu.js" language="javascript" type="text/javascript"></script><script src="skin/fontsize.js" language="javascript" type="text/javascript"></script>
+<link rel="shortcut icon" href="images/favicon.ico">
+</head>
+<body onload="init()">
+<script type="text/javascript">ndeSetTextSize();</script>
+<div id="top">
+<!--+
+ |breadtrail
+ +-->
+<div class="breadtrail">
+<a href="http://www.apache.org/">Apache</a> > <a href="http://hadoop.apache.org/">Hadoop</a> > <a href="http://hadoop.apache.org/hbase/">HBase</a><script src="skin/breadcrumbs.js" language="JavaScript" type="text/javascript"></script>
+</div>
+<!--+
+ |header
+ +-->
+<div class="header">
+<!--+
+ |start group logo
+ +-->
+<div class="grouplogo">
+<a href="http://hadoop.apache.org/"><img class="logoImage" alt="Hadoop" src="images/hadoop-logo.jpg" title="Apache Hadoop"></a>
+</div>
+<!--+
+ |end group logo
+ +-->
+<!--+
+ |start Project Logo
+ +-->
+<div class="projectlogo">
+<a href="http://hadoop.apache.org/hbase/"><img class="logoImage" alt="HBase" src="images/hbase_small.gif" title="The Hadoop database"></a>
+</div>
+<!--+
+ |end Project Logo
+ +-->
+<!--+
+ |start Search
+ +-->
+<div class="searchbox">
+<form action="http://www.google.com/search" method="get" class="roundtopsmall">
+<input value="hadoop.apache.org" name="sitesearch" type="hidden"><input onFocus="getBlank (this, 'Search the site with google');" size="25" name="q" id="query" type="text" value="Search the site with google">
+ <input name="Search" value="Search" type="submit">
+</form>
+</div>
+<!--+
+ |end search
+ +-->
+<!--+
+ |start Tabs
+ +-->
+<ul id="tabs">
+<li>
+<a class="unselected" href="http://hadoop.apache.org/hbase/">Project</a>
+</li>
+<li>
+<a class="unselected" href="http://wiki.apache.org/hadoop/Hbase">Wiki</a>
+</li>
+<li class="current">
+<a class="selected" href="index.html">HBase Documentation</a>
+</li>
+</ul>
+<!--+
+ |end Tabs
+ +-->
+</div>
+</div>
+<div id="main">
+<div id="publishedStrip">
+<!--+
+ |start Subtabs
+ +-->
+<div id="level2tabs"></div>
+<!--+
+ |end Endtabs
+ +-->
+<script type="text/javascript"><!--
+document.write("Last Published: " + document.lastModified);
+// --></script>
+</div>
+<!--+
+ |breadtrail
+ +-->
+<div class="breadtrail">
+
+
+ </div>
+<!--+
+ |start Menu, mainarea
+ +-->
+<!--+
+ |start Menu
+ +-->
+<div id="menu">
+<div onclick="SwitchMenu('menu_selected_1.1', 'skin/')" id="menu_selected_1.1Title" class="menutitle" style="background-image: url('skin/images/chapter_open.gif');">Documentation</div>
+<div id="menu_selected_1.1" class="selectedmenuitemgroup" style="display: block;">
+<div class="menupage">
+<div class="menupagetitle">Overview</div>
+</div>
+<div class="menuitem">
+<a href="api/overview-summary.html#overview_description">Getting Started</a>
+</div>
+<div class="menuitem">
+<a href="api/index.html">API Docs</a>
+</div>
+<div class="menuitem">
+<a href="metrics.html">HBase Metrics</a>
+</div>
+<div class="menuitem">
+<a href="http://wiki.apache.org/hadoop/Hbase">Wiki</a>
+</div>
+<div class="menuitem">
+<a href="http://wiki.apache.org/hadoop/Hbase/FAQ">FAQ</a>
+</div>
+<div class="menuitem">
+<a href="http://hadoop.apache.org/hbase/mailing_lists.html">Mailing Lists</a>
+</div>
+</div>
+<div id="credit">
+<hr>
+<a href="http://forrest.apache.org/"><img border="0" title="Built with Apache Forrest" alt="Built with Apache Forrest - logo" src="images/built-with-forrest-button.png" style="width: 88px;height: 31px;"></a>
+</div>
+<div id="roundbottom">
+<img style="display: none" class="corner" height="15" width="15" alt="" src="skin/images/rc-b-l-15-1body-2menu-3menu.png"></div>
+<!--+
+ |alternative credits
+ +-->
+<div id="credit2"></div>
+</div>
+<!--+
+ |end Menu
+ +-->
+<!--+
+ |start content
+ +-->
+<div id="content">
+<div title="Portable Document Format" class="pdflink">
+<a class="dida" href="index.pdf"><img alt="PDF -icon" src="skin/images/pdfdoc.gif" class="skin"><br>
+ PDF</a>
+</div>
+<h1>HBase Documentation</h1>
+
+<p>
+ The following documents provide concepts and procedures that will help you
+ get started using HBase. If you have more questions, you can ask the
+ <a href="http://hadoop.apache.org/hbase/mailing_lists.html">mailing list</a> or browse the archives.
+ </p>
+
+<ul>
+
+<li>
+<a href="api/overview-summary.html#overview_description">Getting Started</a>
+</li>
+
+<li>
+<a href="api/index.html">API Docs</a>
+</li>
+
+<li>
+<a href="http://wiki.apache.org/hadoop/Hbase">Wiki</a>
+</li>
+
+<li>
+<a href="http://wiki.apache.org/hadoop/Hbase/FAQ">FAQ</a>
+</li>
+
+</ul>
+
+</div>
+<!--+
+ |end content
+ +-->
+<div class="clearboth"> </div>
+</div>
+<div id="footer">
+<!--+
+ |start bottomstrip
+ +-->
+<div class="lastmodified">
+<script type="text/javascript"><!--
+document.write("Last Published: " + document.lastModified);
+// --></script>
+</div>
+<div class="copyright">
+ Copyright ©
+ 2008 <a href="http://www.apache.org/licenses/">The Apache Software Foundation.</a>
+</div>
+<div id="logos"></div>
+<!--+
+ |end bottomstrip
+ +-->
+</div>
+</body>
+</html>
diff --git a/docs/index.pdf b/docs/index.pdf
new file mode 100644
index 0000000..835d108
--- /dev/null
+++ b/docs/index.pdf
@@ -0,0 +1,157 @@
+%PDF-1.3
+%ª«¬
+4 0 obj
+<< /Type /Info
+/Producer (FOP 0.20.5) >>
+endobj
+5 0 obj
+<< /Length 810 /Filter [ /ASCII85Decode /FlateDecode ]
+ >>
+stream
+Gat=*c#T:-(qfGS3[QYb.YNK5BX`,QSsl+9*1@osaM7p'#D#$`qo[ng_@@WJ`s6sF^>8`g8FeNV^8sGI#t,oi39^.&3!a&T$]t4p+?Y\["@:?E5dU]EmX?]sam[&YbQ@J$m[Hc$/[(VQ.7A:hrds@3ihXB/*Tr*31'W.p3he%S2"^S$DBm;%*/Z4%6YXjtc-WU/q]`Gj&lBs&B2=AbF>?)>Bj5Xo6BfHL6&20c%E`t,1[&,O#%ZMhP"f1?4RLmo[ceAqL)*uRkt";L%=[*VLun0`C2th>$b`rQ*j_0)Ar;e^ZJptLG3cfMr&o)WSBa"er;[gu+W+*a\b@4Im^I]h[8]+&7nS;>5T)P3[I`H1US59E^_hT1YqjtO7B'5\;#"!Z$+oE,n/92:JYKpoHKs*5`F.>_>)(3e2SE=A-P.9iUt/dJRrn\\iZS%gg>b6D8RCboK]<hfB=8*9+U`Vm(Q@TPE%VLofgtl(&n9h+NpSFo_9*4'9]r7[+*c8YYM3%E.*jNfT9lluH(a#ad%;[&NV8'ldIh$+g;0@$ACU2n]fo[3_k=>k)>$$RNnbJT7s&fUIIZWC:AdLRaa<6Wr#*IFJYJf*"akl1ZoXLVB%8%XYK@Z^ip62Y6[]mn)oNN+0jUUr%#1C4fA\9tlM,A&2iN-togcn_^<nX?0#Q6!Be#Z6g7YJn0a`_#71dBOe/W9/'s*u;=*K04TiQXqV(hNj#h:HO3g$UTpKeY=hO*X+"]O"qN)3nYImJ9dZ&987YJmA!`3hJ.&oEp!^B)\2l`ShLBPhHl/pY.n.5\1,#8!Wb:B~>
+endstream
+endobj
+6 0 obj
+<< /Type /Page
+/Parent 1 0 R
+/MediaBox [ 0 0 612 792 ]
+/Resources 3 0 R
+/Contents 5 0 R
+/Annots 7 0 R
+>>
+endobj
+7 0 obj
+[
+8 0 R
+9 0 R
+10 0 R
+11 0 R
+12 0 R
+]
+endobj
+8 0 obj
+<< /Type /Annot
+/Subtype /Link
+/Rect [ 370.956 572.6 425.304 560.6 ]
+/C [ 0 0 0 ]
+/Border [ 0 0 0 ]
+/A << /URI (http://hadoop.apache.org/hbase/mailing_lists.html)
+/S /URI >>
+/H /I
+>>
+endobj
+9 0 obj
+<< /Type /Annot
+/Subtype /Link
+/Rect [ 108.0 555.4 180.996 543.4 ]
+/C [ 0 0 0 ]
+/Border [ 0 0 0 ]
+/A << /URI (api/overview-summary.html#overview_description)
+/S /URI >>
+/H /I
+>>
+endobj
+10 0 obj
+<< /Type /Annot
+/Subtype /Link
+/Rect [ 108.0 542.2 154.992 530.2 ]
+/C [ 0 0 0 ]
+/Border [ 0 0 0 ]
+/A << /URI (api/index.html)
+/S /URI >>
+/H /I
+>>
+endobj
+11 0 obj
+<< /Type /Annot
+/Subtype /Link
+/Rect [ 108.0 529.0 132.0 517.0 ]
+/C [ 0 0 0 ]
+/Border [ 0 0 0 ]
+/A << /URI (http://wiki.apache.org/hadoop/Hbase)
+/S /URI >>
+/H /I
+>>
+endobj
+12 0 obj
+<< /Type /Annot
+/Subtype /Link
+/Rect [ 108.0 515.8 132.0 503.8 ]
+/C [ 0 0 0 ]
+/Border [ 0 0 0 ]
+/A << /URI (http://wiki.apache.org/hadoop/Hbase/FAQ)
+/S /URI >>
+/H /I
+>>
+endobj
+13 0 obj
+<< /Type /Font
+/Subtype /Type1
+/Name /F3
+/BaseFont /Helvetica-Bold
+/Encoding /WinAnsiEncoding >>
+endobj
+14 0 obj
+<< /Type /Font
+/Subtype /Type1
+/Name /F5
+/BaseFont /Times-Roman
+/Encoding /WinAnsiEncoding >>
+endobj
+15 0 obj
+<< /Type /Font
+/Subtype /Type1
+/Name /F1
+/BaseFont /Helvetica
+/Encoding /WinAnsiEncoding >>
+endobj
+16 0 obj
+<< /Type /Font
+/Subtype /Type1
+/Name /F2
+/BaseFont /Helvetica-Oblique
+/Encoding /WinAnsiEncoding >>
+endobj
+1 0 obj
+<< /Type /Pages
+/Count 1
+/Kids [6 0 R ] >>
+endobj
+2 0 obj
+<< /Type /Catalog
+/Pages 1 0 R
+ >>
+endobj
+3 0 obj
+<<
+/Font << /F3 13 0 R /F5 14 0 R /F1 15 0 R /F2 16 0 R >>
+/ProcSet [ /PDF /ImageC /Text ] >>
+endobj
+xref
+0 17
+0000000000 65535 f
+0000002510 00000 n
+0000002568 00000 n
+0000002618 00000 n
+0000000015 00000 n
+0000000071 00000 n
+0000000972 00000 n
+0000001092 00000 n
+0000001144 00000 n
+0000001342 00000 n
+0000001535 00000 n
+0000001697 00000 n
+0000001878 00000 n
+0000002063 00000 n
+0000002176 00000 n
+0000002286 00000 n
+0000002394 00000 n
+trailer
+<<
+/Size 17
+/Root 2 0 R
+/Info 4 0 R
+>>
+startxref
+2730
+%%EOF
diff --git a/docs/linkmap.html b/docs/linkmap.html
new file mode 100644
index 0000000..5f2f788
--- /dev/null
+++ b/docs/linkmap.html
@@ -0,0 +1,239 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
+<html>
+<head>
+<META http-equiv="Content-Type" content="text/html; charset=UTF-8">
+<meta content="Apache Forrest" name="Generator">
+<meta name="Forrest-version" content="0.8">
+<meta name="Forrest-skin-name" content="pelt">
+<title>Site Linkmap Table of Contents</title>
+<link type="text/css" href="skin/basic.css" rel="stylesheet">
+<link media="screen" type="text/css" href="skin/screen.css" rel="stylesheet">
+<link media="print" type="text/css" href="skin/print.css" rel="stylesheet">
+<link type="text/css" href="skin/profile.css" rel="stylesheet">
+<script src="skin/getBlank.js" language="javascript" type="text/javascript"></script><script src="skin/getMenu.js" language="javascript" type="text/javascript"></script><script src="skin/fontsize.js" language="javascript" type="text/javascript"></script>
+<link rel="shortcut icon" href="images/favicon.ico">
+</head>
+<body onload="init()">
+<script type="text/javascript">ndeSetTextSize();</script>
+<div id="top">
+<!--+
+ |breadtrail
+ +-->
+<div class="breadtrail">
+<a href="http://www.apache.org/">Apache</a> > <a href="http://hadoop.apache.org/">Hadoop</a> > <a href="http://hadoop.apache.org/hbase/">HBase</a><script src="skin/breadcrumbs.js" language="JavaScript" type="text/javascript"></script>
+</div>
+<!--+
+ |header
+ +-->
+<div class="header">
+<!--+
+ |start group logo
+ +-->
+<div class="grouplogo">
+<a href="http://hadoop.apache.org/"><img class="logoImage" alt="Hadoop" src="images/hadoop-logo.jpg" title="Apache Hadoop"></a>
+</div>
+<!--+
+ |end group logo
+ +-->
+<!--+
+ |start Project Logo
+ +-->
+<div class="projectlogo">
+<a href="http://hadoop.apache.org/hbase/"><img class="logoImage" alt="HBase" src="images/hbase_small.gif" title="The Hadoop database"></a>
+</div>
+<!--+
+ |end Project Logo
+ +-->
+<!--+
+ |start Search
+ +-->
+<div class="searchbox">
+<form action="http://www.google.com/search" method="get" class="roundtopsmall">
+<input value="hadoop.apache.org" name="sitesearch" type="hidden"><input onFocus="getBlank (this, 'Search the site with google');" size="25" name="q" id="query" type="text" value="Search the site with google">
+ <input name="Search" value="Search" type="submit">
+</form>
+</div>
+<!--+
+ |end search
+ +-->
+<!--+
+ |start Tabs
+ +-->
+<ul id="tabs">
+<li>
+<a class="unselected" href="http://hadoop.apache.org/hbase/">Project</a>
+</li>
+<li>
+<a class="unselected" href="http://wiki.apache.org/hadoop/Hbase">Wiki</a>
+</li>
+<li class="current">
+<a class="selected" href="index.html">HBase Documentation</a>
+</li>
+</ul>
+<!--+
+ |end Tabs
+ +-->
+</div>
+</div>
+<div id="main">
+<div id="publishedStrip">
+<!--+
+ |start Subtabs
+ +-->
+<div id="level2tabs"></div>
+<!--+
+ |end Endtabs
+ +-->
+<script type="text/javascript"><!--
+document.write("Last Published: " + document.lastModified);
+// --></script>
+</div>
+<!--+
+ |breadtrail
+ +-->
+<div class="breadtrail">
+
+
+ </div>
+<!--+
+ |start Menu, mainarea
+ +-->
+<!--+
+ |start Menu
+ +-->
+<div id="menu">
+<div onclick="SwitchMenu('menu_1.1', 'skin/')" id="menu_1.1Title" class="menutitle">Documentation</div>
+<div id="menu_1.1" class="menuitemgroup">
+<div class="menuitem">
+<a href="index.html">Overview</a>
+</div>
+<div class="menuitem">
+<a href="api/overview-summary.html#overview_description">Getting Started</a>
+</div>
+<div class="menuitem">
+<a href="api/index.html">API Docs</a>
+</div>
+<div class="menuitem">
+<a href="metrics.html">HBase Metrics</a>
+</div>
+<div class="menuitem">
+<a href="http://wiki.apache.org/hadoop/Hbase">Wiki</a>
+</div>
+<div class="menuitem">
+<a href="http://wiki.apache.org/hadoop/Hbase/FAQ">FAQ</a>
+</div>
+<div class="menuitem">
+<a href="http://hadoop.apache.org/hbase/mailing_lists.html">Mailing Lists</a>
+</div>
+</div>
+<div id="credit"></div>
+<div id="roundbottom">
+<img style="display: none" class="corner" height="15" width="15" alt="" src="skin/images/rc-b-l-15-1body-2menu-3menu.png"></div>
+<!--+
+ |alternative credits
+ +-->
+<div id="credit2"></div>
+</div>
+<!--+
+ |end Menu
+ +-->
+<!--+
+ |start content
+ +-->
+<div id="content">
+<div title="Portable Document Format" class="pdflink">
+<a class="dida" href="linkmap.pdf"><img alt="PDF -icon" src="skin/images/pdfdoc.gif" class="skin"><br>
+ PDF</a>
+</div>
+<h1>Site Linkmap Table of Contents</h1>
+<p>
+ This is a map of the complete site and its structure.
+ </p>
+<ul>
+<li>
+<a>Hadoop</a> ___________________ <em>site</em>
+</li>
+<ul>
+
+
+<ul>
+<li>
+<a>Documentation</a> ___________________ <em>docs</em>
+</li>
+<ul>
+
+<ul>
+<li>
+<a href="index.html">Overview</a> ___________________ <em>overview</em>
+</li>
+</ul>
+
+<ul>
+<li>
+<a href="api/overview-summary.html#overview_description">Getting Started</a> ___________________ <em>started</em>
+</li>
+</ul>
+
+<ul>
+<li>
+<a href="api/index.html">API Docs</a> ___________________ <em>api</em>
+</li>
+</ul>
+
+<ul>
+<li>
+<a href="metrics.html">HBase Metrics</a> ___________________ <em>api</em>
+</li>
+</ul>
+
+<ul>
+<li>
+<a href="http://wiki.apache.org/hadoop/Hbase">Wiki</a> ___________________ <em>wiki</em>
+</li>
+</ul>
+
+<ul>
+<li>
+<a href="http://wiki.apache.org/hadoop/Hbase/FAQ">FAQ</a> ___________________ <em>faq</em>
+</li>
+</ul>
+
+<ul>
+<li>
+<a href="http://hadoop.apache.org/hbase/mailing_lists.html">Mailing Lists</a> ___________________ <em>lists</em>
+</li>
+</ul>
+
+</ul>
+</ul>
+
+
+
+
+</ul>
+</ul>
+</div>
+<!--+
+ |end content
+ +-->
+<div class="clearboth"> </div>
+</div>
+<div id="footer">
+<!--+
+ |start bottomstrip
+ +-->
+<div class="lastmodified">
+<script type="text/javascript"><!--
+document.write("Last Published: " + document.lastModified);
+// --></script>
+</div>
+<div class="copyright">
+ Copyright ©
+ 2008 <a href="http://www.apache.org/licenses/">The Apache Software Foundation.</a>
+</div>
+<!--+
+ |end bottomstrip
+ +-->
+</div>
+</body>
+</html>
diff --git a/docs/linkmap.pdf b/docs/linkmap.pdf
new file mode 100644
index 0000000..7cb9ee8
--- /dev/null
+++ b/docs/linkmap.pdf
@@ -0,0 +1,94 @@
+%PDF-1.3
+%ª«¬
+4 0 obj
+<< /Type /Info
+/Producer (FOP 0.20.5) >>
+endobj
+5 0 obj
+<< /Length 695 /Filter [ /ASCII85Decode /FlateDecode ]
+ >>
+stream
+GatUqd;I\]'Sc)T'O7LOAUjg9QV"fjRp'Z//G3JC.V=kIeM4$t)XZp)lmCXiDQ4tE&e<"_ruK#99[_kB1^&Iu"O_RGlsCMNYQupJbnl#Nc>e_LaCC=7_!fZE)0l688JF*6UK^_^YJf-hUuH=OXm<M:?g6"[daT)k%nM1,c4"+/3VVlB'L@amB9t5ND'bQT)>Re$L2R3#crkO%Z]m)`oP.Z&j3?)rO$oC_:<'W?9sr*eVNPoENX+3>!gED=rK.Zb6*_?\f>9M!FI2BSaL/Ie0[q"*^&RF&q)^kT@@/=90A5GH$phCWTGS<k_nO(!9E&+-RoQe5_C/]]MAFXE6h?:XSUGSs0Xh^S1qs-+5@8!pWoWka.Mfl4o=g":M]O*1"NCOShkW!,>du"81SQW[5FJl-Td8TFc[)L7"I-p"/#WQsnZVBko"*Y=;r8m8GIV94f'548Vs^o5d+qA`m8`A@1mL*(d9\A_oGp/M8D.Br4K)n81/Nc:e[M[90cjQ+"$G'&rL#XZpftHOVa5^7%do!"H?F2-C3.B!2N<O@^15P>[L[d\K9s#r[?iU;&W@R"XYYQ^JP(N>E$%bih]pDBY4'=TU<ID"QnatI#*KP'W3a%_l[P1'/<+FYO+^pX&VOVs%kbOJ3*qNeO(WcV[\E>)VKu2'm0sH:a,7_;X`s0OZ>T(ZR7!>;IqK&8'E~>
+endstream
+endobj
+6 0 obj
+<< /Type /Page
+/Parent 1 0 R
+/MediaBox [ 0 0 612 792 ]
+/Resources 3 0 R
+/Contents 5 0 R
+>>
+endobj
+7 0 obj
+<< /Type /Font
+/Subtype /Type1
+/Name /F3
+/BaseFont /Helvetica-Bold
+/Encoding /WinAnsiEncoding >>
+endobj
+8 0 obj
+<< /Type /Font
+/Subtype /Type1
+/Name /F5
+/BaseFont /Times-Roman
+/Encoding /WinAnsiEncoding >>
+endobj
+9 0 obj
+<< /Type /Font
+/Subtype /Type1
+/Name /F6
+/BaseFont /Times-Italic
+/Encoding /WinAnsiEncoding >>
+endobj
+10 0 obj
+<< /Type /Font
+/Subtype /Type1
+/Name /F1
+/BaseFont /Helvetica
+/Encoding /WinAnsiEncoding >>
+endobj
+11 0 obj
+<< /Type /Font
+/Subtype /Type1
+/Name /F2
+/BaseFont /Helvetica-Oblique
+/Encoding /WinAnsiEncoding >>
+endobj
+1 0 obj
+<< /Type /Pages
+/Count 1
+/Kids [6 0 R ] >>
+endobj
+2 0 obj
+<< /Type /Catalog
+/Pages 1 0 R
+ >>
+endobj
+3 0 obj
+<<
+/Font << /F3 7 0 R /F5 8 0 R /F1 10 0 R /F6 9 0 R /F2 11 0 R >>
+/ProcSet [ /PDF /ImageC /Text ] >>
+endobj
+xref
+0 12
+0000000000 65535 f
+0000001518 00000 n
+0000001576 00000 n
+0000001626 00000 n
+0000000015 00000 n
+0000000071 00000 n
+0000000857 00000 n
+0000000963 00000 n
+0000001075 00000 n
+0000001184 00000 n
+0000001294 00000 n
+0000001402 00000 n
+trailer
+<<
+/Size 12
+/Root 2 0 R
+/Info 4 0 R
+>>
+startxref
+1746
+%%EOF
diff --git a/docs/metrics.html b/docs/metrics.html
new file mode 100644
index 0000000..82ee910
--- /dev/null
+++ b/docs/metrics.html
@@ -0,0 +1,227 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
+<html>
+<head>
+<META http-equiv="Content-Type" content="text/html; charset=UTF-8">
+<meta content="Apache Forrest" name="Generator">
+<meta name="Forrest-version" content="0.8">
+<meta name="Forrest-skin-name" content="pelt">
+<title>
+ HBase Metrics
+ </title>
+<link type="text/css" href="skin/basic.css" rel="stylesheet">
+<link media="screen" type="text/css" href="skin/screen.css" rel="stylesheet">
+<link media="print" type="text/css" href="skin/print.css" rel="stylesheet">
+<link type="text/css" href="skin/profile.css" rel="stylesheet">
+<script src="skin/getBlank.js" language="javascript" type="text/javascript"></script><script src="skin/getMenu.js" language="javascript" type="text/javascript"></script><script src="skin/fontsize.js" language="javascript" type="text/javascript"></script>
+<link rel="shortcut icon" href="images/favicon.ico">
+</head>
+<body onload="init()">
+<script type="text/javascript">ndeSetTextSize();</script>
+<div id="top">
+<!--+
+ |breadtrail
+ +-->
+<div class="breadtrail">
+<a href="http://www.apache.org/">Apache</a> > <a href="http://hadoop.apache.org/">Hadoop</a> > <a href="http://hadoop.apache.org/hbase/">HBase</a><script src="skin/breadcrumbs.js" language="JavaScript" type="text/javascript"></script>
+</div>
+<!--+
+ |header
+ +-->
+<div class="header">
+<!--+
+ |start group logo
+ +-->
+<div class="grouplogo">
+<a href="http://hadoop.apache.org/"><img class="logoImage" alt="Hadoop" src="images/hadoop-logo.jpg" title="Apache Hadoop"></a>
+</div>
+<!--+
+ |end group logo
+ +-->
+<!--+
+ |start Project Logo
+ +-->
+<div class="projectlogo">
+<a href="http://hadoop.apache.org/hbase/"><img class="logoImage" alt="HBase" src="images/hbase_small.gif" title="The Hadoop database"></a>
+</div>
+<!--+
+ |end Project Logo
+ +-->
+<!--+
+ |start Search
+ +-->
+<div class="searchbox">
+<form action="http://www.google.com/search" method="get" class="roundtopsmall">
+<input value="hadoop.apache.org" name="sitesearch" type="hidden"><input onFocus="getBlank (this, 'Search the site with google');" size="25" name="q" id="query" type="text" value="Search the site with google">
+ <input name="Search" value="Search" type="submit">
+</form>
+</div>
+<!--+
+ |end search
+ +-->
+<!--+
+ |start Tabs
+ +-->
+<ul id="tabs">
+<li>
+<a class="unselected" href="http://hadoop.apache.org/hbase/">Project</a>
+</li>
+<li>
+<a class="unselected" href="http://wiki.apache.org/hadoop/Hbase">Wiki</a>
+</li>
+<li class="current">
+<a class="selected" href="index.html">HBase Documentation</a>
+</li>
+</ul>
+<!--+
+ |end Tabs
+ +-->
+</div>
+</div>
+<div id="main">
+<div id="publishedStrip">
+<!--+
+ |start Subtabs
+ +-->
+<div id="level2tabs"></div>
+<!--+
+ |end Endtabs
+ +-->
+<script type="text/javascript"><!--
+document.write("Last Published: " + document.lastModified);
+// --></script>
+</div>
+<!--+
+ |breadtrail
+ +-->
+<div class="breadtrail">
+
+
+ </div>
+<!--+
+ |start Menu, mainarea
+ +-->
+<!--+
+ |start Menu
+ +-->
+<div id="menu">
+<div onclick="SwitchMenu('menu_selected_1.1', 'skin/')" id="menu_selected_1.1Title" class="menutitle" style="background-image: url('skin/images/chapter_open.gif');">Documentation</div>
+<div id="menu_selected_1.1" class="selectedmenuitemgroup" style="display: block;">
+<div class="menuitem">
+<a href="index.html">Overview</a>
+</div>
+<div class="menuitem">
+<a href="api/overview-summary.html#overview_description">Getting Started</a>
+</div>
+<div class="menuitem">
+<a href="api/index.html">API Docs</a>
+</div>
+<div class="menupage">
+<div class="menupagetitle">HBase Metrics</div>
+</div>
+<div class="menuitem">
+<a href="http://wiki.apache.org/hadoop/Hbase">Wiki</a>
+</div>
+<div class="menuitem">
+<a href="http://wiki.apache.org/hadoop/Hbase/FAQ">FAQ</a>
+</div>
+<div class="menuitem">
+<a href="http://hadoop.apache.org/hbase/mailing_lists.html">Mailing Lists</a>
+</div>
+</div>
+<div id="credit"></div>
+<div id="roundbottom">
+<img style="display: none" class="corner" height="15" width="15" alt="" src="skin/images/rc-b-l-15-1body-2menu-3menu.png"></div>
+<!--+
+ |alternative credits
+ +-->
+<div id="credit2"></div>
+</div>
+<!--+
+ |end Menu
+ +-->
+<!--+
+ |start content
+ +-->
+<div id="content">
+<div title="Portable Document Format" class="pdflink">
+<a class="dida" href="metrics.pdf"><img alt="PDF -icon" src="skin/images/pdfdoc.gif" class="skin"><br>
+ PDF</a>
+</div>
+<h1>
+ HBase Metrics
+ </h1>
+<div id="minitoc-area">
+<ul class="minitoc">
+<li>
+<a href="#Introduction"> Introduction </a>
+</li>
+<li>
+<a href="#HOWTO">HOWTO</a>
+</li>
+</ul>
+</div>
+
+<a name="N1000D"></a><a name="Introduction"></a>
+<h2 class="h3"> Introduction </h2>
+<div class="section">
+<p>
+ HBase emits Hadoop <a href="http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/metrics/package-summary.html">metrics</a>.
+ </p>
+</div>
+
+<a name="N1001B"></a><a name="HOWTO"></a>
+<h2 class="h3">HOWTO</h2>
+<div class="section">
+<p>First read up on Hadoop <a href="http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/metrics/package-summary.html">metrics</a>.
+ If you are using ganglia, the <a href="http://wiki.apache.org/hadoop/GangliaMetrics">GangliaMetrics</a>
+ wiki page is a useful read.</p>
+<p>To have HBase emit metrics, edit <span class="codefrag">$HBASE_HOME/conf/hadoop-metrics.properties</span>
+ and enable metric 'contexts' per plugin. As of this writing, hadoop supports
+ <strong>file</strong> and <strong>ganglia</strong> plugins.
+ Yes, the hbase metrics file is named hadoop-metrics rather than
+ <em>hbase-metrics</em> because, currently at least, the hadoop metrics system has the
+ properties filename hardcoded. Per metrics <em>context</em>,
+ comment out the NullContext and enable one or more plugins instead.
+ </p>
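+<p>
+ For example, a minimal sketch of the <em>hbase</em> context section of
+ <span class="codefrag">hadoop-metrics.properties</span> might look like the following.
+ The class names are the stock hadoop metrics plugins; the period, file path and
+ ganglia host below are placeholders to adjust for your setup.
+ </p>
+<pre class="code">
+# Comment out the NullContext and enable one plugin for the 'hbase' context.
+# hbase.class=org.apache.hadoop.metrics.spi.NullContext
+
+# File plugin: write metrics to a local file every 10 seconds.
+hbase.class=org.apache.hadoop.metrics.file.FileContext
+hbase.period=10
+hbase.fileName=/tmp/hbase_metrics.log
+
+# Ganglia plugin (alternative): send metrics to your gmond host and port.
+# hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext
+# hbase.period=10
+# hbase.servers=GANGLIA_HOST:8649
+</pre>
+<p>
+ The <em>rpc</em> and <em>jvm</em> contexts described below are configured the same way,
+ using the <span class="codefrag">rpc.</span> and <span class="codefrag">jvm.</span> property prefixes.
+ </p>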
+<p>
+ If you enable the <em>hbase</em> context, on regionservers you'll see the total requests since the last
+ metric emission, counts of regions and storefiles, and the memcache size.
+ On the master, you'll see a count of the cluster's requests.
+ </p>
+<p>
+ Enable the <em>rpc</em> context if you are interested in metrics on each hbase rpc
+ method invocation (counts and time taken).
+ </p>
+<p>
+ The <em>jvm</em> context is
+ useful for long-term stats on running hbase jvms -- memory used, thread counts, etc.
+ As of this writing, if more than one jvm is running and emitting metrics, the stats
+ are aggregated rather than reported per instance, at least in ganglia.
+ </p>
+</div>
+
+</div>
+<!--+
+ |end content
+ +-->
+<div class="clearboth"> </div>
+</div>
+<div id="footer">
+<!--+
+ |start bottomstrip
+ +-->
+<div class="lastmodified">
+<script type="text/javascript"><!--
+document.write("Last Published: " + document.lastModified);
+// --></script>
+</div>
+<div class="copyright">
+ Copyright ©
+ 2008 <a href="http://www.apache.org/licenses/">The Apache Software Foundation.</a>
+</div>
+<!--+
+ |end bottomstrip
+ +-->
+</div>
+</body>
+</html>
diff --git a/docs/metrics.pdf b/docs/metrics.pdf
new file mode 100644
index 0000000..df17f0f
--- /dev/null
+++ b/docs/metrics.pdf
@@ -0,0 +1,240 @@
+%PDF-1.3
+%ª«¬
+4 0 obj
+<< /Type /Info
+/Producer (FOP 0.20.5) >>
+endobj
+5 0 obj
+<< /Length 397 /Filter [ /ASCII85Decode /FlateDecode ]
+ >>
+stream
+Gaua9]hZI!'SU`q`?!EYMgod-FJO_m+JB-NM#eG[lElgK;A'eagCpJbmBUi81H<Fh55&W]M.,"7"YrLVVIJ>)-0)4?!dO_O)#LQhjf#I)bU:%MZUrQt&R7nupJ]2/FmZd,7dpoMTm+s.%W3X0U^;atp(jP#fF/PVVimnas(,+j>J&f`+S_q.i6iR-^f9e$6sM>D<=d:5lMK;Cc!jbcF]4'TnM7425d!rZQ9c_@@k39(!$]a!8(FsAMj]XaP-@Geb4`n3+Q"(12'd*rMr5L25ruXf7nM'2W7VcQpr(>8j`&G?:+6OO+Mr+f0NL^Lo%RdRgTHkn(K1,W8H^>"c\"!R.,Ho;U5\bd;n8NpNCKeiY$Mf+9GfA9Ck<MtmNQL0gO@l\#4t,VcG&'~>
+endstream
+endobj
+6 0 obj
+<< /Type /Page
+/Parent 1 0 R
+/MediaBox [ 0 0 612 792 ]
+/Resources 3 0 R
+/Contents 5 0 R
+/Annots 7 0 R
+>>
+endobj
+7 0 obj
+[
+8 0 R
+10 0 R
+]
+endobj
+8 0 obj
+<< /Type /Annot
+/Subtype /Link
+/Rect [ 102.0 559.666 169.328 547.666 ]
+/C [ 0 0 0 ]
+/Border [ 0 0 0 ]
+/A 9 0 R
+/H /I
+>>
+endobj
+10 0 obj
+<< /Type /Annot
+/Subtype /Link
+/Rect [ 102.0 541.466 154.652 529.466 ]
+/C [ 0 0 0 ]
+/Border [ 0 0 0 ]
+/A 11 0 R
+/H /I
+>>
+endobj
+12 0 obj
+<< /Length 1720 /Filter [ /ASCII85Decode /FlateDecode ]
+ >>
+stream
+Gatm<>Ap;q'RlZ]iJ1%I8f7jQbiLQ^jjHcUCn$QeW`rJVnIW8?bhA.8rUi<YIE]J`Q)^*M0uLPMhmo0@c0FjX+O$Sjg$D`L8QUj1n;i^QJS2^[;WdgdZR<MY1\u[(\S.S,IpZ0Tgt#&^Q,O"!45'P9]%O1!JTR[GDUh=j*Y!OiGcf2Y1WkQ;jD";9lFJ]$#ihIagF0_u"eEp8lW?ei<pm:dS)\q0/^"7)hl.b`hiSY34;rnsW,j?taI[A&Te>ql#<?d;-j&s3*]o.'=d>/t+/?``[$R'#'&?d2O;S/_D;)a[@CDmFn:0ZkmVRt,^45oho<=%Q#WWVP+0cRL"BYA\@IuHV\hY%dN47qK+3`UOamT;%3t_,h#;ZC?^^Qn(2%YcYhLiC:Jp]]^qd9H<NDjt4rdHA5F,1RRXBK7*C6!"5Uj3__Qm^Fj>_q&B6gE]n4l:qrj8URj;$dThY&iUO@cX-LpP+dZC\BVADBIjP7@=Tgm$nW]os5C:[G9<CSJ60H$E?0)Uc,'3G`H-madEJoVk-ijZeRIRpE^J)j=o)M(bC/\Isq!7o>ggIVjkl*/9[B+D18X7eb#NWK2bhtor.So1u(Vucdj$OW"rnU5NQqN[lL(3j\#MPl*FNgS<1=7?>p+RPLjMLTOb`&h6#Jkm*!a/Y:?Rq>q[no>u;)E>osQQW\j--e!O9'>oq_VmBTug_TDI:`j95[UmB72^FNnId[/QW7XQ+CrWKhK/rfAs<U+:XNfZB2M0DXK#@npcPsMK;5hd#0r*@6f\Xd3D516*5n3^WfiaRkrT>;5]&^2*8Xk*<Uh=Kc:nf3G]c:%G!-?5$'/ub,!7?tW]k(frjQSBN`VUO5&L!4XqBtbU=T\ji;pD>Hp^_8=s9L=@iQ<2h:gr_2ms"d,iMUaH.Sej]ng^^,5gf"c>/eng[=:?K];XX8ZmOVmtBNeqL:GEn#5Lt,ZEU[H`kZ%R2$[f`<P^'/"on'K[!*0m3\`X-nq^i<&W@'#qjre;f9;B;3FlU?/".VG,cXEhV69S*\(]umr;HQ>p<%h-`bO`%[bq^lUk@GB4F7*6a2;Tro,=p2Y\Q'om"IR";k2P$u@,,H*Y$(Qu(J#/+npM>_8Y@RVGLE(%<P;!E/,giDQhA&I-ZO_O;eLSQ<,W;dK_J=VTTSMK((HY1@q.RAa8;C6lm@#6(d?Eh1sVu.)b$WWDjjA.QG.jjA`:'2a\niCab`]'!='3SM^6bDa!2o_9>l)KY15B3I0sH'ZRICUPcZX#H8ViN#oR\=6Uu)+ZPi4d^1hY]rRm(%GU\id]-e*7^2sc:-$kj8c<nOtAb"m"4JBJC/Ridi5/o#+74j?\?hUDMF'3kcZU@Qo2r1$K\2EG\Ofj6a1_b'I<sq-h\gT,\-<,N]892DPb.(DhQ;_"'8*[7&1LLQ-)HTE??HfJ2J(R\>7bb8LgQH.R>YENchFA\G[u4uRfoZXi2P7(<`Pk9TX+cJj!i&ee`b>bh5.9`uITJANXBFQQqpt/.RH?6J..`FLP7[.:PNBdR)$u[')36*d<J$m3cCCEZ=3UYqRs&-WQkRh3oH+uZ<>EbkL>D?eJP@^'YiLNkM*)[\]tr%n-$6N7OiP]1aq!j\4R`KbhQl9Ml3e5Ni&g7(I4PnI.SHC<eq1-n-<r6Gif4\@e#"@f4'We\n^a6Qb?aep;q*t@Krj1?c&*^MDIdmFf!\%\fkqhj8d3lG"5qjhrr~>
+endstream
+endobj
+13 0 obj
+<< /Type /Page
+/Parent 1 0 R
+/MediaBox [ 0 0 612 792 ]
+/Resources 3 0 R
+/Contents 12 0 R
+/Annots 14 0 R
+>>
+endobj
+14 0 obj
+[
+15 0 R
+16 0 R
+17 0 R
+]
+endobj
+15 0 obj
+<< /Type /Annot
+/Subtype /Link
+/Rect [ 194.988 629.666 230.316 617.666 ]
+/C [ 0 0 0 ]
+/Border [ 0 0 0 ]
+/A << /URI (http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/metrics/package-summary.html)
+/S /URI >>
+/H /I
+>>
+endobj
+16 0 obj
+<< /Type /Annot
+/Subtype /Link
+/Rect [ 209.652 577.332 244.98 565.332 ]
+/C [ 0 0 0 ]
+/Border [ 0 0 0 ]
+/A << /URI (http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/metrics/package-summary.html)
+/S /URI >>
+/H /I
+>>
+endobj
+17 0 obj
+<< /Type /Annot
+/Subtype /Link
+/Rect [ 388.62 577.332 463.272 565.332 ]
+/C [ 0 0 0 ]
+/Border [ 0 0 0 ]
+/A << /URI (http://wiki.apache.org/hadoop/GangliaMetrics)
+/S /URI >>
+/H /I
+>>
+endobj
+19 0 obj
+<<
+ /Title (\376\377\0\61\0\40\0\111\0\156\0\164\0\162\0\157\0\144\0\165\0\143\0\164\0\151\0\157\0\156)
+ /Parent 18 0 R
+ /Next 20 0 R
+ /A 9 0 R
+>> endobj
+20 0 obj
+<<
+ /Title (\376\377\0\62\0\40\0\110\0\117\0\127\0\124\0\117)
+ /Parent 18 0 R
+ /Prev 19 0 R
+ /A 11 0 R
+>> endobj
+21 0 obj
+<< /Type /Font
+/Subtype /Type1
+/Name /F3
+/BaseFont /Helvetica-Bold
+/Encoding /WinAnsiEncoding >>
+endobj
+22 0 obj
+<< /Type /Font
+/Subtype /Type1
+/Name /F5
+/BaseFont /Times-Roman
+/Encoding /WinAnsiEncoding >>
+endobj
+23 0 obj
+<< /Type /Font
+/Subtype /Type1
+/Name /F6
+/BaseFont /Times-Italic
+/Encoding /WinAnsiEncoding >>
+endobj
+24 0 obj
+<< /Type /Font
+/Subtype /Type1
+/Name /F1
+/BaseFont /Helvetica
+/Encoding /WinAnsiEncoding >>
+endobj
+25 0 obj
+<< /Type /Font
+/Subtype /Type1
+/Name /F9
+/BaseFont /Courier
+/Encoding /WinAnsiEncoding >>
+endobj
+26 0 obj
+<< /Type /Font
+/Subtype /Type1
+/Name /F2
+/BaseFont /Helvetica-Oblique
+/Encoding /WinAnsiEncoding >>
+endobj
+27 0 obj
+<< /Type /Font
+/Subtype /Type1
+/Name /F7
+/BaseFont /Times-Bold
+/Encoding /WinAnsiEncoding >>
+endobj
+1 0 obj
+<< /Type /Pages
+/Count 2
+/Kids [6 0 R 13 0 R ] >>
+endobj
+2 0 obj
+<< /Type /Catalog
+/Pages 1 0 R
+ /Outlines 18 0 R
+ /PageMode /UseOutlines
+ >>
+endobj
+3 0 obj
+<<
+/Font << /F3 21 0 R /F5 22 0 R /F1 24 0 R /F6 23 0 R /F9 25 0 R /F2 26 0 R /F7 27 0 R >>
+/ProcSet [ /PDF /ImageC /Text ] >>
+endobj
+9 0 obj
+<<
+/S /GoTo
+/D [13 0 R /XYZ 85.0 659.0 null]
+>>
+endobj
+11 0 obj
+<<
+/S /GoTo
+/D [13 0 R /XYZ 85.0 606.666 null]
+>>
+endobj
+18 0 obj
+<<
+ /First 19 0 R
+ /Last 20 0 R
+>> endobj
+xref
+0 28
+0000000000 65535 f
+0000004708 00000 n
+0000004773 00000 n
+0000004865 00000 n
+0000000015 00000 n
+0000000071 00000 n
+0000000559 00000 n
+0000000679 00000 n
+0000000711 00000 n
+0000005010 00000 n
+0000000846 00000 n
+0000005073 00000 n
+0000000983 00000 n
+0000002796 00000 n
+0000002919 00000 n
+0000002960 00000 n
+0000003207 00000 n
+0000003453 00000 n
+0000005139 00000 n
+0000003650 00000 n
+0000003813 00000 n
+0000003935 00000 n
+0000004048 00000 n
+0000004158 00000 n
+0000004269 00000 n
+0000004377 00000 n
+0000004483 00000 n
+0000004599 00000 n
+trailer
+<<
+/Size 28
+/Root 2 0 R
+/Info 4 0 R
+>>
+startxref
+5190
+%%EOF
diff --git a/docs/skin/CommonMessages_de.xml b/docs/skin/CommonMessages_de.xml
new file mode 100644
index 0000000..bc46119
--- /dev/null
+++ b/docs/skin/CommonMessages_de.xml
@@ -0,0 +1,23 @@
+<?xml version="1.0" encoding="ISO-8859-1"?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<catalogue>
+ <message key="Font size:">Schriftgrösse:</message>
+ <message key="Last Published:">Zuletzt veröffentlicht:</message>
+ <message key="Search">Suche:</message>
+ <message key="Search the site with">Suche auf der Seite mit</message>
+</catalogue>
diff --git a/docs/skin/CommonMessages_en_US.xml b/docs/skin/CommonMessages_en_US.xml
new file mode 100644
index 0000000..88dfe14
--- /dev/null
+++ b/docs/skin/CommonMessages_en_US.xml
@@ -0,0 +1,23 @@
+<?xml version="1.0" encoding="ISO-8859-1"?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<catalogue>
+ <message key="Font size:">Font size:</message>
+ <message key="Last Published:">Last Published:</message>
+ <message key="Search">Search</message>
+ <message key="Search the site with">Search site with</message>
+</catalogue>
diff --git a/docs/skin/CommonMessages_es.xml b/docs/skin/CommonMessages_es.xml
new file mode 100644
index 0000000..63be671
--- /dev/null
+++ b/docs/skin/CommonMessages_es.xml
@@ -0,0 +1,23 @@
+<?xml version="1.0" encoding="ISO-8859-1"?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<catalogue>
+ <message key="Font size:">Tamaño del texto:</message>
+ <message key="Last Published:">Fecha de publicación:</message>
+ <message key="Search">Buscar</message>
+ <message key="Search the site with">Buscar en</message>
+</catalogue>
diff --git a/docs/skin/CommonMessages_fr.xml b/docs/skin/CommonMessages_fr.xml
new file mode 100644
index 0000000..622569a
--- /dev/null
+++ b/docs/skin/CommonMessages_fr.xml
@@ -0,0 +1,23 @@
+<?xml version="1.0" encoding="ISO-8859-1"?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<catalogue>
+ <message key="Font size:">Taille :</message>
+ <message key="Last Published:">Dernière publication :</message>
+ <message key="Search">Rechercher</message>
+ <message key="Search the site with">Rechercher sur le site avec</message>
+</catalogue>
diff --git a/docs/skin/basic.css b/docs/skin/basic.css
new file mode 100644
index 0000000..eb24c32
--- /dev/null
+++ b/docs/skin/basic.css
@@ -0,0 +1,166 @@
+/*
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+/**
+ * General
+ */
+
+img { border: 0; }
+
+#content table {
+ border: 0;
+ width: 100%;
+}
+/*Hack to get IE to render the table at 100%*/
+* html #content table { margin-left: -3px; }
+
+#content th,
+#content td {
+ margin: 0;
+ padding: 0;
+ vertical-align: top;
+}
+
+.clearboth {
+ clear: both;
+}
+
+.note, .warning, .fixme {
+ border: solid black 1px;
+ margin: 1em 3em;
+}
+
+.note .label {
+ background: #369;
+ color: white;
+ font-weight: bold;
+ padding: 5px 10px;
+}
+.note .content {
+ background: #F0F0FF;
+ color: black;
+ line-height: 120%;
+ font-size: 90%;
+ padding: 5px 10px;
+}
+.warning .label {
+ background: #C00;
+ color: white;
+ font-weight: bold;
+ padding: 5px 10px;
+}
+.warning .content {
+ background: #FFF0F0;
+ color: black;
+ line-height: 120%;
+ font-size: 90%;
+ padding: 5px 10px;
+}
+.fixme .label {
+ background: #C6C600;
+ color: black;
+ font-weight: bold;
+ padding: 5px 10px;
+}
+.fixme .content {
+ padding: 5px 10px;
+}
+
+/**
+ * Typography
+ */
+
+body {
+ font-family: verdana, "Trebuchet MS", arial, helvetica, sans-serif;
+ font-size: 100%;
+}
+
+#content {
+ font-family: Georgia, Palatino, Times, serif;
+ font-size: 95%;
+}
+#tabs {
+ font-size: 70%;
+}
+#menu {
+ font-size: 80%;
+}
+#footer {
+ font-size: 70%;
+}
+
+h1, h2, h3, h4, h5, h6 {
+ font-family: "Trebuchet MS", verdana, arial, helvetica, sans-serif;
+ font-weight: bold;
+ margin-top: 1em;
+ margin-bottom: .5em;
+}
+
+h1 {
+ margin-top: 0;
+ margin-bottom: 1em;
+ font-size: 1.4em;
+}
+#content h1 {
+ font-size: 160%;
+ margin-bottom: .5em;
+}
+#menu h1 {
+ margin: 0;
+ padding: 10px;
+ background: #336699;
+ color: white;
+}
+h2 { font-size: 120%; }
+h3 { font-size: 100%; }
+h4 { font-size: 90%; }
+h5 { font-size: 80%; }
+h6 { font-size: 75%; }
+
+p {
+ line-height: 120%;
+ text-align: left;
+ margin-top: .5em;
+ margin-bottom: 1em;
+}
+
+#content li,
+#content th,
+#content td,
+#content li ul,
+#content li ol{
+ margin-top: .5em;
+ margin-bottom: .5em;
+}
+
+
+#content li li,
+#minitoc-area li{
+ margin-top: 0em;
+ margin-bottom: 0em;
+}
+
+#content .attribution {
+ text-align: right;
+ font-style: italic;
+ font-size: 85%;
+ margin-top: 1em;
+}
+
+.codefrag {
+ font-family: "Courier New", Courier, monospace;
+ font-size: 110%;
+}
\ No newline at end of file
diff --git a/docs/skin/breadcrumbs-optimized.js b/docs/skin/breadcrumbs-optimized.js
new file mode 100644
index 0000000..507612a
--- /dev/null
+++ b/docs/skin/breadcrumbs-optimized.js
@@ -0,0 +1,90 @@
+/*
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+var PREPREND_CRUMBS=new Array();
+var link1="@skinconfig.trail.link1.name@";
+var link2="@skinconfig.trail.link2.name@";
+var link3="@skinconfig.trail.link3.name@";
+if(!(link1=="")&&!link1.indexOf( "@" ) == 0){
+ PREPREND_CRUMBS.push( new Array( link1, "@skinconfig.trail.link1.href@" ) ); }
+if(!(link2=="")&&!link2.indexOf( "@" ) == 0){
+ PREPREND_CRUMBS.push( new Array( link2, "@skinconfig.trail.link2.href@" ) ); }
+if(!(link3=="")&&!link3.indexOf( "@" ) == 0){
+ PREPREND_CRUMBS.push( new Array( link3, "@skinconfig.trail.link3.href@" ) ); }
+var DISPLAY_SEPARATOR=" > ";
+var DISPLAY_PREPREND=" > ";
+var DISPLAY_POSTPREND=":";
+var CSS_CLASS_CRUMB="breadcrumb";
+var CSS_CLASS_TRAIL="breadcrumbTrail";
+var CSS_CLASS_SEPARATOR="crumbSeparator";
+var FILE_EXTENSIONS=new Array( ".html", ".htm", ".jsp", ".php", ".php3", ".php4" );
+var PATH_SEPARATOR="/";
+
+function sc(s) {
+ var l=s.toLowerCase();
+ return l.substr(0,1).toUpperCase()+l.substr(1);
+}
+function getdirs() {
+ var t=document.location.pathname.split(PATH_SEPARATOR);
+ var lc=t[t.length-1];
+ for(var i=0;i < FILE_EXTENSIONS.length;i++)
+ {
+ if(lc.indexOf(FILE_EXTENSIONS[i]) != -1)
+ return t.slice(1,t.length-1); }
+ return t.slice(1,t.length);
+}
+function getcrumbs( d )
+{
+ var pre = "/";
+ var post = "/";
+ var c = new Array();
+ if( d != null )
+ {
+ for(var i=0;i < d.length;i++) {
+ pre+=d[i]+post;
+ c.push(new Array(d[i],pre)); }
+ }
+ if(PREPREND_CRUMBS.length > 0 )
+ return PREPREND_CRUMBS.concat( c );
+ return c;
+}
+function gettrail( c )
+{
+ var h=DISPLAY_PREPREND;
+ for(var i=0;i < c.length;i++)
+ {
+ h+='<a href="'+c[i][1]+'" >'+sc(c[i][0])+'</a>';
+ if(i!=(c.length-1))
+ h+=DISPLAY_SEPARATOR; }
+ return h+DISPLAY_POSTPREND;
+}
+
+function gettrailXHTML( c )
+{
+ var h='<span class="'+CSS_CLASS_TRAIL+'">'+DISPLAY_PREPREND;
+ for(var i=0;i < c.length;i++)
+ {
+ h+='<a href="'+c[i][1]+'" class="'+CSS_CLASS_CRUMB+'">'+sc(c[i][0])+'</a>';
+ if(i!=(c.length-1))
+ h+='<span class="'+CSS_CLASS_SEPARATOR+'">'+DISPLAY_SEPARATOR+'</span>'; }
+ return h+DISPLAY_POSTPREND+'</span>';
+}
+
+if(document.location.href.toLowerCase().indexOf("http://")==-1)
+ document.write(gettrail(getcrumbs()));
+else
+ document.write(gettrail(getcrumbs(getdirs())));
+
diff --git a/docs/skin/breadcrumbs.js b/docs/skin/breadcrumbs.js
new file mode 100644
index 0000000..aea80ec
--- /dev/null
+++ b/docs/skin/breadcrumbs.js
@@ -0,0 +1,237 @@
+/*
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+/**
+ * This script, when included in a html file, builds a neat breadcrumb trail
+ * based on its url. That is, if it doesn't contain bugs (I'm relatively
+ * sure it does).
+ *
+ * Typical usage:
+ * <script type="text/javascript" language="JavaScript" src="breadcrumbs.js"></script>
+ */
+
+/**
+ * IE 5 on Mac doesn't know Array.push.
+ *
+ * Implement it - courtesy to fritz.
+ */
+var abc = new Array();
+if (!abc.push) {
+ Array.prototype.push = function(what){this[this.length]=what}
+}
+
+/* ========================================================================
+ CONSTANTS
+ ======================================================================== */
+
+/**
+ * Two-dimensional array containing extra crumbs to place at the front of
+ * the trail. Specify first the name of the crumb, then the URI that belongs
+ * to it. You'll need to modify this for every domain or subdomain where
+ * you use this script (you can leave it as an empty array if you wish)
+ */
+var PREPREND_CRUMBS = new Array();
+
+var link1 = "@skinconfig.trail.link1.name@";
+var link2 = "@skinconfig.trail.link2.name@";
+var link3 = "@skinconfig.trail.link3.name@";
+
+var href1 = "@skinconfig.trail.link1.href@";
+var href2 = "@skinconfig.trail.link2.href@";
+var href3 = "@skinconfig.trail.link3.href@";
+
+ if(!(link1=="")&&!link1.indexOf( "@" ) == 0){
+ PREPREND_CRUMBS.push( new Array( link1, href1 ) );
+ }
+ if(!(link2=="")&&!link2.indexOf( "@" ) == 0){
+ PREPREND_CRUMBS.push( new Array( link2, href2 ) );
+ }
+ if(!(link3=="")&&!link3.indexOf( "@" ) == 0){
+ PREPREND_CRUMBS.push( new Array( link3, href3 ) );
+ }
+
+/**
+ * String to include between crumbs:
+ */
+var DISPLAY_SEPARATOR = " > ";
+/**
+ * String to include at the beginning of the trail
+ */
+var DISPLAY_PREPREND = " > ";
+/**
+ * String to include at the end of the trail
+ */
+var DISPLAY_POSTPREND = "";
+
+/**
+ * CSS Class to use for a single crumb:
+ */
+var CSS_CLASS_CRUMB = "breadcrumb";
+
+/**
+ * CSS Class to use for the complete trail:
+ */
+var CSS_CLASS_TRAIL = "breadcrumbTrail";
+
+/**
+ * CSS Class to use for crumb separator:
+ */
+var CSS_CLASS_SEPARATOR = "crumbSeparator";
+
+/**
+ * Array of strings containing common file extensions. We use this to
+ * determine what part of the url to ignore (if it contains one of the
+ * string specified here, we ignore it).
+ */
+var FILE_EXTENSIONS = new Array( ".html", ".htm", ".jsp", ".php", ".php3", ".php4" );
+
+/**
+ * String that separates parts of the breadcrumb trail from each other.
+ * When this is no longer a slash, I'm sure I'll be old and grey.
+ */
+var PATH_SEPARATOR = "/";
+
+/* ========================================================================
+ UTILITY FUNCTIONS
+ ======================================================================== */
+/**
+ * Capitalize first letter of the provided string and return the modified
+ * string (currently disabled: the string is returned unchanged).
+ */
+function sentenceCase( string )
+{ return string;
+ //var lower = string.toLowerCase();
+ //return lower.substr(0,1).toUpperCase() + lower.substr(1);
+}
+
+/**
+ * Returns an array containing the names of all the directories in the
+ * current document URL
+ */
+function getDirectoriesInURL()
+{
+ var trail = document.location.pathname.split( PATH_SEPARATOR );
+
+ // check whether last section is a file or a directory
+ var lastcrumb = trail[trail.length-1];
+ for( var i = 0; i < FILE_EXTENSIONS.length; i++ )
+ {
+ if( lastcrumb.indexOf( FILE_EXTENSIONS[i] ) != -1 )
+ {
+ // it is, remove it and send results
+ return trail.slice( 1, trail.length-1 );
+ }
+ }
+
+ // it's not; send the trail unmodified
+ return trail.slice( 1, trail.length );
+}
+
+/* ========================================================================
+ BREADCRUMB FUNCTIONALITY
+ ======================================================================== */
+/**
+ * Return a two-dimensional array describing the breadcrumbs based on the
+ * array of directories passed in.
+ */
+function getBreadcrumbs( dirs )
+{
+ var prefix = "/";
+ var postfix = "/";
+
+ // the array we will return
+ var crumbs = new Array();
+
+ if( dirs != null )
+ {
+ for( var i = 0; i < dirs.length; i++ )
+ {
+ prefix += dirs[i] + postfix;
+ crumbs.push( new Array( dirs[i], prefix ) );
+ }
+ }
+
+ // preprend the PREPREND_CRUMBS
+ if(PREPREND_CRUMBS.length > 0 )
+ {
+ return PREPREND_CRUMBS.concat( crumbs );
+ }
+
+ return crumbs;
+}
+
+/**
+ * Return a string containing a simple text breadcrumb trail based on the
+ * two-dimensional array passed in.
+ */
+function getCrumbTrail( crumbs )
+{
+ var xhtml = DISPLAY_PREPREND;
+
+ for( var i = 0; i < crumbs.length; i++ )
+ {
+ xhtml += '<a href="' + crumbs[i][1] + '" >';
+ xhtml += unescape( crumbs[i][0] ) + '</a>';
+ if( i != (crumbs.length-1) )
+ {
+ xhtml += DISPLAY_SEPARATOR;
+ }
+ }
+
+ xhtml += DISPLAY_POSTPREND;
+
+ return xhtml;
+}
+
+/**
+ * Return a string containing an XHTML breadcrumb trail based on the
+ * two-dimensional array passed in.
+ */
+function getCrumbTrailXHTML( crumbs )
+{
+ var xhtml = '<span class="' + CSS_CLASS_TRAIL + '">';
+ xhtml += DISPLAY_PREPREND;
+
+ for( var i = 0; i < crumbs.length; i++ )
+ {
+ xhtml += '<a href="' + crumbs[i][1] + '" class="' + CSS_CLASS_CRUMB + '">';
+ xhtml += unescape( crumbs[i][0] ) + '</a>';
+ if( i != (crumbs.length-1) )
+ {
+ xhtml += '<span class="' + CSS_CLASS_SEPARATOR + '">' + DISPLAY_SEPARATOR + '</span>';
+ }
+ }
+
+ xhtml += DISPLAY_POSTPREND;
+ xhtml += '</span>';
+
+ return xhtml;
+}
+
+/* ========================================================================
+ PRINT BREADCRUMB TRAIL
+ ======================================================================== */
+
+// check if we're local; if so, only print the PREPREND_CRUMBS
+if( document.location.href.toLowerCase().indexOf( "http://" ) == -1 )
+{
+ document.write( getCrumbTrail( getBreadcrumbs() ) );
+}
+else
+{
+ document.write( getCrumbTrail( getBreadcrumbs( getDirectoriesInURL() ) ) );
+}
+
diff --git a/docs/skin/fontsize.js b/docs/skin/fontsize.js
new file mode 100644
index 0000000..11722bf
--- /dev/null
+++ b/docs/skin/fontsize.js
@@ -0,0 +1,166 @@
+/*
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+function init()
+{ //embedded in the doc
+ //ndeSetTextSize();
+}
+
+function checkBrowser(){
+ if (!document.getElementsByTagName){
+ return true;
+ }
+ else{
+ return false;
+ }
+}
+
+
+function ndeSetTextSize(chgsize,rs)
+{
+ var startSize;
+ var newSize;
+
+ if (checkBrowser())
+ {
+ return;
+ }
+
+ startSize = parseInt(ndeGetDocTextSize());
+
+ if (!startSize)
+ {
+ startSize = 16;
+ }
+
+ switch (chgsize)
+ {
+ case 'incr':
+ newSize = startSize + 2;
+ break;
+
+ case 'decr':
+ newSize = startSize - 2;
+ break;
+
+ case 'reset':
+ if (rs) {newSize = rs;} else {newSize = 16;}
+ break;
+
+ default:
+ try{
+ newSize = parseInt(ndeReadCookie("nde-textsize"));
+ }
+ catch(e){
+ alert(e);
+ }
+
+ if (!newSize || newSize == 'NaN')
+ {
+ newSize = startSize;
+ }
+ break;
+
+ }
+
+ if (newSize < 10)
+ {
+ newSize = 10;
+ }
+
+ newSize += 'px';
+
+ document.getElementsByTagName('html')[0].style.fontSize = newSize;
+ document.getElementsByTagName('body')[0].style.fontSize = newSize;
+
+ ndeCreateCookie("nde-textsize", newSize, 365);
+}
+
+function ndeGetDocTextSize()
+{
+ if (checkBrowser())
+ {
+ return 0;
+ }
+
+ var size = 0;
+ var body = document.getElementsByTagName('body')[0];
+
+ if (body.style && body.style.fontSize)
+ {
+ size = body.style.fontSize;
+ }
+ else if (typeof(getComputedStyle) != 'undefined')
+ {
+ size = getComputedStyle(body,'').getPropertyValue('font-size');
+ }
+ else if (body.currentStyle)
+ {
+ size = body.currentStyle.fontSize;
+ }
+
+ //fix IE bug
+ if( isNaN(size)){
+ if(size.substring(size.length-1)=="%"){
+ return
+ }
+
+ }
+
+ return size;
+
+}
+
+
+
+function ndeCreateCookie(name,value,days)
+{
+ var cookie = name + "=" + value + ";";
+
+ if (days)
+ {
+ var date = new Date();
+ date.setTime(date.getTime()+(days*24*60*60*1000));
+ cookie += " expires=" + date.toGMTString() + ";";
+ }
+ cookie += " path=/";
+
+ document.cookie = cookie;
+
+}
+
+function ndeReadCookie(name)
+{
+ var nameEQ = name + "=";
+ var ca = document.cookie.split(';');
+
+
+ for(var i = 0; i < ca.length; i++)
+ {
+ var c = ca[i];
+ while (c.charAt(0) == ' ')
+ {
+ c = c.substring(1, c.length);
+ }
+
+ var ctest = c.substring(0,name.length);
+
+ if(ctest == name){
+ return c.substring(nameEQ.length,c.length);
+ }
+ }
+ return null;
+}
diff --git a/docs/skin/getBlank.js b/docs/skin/getBlank.js
new file mode 100644
index 0000000..d9978c0
--- /dev/null
+++ b/docs/skin/getBlank.js
@@ -0,0 +1,40 @@
+/*
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+/**
+ * getBlank script - when included in a html file and called from a form text field, will set the value of this field to ""
+ * if the text value is still the standard value.
+ * getPrompt script - when included in a html file and called from a form text field, will set the value of this field to the prompt
+ * if the text value is empty.
+ *
+ * Typical usage:
+ * <script type="text/javascript" language="JavaScript" src="getBlank.js"></script>
+ * <input type="text" id="query" value="Search the site:" onFocus="getBlank (this, 'Search the site:');" onBlur="getBlank (this, 'Search the site:');"/>
+ */
+<!--
+function getBlank (form, stdValue){
+if (form.value == stdValue){
+ form.value = '';
+ }
+return true;
+}
+function getPrompt (form, stdValue){
+if (form.value == ''){
+ form.value = stdValue;
+ }
+return true;
+}
+//-->
diff --git a/docs/skin/getMenu.js b/docs/skin/getMenu.js
new file mode 100644
index 0000000..b17aad6
--- /dev/null
+++ b/docs/skin/getMenu.js
@@ -0,0 +1,45 @@
+/*
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+/**
+ * This script, when included in a html file, can be used to make collapsible menus
+ *
+ * Typical usage:
+ * <script type="text/javascript" language="JavaScript" src="menu.js"></script>
+ */
+
+if (document.getElementById){
+ document.write('<style type="text/css">.menuitemgroup{display: none;}</style>')
+}
+
+
+function SwitchMenu(obj, thePath)
+{
+var open = 'url("'+thePath + 'images/chapter_open.gif")';
+var close = 'url("'+thePath + 'images/chapter.gif")';
+ if(document.getElementById) {
+ var el = document.getElementById(obj);
+ var title = document.getElementById(obj+'Title');
+
+ if(el.style.display != "block"){
+ title.style.backgroundImage = open;
+ el.style.display = "block";
+ }else{
+ title.style.backgroundImage = close;
+ el.style.display = "none";
+ }
+ }// end - if(document.getElementById)
+}//end - function SwitchMenu(obj)
diff --git a/docs/skin/images/README.txt b/docs/skin/images/README.txt
new file mode 100644
index 0000000..e0932f4
--- /dev/null
+++ b/docs/skin/images/README.txt
@@ -0,0 +1 @@
+The images in this directory are used if the current skin lacks them.
diff --git a/docs/skin/images/add.jpg b/docs/skin/images/add.jpg
new file mode 100644
index 0000000..06831ee
--- /dev/null
+++ b/docs/skin/images/add.jpg
Binary files differ
diff --git a/docs/skin/images/built-with-forrest-button.png b/docs/skin/images/built-with-forrest-button.png
new file mode 100644
index 0000000..4a787ab
--- /dev/null
+++ b/docs/skin/images/built-with-forrest-button.png
Binary files differ
diff --git a/docs/skin/images/chapter.gif b/docs/skin/images/chapter.gif
new file mode 100644
index 0000000..d3d8245
--- /dev/null
+++ b/docs/skin/images/chapter.gif
Binary files differ
diff --git a/docs/skin/images/chapter_open.gif b/docs/skin/images/chapter_open.gif
new file mode 100644
index 0000000..eecce18
--- /dev/null
+++ b/docs/skin/images/chapter_open.gif
Binary files differ
diff --git a/docs/skin/images/current.gif b/docs/skin/images/current.gif
new file mode 100644
index 0000000..fd82c08
--- /dev/null
+++ b/docs/skin/images/current.gif
Binary files differ
diff --git a/docs/skin/images/error.png b/docs/skin/images/error.png
new file mode 100644
index 0000000..b4fe06e
--- /dev/null
+++ b/docs/skin/images/error.png
Binary files differ
diff --git a/docs/skin/images/external-link.gif b/docs/skin/images/external-link.gif
new file mode 100644
index 0000000..ff2f7b2
--- /dev/null
+++ b/docs/skin/images/external-link.gif
Binary files differ
diff --git a/docs/skin/images/fix.jpg b/docs/skin/images/fix.jpg
new file mode 100644
index 0000000..1d6820b
--- /dev/null
+++ b/docs/skin/images/fix.jpg
Binary files differ
diff --git a/docs/skin/images/forrest-credit-logo.png b/docs/skin/images/forrest-credit-logo.png
new file mode 100644
index 0000000..8a63e42
--- /dev/null
+++ b/docs/skin/images/forrest-credit-logo.png
Binary files differ
diff --git a/docs/skin/images/hack.jpg b/docs/skin/images/hack.jpg
new file mode 100644
index 0000000..f38d50f
--- /dev/null
+++ b/docs/skin/images/hack.jpg
Binary files differ
diff --git a/docs/skin/images/header_white_line.gif b/docs/skin/images/header_white_line.gif
new file mode 100644
index 0000000..369cae8
--- /dev/null
+++ b/docs/skin/images/header_white_line.gif
Binary files differ
diff --git a/docs/skin/images/info.png b/docs/skin/images/info.png
new file mode 100644
index 0000000..2e53447
--- /dev/null
+++ b/docs/skin/images/info.png
Binary files differ
diff --git a/docs/skin/images/instruction_arrow.png b/docs/skin/images/instruction_arrow.png
new file mode 100644
index 0000000..0fbc724
--- /dev/null
+++ b/docs/skin/images/instruction_arrow.png
Binary files differ
diff --git a/docs/skin/images/label.gif b/docs/skin/images/label.gif
new file mode 100644
index 0000000..c83a389
--- /dev/null
+++ b/docs/skin/images/label.gif
Binary files differ
diff --git a/docs/skin/images/page.gif b/docs/skin/images/page.gif
new file mode 100644
index 0000000..a144d32
--- /dev/null
+++ b/docs/skin/images/page.gif
Binary files differ
diff --git a/docs/skin/images/pdfdoc.gif b/docs/skin/images/pdfdoc.gif
new file mode 100644
index 0000000..ec13eb5
--- /dev/null
+++ b/docs/skin/images/pdfdoc.gif
Binary files differ
diff --git a/docs/skin/images/poddoc.png b/docs/skin/images/poddoc.png
new file mode 100644
index 0000000..a393df7
--- /dev/null
+++ b/docs/skin/images/poddoc.png
Binary files differ
diff --git a/docs/skin/images/printer.gif b/docs/skin/images/printer.gif
new file mode 100644
index 0000000..a8d0d41
--- /dev/null
+++ b/docs/skin/images/printer.gif
Binary files differ
diff --git a/docs/skin/images/rc-b-l-15-1body-2menu-3menu.png b/docs/skin/images/rc-b-l-15-1body-2menu-3menu.png
new file mode 100644
index 0000000..cdb460a
--- /dev/null
+++ b/docs/skin/images/rc-b-l-15-1body-2menu-3menu.png
Binary files differ
diff --git a/docs/skin/images/rc-b-r-15-1body-2menu-3menu.png b/docs/skin/images/rc-b-r-15-1body-2menu-3menu.png
new file mode 100644
index 0000000..3eff254
--- /dev/null
+++ b/docs/skin/images/rc-b-r-15-1body-2menu-3menu.png
Binary files differ
diff --git a/docs/skin/images/rc-b-r-5-1header-2tab-selected-3tab-selected.png b/docs/skin/images/rc-b-r-5-1header-2tab-selected-3tab-selected.png
new file mode 100644
index 0000000..b175f27
--- /dev/null
+++ b/docs/skin/images/rc-b-r-5-1header-2tab-selected-3tab-selected.png
Binary files differ
diff --git a/docs/skin/images/rc-t-l-5-1header-2searchbox-3searchbox.png b/docs/skin/images/rc-t-l-5-1header-2searchbox-3searchbox.png
new file mode 100644
index 0000000..e9f4440
--- /dev/null
+++ b/docs/skin/images/rc-t-l-5-1header-2searchbox-3searchbox.png
Binary files differ
diff --git a/docs/skin/images/rc-t-l-5-1header-2tab-selected-3tab-selected.png b/docs/skin/images/rc-t-l-5-1header-2tab-selected-3tab-selected.png
new file mode 100644
index 0000000..f1e015b
--- /dev/null
+++ b/docs/skin/images/rc-t-l-5-1header-2tab-selected-3tab-selected.png
Binary files differ
diff --git a/docs/skin/images/rc-t-l-5-1header-2tab-unselected-3tab-unselected.png b/docs/skin/images/rc-t-l-5-1header-2tab-unselected-3tab-unselected.png
new file mode 100644
index 0000000..e9f4440
--- /dev/null
+++ b/docs/skin/images/rc-t-l-5-1header-2tab-unselected-3tab-unselected.png
Binary files differ
diff --git a/docs/skin/images/rc-t-r-15-1body-2menu-3menu.png b/docs/skin/images/rc-t-r-15-1body-2menu-3menu.png
new file mode 100644
index 0000000..29388b5
--- /dev/null
+++ b/docs/skin/images/rc-t-r-15-1body-2menu-3menu.png
Binary files differ
diff --git a/docs/skin/images/rc-t-r-5-1header-2searchbox-3searchbox.png b/docs/skin/images/rc-t-r-5-1header-2searchbox-3searchbox.png
new file mode 100644
index 0000000..944ed73
--- /dev/null
+++ b/docs/skin/images/rc-t-r-5-1header-2searchbox-3searchbox.png
Binary files differ
diff --git a/docs/skin/images/rc-t-r-5-1header-2tab-selected-3tab-selected.png b/docs/skin/images/rc-t-r-5-1header-2tab-selected-3tab-selected.png
new file mode 100644
index 0000000..c4d4a8c
--- /dev/null
+++ b/docs/skin/images/rc-t-r-5-1header-2tab-selected-3tab-selected.png
Binary files differ
diff --git a/docs/skin/images/rc-t-r-5-1header-2tab-unselected-3tab-unselected.png b/docs/skin/images/rc-t-r-5-1header-2tab-unselected-3tab-unselected.png
new file mode 100644
index 0000000..944ed73
--- /dev/null
+++ b/docs/skin/images/rc-t-r-5-1header-2tab-unselected-3tab-unselected.png
Binary files differ
diff --git a/docs/skin/images/remove.jpg b/docs/skin/images/remove.jpg
new file mode 100644
index 0000000..8c9b9ef
--- /dev/null
+++ b/docs/skin/images/remove.jpg
Binary files differ
diff --git a/docs/skin/images/rss.png b/docs/skin/images/rss.png
new file mode 100644
index 0000000..f0796ac
--- /dev/null
+++ b/docs/skin/images/rss.png
Binary files differ
diff --git a/docs/skin/images/spacer.gif b/docs/skin/images/spacer.gif
new file mode 100644
index 0000000..35d42e8
--- /dev/null
+++ b/docs/skin/images/spacer.gif
Binary files differ
diff --git a/docs/skin/images/success.png b/docs/skin/images/success.png
new file mode 100644
index 0000000..96fcfea
--- /dev/null
+++ b/docs/skin/images/success.png
Binary files differ
diff --git a/docs/skin/images/txtdoc.png b/docs/skin/images/txtdoc.png
new file mode 100644
index 0000000..bf8b374
--- /dev/null
+++ b/docs/skin/images/txtdoc.png
Binary files differ
diff --git a/docs/skin/images/update.jpg b/docs/skin/images/update.jpg
new file mode 100644
index 0000000..beb9207
--- /dev/null
+++ b/docs/skin/images/update.jpg
Binary files differ
diff --git a/docs/skin/images/valid-html401.png b/docs/skin/images/valid-html401.png
new file mode 100644
index 0000000..3855210
--- /dev/null
+++ b/docs/skin/images/valid-html401.png
Binary files differ
diff --git a/docs/skin/images/vcss.png b/docs/skin/images/vcss.png
new file mode 100644
index 0000000..9b2f596
--- /dev/null
+++ b/docs/skin/images/vcss.png
Binary files differ
diff --git a/docs/skin/images/warning.png b/docs/skin/images/warning.png
new file mode 100644
index 0000000..b81b2ce
--- /dev/null
+++ b/docs/skin/images/warning.png
Binary files differ
diff --git a/docs/skin/images/xmldoc.gif b/docs/skin/images/xmldoc.gif
new file mode 100644
index 0000000..c92d9b9
--- /dev/null
+++ b/docs/skin/images/xmldoc.gif
Binary files differ
diff --git a/docs/skin/menu.js b/docs/skin/menu.js
new file mode 100644
index 0000000..06ea471
--- /dev/null
+++ b/docs/skin/menu.js
@@ -0,0 +1,48 @@
+/*
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+/**
+ * This script, when included in a html file, can be used to make collapsible menus
+ *
+ * Typical usage:
+ * <script type="text/javascript" language="JavaScript" src="menu.js"></script>
+ */
+
+if (document.getElementById){
+ document.write('<style type="text/css">.menuitemgroup{display: none;}</style>')
+}
+
+function SwitchMenu(obj)
+{
+ if(document.getElementById) {
+ var el = document.getElementById(obj);
+ var title = document.getElementById(obj+'Title');
+
+ if(obj.indexOf("_selected_")==0&&el.style.display == ""){
+ el.style.display = "block";
+ title.className = "pagegroupselected";
+ }
+
+ if(el.style.display != "block"){
+ el.style.display = "block";
+ title.className = "pagegroupopen";
+ }
+ else{
+ el.style.display = "none";
+ title.className = "pagegroup";
+ }
+ }// end - if(document.getElementById)
+}//end - function SwitchMenu(obj)
diff --git a/docs/skin/note.txt b/docs/skin/note.txt
new file mode 100644
index 0000000..d34c8db
--- /dev/null
+++ b/docs/skin/note.txt
@@ -0,0 +1,50 @@
+Notes for developer:
+
+--Legend-------------------
+TODO -> blocker
+DONE -> blocker
+ToDo -> enhancement bug
+done -> enhancement bug
+
+--Issues-------------------
+- the corner images should be rendered through svg with the header color.
+-> DONE
+-> ToDo: get rid of the images and use only divs!
+
+- the menu points should be displayed "better".
+-> DONE
+-- Use the krysalis-site menu approach for the overall menu display.
+-> DONE
+-- Use the old lenya innermenu approach to further enhance the menu.
+-> DONE
+
+- the content area needs some attention.
+-> DONE
+-- introduce the heading scheme from krysalis (<headings type="clean|box|underlined"/>)
+-> DONE
+-> ToDo: make box with round corners
+-> done: make underlined with variable border height
+-> ToDo: make underline with bottom round corner
+-- introduce the toc for each html-page
+-> DONE
+-- introduce the external-link-images.
+-> DONE
+
+- the publish note should be where now only a border is.
+Like <div id="published"/>
+-> DONE
+, but make it configurable.
+-> DONE
+- footer needs some attention
+-> DONE
+-- the footer does not have the color profile! Enable it!
+-> DONE
+-- the footer should as well contain a feedback link.
+See http://issues.apache.org/eyebrowse/ReadMsg?listName=forrest-user@xml.apache.org&msgNo=71
+-> DONE
+
+- introduce an alternative credits location
+-> DONE
+
+- border for published / breadtrail / menu /tab divs
+-> ToDo
\ No newline at end of file
diff --git a/docs/skin/print.css b/docs/skin/print.css
new file mode 100644
index 0000000..aaa9931
--- /dev/null
+++ b/docs/skin/print.css
@@ -0,0 +1,54 @@
+/*
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+body {
+ font-family: Georgia, Palatino, serif;
+ font-size: 12pt;
+ background: white;
+}
+
+#tabs,
+#menu,
+#content .toc {
+ display: none;
+}
+
+#content {
+ width: auto;
+ padding: 0;
+ float: none !important;
+ color: black;
+ background: inherit;
+}
+
+a:link, a:visited {
+ color: #336699;
+ background: inherit;
+ text-decoration: underline;
+}
+
+#top .logo {
+ padding: 0;
+ margin: 0 0 2em 0;
+}
+
+#footer {
+ margin-top: 4em;
+}
+
+acronym {
+ border: 0;
+}
\ No newline at end of file
diff --git a/docs/skin/profile.css b/docs/skin/profile.css
new file mode 100644
index 0000000..00d3eb4
--- /dev/null
+++ b/docs/skin/profile.css
@@ -0,0 +1,158 @@
+
+
+/* ==================== aural ============================ */
+
+@media aural {
+ h1, h2, h3, h4, h5, h6 { voice-family: paul, male; stress: 20; richness: 90 }
+ h1 { pitch: x-low; pitch-range: 90 }
+ h2 { pitch: x-low; pitch-range: 80 }
+ h3 { pitch: low; pitch-range: 70 }
+ h4 { pitch: medium; pitch-range: 60 }
+ h5 { pitch: medium; pitch-range: 50 }
+ h6 { pitch: medium; pitch-range: 40 }
+ li, dt, dd { pitch: medium; richness: 60 }
+ dt { stress: 80 }
+ pre, code, tt { pitch: medium; pitch-range: 0; stress: 0; richness: 80 }
+ em { pitch: medium; pitch-range: 60; stress: 60; richness: 50 }
+ strong { pitch: medium; pitch-range: 60; stress: 90; richness: 90 }
+ dfn { pitch: high; pitch-range: 60; stress: 60 }
+ s, strike { richness: 0 }
+ i { pitch: medium; pitch-range: 60; stress: 60; richness: 50 }
+ b { pitch: medium; pitch-range: 60; stress: 90; richness: 90 }
+ u { richness: 0 }
+
+ :link { voice-family: harry, male }
+ :visited { voice-family: betty, female }
+ :active { voice-family: betty, female; pitch-range: 80; pitch: x-high }
+}
+
+a.external {
+ padding: 0 20px 0px 0px;
+ display:inline;
+ background-repeat: no-repeat;
+ background-position: center right;
+ background-image: url(images/external-link.gif);
+}
+
+#top { background-color: #FFFFFF;}
+
+#top .header .current { background-color: #4C6C8F;}
+#top .header .current a:link { color: #ffffff; }
+#top .header .current a:visited { color: #ffffff; }
+#top .header .current a:hover { color: #ffffff; }
+
+#tabs li { background-color: #E5E4D9 ;}
+#tabs li a:link { color: #000000; }
+#tabs li a:visited { color: #000000; }
+#tabs li a:hover { color: #000000; }
+
+#level2tabs a.selected { background-color: #4C6C8F ;}
+#level2tabs a:link { color: #ffffff; }
+#level2tabs a:visited { color: #ffffff; }
+#level2tabs a:hover { color: #ffffff; }
+
+#level2tabs { background-color: #E5E4D9;}
+#level2tabs a.unselected:link { color: #000000; }
+#level2tabs a.unselected:visited { color: #000000; }
+#level2tabs a.unselected:hover { color: #000000; }
+
+.heading { background-color: #E5E4D9;}
+
+.boxed { background-color: #E5E4D9;}
+.underlined_5 {border-bottom: solid 5px #E5E4D9;}
+.underlined_10 {border-bottom: solid 10px #E5E4D9;}
+table caption {
+background-color: #E5E4D9;
+color: #000000;
+}
+
+#feedback {
+color: #FFFFFF;
+background: #4C6C8F;
+text-align: center;
+}
+#feedback #feedbackto {
+color: #FFFFFF;
+}
+
+#publishedStrip {
+color: #FFFFFF;
+background: #4C6C8F;
+}
+
+#publishedStrip {
+color: #000000;
+background: #E5E4D9;
+}
+
+#menu .menupagetitle { background-color: #CFDCED;
+ color: #000000;}
+
+#menu { border-color: #999999;}
+#menu .menupagetitle { border-color: #999999;}
+#menu .menupageitemgroup { border-color: #999999;}
+
+#menu { background-color: #4C6C8F;}
+#menu { color: #ffffff;}
+#menu a:link { color: #ffffff;}
+#menu a:visited { color: #ffffff;}
+#menu a:hover {
+background-color: #4C6C8F;
+color: #ffffff;}
+
+#menu h1 {
+color: #000000;
+background-color: #cfdced;
+}
+
+#top .searchbox {
+background-color: #E5E4D9 ;
+color: #000000;
+}
+
+#menu .menupageitemgroup {
+background-color: #E5E4D9;
+}
+#menu .menupageitem {
+color: #000000;
+}
+#menu .menupageitem a:link { color: #000000;}
+#menu .menupageitem a:visited { color: #000000;}
+#menu .menupageitem a:hover {
+background-color: #E5E4D9;
+color: #000000;
+}
+
+body{
+background-color: #ffffff;
+color: #000000;
+}
+a:link { color:#0000ff}
+a:visited { color:#009999}
+a:hover { color:#6587ff}
+
+
+.ForrestTable { background-color: #ccc;}
+
+.ForrestTable td { background-color: #ffffff;}
+
+.highlight { background-color: #ffff00;}
+
+.fixme { border-color: #c60;}
+
+.note { border-color: #069;}
+
+.warning { border-color: #900;}
+
+.code { border-color: #a5b6c6;}
+
+#footer { background-color: #E5E4D9;}
+/* extra-css */
+
+ p.quote {
+ margin-left: 2em;
+ padding: .5em;
+ background-color: #f0f0f0;
+ font-family: monospace;
+ }
+
\ No newline at end of file
diff --git a/docs/skin/prototype.js b/docs/skin/prototype.js
new file mode 100644
index 0000000..ed7d920
--- /dev/null
+++ b/docs/skin/prototype.js
@@ -0,0 +1,1257 @@
+/* Prototype JavaScript framework, version 1.4.0_pre4
+ * (c) 2005 Sam Stephenson <sam@conio.net>
+ *
+ * THIS FILE IS AUTOMATICALLY GENERATED. When sending patches, please diff
+ * against the source tree, available from the Prototype darcs repository.
+ *
+ * Prototype is freely distributable under the terms of an MIT-style license.
+ *
+ * For details, see the Prototype web site: http://prototype.conio.net/
+ *
+/*--------------------------------------------------------------------------*/
+
+var Prototype = {
+ Version: '1.4.0_pre4',
+
+ emptyFunction: function() {},
+ K: function(x) {return x}
+}
+
+var Class = {
+ create: function() {
+ return function() {
+ this.initialize.apply(this, arguments);
+ }
+ }
+}
+
+var Abstract = new Object();
+
+Object.extend = function(destination, source) {
+ for (property in source) {
+ destination[property] = source[property];
+ }
+ return destination;
+}
+
+Function.prototype.bind = function(object) {
+ var __method = this;
+ return function() {
+ return __method.apply(object, arguments);
+ }
+}
+
+Function.prototype.bindAsEventListener = function(object) {
+ var __method = this;
+ return function(event) {
+ return __method.call(object, event || window.event);
+ }
+}
+
+Number.prototype.toColorPart = function() {
+ var digits = this.toString(16);
+ if (this < 16) return '0' + digits;
+ return digits;
+}
+
+var Try = {
+ these: function() {
+ var returnValue;
+
+ for (var i = 0; i < arguments.length; i++) {
+ var lambda = arguments[i];
+ try {
+ returnValue = lambda();
+ break;
+ } catch (e) {}
+ }
+
+ return returnValue;
+ }
+}
+
+/*--------------------------------------------------------------------------*/
+
+var PeriodicalExecuter = Class.create();
+PeriodicalExecuter.prototype = {
+ initialize: function(callback, frequency) {
+ this.callback = callback;
+ this.frequency = frequency;
+ this.currentlyExecuting = false;
+
+ this.registerCallback();
+ },
+
+ registerCallback: function() {
+ setInterval(this.onTimerEvent.bind(this), this.frequency * 1000);
+ },
+
+ onTimerEvent: function() {
+ if (!this.currentlyExecuting) {
+ try {
+ this.currentlyExecuting = true;
+ this.callback();
+ } finally {
+ this.currentlyExecuting = false;
+ }
+ }
+ }
+}
+
+/*--------------------------------------------------------------------------*/
+
+function $() {
+ var elements = new Array();
+
+ for (var i = 0; i < arguments.length; i++) {
+ var element = arguments[i];
+ if (typeof element == 'string')
+ element = document.getElementById(element);
+
+ if (arguments.length == 1)
+ return element;
+
+ elements.push(element);
+ }
+
+ return elements;
+}
+
+if (!Array.prototype.push) {
+ Array.prototype.push = function() {
+ var startLength = this.length;
+ for (var i = 0; i < arguments.length; i++)
+ this[startLength + i] = arguments[i];
+ return this.length;
+ }
+}
+
+if (!Function.prototype.apply) {
+ // Based on code from http://www.youngpup.net/
+ Function.prototype.apply = function(object, parameters) {
+ var parameterStrings = new Array();
+ if (!object) object = window;
+ if (!parameters) parameters = new Array();
+
+ for (var i = 0; i < parameters.length; i++)
+ parameterStrings[i] = 'parameters[' + i + ']';
+
+ object.__apply__ = this;
+ var result = eval('object.__apply__(' +
+ parameterStrings.join(', ') + ')');
+ object.__apply__ = null;
+
+ return result;
+ }
+}
+
+Object.extend(String.prototype, {
+ stripTags: function() {
+ return this.replace(/<\/?[^>]+>/gi, '');
+ },
+
+ escapeHTML: function() {
+ var div = document.createElement('div');
+ var text = document.createTextNode(this);
+ div.appendChild(text);
+ return div.innerHTML;
+ },
+
+ unescapeHTML: function() {
+ var div = document.createElement('div');
+ div.innerHTML = this.stripTags();
+ return div.childNodes[0].nodeValue;
+ },
+
+ parseQuery: function() {
+ var str = this;
+ if (str.substring(0,1) == '?') {
+ str = this.substring(1);
+ }
+ var result = {};
+ var pairs = str.split('&');
+ for (var i = 0; i < pairs.length; i++) {
+ var pair = pairs[i].split('=');
+ result[pair[0]] = pair[1];
+ }
+ return result;
+ }
+});
+
+
+var _break = new Object();
+var _continue = new Object();
+
+var Enumerable = {
+ each: function(iterator) {
+ var index = 0;
+ try {
+ this._each(function(value) {
+ try {
+ iterator(value, index++);
+ } catch (e) {
+ if (e != _continue) throw e;
+ }
+ });
+ } catch (e) {
+ if (e != _break) throw e;
+ }
+ },
+
+ all: function(iterator) {
+ var result = true;
+ this.each(function(value, index) {
+ if (!(result &= (iterator || Prototype.K)(value, index)))
+ throw _break;
+ });
+ return result;
+ },
+
+ any: function(iterator) {
+    var result = false;
+    this.each(function(value, index) {
+      if (result = !!(iterator || Prototype.K)(value, index))
+ throw _break;
+ });
+ return result;
+ },
+
+ collect: function(iterator) {
+ var results = [];
+ this.each(function(value, index) {
+ results.push(iterator(value, index));
+ });
+ return results;
+ },
+
+ detect: function (iterator) {
+ var result;
+ this.each(function(value, index) {
+ if (iterator(value, index)) {
+ result = value;
+ throw _break;
+ }
+ });
+ return result;
+ },
+
+ findAll: function(iterator) {
+ var results = [];
+ this.each(function(value, index) {
+ if (iterator(value, index))
+ results.push(value);
+ });
+ return results;
+ },
+
+ grep: function(pattern, iterator) {
+ var results = [];
+ this.each(function(value, index) {
+ var stringValue = value.toString();
+ if (stringValue.match(pattern))
+ results.push((iterator || Prototype.K)(value, index));
+ })
+ return results;
+ },
+
+ include: function(object) {
+ var found = false;
+ this.each(function(value) {
+ if (value == object) {
+ found = true;
+ throw _break;
+ }
+ });
+ return found;
+ },
+
+ inject: function(memo, iterator) {
+ this.each(function(value, index) {
+ memo = iterator(memo, value, index);
+ });
+ return memo;
+ },
+
+ invoke: function(method) {
+ var args = $A(arguments).slice(1);
+ return this.collect(function(value) {
+ return value[method].apply(value, args);
+ });
+ },
+
+ max: function(iterator) {
+ var result;
+ this.each(function(value, index) {
+ value = (iterator || Prototype.K)(value, index);
+      if (result == undefined || value >= result)
+ result = value;
+ });
+ return result;
+ },
+
+ min: function(iterator) {
+ var result;
+ this.each(function(value, index) {
+ value = (iterator || Prototype.K)(value, index);
+      if (result == undefined || value <= result)
+ result = value;
+ });
+ return result;
+ },
+
+ partition: function(iterator) {
+ var trues = [], falses = [];
+ this.each(function(value, index) {
+ ((iterator || Prototype.K)(value, index) ?
+ trues : falses).push(value);
+ });
+ return [trues, falses];
+ },
+
+ pluck: function(property) {
+ var results = [];
+ this.each(function(value, index) {
+ results.push(value[property]);
+ });
+ return results;
+ },
+
+ reject: function(iterator) {
+ var results = [];
+ this.each(function(value, index) {
+ if (!iterator(value, index))
+ results.push(value);
+ });
+ return results;
+ },
+
+ sortBy: function(iterator) {
+ return this.collect(function(value, index) {
+ return {value: value, criteria: iterator(value, index)};
+ }).sort(function(left, right) {
+ var a = left.criteria, b = right.criteria;
+ return a < b ? -1 : a > b ? 1 : 0;
+ }).pluck('value');
+ },
+
+ toArray: function() {
+ return this.collect(Prototype.K);
+ },
+
+ zip: function() {
+ var iterator = Prototype.K, args = $A(arguments);
+ if (typeof args.last() == 'function')
+ iterator = args.pop();
+
+ var collections = [this].concat(args).map($A);
+ return this.map(function(value, index) {
+ iterator(value = collections.pluck(index));
+ return value;
+ });
+ }
+}
+
+Object.extend(Enumerable, {
+ map: Enumerable.collect,
+ find: Enumerable.detect,
+ select: Enumerable.findAll,
+ member: Enumerable.include,
+ entries: Enumerable.toArray
+});
+
+$A = Array.from = function(iterable) {
+ var results = [];
+ for (var i = 0; i < iterable.length; i++)
+ results.push(iterable[i]);
+ return results;
+}
+
+Object.extend(Array.prototype, {
+ _each: function(iterator) {
+ for (var i = 0; i < this.length; i++)
+ iterator(this[i]);
+ },
+
+ first: function() {
+ return this[0];
+ },
+
+ last: function() {
+ return this[this.length - 1];
+ }
+});
+
+Object.extend(Array.prototype, Enumerable);
+
+
+var Ajax = {
+ getTransport: function() {
+ return Try.these(
+ function() {return new ActiveXObject('Msxml2.XMLHTTP')},
+ function() {return new ActiveXObject('Microsoft.XMLHTTP')},
+ function() {return new XMLHttpRequest()}
+ ) || false;
+ }
+}
+
+Ajax.Base = function() {};
+Ajax.Base.prototype = {
+ setOptions: function(options) {
+ this.options = {
+ method: 'post',
+ asynchronous: true,
+ parameters: ''
+ }
+ Object.extend(this.options, options || {});
+ },
+
+ responseIsSuccess: function() {
+ return this.transport.status == undefined
+ || this.transport.status == 0
+ || (this.transport.status >= 200 && this.transport.status < 300);
+ },
+
+ responseIsFailure: function() {
+ return !this.responseIsSuccess();
+ }
+}
+
+Ajax.Request = Class.create();
+Ajax.Request.Events =
+ ['Uninitialized', 'Loading', 'Loaded', 'Interactive', 'Complete'];
+
+Ajax.Request.prototype = Object.extend(new Ajax.Base(), {
+ initialize: function(url, options) {
+ this.transport = Ajax.getTransport();
+ this.setOptions(options);
+ this.request(url);
+ },
+
+ request: function(url) {
+ var parameters = this.options.parameters || '';
+ if (parameters.length > 0) parameters += '&_=';
+
+ try {
+ if (this.options.method == 'get')
+ url += '?' + parameters;
+
+ this.transport.open(this.options.method, url,
+ this.options.asynchronous);
+
+ if (this.options.asynchronous) {
+ this.transport.onreadystatechange = this.onStateChange.bind(this);
+ setTimeout((function() {this.respondToReadyState(1)}).bind(this), 10);
+ }
+
+ this.setRequestHeaders();
+
+ var body = this.options.postBody ? this.options.postBody : parameters;
+ this.transport.send(this.options.method == 'post' ? body : null);
+
+ } catch (e) {
+ }
+ },
+
+ setRequestHeaders: function() {
+ var requestHeaders =
+ ['X-Requested-With', 'XMLHttpRequest',
+ 'X-Prototype-Version', Prototype.Version];
+
+ if (this.options.method == 'post') {
+ requestHeaders.push('Content-type',
+ 'application/x-www-form-urlencoded');
+
+ /* Force "Connection: close" for Mozilla browsers to work around
+       * a bug where XMLHttpRequest sends an incorrect Content-length
+ * header. See Mozilla Bugzilla #246651.
+ */
+ if (this.transport.overrideMimeType)
+ requestHeaders.push('Connection', 'close');
+ }
+
+ if (this.options.requestHeaders)
+ requestHeaders.push.apply(requestHeaders, this.options.requestHeaders);
+
+ for (var i = 0; i < requestHeaders.length; i += 2)
+ this.transport.setRequestHeader(requestHeaders[i], requestHeaders[i+1]);
+ },
+
+ onStateChange: function() {
+ var readyState = this.transport.readyState;
+ if (readyState != 1)
+ this.respondToReadyState(this.transport.readyState);
+ },
+
+ respondToReadyState: function(readyState) {
+ var event = Ajax.Request.Events[readyState];
+
+ if (event == 'Complete')
+ (this.options['on' + this.transport.status]
+ || this.options['on' + (this.responseIsSuccess() ? 'Success' : 'Failure')]
+ || Prototype.emptyFunction)(this.transport);
+
+ (this.options['on' + event] || Prototype.emptyFunction)(this.transport);
+
+ /* Avoid memory leak in MSIE: clean up the oncomplete event handler */
+ if (event == 'Complete')
+ this.transport.onreadystatechange = Prototype.emptyFunction;
+ }
+});
+
+Ajax.Updater = Class.create();
+Ajax.Updater.ScriptFragment = '(?:<script.*?>)((\n|.)*?)(?:<\/script>)';
+
+Object.extend(Object.extend(Ajax.Updater.prototype, Ajax.Request.prototype), {
+ initialize: function(container, url, options) {
+ this.containers = {
+ success: container.success ? $(container.success) : $(container),
+ failure: container.failure ? $(container.failure) :
+ (container.success ? null : $(container))
+ }
+
+ this.transport = Ajax.getTransport();
+ this.setOptions(options);
+
+ var onComplete = this.options.onComplete || Prototype.emptyFunction;
+ this.options.onComplete = (function() {
+ this.updateContent();
+ onComplete(this.transport);
+ }).bind(this);
+
+ this.request(url);
+ },
+
+ updateContent: function() {
+ var receiver = this.responseIsSuccess() ?
+ this.containers.success : this.containers.failure;
+
+ var match = new RegExp(Ajax.Updater.ScriptFragment, 'img');
+ var response = this.transport.responseText.replace(match, '');
+ var scripts = this.transport.responseText.match(match);
+
+ if (receiver) {
+ if (this.options.insertion) {
+ new this.options.insertion(receiver, response);
+ } else {
+ receiver.innerHTML = response;
+ }
+ }
+
+ if (this.responseIsSuccess()) {
+ if (this.onComplete)
+ setTimeout((function() {this.onComplete(
+ this.transport)}).bind(this), 10);
+ }
+
+ if (this.options.evalScripts && scripts) {
+ match = new RegExp(Ajax.Updater.ScriptFragment, 'im');
+ setTimeout((function() {
+ for (var i = 0; i < scripts.length; i++)
+ eval(scripts[i].match(match)[1]);
+ }).bind(this), 10);
+ }
+ }
+});
+
+Ajax.PeriodicalUpdater = Class.create();
+Ajax.PeriodicalUpdater.prototype = Object.extend(new Ajax.Base(), {
+ initialize: function(container, url, options) {
+ this.setOptions(options);
+ this.onComplete = this.options.onComplete;
+
+ this.frequency = (this.options.frequency || 2);
+ this.decay = 1;
+
+ this.updater = {};
+ this.container = container;
+ this.url = url;
+
+ this.start();
+ },
+
+ start: function() {
+ this.options.onComplete = this.updateComplete.bind(this);
+ this.onTimerEvent();
+ },
+
+ stop: function() {
+ this.updater.onComplete = undefined;
+ clearTimeout(this.timer);
+    (this.onComplete || Prototype.emptyFunction).apply(this, arguments);
+ },
+
+ updateComplete: function(request) {
+ if (this.options.decay) {
+ this.decay = (request.responseText == this.lastText ?
+ this.decay * this.options.decay : 1);
+
+ this.lastText = request.responseText;
+ }
+ this.timer = setTimeout(this.onTimerEvent.bind(this),
+ this.decay * this.frequency * 1000);
+ },
+
+ onTimerEvent: function() {
+ this.updater = new Ajax.Updater(this.container, this.url, this.options);
+ }
+});
+
+document.getElementsByClassName = function(className) {
+ var children = document.getElementsByTagName('*') || document.all;
+ var elements = new Array();
+
+ for (var i = 0; i < children.length; i++) {
+ var child = children[i];
+ var classNames = child.className.split(' ');
+ for (var j = 0; j < classNames.length; j++) {
+ if (classNames[j] == className) {
+ elements.push(child);
+ break;
+ }
+ }
+ }
+
+ return elements;
+}
+
+/*--------------------------------------------------------------------------*/
+
+if (!window.Element) {
+ var Element = new Object();
+}
+
+Object.extend(Element, {
+ toggle: function() {
+ for (var i = 0; i < arguments.length; i++) {
+ var element = $(arguments[i]);
+ element.style.display =
+ (element.style.display == 'none' ? '' : 'none');
+ }
+ },
+
+ hide: function() {
+ for (var i = 0; i < arguments.length; i++) {
+ var element = $(arguments[i]);
+ element.style.display = 'none';
+ }
+ },
+
+ show: function() {
+ for (var i = 0; i < arguments.length; i++) {
+ var element = $(arguments[i]);
+ element.style.display = '';
+ }
+ },
+
+ remove: function(element) {
+ element = $(element);
+ element.parentNode.removeChild(element);
+ },
+
+ getHeight: function(element) {
+ element = $(element);
+ return element.offsetHeight;
+ },
+
+ hasClassName: function(element, className) {
+ element = $(element);
+ if (!element)
+ return;
+ var a = element.className.split(' ');
+ for (var i = 0; i < a.length; i++) {
+ if (a[i] == className)
+ return true;
+ }
+ return false;
+ },
+
+ addClassName: function(element, className) {
+ element = $(element);
+ Element.removeClassName(element, className);
+ element.className += ' ' + className;
+ },
+
+ removeClassName: function(element, className) {
+ element = $(element);
+ if (!element)
+ return;
+ var newClassName = '';
+ var a = element.className.split(' ');
+ for (var i = 0; i < a.length; i++) {
+ if (a[i] != className) {
+ if (i > 0)
+ newClassName += ' ';
+ newClassName += a[i];
+ }
+ }
+ element.className = newClassName;
+ },
+
+ // removes whitespace-only text node children
+ cleanWhitespace: function(element) {
+ var element = $(element);
+ for (var i = 0; i < element.childNodes.length; i++) {
+ var node = element.childNodes[i];
+ if (node.nodeType == 3 && !/\S/.test(node.nodeValue))
+ Element.remove(node);
+ }
+ }
+});
+
+var Toggle = new Object();
+Toggle.display = Element.toggle;
+
+/*--------------------------------------------------------------------------*/
+
+Abstract.Insertion = function(adjacency) {
+ this.adjacency = adjacency;
+}
+
+Abstract.Insertion.prototype = {
+ initialize: function(element, content) {
+ this.element = $(element);
+ this.content = content;
+
+ if (this.adjacency && this.element.insertAdjacentHTML) {
+ this.element.insertAdjacentHTML(this.adjacency, this.content);
+ } else {
+ this.range = this.element.ownerDocument.createRange();
+ if (this.initializeRange) this.initializeRange();
+ this.fragment = this.range.createContextualFragment(this.content);
+ this.insertContent();
+ }
+ }
+}
+
+var Insertion = new Object();
+
+Insertion.Before = Class.create();
+Insertion.Before.prototype = Object.extend(new Abstract.Insertion('beforeBegin'), {
+ initializeRange: function() {
+ this.range.setStartBefore(this.element);
+ },
+
+ insertContent: function() {
+ this.element.parentNode.insertBefore(this.fragment, this.element);
+ }
+});
+
+Insertion.Top = Class.create();
+Insertion.Top.prototype = Object.extend(new Abstract.Insertion('afterBegin'), {
+ initializeRange: function() {
+ this.range.selectNodeContents(this.element);
+ this.range.collapse(true);
+ },
+
+ insertContent: function() {
+ this.element.insertBefore(this.fragment, this.element.firstChild);
+ }
+});
+
+Insertion.Bottom = Class.create();
+Insertion.Bottom.prototype = Object.extend(new Abstract.Insertion('beforeEnd'), {
+ initializeRange: function() {
+ this.range.selectNodeContents(this.element);
+ this.range.collapse(this.element);
+ },
+
+ insertContent: function() {
+ this.element.appendChild(this.fragment);
+ }
+});
+
+Insertion.After = Class.create();
+Insertion.After.prototype = Object.extend(new Abstract.Insertion('afterEnd'), {
+ initializeRange: function() {
+ this.range.setStartAfter(this.element);
+ },
+
+ insertContent: function() {
+ this.element.parentNode.insertBefore(this.fragment,
+ this.element.nextSibling);
+ }
+});
+
+var Field = {
+ clear: function() {
+ for (var i = 0; i < arguments.length; i++)
+ $(arguments[i]).value = '';
+ },
+
+ focus: function(element) {
+ $(element).focus();
+ },
+
+ present: function() {
+ for (var i = 0; i < arguments.length; i++)
+ if ($(arguments[i]).value == '') return false;
+ return true;
+ },
+
+ select: function(element) {
+ $(element).select();
+ },
+
+ activate: function(element) {
+ $(element).focus();
+ $(element).select();
+ }
+}
+
+/*--------------------------------------------------------------------------*/
+
+var Form = {
+ serialize: function(form) {
+ var elements = Form.getElements($(form));
+ var queryComponents = new Array();
+
+ for (var i = 0; i < elements.length; i++) {
+ var queryComponent = Form.Element.serialize(elements[i]);
+ if (queryComponent)
+ queryComponents.push(queryComponent);
+ }
+
+ return queryComponents.join('&');
+ },
+
+ getElements: function(form) {
+ var form = $(form);
+ var elements = new Array();
+
+ for (tagName in Form.Element.Serializers) {
+ var tagElements = form.getElementsByTagName(tagName);
+ for (var j = 0; j < tagElements.length; j++)
+ elements.push(tagElements[j]);
+ }
+ return elements;
+ },
+
+ getInputs: function(form, typeName, name) {
+ var form = $(form);
+ var inputs = form.getElementsByTagName('input');
+
+ if (!typeName && !name)
+ return inputs;
+
+ var matchingInputs = new Array();
+ for (var i = 0; i < inputs.length; i++) {
+ var input = inputs[i];
+ if ((typeName && input.type != typeName) ||
+ (name && input.name != name))
+ continue;
+ matchingInputs.push(input);
+ }
+
+ return matchingInputs;
+ },
+
+ disable: function(form) {
+ var elements = Form.getElements(form);
+ for (var i = 0; i < elements.length; i++) {
+ var element = elements[i];
+ element.blur();
+ element.disabled = 'true';
+ }
+ },
+
+ enable: function(form) {
+ var elements = Form.getElements(form);
+ for (var i = 0; i < elements.length; i++) {
+ var element = elements[i];
+ element.disabled = '';
+ }
+ },
+
+ focusFirstElement: function(form) {
+ var form = $(form);
+ var elements = Form.getElements(form);
+ for (var i = 0; i < elements.length; i++) {
+ var element = elements[i];
+ if (element.type != 'hidden' && !element.disabled) {
+ Field.activate(element);
+ break;
+ }
+ }
+ },
+
+ reset: function(form) {
+ $(form).reset();
+ }
+}
+
+Form.Element = {
+ serialize: function(element) {
+ var element = $(element);
+ var method = element.tagName.toLowerCase();
+ var parameter = Form.Element.Serializers[method](element);
+
+ if (parameter)
+ return encodeURIComponent(parameter[0]) + '=' +
+ encodeURIComponent(parameter[1]);
+ },
+
+ getValue: function(element) {
+ var element = $(element);
+ var method = element.tagName.toLowerCase();
+ var parameter = Form.Element.Serializers[method](element);
+
+ if (parameter)
+ return parameter[1];
+ }
+}
+
+Form.Element.Serializers = {
+ input: function(element) {
+ switch (element.type.toLowerCase()) {
+ case 'submit':
+ case 'hidden':
+ case 'password':
+ case 'text':
+ return Form.Element.Serializers.textarea(element);
+ case 'checkbox':
+ case 'radio':
+ return Form.Element.Serializers.inputSelector(element);
+ }
+ return false;
+ },
+
+ inputSelector: function(element) {
+ if (element.checked)
+ return [element.name, element.value];
+ },
+
+ textarea: function(element) {
+ return [element.name, element.value];
+ },
+
+ select: function(element) {
+ var value = '';
+ if (element.type == 'select-one') {
+ var index = element.selectedIndex;
+ if (index >= 0)
+ value = element.options[index].value || element.options[index].text;
+ } else {
+ value = new Array();
+ for (var i = 0; i < element.length; i++) {
+ var opt = element.options[i];
+ if (opt.selected)
+ value.push(opt.value || opt.text);
+ }
+ }
+ return [element.name, value];
+ }
+}
+
+/*--------------------------------------------------------------------------*/
+
+var $F = Form.Element.getValue;
+
+/*--------------------------------------------------------------------------*/
+
+Abstract.TimedObserver = function() {}
+Abstract.TimedObserver.prototype = {
+ initialize: function(element, frequency, callback) {
+ this.frequency = frequency;
+ this.element = $(element);
+ this.callback = callback;
+
+ this.lastValue = this.getValue();
+ this.registerCallback();
+ },
+
+ registerCallback: function() {
+ setInterval(this.onTimerEvent.bind(this), this.frequency * 1000);
+ },
+
+ onTimerEvent: function() {
+ var value = this.getValue();
+ if (this.lastValue != value) {
+ this.callback(this.element, value);
+ this.lastValue = value;
+ }
+ }
+}
+
+Form.Element.Observer = Class.create();
+Form.Element.Observer.prototype = Object.extend(new Abstract.TimedObserver(), {
+ getValue: function() {
+ return Form.Element.getValue(this.element);
+ }
+});
+
+Form.Observer = Class.create();
+Form.Observer.prototype = Object.extend(new Abstract.TimedObserver(), {
+ getValue: function() {
+ return Form.serialize(this.element);
+ }
+});
+
+/*--------------------------------------------------------------------------*/
+
+Abstract.EventObserver = function() {}
+Abstract.EventObserver.prototype = {
+ initialize: function(element, callback) {
+ this.element = $(element);
+ this.callback = callback;
+
+ this.lastValue = this.getValue();
+ if (this.element.tagName.toLowerCase() == 'form')
+ this.registerFormCallbacks();
+ else
+ this.registerCallback(this.element);
+ },
+
+ onElementEvent: function() {
+ var value = this.getValue();
+ if (this.lastValue != value) {
+ this.callback(this.element, value);
+ this.lastValue = value;
+ }
+ },
+
+ registerFormCallbacks: function() {
+ var elements = Form.getElements(this.element);
+ for (var i = 0; i < elements.length; i++)
+ this.registerCallback(elements[i]);
+ },
+
+ registerCallback: function(element) {
+ if (element.type) {
+ switch (element.type.toLowerCase()) {
+ case 'checkbox':
+ case 'radio':
+ element.target = this;
+ element.prev_onclick = element.onclick || Prototype.emptyFunction;
+ element.onclick = function() {
+ this.prev_onclick();
+ this.target.onElementEvent();
+ }
+ break;
+ case 'password':
+ case 'text':
+ case 'textarea':
+ case 'select-one':
+ case 'select-multiple':
+ element.target = this;
+ element.prev_onchange = element.onchange || Prototype.emptyFunction;
+ element.onchange = function() {
+ this.prev_onchange();
+ this.target.onElementEvent();
+ }
+ break;
+ }
+ }
+ }
+}
+
+Form.Element.EventObserver = Class.create();
+Form.Element.EventObserver.prototype = Object.extend(new Abstract.EventObserver(), {
+ getValue: function() {
+ return Form.Element.getValue(this.element);
+ }
+});
+
+Form.EventObserver = Class.create();
+Form.EventObserver.prototype = Object.extend(new Abstract.EventObserver(), {
+ getValue: function() {
+ return Form.serialize(this.element);
+ }
+});
+
+
+if (!window.Event) {
+ var Event = new Object();
+}
+
+Object.extend(Event, {
+ KEY_BACKSPACE: 8,
+ KEY_TAB: 9,
+ KEY_RETURN: 13,
+ KEY_ESC: 27,
+ KEY_LEFT: 37,
+ KEY_UP: 38,
+ KEY_RIGHT: 39,
+ KEY_DOWN: 40,
+ KEY_DELETE: 46,
+
+ element: function(event) {
+ return event.target || event.srcElement;
+ },
+
+ isLeftClick: function(event) {
+ return (((event.which) && (event.which == 1)) ||
+ ((event.button) && (event.button == 1)));
+ },
+
+ pointerX: function(event) {
+ return event.pageX || (event.clientX +
+ (document.documentElement.scrollLeft || document.body.scrollLeft));
+ },
+
+ pointerY: function(event) {
+ return event.pageY || (event.clientY +
+ (document.documentElement.scrollTop || document.body.scrollTop));
+ },
+
+ stop: function(event) {
+ if (event.preventDefault) {
+ event.preventDefault();
+ event.stopPropagation();
+ } else {
+ event.returnValue = false;
+ }
+ },
+
+ // find the first node with the given tagName, starting from the
+ // node the event was triggered on; traverses the DOM upwards
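+  // Usage sketch (hypothetical markup): from a click observer on a table,
+  //   var row = Event.findElement(event, 'tr');
+  // returns the clicked node itself, or its nearest ancestor, whose tagName is TR.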
+ findElement: function(event, tagName) {
+ var element = Event.element(event);
+ while (element.parentNode && (!element.tagName ||
+ (element.tagName.toUpperCase() != tagName.toUpperCase())))
+ element = element.parentNode;
+ return element;
+ },
+
+ observers: false,
+
+ _observeAndCache: function(element, name, observer, useCapture) {
+ if (!this.observers) this.observers = [];
+ if (element.addEventListener) {
+ this.observers.push([element, name, observer, useCapture]);
+ element.addEventListener(name, observer, useCapture);
+ } else if (element.attachEvent) {
+ this.observers.push([element, name, observer, useCapture]);
+ element.attachEvent('on' + name, observer);
+ }
+ },
+
+ unloadCache: function() {
+ if (!Event.observers) return;
+ for (var i = 0; i < Event.observers.length; i++) {
+ Event.stopObserving.apply(this, Event.observers[i]);
+ Event.observers[i][0] = null;
+ }
+ Event.observers = false;
+ },
+
+ observe: function(element, name, observer, useCapture) {
+ var element = $(element);
+ useCapture = useCapture || false;
+
+ if (name == 'keypress' &&
+ ((/Konqueror|Safari|KHTML/.test(navigator.userAgent))
+ || element.attachEvent))
+ name = 'keydown';
+
+ this._observeAndCache(element, name, observer, useCapture);
+ },
+
+ stopObserving: function(element, name, observer, useCapture) {
+ var element = $(element);
+ useCapture = useCapture || false;
+
+ if (name == 'keypress' &&
+ ((/Konqueror|Safari|KHTML/.test(navigator.userAgent))
+ || element.detachEvent))
+ name = 'keydown';
+
+ if (element.removeEventListener) {
+ element.removeEventListener(name, observer, useCapture);
+ } else if (element.detachEvent) {
+ element.detachEvent('on' + name, observer);
+ }
+ }
+});
+
+/* prevent memory leaks in IE */
+Event.observe(window, 'unload', Event.unloadCache, false);
+
+var Position = {
+
+ // set to true if needed, warning: firefox performance problems
+  // NOT needed for page scrolling, only if draggable contained in
+ // scrollable elements
+ includeScrollOffsets: false,
+
+  // must be called before calling withinIncludingScrolloffsets, every time the
+ // page is scrolled
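+  // Usage sketch (assumed 'dropzone' element id): for draggables inside
+  // scrollable containers, a caller might do
+  //   Position.includeScrollOffsets = true;
+  //   Position.prepare();   // re-run after every scroll
+  //   Position.within($('dropzone'), Event.pointerX(event), Event.pointerY(event));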
+ prepare: function() {
+ this.deltaX = window.pageXOffset
+ || document.documentElement.scrollLeft
+ || document.body.scrollLeft
+ || 0;
+ this.deltaY = window.pageYOffset
+ || document.documentElement.scrollTop
+ || document.body.scrollTop
+ || 0;
+ },
+
+ realOffset: function(element) {
+ var valueT = 0, valueL = 0;
+ do {
+ valueT += element.scrollTop || 0;
+ valueL += element.scrollLeft || 0;
+ element = element.parentNode;
+ } while (element);
+ return [valueL, valueT];
+ },
+
+ cumulativeOffset: function(element) {
+ var valueT = 0, valueL = 0;
+ do {
+ valueT += element.offsetTop || 0;
+ valueL += element.offsetLeft || 0;
+ element = element.offsetParent;
+ } while (element);
+ return [valueL, valueT];
+ },
+
+ // caches x/y coordinate pair to use with overlap
+ within: function(element, x, y) {
+ if (this.includeScrollOffsets)
+ return this.withinIncludingScrolloffsets(element, x, y);
+ this.xcomp = x;
+ this.ycomp = y;
+ this.offset = this.cumulativeOffset(element);
+
+ return (y >= this.offset[1] &&
+ y < this.offset[1] + element.offsetHeight &&
+ x >= this.offset[0] &&
+ x < this.offset[0] + element.offsetWidth);
+ },
+
+ withinIncludingScrolloffsets: function(element, x, y) {
+ var offsetcache = this.realOffset(element);
+
+ this.xcomp = x + offsetcache[0] - this.deltaX;
+ this.ycomp = y + offsetcache[1] - this.deltaY;
+ this.offset = this.cumulativeOffset(element);
+
+ return (this.ycomp >= this.offset[1] &&
+ this.ycomp < this.offset[1] + element.offsetHeight &&
+ this.xcomp >= this.offset[0] &&
+ this.xcomp < this.offset[0] + element.offsetWidth);
+ },
+
+ // within must be called directly before
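+  // e.g. (sketch, 'el' being whatever element was just tested with within):
+  //   Position.within(el, x, y) && Position.overlap('vertical', el) > 0.5
+  // overlap returns the cached point's distance from el's bottom edge as a
+  // fraction of its height (1 at the top edge, 0 at the bottom).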
+ overlap: function(mode, element) {
+ if (!mode) return 0;
+ if (mode == 'vertical')
+ return ((this.offset[1] + element.offsetHeight) - this.ycomp) /
+ element.offsetHeight;
+ if (mode == 'horizontal')
+ return ((this.offset[0] + element.offsetWidth) - this.xcomp) /
+ element.offsetWidth;
+ },
+
+ clone: function(source, target) {
+ source = $(source);
+ target = $(target);
+ target.style.position = 'absolute';
+ var offsets = this.cumulativeOffset(source);
+ target.style.top = offsets[1] + 'px';
+ target.style.left = offsets[0] + 'px';
+ target.style.width = source.offsetWidth + 'px';
+ target.style.height = source.offsetHeight + 'px';
+ }
+}
diff --git a/docs/skin/screen.css b/docs/skin/screen.css
new file mode 100644
index 0000000..c6084f8
--- /dev/null
+++ b/docs/skin/screen.css
@@ -0,0 +1,587 @@
+/*
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+body { margin: 0px 0px 0px 0px; font-family: Verdana, Helvetica, sans-serif; }
+
+h1 { font-size : 160%; margin: 0px 0px 0px 0px; padding: 0px; }
+h2 { font-size : 140%; margin: 1em 0px 0.8em 0px; padding: 0px; font-weight : bold;}
+h3 { font-size : 130%; margin: 0.8em 0px 0px 0px; padding: 0px; font-weight : bold; }
+.h3 { margin: 22px 0px 3px 0px; }
+h4 { font-size : 120%; margin: 0.7em 0px 0px 0px; padding: 0px; font-weight : normal; text-align: left; }
+.h4 { margin: 18px 0px 0px 0px; }
+h4.faq { font-size : 120%; margin: 18px 0px 0px 0px; padding: 0px; font-weight : bold; text-align: left; }
+h5 { font-size : 100%; margin: 14px 0px 0px 0px; padding: 0px; font-weight : normal; text-align: left; }
+
+/**
+* table
+*/
+table .title { background-color: #000000; }
+.ForrestTable {
+ color: #ffffff;
+ background-color: #7099C5;
+ width: 100%;
+ font-size : 100%;
+ empty-cells: show;
+}
+table caption {
+ padding-left: 5px;
+ color: white;
+ text-align: left;
+ font-weight: bold;
+ background-color: #000000;
+}
+.ForrestTable td {
+ color: black;
+ background-color: #f0f0ff;
+}
+.ForrestTable th { text-align: center; }
+/**
+ * Page Header
+ */
+
+#top {
+ position: relative;
+ float: left;
+ width: 100%;
+ background: #294563; /* if you want a background in the header, put it here */
+}
+
+#top .breadtrail {
+ background: #CFDCED;
+ color: black;
+ border-bottom: solid 1px white;
+ padding: 3px 10px;
+ font-size: 75%;
+}
+#top .breadtrail a { color: black; }
+
+#top .header {
+ float: left;
+ width: 100%;
+ background: url("images/header_white_line.gif") repeat-x bottom;
+}
+
+#top .grouplogo {
+ padding: 7px 0 10px 10px;
+ float: left;
+ text-align: left;
+}
+#top .projectlogo {
+ padding: 7px 0 10px 10px;
+ float: left;
+ width: 33%;
+ text-align: right;
+}
+#top .projectlogoA1 {
+ padding: 7px 0 10px 10px;
+ float: right;
+}
+html>body #top .searchbox {
+ bottom: 0px;
+}
+#top .searchbox {
+ position: absolute;
+ right: 10px;
+ height: 42px;
+ font-size: 70%;
+ white-space: nowrap;
+ text-align: right;
+ color: white;
+ background-color: #000000;
+ z-index:0;
+ background-image: url(images/rc-t-l-5-1header-2searchbox-3searchbox.png);
+ background-repeat: no-repeat;
+ background-position: top left;
+ bottom: -1px; /* compensate for IE rendering issue */
+}
+
+#top .searchbox form {
+ padding: 5px 10px;
+ margin: 0;
+}
+#top .searchbox p {
+ padding: 0 0 2px 0;
+ margin: 0;
+}
+#top .searchbox input {
+ font-size: 100%;
+}
+
+#tabs {
+ clear: both;
+ padding-left: 10px;
+ margin: 0;
+ list-style: none;
+}
+/* background: #CFDCED url("images/tab-right.gif") no-repeat right top;*/
+#tabs li {
+ float: left;
+ background-image: url(images/rc-t-r-5-1header-2tab-unselected-3tab-unselected.png);
+ background-repeat: no-repeat;
+ background-position: top right;
+ background-color: #000000;
+ margin: 0 3px 0 0;
+ padding: 0;
+}
+
+/*background: url("images/tab-left.gif") no-repeat left top;*/
+#tabs li a {
+ float: left;
+ display: block;
+ font-family: verdana, arial, sans-serif;
+ text-decoration: none;
+ color: black;
+ white-space: nowrap;
+ background-image: url(images/rc-t-l-5-1header-2tab-unselected-3tab-unselected.png);
+ background-repeat: no-repeat;
+ background-position: top left;
+ padding: 5px 15px 4px;
+ width: .1em; /* IE/Win fix */
+}
+
+#tabs li a:hover {
+
+ cursor: pointer;
+ text-decoration:underline;
+}
+
+#tabs > li a { width: auto; } /* Rest of IE/Win fix */
+
+/* Commented Backslash Hack hides rule from IE5-Mac \*/
+#tabs a { float: none; }
+/* End IE5-Mac hack */
+
+#top .header .current {
+ background-color: #4C6C8F;
+ background-image: url(images/rc-t-r-5-1header-2tab-selected-3tab-selected.png);
+ background-repeat: no-repeat;
+ background-position: top right;
+}
+#top .header .current a {
+ font-weight: bold;
+ padding-bottom: 5px;
+ color: white;
+ background-image: url(images/rc-t-l-5-1header-2tab-selected-3tab-selected.png);
+ background-repeat: no-repeat;
+ background-position: top left;
+}
+#publishedStrip {
+ padding-right: 10px;
+ padding-left: 20px;
+ padding-top: 3px;
+ padding-bottom:3px;
+ color: #ffffff;
+ font-size : 60%;
+ font-weight: bold;
+ background-color: #4C6C8F;
+ text-align:right;
+}
+
+#level2tabs {
+margin: 0;
+float:left;
+position:relative;
+
+}
+
+
+
+#level2tabs a:hover {
+
+ cursor: pointer;
+ text-decoration:underline;
+
+}
+
+#level2tabs a{
+
+ cursor: pointer;
+ text-decoration:none;
+ background-image: url('images/chapter.gif');
+ background-repeat: no-repeat;
+ background-position: center left;
+ padding-left: 6px;
+ margin-left: 6px;
+}
+
+/*
+* border-top: solid #4C6C8F 15px;
+*/
+#main {
+ position: relative;
+ background: white;
+ clear:both;
+}
+#main .breadtrail {
+ clear:both;
+ position: relative;
+ background: #CFDCED;
+ color: black;
+ border-bottom: solid 1px black;
+ border-top: solid 1px black;
+ padding: 0px 180px;
+ font-size: 75%;
+ z-index:10;
+}
+/**
+* Round corner
+*/
+#roundtop {
+ background-image: url(images/rc-t-r-15-1body-2menu-3menu.png);
+ background-repeat: no-repeat;
+ background-position: top right;
+}
+
+#roundbottom {
+ background-image: url(images/rc-b-r-15-1body-2menu-3menu.png);
+ background-repeat: no-repeat;
+ background-position: top right;
+}
+
+img.corner {
+ width: 15px;
+ height: 15px;
+ border: none;
+ display: block !important;
+}
+
+.roundtopsmall {
+ background-image: url(images/rc-t-r-5-1header-2searchbox-3searchbox.png);
+ background-repeat: no-repeat;
+ background-position: top right;
+}
+
+#roundbottomsmall {
+ background-image: url(images/rc-b-r-5-1header-2tab-selected-3tab-selected.png);
+ background-repeat: no-repeat;
+ background-position: top right;
+}
+
+img.cornersmall {
+ width: 5px;
+ height: 5px;
+ border: none;
+ display: block !important;
+}
+/**
+ * Side menu
+ */
+#menu a { font-weight: normal; text-decoration: none;}
+#menu a:visited { font-weight: normal; }
+#menu a:active { font-weight: normal; }
+#menu a:hover { font-weight: normal; text-decoration:underline;}
+
+#menuarea { width:10em;}
+#menu {
+ position: relative;
+ float: left;
+ width: 160px;
+ padding-top: 0px;
+ top:-18px;
+ left:10px;
+ z-index: 20;
+ background-color: #f90;
+ font-size : 70%;
+
+}
+
+.menutitle {
+ cursor:pointer;
+ padding: 3px 12px;
+ margin-left: 10px;
+ background-image: url('images/chapter.gif');
+ background-repeat: no-repeat;
+ background-position: center left;
+ font-weight : bold;
+
+
+}
+
+.menutitle:hover{text-decoration:underline;cursor: pointer;}
+
+#menu .menuitemgroup {
+ margin: 0px 0px 6px 8px;
+ padding: 0px;
+ font-weight : bold; }
+
+#menu .selectedmenuitemgroup{
+ margin: 0px 0px 0px 8px;
+ padding: 0px;
+ font-weight : normal;
+
+ }
+
+#menu .menuitem {
+ padding: 2px 0px 1px 13px;
+ background-image: url('images/page.gif');
+ background-repeat: no-repeat;
+ background-position: center left;
+ font-weight : normal;
+ margin-left: 10px;
+}
+
+#menu .menupage {
+ margin: 2px 0px 1px 10px;
+ padding: 0px 3px 0px 12px;
+ background-image: url('images/page.gif');
+ background-repeat: no-repeat;
+ background-position: center left;
+ font-style : normal;
+}
+#menu .menupagetitle {
+ padding: 0px 0px 0px 1px;
+ font-style : normal;
+ border-style: solid;
+ border-width: 1px;
+ margin-right: 10px;
+
+}
+#menu .menupageitemgroup {
+ padding: 3px 0px 4px 6px;
+ font-style : normal;
+ border-bottom: 1px solid ;
+ border-left: 1px solid ;
+ border-right: 1px solid ;
+ margin-right: 10px;
+}
+#menu .menupageitem {
+ font-style : normal;
+ font-weight : normal;
+ border-width: 0px;
+ font-size : 90%;
+}
+#menu #credit {
+ text-align: center;
+}
+#menu #credit2 {
+ text-align: center;
+ padding: 3px 3px 3px 3px;
+ background-color: #ffffff;
+}
+#menu .searchbox {
+ text-align: center;
+}
+#menu .searchbox form {
+ padding: 3px 3px;
+ margin: 0;
+}
+#menu .searchbox input {
+ font-size: 100%;
+}
+
+#content {
+ padding: 20px 20px 20px 180px;
+ margin: 0;
+ font : small Verdana, Helvetica, sans-serif;
+ font-size : 80%;
+}
+
+#content ul {
+ margin: 0;
+ padding: 0 25px;
+}
+#content li {
+ padding: 0 5px;
+}
+#feedback {
+ color: black;
+ background: #CFDCED;
+ text-align:center;
+ margin-top: 5px;
+}
+#feedback #feedbackto {
+ font-size: 90%;
+ color: black;
+}
+#footer {
+ clear: both;
+ position: relative; /* IE bugfix (http://www.dracos.co.uk/web/css/ie6floatbug/) */
+ width: 100%;
+ background: #CFDCED;
+ border-top: solid 1px #4C6C8F;
+ color: black;
+}
+#footer .copyright {
+ position: relative; /* IE bugfix cont'd */
+ padding: 5px;
+ margin: 0;
+ width: 45%;
+}
+#footer .lastmodified {
+ position: relative; /* IE bugfix cont'd */
+ float: right;
+ width: 45%;
+ padding: 5px;
+ margin: 0;
+ text-align: right;
+}
+#footer a { color: white; }
+
+#footer #logos {
+ text-align: left;
+}
+
+
+/**
+ * Misc Styles
+ */
+
+acronym { cursor: help; }
+.boxed { background-color: #a5b6c6;}
+.underlined_5 {border-bottom: solid 5px #4C6C8F;}
+.underlined_10 {border-bottom: solid 10px #4C6C8F;}
+/* ==================== snail trail ============================ */
+
+.trail {
+ position: relative; /* IE bugfix cont'd */
+ font-size: 70%;
+ text-align: right;
+ float: right;
+ margin: -10px 5px 0px 5px;
+ padding: 0;
+}
+
+#motd-area {
+ position: relative; /* IE bugfix cont'd */
+ float: right;
+ width: 35%;
+ background-color: #f0f0ff;
+ border-top: solid 1px #4C6C8F;
+ border-bottom: solid 1px #4C6C8F;
+ margin-bottom: 15px;
+ margin-left: 15px;
+ margin-right: 10%;
+ padding-bottom: 5px;
+ padding-top: 5px;
+}
+
+#minitoc-area {
+ border-top: solid 1px #4C6C8F;
+ border-bottom: solid 1px #4C6C8F;
+ margin: 15px 10% 5px 15px;
+ /* margin-bottom: 15px;
+ margin-left: 15px;
+ margin-right: 10%;*/
+ padding-bottom: 7px;
+ padding-top: 5px;
+}
+.minitoc {
+ list-style-image: url('images/current.gif');
+ font-weight: normal;
+}
+
+li p {
+ margin: 0;
+ padding: 0;
+}
+
+.pdflink {
+ position: relative; /* IE bugfix cont'd */
+ float: right;
+ margin: 0px 5px;
+ padding: 0;
+}
+.pdflink br {
+ margin-top: -10px;
+ padding-left: 1px;
+}
+.pdflink a {
+ display: block;
+ font-size: 70%;
+ text-align: center;
+ margin: 0;
+ padding: 0;
+}
+
+.pdflink img {
+ display: block;
+ height: 16px;
+ width: 16px;
+}
+.xmllink {
+ position: relative; /* IE bugfix cont'd */
+ float: right;
+ margin: 0px 5px;
+ padding: 0;
+}
+.xmllink br {
+ margin-top: -10px;
+ padding-left: 1px;
+}
+.xmllink a {
+ display: block;
+ font-size: 70%;
+ text-align: center;
+ margin: 0;
+ padding: 0;
+}
+
+.xmllink img {
+ display: block;
+ height: 16px;
+ width: 16px;
+}
+.podlink {
+ position: relative; /* IE bugfix cont'd */
+ float: right;
+ margin: 0px 5px;
+ padding: 0;
+}
+.podlink br {
+ margin-top: -10px;
+ padding-left: 1px;
+}
+.podlink a {
+ display: block;
+ font-size: 70%;
+ text-align: center;
+ margin: 0;
+ padding: 0;
+}
+
+.podlink img {
+ display: block;
+ height: 16px;
+ width: 16px;
+}
+
+.printlink {
+ position: relative; /* IE bugfix cont'd */
+ float: right;
+}
+.printlink br {
+ margin-top: -10px;
+ padding-left: 1px;
+}
+.printlink a {
+ display: block;
+ font-size: 70%;
+ text-align: center;
+ margin: 0;
+ padding: 0;
+}
+.printlink img {
+ display: block;
+ height: 16px;
+ width: 16px;
+}
+
+p.instruction {
+ display: list-item;
+ list-style-image: url('../images/instruction_arrow.png');
+ list-style-position: outside;
+ margin-left: 2em;
+}
\ No newline at end of file
diff --git a/lib/AgileJSON-2.0.jar b/lib/AgileJSON-2.0.jar
new file mode 100644
index 0000000..906161a
--- /dev/null
+++ b/lib/AgileJSON-2.0.jar
Binary files differ
diff --git a/lib/commons-cli-2.0-SNAPSHOT.jar b/lib/commons-cli-2.0-SNAPSHOT.jar
new file mode 100644
index 0000000..0b1d510
--- /dev/null
+++ b/lib/commons-cli-2.0-SNAPSHOT.jar
Binary files differ
diff --git a/lib/commons-el-from-jetty-5.1.4.jar b/lib/commons-el-from-jetty-5.1.4.jar
new file mode 100644
index 0000000..608ed79
--- /dev/null
+++ b/lib/commons-el-from-jetty-5.1.4.jar
Binary files differ
diff --git a/lib/commons-httpclient-3.0.1.jar b/lib/commons-httpclient-3.0.1.jar
new file mode 100644
index 0000000..cfc777c
--- /dev/null
+++ b/lib/commons-httpclient-3.0.1.jar
Binary files differ
diff --git a/lib/commons-logging-1.0.4.jar b/lib/commons-logging-1.0.4.jar
new file mode 100644
index 0000000..b73a80f
--- /dev/null
+++ b/lib/commons-logging-1.0.4.jar
Binary files differ
diff --git a/lib/commons-logging-api-1.0.4.jar b/lib/commons-logging-api-1.0.4.jar
new file mode 100644
index 0000000..ade9a13
--- /dev/null
+++ b/lib/commons-logging-api-1.0.4.jar
Binary files differ
diff --git a/lib/commons-math-1.1.jar b/lib/commons-math-1.1.jar
new file mode 100644
index 0000000..6888813
--- /dev/null
+++ b/lib/commons-math-1.1.jar
Binary files differ
diff --git a/lib/hadoop-0.18.3-core.jar b/lib/hadoop-0.18.3-core.jar
new file mode 100644
index 0000000..d191a68
--- /dev/null
+++ b/lib/hadoop-0.18.3-core.jar
Binary files differ
diff --git a/lib/hadoop-0.18.3-test.jar b/lib/hadoop-0.18.3-test.jar
new file mode 100644
index 0000000..6f57fb5
--- /dev/null
+++ b/lib/hadoop-0.18.3-test.jar
Binary files differ
diff --git a/lib/jetty-5.1.4.LICENSE.txt b/lib/jetty-5.1.4.LICENSE.txt
new file mode 100644
index 0000000..d645695
--- /dev/null
+++ b/lib/jetty-5.1.4.LICENSE.txt
@@ -0,0 +1,202 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/lib/jetty-5.1.4.jar b/lib/jetty-5.1.4.jar
new file mode 100644
index 0000000..dcbd99e
--- /dev/null
+++ b/lib/jetty-5.1.4.jar
Binary files differ
diff --git a/lib/jruby-complete-1.2.0.jar b/lib/jruby-complete-1.2.0.jar
new file mode 100644
index 0000000..02e447d
--- /dev/null
+++ b/lib/jruby-complete-1.2.0.jar
Binary files differ
diff --git a/lib/jruby-complete-LICENSE.txt b/lib/jruby-complete-LICENSE.txt
new file mode 100644
index 0000000..a8e6c05
--- /dev/null
+++ b/lib/jruby-complete-LICENSE.txt
@@ -0,0 +1,86 @@
+Common Public License - v 1.0
+
+THE ACCOMPANYING PROGRAM IS PROVIDED UNDER THE TERMS OF THIS COMMON PUBLIC LICENSE ("AGREEMENT"). ANY USE, REPRODUCTION OR DISTRIBUTION OF THE PROGRAM CONSTITUTES RECIPIENT'S ACCEPTANCE OF THIS AGREEMENT.
+
+1. DEFINITIONS
+
+"Contribution" means:
+
+ a) in the case of the initial Contributor, the initial code and documentation distributed under this Agreement, and
+ b) in the case of each subsequent Contributor:
+
+ i) changes to the Program, and
+
+ ii) additions to the Program;
+
+ where such changes and/or additions to the Program originate from and are distributed by that particular Contributor. A Contribution 'originates' from a Contributor if it was added to the Program by such Contributor itself or anyone acting on such Contributor's behalf. Contributions do not include additions to the Program which: (i) are separate modules of software distributed in conjunction with the Program under their own license agreement, and (ii) are not derivative works of the Program.
+
+"Contributor" means any person or entity that distributes the Program.
+
+"Licensed Patents " mean patent claims licensable by a Contributor which are necessarily infringed by the use or sale of its Contribution alone or when combined with the Program.
+
+"Program" means the Contributions distributed in accordance with this Agreement.
+
+"Recipient" means anyone who receives the Program under this Agreement, including all Contributors.
+
+2. GRANT OF RIGHTS
+
+ a) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, distribute and sublicense the Contribution of such Contributor, if any, and such derivative works, in source code and object code form.
+
+ b) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free patent license under Licensed Patents to make, use, sell, offer to sell, import and otherwise transfer the Contribution of such Contributor, if any, in source code and object code form. This patent license shall apply to the combination of the Contribution and the Program if, at the time the Contribution is added by the Contributor, such addition of the Contribution causes such combination to be covered by the Licensed Patents. The patent license shall not apply to any other combinations which include the Contribution. No hardware per se is licensed hereunder.
+
+ c) Recipient understands that although each Contributor grants the licenses to its Contributions set forth herein, no assurances are provided by any Contributor that the Program does not infringe the patent or other intellectual property rights of any other entity. Each Contributor disclaims any liability to Recipient for claims brought by any other entity based on infringement of intellectual property rights or otherwise. As a condition to exercising the rights and licenses granted hereunder, each Recipient hereby assumes sole responsibility to secure any other intellectual property rights needed, if any. For example, if a third party patent license is required to allow Recipient to distribute the Program, it is Recipient's responsibility to acquire that license before distributing the Program.
+
+ d) Each Contributor represents that to its knowledge it has sufficient copyright rights in its Contribution, if any, to grant the copyright license set forth in this Agreement.
+
+3. REQUIREMENTS
+
+A Contributor may choose to distribute the Program in object code form under its own license agreement, provided that:
+
+ a) it complies with the terms and conditions of this Agreement; and
+
+ b) its license agreement:
+
+ i) effectively disclaims on behalf of all Contributors all warranties and conditions, express and implied, including warranties or conditions of title and non-infringement, and implied warranties or conditions of merchantability and fitness for a particular purpose;
+
+ ii) effectively excludes on behalf of all Contributors all liability for damages, including direct, indirect, special, incidental and consequential damages, such as lost profits;
+
+ iii) states that any provisions which differ from this Agreement are offered by that Contributor alone and not by any other party; and
+
+ iv) states that source code for the Program is available from such Contributor, and informs licensees how to obtain it in a reasonable manner on or through a medium customarily used for software exchange.
+
+When the Program is made available in source code form:
+
+ a) it must be made available under this Agreement; and
+
+ b) a copy of this Agreement must be included with each copy of the Program.
+
+Contributors may not remove or alter any copyright notices contained within the Program.
+
+Each Contributor must identify itself as the originator of its Contribution, if any, in a manner that reasonably allows subsequent Recipients to identify the originator of the Contribution.
+
+4. COMMERCIAL DISTRIBUTION
+
+Commercial distributors of software may accept certain responsibilities with respect to end users, business partners and the like. While this license is intended to facilitate the commercial use of the Program, the Contributor who includes the Program in a commercial product offering should do so in a manner which does not create potential liability for other Contributors. Therefore, if a Contributor includes the Program in a commercial product offering, such Contributor ("Commercial Contributor") hereby agrees to defend and indemnify every other Contributor ("Indemnified Contributor") against any losses, damages and costs (collectively "Losses") arising from claims, lawsuits and other legal actions brought by a third party against the Indemnified Contributor to the extent caused by the acts or omissions of such Commercial Contributor in connection with its distribution of the Program in a commercial product offering. The obligations in this section do not apply to any claims or Losses relating to any actual or alleged intellectual property infringement. In order to qualify, an Indemnified Contributor must: a) promptly notify the Commercial Contributor in writing of such claim, and b) allow the Commercial Contributor to control, and cooperate with the Commercial Contributor in, the defense and any related settlement negotiations. The Indemnified Contributor may participate in any such claim at its own expense.
+
+For example, a Contributor might include the Program in a commercial product offering, Product X. That Contributor is then a Commercial Contributor. If that Commercial Contributor then makes performance claims, or offers warranties related to Product X, those performance claims and warranties are such Commercial Contributor's responsibility alone. Under this section, the Commercial Contributor would have to defend claims against the other Contributors related to those performance claims and warranties, and if a court requires any other Contributor to pay any damages as a result, the Commercial Contributor must pay those damages.
+
+5. NO WARRANTY
+
+EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is solely responsible for determining the appropriateness of using and distributing the Program and assumes all risks associated with its exercise of rights under this Agreement, including but not limited to the risks and costs of program errors, compliance with applicable laws, damage to or loss of data, programs or equipment, and unavailability or interruption of operations.
+
+6. DISCLAIMER OF LIABILITY
+
+EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
+
+7. GENERAL
+
+If any provision of this Agreement is invalid or unenforceable under applicable law, it shall not affect the validity or enforceability of the remainder of the terms of this Agreement, and without further action by the parties hereto, such provision shall be reformed to the minimum extent necessary to make such provision valid and enforceable.
+
+If Recipient institutes patent litigation against a Contributor with respect to a patent applicable to software (including a cross-claim or counterclaim in a lawsuit), then any patent licenses granted by that Contributor to such Recipient under this Agreement shall terminate as of the date such litigation is filed. In addition, if Recipient institutes patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Program itself (excluding combinations of the Program with other software or hardware) infringes such Recipient's patent(s), then such Recipient's rights granted under Section 2(b) shall terminate as of the date such litigation is filed.
+
+All Recipient's rights under this Agreement shall terminate if it fails to comply with any of the material terms or conditions of this Agreement and does not cure such failure in a reasonable period of time after becoming aware of such noncompliance. If all Recipient's rights under this Agreement terminate, Recipient agrees to cease use and distribution of the Program as soon as reasonably practicable. However, Recipient's obligations under this Agreement and any licenses granted by Recipient relating to the Program shall continue and survive.
+
+Everyone is permitted to copy and distribute copies of this Agreement, but in order to avoid inconsistency the Agreement is copyrighted and may only be modified in the following manner. The Agreement Steward reserves the right to publish new versions (including revisions) of this Agreement from time to time. No one other than the Agreement Steward has the right to modify this Agreement. IBM is the initial Agreement Steward. IBM may assign the responsibility to serve as the Agreement Steward to a suitable separate entity. Each new version of the Agreement will be given a distinguishing version number. The Program (including Contributions) may always be distributed subject to the version of the Agreement under which it was received. In addition, after a new version of the Agreement is published, Contributor may elect to distribute the Program (including its Contributions) under the new version. Except as expressly stated in Sections 2(a) and 2(b) above, Recipient receives no rights or licenses to the intellectual property of any Contributor under this Agreement, whether expressly, by implication, estoppel or otherwise. All rights in the Program not expressly granted under this Agreement are reserved.
+
+This Agreement is governed by the laws of the State of New York and the intellectual property laws of the United States of America. No party to this Agreement will bring a legal action under this Agreement more than one year after the cause of action arose. Each party waives its rights to a jury trial in any resulting litigation.
diff --git a/lib/junit-3.8.1.LICENSE.txt b/lib/junit-3.8.1.LICENSE.txt
new file mode 100644
index 0000000..f735a71
--- /dev/null
+++ b/lib/junit-3.8.1.LICENSE.txt
@@ -0,0 +1,100 @@
+Common Public License Version 1.0
+
+THE ACCOMPANYING PROGRAM IS PROVIDED UNDER THE TERMS OF THIS COMMON PUBLIC LICENSE ("AGREEMENT"). ANY USE, REPRODUCTION OR DISTRIBUTION OF THE PROGRAM CONSTITUTES RECIPIENT'S ACCEPTANCE OF THIS AGREEMENT.
+
+1. DEFINITIONS
+
+"Contribution" means:
+
+ a) in the case of the initial Contributor, the initial code and documentation distributed under this Agreement, and
+
+ b) in the case of each subsequent Contributor:
+
+ i) changes to the Program, and
+
+ ii) additions to the Program;
+
+ where such changes and/or additions to the Program originate from and are distributed by that particular Contributor. A Contribution 'originates' from a Contributor if it was added to the Program by such Contributor itself or anyone acting on such Contributor's behalf. Contributions do not include additions to the Program which: (i) are separate modules of software distributed in conjunction with the Program under their own license agreement, and (ii) are not derivative works of the Program.
+
+"Contributor" means any person or entity that distributes the Program.
+
+"Licensed Patents " mean patent claims licensable by a Contributor which are necessarily infringed by the use or sale of its Contribution alone or when combined with the Program.
+
+"Program" means the Contributions distributed in accordance with this Agreement.
+
+"Recipient" means anyone who receives the Program under this Agreement, including all Contributors.
+
+2. GRANT OF RIGHTS
+
+ a) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, distribute and sublicense the Contribution of such Contributor, if any, and such derivative works, in source code and object code form.
+
+ b) Subject to the terms of this Agreement, each Contributor hereby grants Recipient a non-exclusive, worldwide, royalty-free patent license under Licensed Patents to make, use, sell, offer to sell, import and otherwise transfer the Contribution of such Contributor, if any, in source code and object code form. This patent license shall apply to the combination of the Contribution and the Program if, at the time the Contribution is added by the Contributor, such addition of the Contribution causes such combination to be covered by the Licensed Patents. The patent license shall not apply to any other combinations which include the Contribution. No hardware per se is licensed hereunder.
+
+ c) Recipient understands that although each Contributor grants the licenses to its Contributions set forth herein, no assurances are provided by any Contributor that the Program does not infringe the patent or other intellectual property rights of any other entity. Each Contributor disclaims any liability to Recipient for claims brought by any other entity based on infringement of intellectual property rights or otherwise. As a condition to exercising the rights and licenses granted hereunder, each Recipient hereby assumes sole responsibility to secure any other intellectual property rights needed, if any. For example, if a third party patent license is required to allow Recipient to distribute the Program, it is Recipient's responsibility to acquire that license before distributing the Program.
+
+ d) Each Contributor represents that to its knowledge it has sufficient copyright rights in its Contribution, if any, to grant the copyright license set forth in this Agreement.
+
+3. REQUIREMENTS
+
+A Contributor may choose to distribute the Program in object code form under its own license agreement, provided that:
+
+ a) it complies with the terms and conditions of this Agreement; and
+
+ b) its license agreement:
+
+ i) effectively disclaims on behalf of all Contributors all warranties and conditions, express and implied, including warranties or conditions of title and non-infringement, and implied warranties or conditions of merchantability and fitness for a particular purpose;
+
+    ii) effectively excludes on behalf of all Contributors all liability for damages, including direct, indirect, special, incidental and consequential damages, such as lost profits;
+
+ iii) states that any provisions which differ from this Agreement are offered by that Contributor alone and not by any other party; and
+
+ iv) states that source code for the Program is available from such Contributor, and informs licensees how to obtain it in a reasonable manner on or through a medium customarily used for software exchange.
+
+When the Program is made available in source code form:
+
+ a) it must be made available under this Agreement; and
+
+ b) a copy of this Agreement must be included with each copy of the Program.
+
+Contributors may not remove or alter any copyright notices contained within the Program.
+
+Each Contributor must identify itself as the originator of its Contribution, if any, in a manner that reasonably allows subsequent Recipients to identify the originator of the Contribution.
+
+4. COMMERCIAL DISTRIBUTION
+
+Commercial distributors of software may accept certain responsibilities with respect to end users, business partners and the like. While this license is intended to facilitate the commercial use of the Program, the Contributor who includes the Program in a commercial product offering should do so in a manner which does not create potential liability for other Contributors. Therefore, if a Contributor includes the Program in a commercial product offering, such Contributor ("Commercial Contributor") hereby agrees to defend and indemnify every other Contributor ("Indemnified Contributor") against any losses, damages and costs (collectively "Losses") arising from claims, lawsuits and other legal actions brought by a third party against the Indemnified Contributor to the extent caused by the acts or omissions of such Commercial Contributor in connection with its distribution of the Program in a commercial product offering. The obligations in this section do not apply to any claims or Losses relating to any actual or alleged intellectual property infringement. In order to qualify, an Indemnified Contributor must: a) promptly notify the Commercial Contributor in writing of such claim, and b) allow the Commercial Contributor to control, and cooperate with the Commercial Contributor in, the defense and any related settlement negotiations. The Indemnified Contributor may participate in any such claim at its own expense.
+
+For example, a Contributor might include the Program in a commercial product offering, Product X. That Contributor is then a Commercial Contributor. If that Commercial Contributor then makes performance claims, or offers warranties related to Product X, those performance claims and warranties are such Commercial Contributor's responsibility alone. Under this section, the Commercial Contributor would have to defend claims against the other Contributors related to those performance claims and warranties, and if a court requires any other Contributor to pay any damages as a result, the Commercial Contributor must pay those damages.
+
+5. NO WARRANTY
+
+EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is solely responsible for determining the appropriateness of using and distributing the Program and assumes all risks associated with its exercise of rights under this Agreement, including but not limited to the risks and costs of program errors, compliance with applicable laws, damage to or loss of data, programs or equipment, and unavailability or interruption of operations.
+
+6. DISCLAIMER OF LIABILITY
+
+EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
+
+7. GENERAL
+
+If any provision of this Agreement is invalid or unenforceable under applicable law, it shall not affect the validity or enforceability of the remainder of the terms of this Agreement, and without further action by the parties hereto, such provision shall be reformed to the minimum extent necessary to make such provision valid and enforceable.
+
+If Recipient institutes patent litigation against a Contributor with respect to a patent applicable to software (including a cross-claim or counterclaim in a lawsuit), then any patent licenses granted by that Contributor to such Recipient under this Agreement shall terminate as of the date such litigation is filed. In addition, if Recipient institutes patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Program itself (excluding combinations of the Program with other software or hardware) infringes such Recipient's patent(s), then such Recipient's rights granted under Section 2(b) shall terminate as of the date such litigation is filed.
+
+All Recipient's rights under this Agreement shall terminate if it fails to comply with any of the material terms or conditions of this Agreement and does not cure such failure in a reasonable period of time after becoming aware of such noncompliance. If all Recipient's rights under this Agreement terminate, Recipient agrees to cease use and distribution of the Program as soon as reasonably practicable. However, Recipient's obligations under this Agreement and any licenses granted by Recipient relating to the Program shall continue and survive.
+
+Everyone is permitted to copy and distribute copies of this Agreement, but in order to avoid inconsistency the Agreement is copyrighted and may only be modified in the following manner. The Agreement Steward reserves the right to publish new versions (including revisions) of this Agreement from time to time. No one other than the Agreement Steward has the right to modify this Agreement. IBM is the initial Agreement Steward. IBM may assign the responsibility to serve as the Agreement Steward to a suitable separate entity. Each new version of the Agreement will be given a distinguishing version number. The Program (including Contributions) may always be distributed subject to the version of the Agreement under which it was received. In addition, after a new version of the Agreement is published, Contributor may elect to distribute the Program (including its Contributions) under the new version. Except as expressly stated in Sections 2(a) and 2(b) above, Recipient receives no rights or licenses to the intellectual property of any Contributor under this Agreement, whether expressly, by implication, estoppel or otherwise. All rights in the Program not expressly granted under this Agreement are reserved.
+
+This Agreement is governed by the laws of the State of New York and the intellectual property laws of the United States of America. No party to this Agreement will bring a legal action under this Agreement more than one year after the cause of action arose. Each party waives its rights to a jury trial in any resulting litigation.
diff --git a/lib/junit-3.8.1.jar b/lib/junit-3.8.1.jar
new file mode 100644
index 0000000..674d71e
--- /dev/null
+++ b/lib/junit-3.8.1.jar
Binary files differ
diff --git a/lib/libthrift-r771587.jar b/lib/libthrift-r771587.jar
new file mode 100644
index 0000000..3988da7
--- /dev/null
+++ b/lib/libthrift-r771587.jar
Binary files differ
diff --git a/lib/log4j-1.2.15.jar b/lib/log4j-1.2.15.jar
new file mode 100644
index 0000000..c930a6a
--- /dev/null
+++ b/lib/log4j-1.2.15.jar
Binary files differ
diff --git a/lib/lucene-core-2.2.0.jar b/lib/lucene-core-2.2.0.jar
new file mode 100644
index 0000000..2469481
--- /dev/null
+++ b/lib/lucene-core-2.2.0.jar
Binary files differ
diff --git a/lib/servlet-api.jar b/lib/servlet-api.jar
new file mode 100644
index 0000000..c9dab30
--- /dev/null
+++ b/lib/servlet-api.jar
Binary files differ
diff --git a/lib/xmlenc-0.52.jar b/lib/xmlenc-0.52.jar
new file mode 100644
index 0000000..ec568b4
--- /dev/null
+++ b/lib/xmlenc-0.52.jar
Binary files differ
diff --git a/lib/zookeeper-3.1.0-hbase-1241.jar b/lib/zookeeper-3.1.0-hbase-1241.jar
new file mode 100644
index 0000000..51e2aad
--- /dev/null
+++ b/lib/zookeeper-3.1.0-hbase-1241.jar
Binary files differ
diff --git a/src/docs/forrest.properties b/src/docs/forrest.properties
new file mode 100644
index 0000000..a4ebc51
--- /dev/null
+++ b/src/docs/forrest.properties
@@ -0,0 +1,104 @@
+# Copyright 2002-2004 The Apache Software Foundation
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+##############
+# Properties used by forrest.build.xml for building the website
+# These are the defaults, un-comment them if you need to change them.
+##############
+
+# Prints out a summary of Forrest settings for this project
+#forrest.echo=true
+
+# Project name (used to name .war file)
+#project.name=my-project
+
+# Specifies name of Forrest skin to use
+#project.skin=tigris
+#project.skin=pelt
+
+# comma separated list, file:// is supported
+#forrest.skins.descriptors=http://forrest.apache.org/skins/skins.xml,file:///c:/myskins/skins.xml
+
+##############
+# behavioural properties
+#project.menu-scheme=tab_attributes
+#project.menu-scheme=directories
+
+##############
+# layout properties
+
+# Properties that can be set to override the default locations
+#
+# Parent properties must be set. This usually means uncommenting
+# project.content-dir if any other property using it is uncommented
+
+#project.status=status.xml
+#project.content-dir=src/documentation
+#project.raw-content-dir=${project.content-dir}/content
+#project.conf-dir=${project.content-dir}/conf
+#project.sitemap-dir=${project.content-dir}
+#project.xdocs-dir=${project.content-dir}/content/xdocs
+#project.resources-dir=${project.content-dir}/resources
+#project.stylesheets-dir=${project.resources-dir}/stylesheets
+#project.images-dir=${project.resources-dir}/images
+#project.schema-dir=${project.resources-dir}/schema
+#project.skins-dir=${project.content-dir}/skins
+#project.skinconf=${project.content-dir}/skinconf.xml
+#project.lib-dir=${project.content-dir}/lib
+#project.classes-dir=${project.content-dir}/classes
+#project.translations-dir=${project.content-dir}/translations
+
+##############
+# validation properties
+
+# This set of properties determine if validation is performed
+# Values are inherited unless overridden.
+# e.g. if forrest.validate=false then all others are false unless set to true.
+#forrest.validate=true
+#forrest.validate.xdocs=${forrest.validate}
+#forrest.validate.skinconf=${forrest.validate}
+#forrest.validate.sitemap=${forrest.validate}
+#forrest.validate.stylesheets=${forrest.validate}
+#forrest.validate.skins=${forrest.validate}
+#forrest.validate.skins.stylesheets=${forrest.validate.skins}
+
+# *.failonerror=(true|false) - stop when an XML file is invalid
+#forrest.validate.failonerror=true
+
+# *.excludes=(pattern) - comma-separated list of path patterns to not validate
+# e.g.
+#forrest.validate.xdocs.excludes=samples/subdir/**, samples/faq.xml
+#forrest.validate.xdocs.excludes=
+
+
+##############
+# General Forrest properties
+
+# The URL to start crawling from
+#project.start-uri=linkmap.html
+# Set logging level for messages printed to the console
+# (DEBUG, INFO, WARN, ERROR, FATAL_ERROR)
+#project.debuglevel=ERROR
+# Max memory to allocate to Java
+#forrest.maxmemory=64m
+# Any other arguments to pass to the JVM. For example, to run on an X-less
+# server, set to -Djava.awt.headless=true
+#forrest.jvmargs=
+# The bugtracking URL - the issue number will be appended
+#project.bugtracking-url=http://issues.apache.org/bugzilla/show_bug.cgi?id=
+#project.bugtracking-url=http://issues.apache.org/jira/browse/
+# The issues list as rss
+#project.issues-rss-url=
+#I18n Property only works for the "forrest run" target.
+#project.i18n=true
diff --git a/src/docs/src/documentation/README.txt b/src/docs/src/documentation/README.txt
new file mode 100644
index 0000000..9bc261b
--- /dev/null
+++ b/src/docs/src/documentation/README.txt
@@ -0,0 +1,7 @@
+This is the base documentation directory.
+
+skinconf.xml # This file customizes Forrest for your project. In it, you
+ # tell forrest the project name, logo, copyright info, etc
+
+sitemap.xmap # Optional. This sitemap is consulted before all core sitemaps.
+ # See http://forrest.apache.org/docs/project-sitemap.html
diff --git a/src/docs/src/documentation/classes/CatalogManager.properties b/src/docs/src/documentation/classes/CatalogManager.properties
new file mode 100644
index 0000000..ac060b9
--- /dev/null
+++ b/src/docs/src/documentation/classes/CatalogManager.properties
@@ -0,0 +1,37 @@
+# Copyright 2002-2004 The Apache Software Foundation
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#=======================================================================
+# CatalogManager.properties
+#
+# This is the default properties file for Apache Forrest.
+# This facilitates local configuration of application-specific catalogs.
+#
+# See the Apache Forrest documentation:
+# http://forrest.apache.org/docs/your-project.html
+# http://forrest.apache.org/docs/validation.html
+
+# verbosity ... level of messages for status/debug
+# See forrest/src/core/context/WEB-INF/cocoon.xconf
+
+# catalogs ... list of additional catalogs to load
+# (Note that Apache Forrest will automatically load its own default catalog
+# from src/core/context/resources/schema/catalog.xcat)
+# use full pathnames
+# pathname separator is always semi-colon (;) regardless of operating system
+# directory separator is always slash (/) regardless of operating system
+#
+#catalogs=/home/me/forrest/my-site/src/documentation/resources/schema/catalog.xcat
+catalogs=
+
diff --git a/src/docs/src/documentation/content/xdocs/index.xml b/src/docs/src/documentation/content/xdocs/index.xml
new file mode 100644
index 0000000..2b25a89
--- /dev/null
+++ b/src/docs/src/documentation/content/xdocs/index.xml
@@ -0,0 +1,40 @@
+<?xml version="1.0"?>
+<!--
+ Copyright 2002-2004 The Apache Software Foundation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN" "http://forrest.apache.org/dtd/document-v20.dtd">
+
+<document>
+
+ <header>
+ <title>HBase Documentation</title>
+ </header>
+
+ <body>
+ <p>
+ The following documents provide concepts and procedures that will help you
+ get started using HBase. If you have more questions, you can ask the
+ <a href="ext:lists">mailing list</a> or browse the archives.
+ </p>
+ <ul>
+ <li><a href="ext:api/started">Getting Started</a></li>
+ <li><a href="ext:api/index">API Docs</a></li>
+ <li><a href="ext:wiki">Wiki</a></li>
+ <li><a href="ext:faq">FAQ</a></li>
+ </ul>
+ </body>
+
+</document>
diff --git a/src/docs/src/documentation/content/xdocs/metrics.xml b/src/docs/src/documentation/content/xdocs/metrics.xml
new file mode 100644
index 0000000..01acae8
--- /dev/null
+++ b/src/docs/src/documentation/content/xdocs/metrics.xml
@@ -0,0 +1,67 @@
+<?xml version="1.0"?>
+<!--
+ Copyright 2002-2008 The Apache Software Foundation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN"
+ "http://forrest.apache.org/dtd/document-v20.dtd">
+
+
+<document>
+
+ <header>
+ <title>
+ HBase Metrics
+ </title>
+ </header>
+
+ <body>
+ <section>
+ <title> Introduction </title>
+ <p>
+ HBase emits Hadoop <a href="http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/metrics/package-summary.html">metrics</a>.
+ </p>
+ </section>
+ <section>
+ <title>HOWTO</title>
+ <p>First read up on Hadoop <a href="http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/metrics/package-summary.html">metrics</a>.
+ If you are using ganglia, the <a href="http://wiki.apache.org/hadoop/GangliaMetrics">GangliaMetrics</a>
+    wiki page is a useful read.</p>
+ <p>To have HBase emit metrics, edit <code>$HBASE_HOME/conf/hadoop-metrics.properties</code>
+ and enable metric 'contexts' per plugin. As of this writing, hadoop supports
+ <strong>file</strong> and <strong>ganglia</strong> plugins.
+    Yes, the hbase metrics file is named hadoop-metrics rather than
+    <em>hbase-metrics</em> because, currently at least, the hadoop metrics system has the
+    properties filename hardcoded. Per metrics <em>context</em>,
+    comment out the NullContext and enable one or more plugins instead (a sample configuration is sketched after this file).
+ </p>
+ <p>
+ If you enable the <em>hbase</em> context, on regionservers you'll see total requests since last
+      metric emission, counts of regions and storefiles, and the memcache size.
+ On the master, you'll see a count of the cluster's requests.
+ </p>
+ <p>
+ Enabling the <em>rpc</em> context is good if you are interested in seeing
+ metrics on each hbase rpc method invocation (counts and time taken).
+ </p>
+ <p>
+ The <em>jvm</em> context is
+ useful for long-term stats on running hbase jvms -- memory used, thread counts, etc.
+      As of this writing, if more than one jvm is running and emitting metrics, at least
+ in ganglia, the stats are aggregated rather than reported per instance.
+ </p>
+ </section>
+ </body>
+</document>
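The HOWTO above describes editing hadoop-metrics.properties, but the commit itself carries no example, so here is a minimal sketch of what enabling the plugins might look like. The plugin class names are the standard Hadoop metrics contexts of this era (NullContext, FileContext, GangliaContext); the ganglia host:port and the log file path are placeholders, not values taken from this commit:

    # hbase context -- comment out the NullContext and emit to ganglia instead
    # hbase.class=org.apache.hadoop.metrics.spi.NullContext
    hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext
    hbase.period=10
    hbase.servers=ganglia.example.org:8649

    # rpc context -- per-method invocation counts and times, written to a local file
    # rpc.class=org.apache.hadoop.metrics.spi.NullContext
    rpc.class=org.apache.hadoop.metrics.file.FileContext
    rpc.period=10
    rpc.fileName=/tmp/hbase-rpc-metrics.log

    # jvm context -- memory, thread counts, etc. for the running jvms
    # jvm.class=org.apache.hadoop.metrics.spi.NullContext
    jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext
    jvm.period=10
    jvm.servers=ganglia.example.org:8649

The metrics configuration is read when a daemon starts, so region servers and the master typically need a restart before a changed hadoop-metrics.properties takes effect.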
diff --git a/src/docs/src/documentation/content/xdocs/site.xml b/src/docs/src/documentation/content/xdocs/site.xml
new file mode 100644
index 0000000..ec22180
--- /dev/null
+++ b/src/docs/src/documentation/content/xdocs/site.xml
@@ -0,0 +1,72 @@
+<?xml version="1.0"?>
+<!--
+ Copyright 2002-2004 The Apache Software Foundation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<!--
+Forrest site.xml
+
+This file contains an outline of the site's information content. It is used to:
+- Generate the website menus (though these can be overridden - see docs)
+- Provide semantic, location-independent aliases for internal 'site:' URIs, eg
+<link href="site:changes"> links to changes.html (or ../changes.html if in
+ subdir).
+- Provide aliases for external URLs in the external-refs section. Eg, <link
+ href="ext:cocoon"> links to http://xml.apache.org/cocoon/
+
+See http://forrest.apache.org/docs/linking.html for more info.
+-->
+
+<site label="Hadoop" href="" xmlns="http://apache.org/forrest/linkmap/1.0">
+
+ <docs label="Documentation">
+ <overview label="Overview" href="index.html" />
+ <started label="Getting Started" href="ext:api/started" />
+ <api label="API Docs" href="ext:api/index" />
+ <api label="HBase Metrics" href="metrics.html" />
+ <wiki label="Wiki" href="ext:wiki" />
+ <faq label="FAQ" href="ext:faq" />
+ <lists label="Mailing Lists" href="ext:lists" />
+ </docs>
+
+ <external-refs>
+ <site href="http://hadoop.apache.org/hbase/"/>
+ <lists href="http://hadoop.apache.org/hbase/mailing_lists.html"/>
+ <releases href="http://hadoop.apache.org/hbase/releases.html">
+ <download href="#Download" />
+ </releases>
+ <jira href="http://hadoop.apache.org/hbase/issue_tracking.html"/>
+ <wiki href="http://wiki.apache.org/hadoop/Hbase" />
+ <faq href="http://wiki.apache.org/hadoop/Hbase/FAQ" />
+ <zlib href="http://www.zlib.net/" />
+ <lzo href="http://www.oberhumer.com/opensource/lzo/" />
+ <gzip href="http://www.gzip.org/" />
+ <cygwin href="http://www.cygwin.com/" />
+ <osx href="http://www.apple.com/macosx" />
+ <api href="api/">
+ <started href="overview-summary.html#overview_description" />
+ <index href="index.html" />
+ <org href="org/">
+ <apache href="apache/">
+ <hadoop href="hadoop/">
+ <hbase href="hbase/">
+ </hbase>
+ </hadoop>
+ </apache>
+ </org>
+ </api>
+ </external-refs>
+
+</site>
diff --git a/src/docs/src/documentation/content/xdocs/tabs.xml b/src/docs/src/documentation/content/xdocs/tabs.xml
new file mode 100644
index 0000000..ac6ccb4b2
--- /dev/null
+++ b/src/docs/src/documentation/content/xdocs/tabs.xml
@@ -0,0 +1,36 @@
+<?xml version="1.0"?>
+<!--
+ Copyright 2002-2004 The Apache Software Foundation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<!DOCTYPE tabs PUBLIC "-//APACHE//DTD Cocoon Documentation Tab V1.0//EN"
+ "http://forrest.apache.org/dtd/tab-cocoon-v10.dtd">
+
+<tabs software="HBase"
+ title="HBase"
+ copyright="The Apache Software Foundation"
+ xmlns:xlink="http://www.w3.org/1999/xlink">
+
+ <!-- The rules are:
+ @dir will always have /index.html added.
+ @href is not modified unless it is root-relative and obviously specifies a
+ directory (ends in '/'), in which case /index.html will be added
+ -->
+
+ <tab label="Project" href="http://hadoop.apache.org/hbase/" />
+ <tab label="Wiki" href="http://wiki.apache.org/hadoop/Hbase" />
+ <tab label="HBase Documentation" dir="" />
+
+</tabs>
diff --git a/src/docs/src/documentation/resources/images/architecture.gif b/src/docs/src/documentation/resources/images/architecture.gif
new file mode 100644
index 0000000..8d84a23
--- /dev/null
+++ b/src/docs/src/documentation/resources/images/architecture.gif
Binary files differ
diff --git a/src/docs/src/documentation/resources/images/favicon.ico b/src/docs/src/documentation/resources/images/favicon.ico
new file mode 100644
index 0000000..161bcf7
--- /dev/null
+++ b/src/docs/src/documentation/resources/images/favicon.ico
Binary files differ
diff --git a/src/docs/src/documentation/resources/images/hadoop-logo.jpg b/src/docs/src/documentation/resources/images/hadoop-logo.jpg
new file mode 100644
index 0000000..809525d
--- /dev/null
+++ b/src/docs/src/documentation/resources/images/hadoop-logo.jpg
Binary files differ
diff --git a/src/docs/src/documentation/resources/images/hbase_logo_med.gif b/src/docs/src/documentation/resources/images/hbase_logo_med.gif
new file mode 100644
index 0000000..36d3e3c
--- /dev/null
+++ b/src/docs/src/documentation/resources/images/hbase_logo_med.gif
Binary files differ
diff --git a/src/docs/src/documentation/resources/images/hbase_small.gif b/src/docs/src/documentation/resources/images/hbase_small.gif
new file mode 100644
index 0000000..3275765
--- /dev/null
+++ b/src/docs/src/documentation/resources/images/hbase_small.gif
Binary files differ
diff --git a/src/docs/src/documentation/skinconf.xml b/src/docs/src/documentation/skinconf.xml
new file mode 100644
index 0000000..b31ed33
--- /dev/null
+++ b/src/docs/src/documentation/skinconf.xml
@@ -0,0 +1,345 @@
+<?xml version="1.0"?>
+<!--
+ Copyright 2002-2004 The Apache Software Foundation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<!--
+Skin configuration file. This file contains details of your project,
+which will be used to configure the chosen Forrest skin.
+-->
+
+<!DOCTYPE skinconfig PUBLIC "-//APACHE//DTD Skin Configuration V0.6-3//EN" "http://forrest.apache.org/dtd/skinconfig-v06-3.dtd">
+<skinconfig>
+ <!-- To enable lucene search add provider="lucene" (default is google).
+ Add box-location="alt" to move the search box to an alternate location
+ (if the skin supports it) and box-location="all" to show it in all
+ available locations on the page. Remove the <search> element to show
+ no search box. @domain will enable sitesearch for the specific domain with google.
+ In other words google will search the @domain for the query string.
+
+ -->
+ <search name="HBase" domain="hadoop.apache.org" provider="google"/>
+
+ <!-- Disable the print link? If enabled, invalid HTML 4.0.1 -->
+ <disable-print-link>true</disable-print-link>
+ <!-- Disable the PDF link? -->
+ <disable-pdf-link>false</disable-pdf-link>
+ <!-- Disable the POD link? -->
+ <disable-pod-link>true</disable-pod-link>
+  <!-- Disable the Text link? FIXME: NOT YET IMPLEMENTED. -->
+ <disable-txt-link>true</disable-txt-link>
+ <!-- Disable the xml source link? -->
+ <!-- The xml source link makes it possible to access the xml rendition
+    of the source from the html page, and to have it generated statically.
+ This can be used to enable other sites and services to reuse the
+ xml format for their uses. Keep this disabled if you don't want other
+ sites to easily reuse your pages.-->
+ <disable-xml-link>true</disable-xml-link>
+
+ <!-- Disable navigation icons on all external links? -->
+ <disable-external-link-image>true</disable-external-link-image>
+
+ <!-- Disable w3c compliance links?
+ Use e.g. align="center" to move the compliance links logos to
+    an alternate location; the default is left
+    (if the skin supports it) -->
+ <disable-compliance-links>true</disable-compliance-links>
+
+ <!-- Render mailto: links unrecognisable by spam harvesters? -->
+ <obfuscate-mail-links>false</obfuscate-mail-links>
+
+ <!-- Disable the javascript facility to change the font size -->
+ <disable-font-script>true</disable-font-script>
+
+ <!-- project logo -->
+ <project-name>HBase</project-name>
+ <project-description>The Hadoop database</project-description>
+ <project-url>http://hadoop.apache.org/hbase/</project-url>
+ <project-logo>images/hbase_small.gif</project-logo>
+
+ <!-- group logo -->
+ <group-name>Hadoop</group-name>
+ <group-description>Apache Hadoop</group-description>
+ <group-url>http://hadoop.apache.org/</group-url>
+ <group-logo>images/hadoop-logo.jpg</group-logo>
+
+ <!-- optional host logo (e.g. sourceforge logo)
+ default skin: renders it at the bottom-left corner -->
+ <host-url></host-url>
+ <host-logo></host-logo>
+
+ <!-- relative url of a favicon file, normally favicon.ico -->
+ <favicon-url>images/favicon.ico</favicon-url>
+
+ <!-- The following are used to construct a copyright statement -->
+ <year>2008</year>
+ <vendor>The Apache Software Foundation.</vendor>
+ <copyright-link>http://www.apache.org/licenses/</copyright-link>
+
+ <!-- Some skins use this to form a 'breadcrumb trail' of links.
+ Use location="alt" to move the trail to an alternate location
+ (if the skin supports it).
+ Omit the location attribute to display the trail in the default location.
+ Use location="none" to not display the trail (if the skin supports it).
+ For some skins just set the attributes to blank.
+ -->
+ <trail>
+ <link1 name="Apache" href="http://www.apache.org/"/>
+ <link2 name="Hadoop" href="http://hadoop.apache.org/"/>
+ <link3 name="HBase" href="http://hadoop.apache.org/hbase/"/>
+ </trail>
+
+ <!-- Configure the TOC, i.e. the Table of Contents.
+ @max-depth
+ how many "section" levels need to be included in the
+ generated Table of Contents (TOC).
+ @min-sections
+ Minimum required to create a TOC.
+ @location ("page","menu","page,menu", "none")
+ Where to show the TOC.
+ -->
+ <toc max-depth="2" min-sections="1" location="page"/>
+
+ <!-- Heading types can be clean|underlined|boxed -->
+ <headings type="clean"/>
+
+ <!-- The optional feedback element will be used to construct a
+ feedback link in the footer with the page pathname appended:
+ <a href="@href">{@to}</a>
+ <feedback to="webmaster@foo.com"
+ href="mailto:webmaster@foo.com?subject=Feedback " >
+ Send feedback about the website to:
+ </feedback>
+ -->
+ <!--
+ extra-css - here you can define custom css-elements that are
+ a. overriding the fallback elements or
+ b. adding the css definition from new elements that you may have
+ used in your documentation.
+ -->
+ <extra-css>
+ <!--Example of b.
+ To define the css definition of a new element that you may have used
+ in the class attribute of a <p> node.
+ e.g. <p class="quote"/>
+ -->
+ p.quote {
+ margin-left: 2em;
+ padding: .5em;
+ background-color: #f0f0f0;
+ font-family: monospace;
+ }
+ </extra-css>
+
+ <colors>
+ <!-- These values are used for the generated CSS files. -->
+
+ <!-- Krysalis -->
+<!--
+ <color name="header" value="#FFFFFF"/>
+
+ <color name="tab-selected" value="#a5b6c6" link="#000000" vlink="#000000" hlink="#000000"/>
+ <color name="tab-unselected" value="#F7F7F7" link="#000000" vlink="#000000" hlink="#000000"/>
+ <color name="subtab-selected" value="#a5b6c6" link="#000000" vlink="#000000" hlink="#000000"/>
+ <color name="subtab-unselected" value="#a5b6c6" link="#000000" vlink="#000000" hlink="#000000"/>
+
+ <color name="heading" value="#a5b6c6"/>
+ <color name="subheading" value="#CFDCED"/>
+
+ <color name="navstrip" value="#CFDCED" font="#000000" link="#000000" vlink="#000000" hlink="#000000"/>
+ <color name="toolbox" value="#a5b6c6"/>
+ <color name="border" value="#a5b6c6"/>
+
+ <color name="menu" value="#F7F7F7" link="#000000" vlink="#000000" hlink="#000000"/>
+ <color name="dialog" value="#F7F7F7"/>
+
+ <color name="body" value="#ffffff" link="#0F3660" vlink="#009999" hlink="#000066"/>
+
+ <color name="table" value="#a5b6c6"/>
+ <color name="table-cell" value="#ffffff"/>
+ <color name="highlight" value="#ffff00"/>
+ <color name="fixme" value="#cc6600"/>
+ <color name="note" value="#006699"/>
+ <color name="warning" value="#990000"/>
+ <color name="code" value="#a5b6c6"/>
+
+ <color name="footer" value="#a5b6c6"/>
+-->
+
+ <!-- Forrest -->
+<!--
+ <color name="header" value="#294563"/>
+
+ <color name="tab-selected" value="#4a6d8c" link="#0F3660" vlink="#0F3660" hlink="#000066"/>
+ <color name="tab-unselected" value="#b5c7e7" link="#0F3660" vlink="#0F3660" hlink="#000066"/>
+ <color name="subtab-selected" value="#4a6d8c" link="#0F3660" vlink="#0F3660" hlink="#000066"/>
+ <color name="subtab-unselected" value="#4a6d8c" link="#0F3660" vlink="#0F3660" hlink="#000066"/>
+
+ <color name="heading" value="#294563"/>
+ <color name="subheading" value="#4a6d8c"/>
+
+ <color name="navstrip" value="#cedfef" font="#0F3660" link="#0F3660" vlink="#0F3660" hlink="#000066"/>
+ <color name="toolbox" value="#4a6d8c"/>
+ <color name="border" value="#294563"/>
+
+ <color name="menu" value="#4a6d8c" font="#cedfef" link="#ffffff" vlink="#ffffff" hlink="#ffcf00"/>
+ <color name="dialog" value="#4a6d8c"/>
+
+ <color name="body" value="#ffffff" link="#0F3660" vlink="#009999" hlink="#000066"/>
+
+ <color name="table" value="#7099C5"/>
+ <color name="table-cell" value="#f0f0ff"/>
+ <color name="highlight" value="#ffff00"/>
+ <color name="fixme" value="#cc6600"/>
+ <color name="note" value="#006699"/>
+ <color name="warning" value="#990000"/>
+ <color name="code" value="#CFDCED"/>
+
+ <color name="footer" value="#cedfef"/>
+-->
+
+ <!-- Collabnet -->
+<!--
+ <color name="header" value="#003366"/>
+
+ <color name="tab-selected" value="#dddddd" link="#555555" vlink="#555555" hlink="#555555"/>
+ <color name="tab-unselected" value="#999999" link="#ffffff" vlink="#ffffff" hlink="#ffffff"/>
+ <color name="subtab-selected" value="#cccccc" link="#000000" vlink="#000000" hlink="#000000"/>
+ <color name="subtab-unselected" value="#cccccc" link="#555555" vlink="#555555" hlink="#555555"/>
+
+ <color name="heading" value="#003366"/>
+ <color name="subheading" value="#888888"/>
+
+ <color name="navstrip" value="#dddddd" font="#555555"/>
+ <color name="toolbox" value="#dddddd" font="#555555"/>
+ <color name="border" value="#999999"/>
+
+ <color name="menu" value="#ffffff"/>
+ <color name="dialog" value="#eeeeee"/>
+
+ <color name="body" value="#ffffff"/>
+
+ <color name="table" value="#ccc"/>
+ <color name="table-cell" value="#ffffff"/>
+ <color name="highlight" value="#ffff00"/>
+ <color name="fixme" value="#cc6600"/>
+ <color name="note" value="#006699"/>
+ <color name="warning" value="#990000"/>
+ <color name="code" value="#003366"/>
+
+ <color name="footer" value="#ffffff"/>
+-->
+ <!-- Lenya using pelt-->
+<!--
+ <color name="header" value="#ffffff"/>
+
+ <color name="tab-selected" value="#4C6C8F" link="#ffffff" vlink="#ffffff" hlink="#ffffff"/>
+ <color name="tab-unselected" value="#E5E4D9" link="#000000" vlink="#000000" hlink="#000000"/>
+ <color name="subtab-selected" value="#000000" link="#000000" vlink="#000000" hlink="#000000"/>
+ <color name="subtab-unselected" value="#E5E4D9" link="#000000" vlink="#000000" hlink="#000000"/>
+
+ <color name="heading" value="#E5E4D9"/>
+ <color name="subheading" value="#000000"/>
+ <color name="published" value="#4C6C8F" font="#FFFFFF"/>
+ <color name="feedback" value="#4C6C8F" font="#FFFFFF" align="center"/>
+ <color name="navstrip" value="#E5E4D9" font="#000000"/>
+
+ <color name="toolbox" value="#CFDCED" font="#000000"/>
+
+ <color name="border" value="#999999"/>
+ <color name="menu" value="#4C6C8F" font="#ffffff" link="#ffffff" vlink="#ffffff" hlink="#ffffff" current="#FFCC33" />
+ <color name="menuheading" value="#cfdced" font="#000000" />
+ <color name="searchbox" value="#E5E4D9" font="#000000"/>
+
+ <color name="dialog" value="#CFDCED"/>
+ <color name="body" value="#ffffff" />
+
+ <color name="table" value="#ccc"/>
+ <color name="table-cell" value="#ffffff"/>
+ <color name="highlight" value="#ffff00"/>
+ <color name="fixme" value="#cc6600"/>
+ <color name="note" value="#006699"/>
+ <color name="warning" value="#990000"/>
+ <color name="code" value="#003366"/>
+
+ <color name="footer" value="#E5E4D9"/>
+-->
+ </colors>
+
+ <!-- Settings specific to PDF output. -->
+ <pdf>
+ <!--
+ Supported page sizes are a0, a1, a2, a3, a4, a5, executive,
+ folio, legal, ledger, letter, quarto, tabloid (default letter).
+ Supported page orientations are portrait, landscape (default
+ portrait).
+ Supported text alignments are left, right, justify (default left).
+ -->
+ <page size="letter" orientation="portrait" text-align="left"/>
+
+ <!--
+ Margins can be specified for top, bottom, inner, and outer
+ edges. If double-sided="false", the inner edge is always left
+ and the outer is always right. If double-sided="true", the
+ inner edge will be left on odd pages, right on even pages,
+ the outer edge vice versa.
+ Specified below are the default settings.
+ -->
+ <margins double-sided="false">
+ <top>1in</top>
+ <bottom>1in</bottom>
+ <inner>1.25in</inner>
+ <outer>1in</outer>
+ </margins>
+
+ <!--
+ Print the URL text next to all links going outside the file
+ -->
+ <show-external-urls>false</show-external-urls>
+
+ <!--
+ Disable the copyright footer on each page of the PDF.
+ A footer is composed for each page. By default, a "credit" with role=pdf
+ will be used, as explained below. Otherwise a copyright statement
+ will be generated. This latter can be disabled.
+ -->
+ <disable-copyright-footer>false</disable-copyright-footer>
+ </pdf>
+
+ <!-- Credits are typically rendered as a set of small clickable
+ images in the page footer.
+ Use box-location="alt" to move the credit to an alternate location
+ (if the skin supports it).
+ -->
+ <credits>
+ <credit box-location="alt">
+ <name>Built with Apache Forrest</name>
+ <url>http://forrest.apache.org/</url>
+ <image>images/built-with-forrest-button.png</image>
+ <width>88</width>
+ <height>31</height>
+ </credit>
+ <!-- A credit with @role="pdf" will be used to compose a footer
+ for each page in the PDF, using either "name" or "url" or both.
+ -->
+ <!--
+ <credit role="pdf">
+ <name>Built with Apache Forrest</name>
+ <url>http://forrest.apache.org/</url>
+ </credit>
+ -->
+ </credits>
+
+</skinconfig>
diff --git a/src/docs/status.xml b/src/docs/status.xml
new file mode 100644
index 0000000..3ac3fda
--- /dev/null
+++ b/src/docs/status.xml
@@ -0,0 +1,74 @@
+<?xml version="1.0"?>
+<!--
+ Copyright 2002-2004 The Apache Software Foundation
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<status>
+
+ <developers>
+ <person name="Joe Bloggs" email="joe@joescompany.org" id="JB" />
+ <!-- Add more people here -->
+ </developers>
+
+ <changes>
+ <!-- Add new releases here -->
+ <release version="0.1" date="unreleased">
+ <!-- Some action types have associated images. By default, images are
+ defined for 'add', 'fix', 'remove', 'update' and 'hack'. If you add
+ src/documentation/resources/images/<foo>.jpg images, these will
+ automatically be used for entries of type <foo>. -->
+
+ <action dev="JB" type="add" context="admin">
+ Initial Import
+ </action>
+ <!-- Sample action:
+ <action dev="JB" type="fix" due-to="Joe Contributor"
+ due-to-email="joec@apache.org" fixes-bug="123">
+ Fixed a bug in the Foo class.
+ </action>
+ -->
+ </release>
+ </changes>
+
+ <todo>
+ <actions priority="high">
+ <action context="docs" dev="JB">
+ Customize this template project with your project's details. This
+ TODO list is generated from 'status.xml'.
+ </action>
+ <action context="docs" dev="JB">
+ Add lots of content. XML content goes in
+ <code>src/documentation/content/xdocs</code>, or wherever the
+ <code>${project.xdocs-dir}</code> property (set in
+ <code>forrest.properties</code>) points.
+ </action>
+ <action context="feedback" dev="JB">
+ Mail <link
+ href="mailto:forrest-dev@xml.apache.org">forrest-dev@xml.apache.org</link>
+ with feedback.
+ </action>
+ </actions>
+ <!-- Add todo items. @context is an arbitrary string. Eg:
+ <actions priority="high">
+ <action context="code" dev="SN">
+ </action>
+ </actions>
+ <actions priority="medium">
+ <action context="docs" dev="open">
+ </action>
+ </actions>
+ -->
+ </todo>
+
+</status>
diff --git a/src/examples/REAME.txt b/src/examples/REAME.txt
new file mode 100644
index 0000000..69cff06
--- /dev/null
+++ b/src/examples/REAME.txt
@@ -0,0 +1,2 @@
+Example code. Includes thrift clients and uploader examples, among them
+a script by Tim Sell that replicates a postgres database in hbase.
diff --git a/src/examples/mapred/org/apache/hadoop/hbase/mapred/SampleUploader.java b/src/examples/mapred/org/apache/hadoop/hbase/mapred/SampleUploader.java
new file mode 100644
index 0000000..88e4894
--- /dev/null
+++ b/src/examples/mapred/org/apache/hadoop/hbase/mapred/SampleUploader.java
@@ -0,0 +1,139 @@
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Map.Entry;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.HbaseMapWritable;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.mapred.TableMapReduceUtil;
+import org.apache.hadoop.hbase.mapred.TableReduce;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapred.FileInputFormat;
+import org.apache.hadoop.mapred.JobClient;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.MapReduceBase;
+import org.apache.hadoop.mapred.Mapper;
+import org.apache.hadoop.mapred.OutputCollector;
+import org.apache.hadoop.mapred.Reporter;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+/*
+ * Sample uploader.
+ *
+ * This is EXAMPLE code. You will need to change it to work for your context.
+ *
+ * Uses TableReduce to put the data into hbase. Change the InputFormat to suit
+ * your data. Use the map to massage the input so it fits hbase. Currently it
+ * is just a pass-through map. In the reduce, you need to output a row and a
+ * map of columns to cells. Change map and reduce to suit your input.
+ *
+ * <p>The code below is wired up to handle an input text file
+ * whose lines have the following format:
+ * <pre>
+ * row columnname columndata
+ * </pre>
+ *
+ * <p>The table and column family we insert into must already exist.
+ *
+ * <p>Do the following to start the MR job:
+ * <pre>
+ * ./bin/hadoop org.apache.hadoop.hbase.mapred.SampleUploader /tmp/input.txt TABLE_NAME
+ * </pre>
+ *
+ * <p>This code was written against hbase 0.1 branch.
+ */
+public class SampleUploader extends MapReduceBase
+implements Mapper<LongWritable, Text, ImmutableBytesWritable, HbaseMapWritable<byte [], byte []>>,
+ Tool {
+ private static final String NAME = "SampleUploader";
+ private Configuration conf;
+
+ public JobConf createSubmittableJob(String[] args)
+ throws IOException {
+ JobConf c = new JobConf(getConf(), SampleUploader.class);
+ c.setJobName(NAME);
+ FileInputFormat.setInputPaths(c, new Path(args[0]));
+ c.setMapperClass(this.getClass());
+ c.setMapOutputKeyClass(ImmutableBytesWritable.class);
+ c.setMapOutputValueClass(HbaseMapWritable.class);
+ c.setReducerClass(TableUploader.class);
+ TableMapReduceUtil.initTableReduceJob(args[1], TableUploader.class, c);
+ return c;
+ }
+
+ public void map(LongWritable k, Text v,
+ OutputCollector<ImmutableBytesWritable, HbaseMapWritable<byte [], byte []>> output,
+ Reporter r)
+ throws IOException {
+    // Lines are space-delimited; the first item is the row, the second the
+    // column name, and the third the cell value.
+ String tmp = v.toString();
+ if (tmp.length() == 0) {
+ return;
+ }
+ String [] splits = v.toString().split(" ");
+ HbaseMapWritable<byte [], byte []> mw =
+ new HbaseMapWritable<byte [], byte []>();
+ mw.put(Bytes.toBytes(splits[1]), Bytes.toBytes(splits[2]));
+ byte [] row = Bytes.toBytes(splits[0]);
+ r.setStatus("Map emitting " + splits[0] + " for record " + k.toString());
+ output.collect(new ImmutableBytesWritable(row), mw);
+ }
+
+ public static class TableUploader extends MapReduceBase
+ implements TableReduce<ImmutableBytesWritable, HbaseMapWritable<byte [], byte []>> {
+ public void reduce(ImmutableBytesWritable k, Iterator<HbaseMapWritable<byte [], byte []>> v,
+ OutputCollector<ImmutableBytesWritable, BatchUpdate> output,
+ Reporter r)
+ throws IOException {
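+      // Fold every map emitted for this row into a single BatchUpdate and
+      // write it out once per row.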
+ while (v.hasNext()) {
+ r.setStatus("Reducer committing " + k);
+ BatchUpdate bu = new BatchUpdate(k.get());
+ while (v.hasNext()) {
+ HbaseMapWritable<byte [], byte []> hmw = v.next();
+ for (Entry<byte [], byte []> e: hmw.entrySet()) {
+ bu.put(e.getKey(), e.getValue());
+ }
+ }
+ output.collect(k, bu);
+ }
+ }
+ }
+
+ static int printUsage() {
+ System.out.println(NAME + " <input> <table_name>");
+ return -1;
+ }
+
+  public int run(String[] args) throws Exception {
+ // Make sure there are exactly 2 parameters left.
+ if (args.length != 2) {
+ System.out.println("ERROR: Wrong number of parameters: " +
+ args.length + " instead of 2.");
+ return printUsage();
+ }
+ JobClient.runJob(createSubmittableJob(args));
+ return 0;
+ }
+
+ public Configuration getConf() {
+ return this.conf;
+ }
+
+ public void setConf(final Configuration c) {
+ this.conf = c;
+ }
+
+ public static void main(String[] args) throws Exception {
+ int errCode = ToolRunner.run(new Configuration(), new SampleUploader(),
+ args);
+ System.exit(errCode);
+ }
+}
\ No newline at end of file
diff --git a/src/examples/thrift/DemoClient.cpp b/src/examples/thrift/DemoClient.cpp
new file mode 100644
index 0000000..ac1972f
--- /dev/null
+++ b/src/examples/thrift/DemoClient.cpp
@@ -0,0 +1,300 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/*
+ * Instructions:
+ * 1. Run Thrift to generate the cpp module HBase
+ * thrift --gen cpp ../../../src/java/org/apache/hadoop/hbase/thrift/Hbase.thrift
+ * 2. Execute {make}.
+ * 3. Execute {./DemoClient}.
+ */
+
+#include <assert.h>  // assert() is used below to check version counts
+#include <stdio.h>
+#include <unistd.h>
+#include <sys/time.h>
+#include <poll.h>
+
+#include <iostream>
+
+#include <boost/lexical_cast.hpp>  // boost::lexical_cast is used to format numbers
+
+#include <protocol/TBinaryProtocol.h>
+#include <transport/TSocket.h>
+#include <transport/TTransportUtils.h>
+
+#include "Hbase.h"
+
+using namespace facebook::thrift;
+using namespace facebook::thrift::protocol;
+using namespace facebook::thrift::transport;
+
+using namespace apache::hadoop::hbase::thrift;
+
+typedef std::vector<std::string> StrVec;
+typedef std::map<std::string,std::string> StrMap;
+typedef std::vector<ColumnDescriptor> ColVec;
+typedef std::map<std::string,ColumnDescriptor> ColMap;
+typedef std::vector<TCell> CellVec;
+typedef std::map<std::string,TCell> CellMap;
+
+
+static void
+printRow(const TRowResult &rowResult)
+{
+ std::cout << "row: " << rowResult.row << ", cols: ";
+ for (CellMap::const_iterator it = rowResult.columns.begin();
+ it != rowResult.columns.end(); ++it) {
+ std::cout << it->first << " => " << it->second.value << "; ";
+ }
+ std::cout << std::endl;
+}
+
+static void
+printVersions(const std::string &row, const CellVec &versions)
+{
+ std::cout << "row: " << row << ", values: ";
+ for (CellVec::const_iterator it = versions.begin(); it != versions.end(); ++it) {
+ std::cout << (*it).value << "; ";
+ }
+ std::cout << std::endl;
+}
+
+int
+main(int argc, char** argv)
+{
+ boost::shared_ptr<TTransport> socket(new TSocket("localhost", 9090));
+ boost::shared_ptr<TTransport> transport(new TBufferedTransport(socket));
+ boost::shared_ptr<TProtocol> protocol(new TBinaryProtocol(transport));
+ HbaseClient client(protocol);
+
+ try {
+ transport->open();
+
+ std::string t("demo_table");
+
+ //
+ // Scan all tables, look for the demo table and delete it.
+ //
+ std::cout << "scanning tables..." << std::endl;
+ StrVec tables;
+ client.getTableNames(tables);
+ for (StrVec::const_iterator it = tables.begin(); it != tables.end(); ++it) {
+ std::cout << " found: " << *it << std::endl;
+ if (t == *it) {
+ if (client.isTableEnabled(*it)) {
+ std::cout << " disabling table: " << *it << std::endl;
+ client.disableTable(*it);
+ }
+ std::cout << " deleting table: " << *it << std::endl;
+ client.deleteTable(*it);
+ }
+ }
+
+ //
+ // Create the demo table with two column families, entry: and unused:
+ //
+ ColVec columns;
+ columns.push_back(ColumnDescriptor());
+ columns.back().name = "entry:";
+ columns.back().maxVersions = 10;
+ columns.push_back(ColumnDescriptor());
+ columns.back().name = "unused:";
+
+ std::cout << "creating table: " << t << std::endl;
+ try {
+ client.createTable(t, columns);
+ } catch (AlreadyExists &ae) {
+ std::cout << "WARN: " << ae.message << std::endl;
+ }
+
+ ColMap columnMap;
+ client.getColumnDescriptors(columnMap, t);
+ std::cout << "column families in " << t << ": " << std::endl;
+ for (ColMap::const_iterator it = columnMap.begin(); it != columnMap.end(); ++it) {
+ std::cout << " column: " << it->second.name << ", maxVer: " << it->second.maxVersions << std::endl;
+ }
+
+ //
+ // Test UTF-8 handling
+ //
+ std::string invalid("foo-\xfc\xa1\xa1\xa1\xa1\xa1");
+ std::string valid("foo-\xE7\x94\x9F\xE3\x83\x93\xE3\x83\xBC\xE3\x83\xAB");
+
+ // non-utf8 is fine for data
+ std::vector<Mutation> mutations;
+ mutations.push_back(Mutation());
+ mutations.back().column = "entry:foo";
+ mutations.back().value = invalid;
+ client.mutateRow(t, "foo", mutations);
+
+ // try empty strings
+ mutations.clear();
+ mutations.push_back(Mutation());
+ mutations.back().column = "entry:";
+ mutations.back().value = "";
+ client.mutateRow(t, "", mutations);
+
+ // this row name is valid utf8
+ mutations.clear();
+ mutations.push_back(Mutation());
+ mutations.back().column = "entry:foo";
+ mutations.back().value = valid;
+ client.mutateRow(t, valid, mutations);
+
+ // non-utf8 is not allowed in row names
+ try {
+ mutations.clear();
+ mutations.push_back(Mutation());
+ mutations.back().column = "entry:foo";
+ mutations.back().value = invalid;
+ client.mutateRow(t, invalid, mutations);
+ std::cout << "FATAL: shouldn't get here!" << std::endl;
+ exit(-1);
+ } catch (IOError e) {
+ std::cout << "expected error: " << e.message << std::endl;
+ }
+
+ // Run a scanner on the rows we just created
+ StrVec columnNames;
+ columnNames.push_back("entry:");
+
+ std::cout << "Starting scanner..." << std::endl;
+ int scanner = client.scannerOpen(t, "", columnNames);
+ try {
+ while (true) {
+ TRowResult value;
+ client.scannerGet(value, scanner);
+ printRow(value);
+ }
+ } catch (NotFound &nf) {
+ client.scannerClose(scanner);
+ std::cout << "Scanner finished" << std::endl;
+ }
+
+ //
+ // Run some operations on a bunch of rows.
+ //
+ for (int i = 100; i >= 0; --i) {
+ // format row keys as "00000" to "00100"
+ char buf[32];
+ sprintf(buf, "%0.5d", i);
+ std::string row(buf);
+
+ TRowResult rowResult;
+
+ mutations.clear();
+ mutations.push_back(Mutation());
+ mutations.back().column = "unused:";
+ mutations.back().value = "DELETE_ME";
+ client.mutateRow(t, row, mutations);
+ client.getRow(rowResult, t, row);
+ printRow(rowResult);
+ client.deleteAllRow(t, row);
+
+ mutations.clear();
+ mutations.push_back(Mutation());
+ mutations.back().column = "entry:num";
+ mutations.back().value = "0";
+ mutations.push_back(Mutation());
+ mutations.back().column = "entry:foo";
+ mutations.back().value = "FOO";
+ client.mutateRow(t, row, mutations);
+ client.getRow(rowResult, t, row);
+ printRow(rowResult);
+
+ // sleep to force later timestamp
+ poll(0, 0, 50);
+
+ mutations.clear();
+ mutations.push_back(Mutation());
+ mutations.back().column = "entry:foo";
+ mutations.back().isDelete = true;
+ mutations.push_back(Mutation());
+ mutations.back().column = "entry:num";
+ mutations.back().value = "-1";
+ client.mutateRow(t, row, mutations);
+ client.getRow(rowResult, t, row);
+ printRow(rowResult);
+
+ mutations.clear();
+ mutations.push_back(Mutation());
+ mutations.back().column = "entry:num";
+ mutations.back().value = boost::lexical_cast<std::string>(i);
+ mutations.push_back(Mutation());
+ mutations.back().column = "entry:sqr";
+ mutations.back().value = boost::lexical_cast<std::string>(i*i);
+ client.mutateRow(t, row, mutations);
+ client.getRow(rowResult, t, row);
+ printRow(rowResult);
+
+ mutations.clear();
+ mutations.push_back(Mutation());
+ mutations.back().column = "entry:num";
+ mutations.back().value = "-999";
+ mutations.push_back(Mutation());
+ mutations.back().column = "entry:sqr";
+ mutations.back().isDelete = true;
+ client.mutateRowTs(t, row, mutations, 1); // shouldn't override latest
+ client.getRow(rowResult, t, row);
+ printRow(rowResult);
+
+ CellVec versions;
+ client.getVer(versions, t, row, "entry:num", 10);
+ printVersions(row, versions);
+ assert(versions.size() == 4);
+ std::cout << std::endl;
+
+ try {
+ TCell value;
+ client.get(value, t, row, "entry:foo");
+ std::cout << "FATAL: shouldn't get here!" << std::endl;
+ exit(-1);
+ } catch (NotFound &nf) {
+ // blank
+ }
+ }
+
+ // scan all rows/columns
+
+ columnNames.clear();
+ client.getColumnDescriptors(columnMap, t);
+ for (ColMap::const_iterator it = columnMap.begin(); it != columnMap.end(); ++it) {
+ std::cout << "column with name: " + it->second.name << std::endl;
+ columnNames.push_back(it->second.name + ":");
+ }
+
+ std::cout << "Starting scanner..." << std::endl;
+ scanner = client.scannerOpenWithStop(t, "00020", "00040", columnNames);
+ try {
+ while (true) {
+ TRowResult value;
+ client.scannerGet(value, scanner);
+ printRow(value);
+ }
+ } catch (NotFound &nf) {
+ client.scannerClose(scanner);
+ std::cout << "Scanner finished" << std::endl;
+ }
+
+ transport->close();
+ }
+ catch (TException &tx) {
+ printf("ERROR: %s\n", tx.what());
+ }
+
+}
diff --git a/src/examples/thrift/DemoClient.java b/src/examples/thrift/DemoClient.java
new file mode 100644
index 0000000..13562e3
--- /dev/null
+++ b/src/examples/thrift/DemoClient.java
@@ -0,0 +1,331 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.thrift;
+
+import java.io.UnsupportedEncodingException;
+import java.nio.ByteBuffer;
+import java.nio.charset.CharacterCodingException;
+import java.nio.charset.Charset;
+import java.nio.charset.CharsetDecoder;
+import java.text.NumberFormat;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+import java.util.SortedMap;
+
+import org.apache.hadoop.hbase.thrift.generated.AlreadyExists;
+import org.apache.hadoop.hbase.thrift.generated.ColumnDescriptor;
+import org.apache.hadoop.hbase.thrift.generated.Hbase;
+import org.apache.hadoop.hbase.thrift.generated.IOError;
+import org.apache.hadoop.hbase.thrift.generated.IllegalArgument;
+import org.apache.hadoop.hbase.thrift.generated.Mutation;
+import org.apache.hadoop.hbase.thrift.generated.NotFound;
+import org.apache.hadoop.hbase.thrift.generated.TCell;
+import org.apache.hadoop.hbase.thrift.generated.TRowResult;
+
+import com.facebook.thrift.TException;
+import com.facebook.thrift.protocol.TBinaryProtocol;
+import com.facebook.thrift.protocol.TProtocol;
+import com.facebook.thrift.transport.TSocket;
+import com.facebook.thrift.transport.TTransport;
+
+/*
+ * Instructions:
+ * 1. Run Thrift to generate the java module HBase
+ * thrift --gen java ../../../src/java/org/apache/hadoop/hbase/thrift/Hbase.thrift
+ * 2. Acquire a jar of compiled Thrift java classes. As of this writing, HBase ships
+ * with this jar (libthrift-[VERSION].jar). If this jar is not present, or it is
+ * out-of-date with your current version of thrift, you can compile the jar
+ * yourself by executing {ant} in {$THRIFT_HOME}/lib/java.
+ * 3. Compile and execute this file with both the libthrift jar and the gen-java/
+ *    directory in the classpath. This can be done with the following commands,
+ *    run from the directory containing this file and gen-java/:
+ *
+ * javac -cp /path/to/libthrift/jar.jar:gen-java/ DemoClient.java
+ * mv DemoClient.class gen-java/org/apache/hadoop/hbase/thrift/
+ * java -cp /path/to/libthrift/jar.jar:gen-java/ org.apache.hadoop.hbase.thrift.DemoClient
+ *
+ */
+public class DemoClient {
+
+ protected int port = 9090;
+ CharsetDecoder decoder = null;
+
+ public static void main(String[] args)
+ throws IOError, TException, NotFound, UnsupportedEncodingException, IllegalArgument, AlreadyExists {
+ DemoClient client = new DemoClient();
+ client.run();
+ }
+
+ DemoClient() {
+ decoder = Charset.forName("UTF-8").newDecoder();
+ }
+
+ // Helper to translate byte[]'s to UTF8 strings
+ private String utf8(byte[] buf) {
+ try {
+ return decoder.decode(ByteBuffer.wrap(buf)).toString();
+ } catch (CharacterCodingException e) {
+ return "[INVALID UTF-8]";
+ }
+ }
+
+ // Helper to translate strings to UTF8 bytes
+ private byte[] bytes(String s) {
+ try {
+ return s.getBytes("UTF-8");
+ } catch (UnsupportedEncodingException e) {
+ e.printStackTrace();
+ return null;
+ }
+ }
+
+ private void run() throws IOError, TException, NotFound, IllegalArgument,
+ AlreadyExists {
+
+ TTransport transport = new TSocket("localhost", port);
+ TProtocol protocol = new TBinaryProtocol(transport, true, true);
+ Hbase.Client client = new Hbase.Client(protocol);
+
+ transport.open();
+
+ byte[] t = bytes("demo_table");
+
+ //
+ // Scan all tables, look for the demo table and delete it.
+ //
+ System.out.println("scanning tables...");
+ for (byte[] name : client.getTableNames()) {
+ System.out.println(" found: " + utf8(name));
+ if (utf8(name).equals(utf8(t))) {
+ if (client.isTableEnabled(name)) {
+ System.out.println(" disabling table: " + utf8(name));
+ client.disableTable(name);
+ }
+ System.out.println(" deleting table: " + utf8(name));
+ client.deleteTable(name);
+ }
+ }
+
+ //
+ // Create the demo table with two column families, entry: and unused:
+ //
+ ArrayList<ColumnDescriptor> columns = new ArrayList<ColumnDescriptor>();
+ ColumnDescriptor col = null;
+ col = new ColumnDescriptor();
+ col.name = bytes("entry:");
+ col.maxVersions = 10;
+ columns.add(col);
+ col = new ColumnDescriptor();
+ col.name = bytes("unused:");
+ columns.add(col);
+
+ System.out.println("creating table: " + utf8(t));
+ try {
+ client.createTable(t, columns);
+ } catch (AlreadyExists ae) {
+ System.out.println("WARN: " + ae.message);
+ }
+
+ System.out.println("column families in " + utf8(t) + ": ");
+ Map<byte[], ColumnDescriptor> columnMap = client.getColumnDescriptors(t);
+ for (ColumnDescriptor col2 : columnMap.values()) {
+ System.out.println(" column: " + utf8(col2.name) + ", maxVer: " + Integer.toString(col2.maxVersions));
+ }
+
+ //
+ // Test UTF-8 handling
+ //
+ byte[] invalid = { (byte) 'f', (byte) 'o', (byte) 'o', (byte) '-', (byte) 0xfc, (byte) 0xa1, (byte) 0xa1, (byte) 0xa1, (byte) 0xa1 };
+ byte[] valid = { (byte) 'f', (byte) 'o', (byte) 'o', (byte) '-', (byte) 0xE7, (byte) 0x94, (byte) 0x9F, (byte) 0xE3, (byte) 0x83, (byte) 0x93, (byte) 0xE3, (byte) 0x83, (byte) 0xBC, (byte) 0xE3, (byte) 0x83, (byte) 0xAB};
+
+ ArrayList<Mutation> mutations;
+ // non-utf8 is fine for data
+ mutations = new ArrayList<Mutation>();
+ mutations.add(new Mutation(false, bytes("entry:foo"), invalid));
+ client.mutateRow(t, bytes("foo"), mutations);
+
+ // try empty strings
+ mutations = new ArrayList<Mutation>();
+ mutations.add(new Mutation(false, bytes("entry:"), bytes("")));
+ client.mutateRow(t, bytes(""), mutations);
+
+ // this row name is valid utf8
+ mutations = new ArrayList<Mutation>();
+ mutations.add(new Mutation(false, bytes("entry:foo"), valid));
+ client.mutateRow(t, valid, mutations);
+
+ // non-utf8 is not allowed in row names
+ try {
+ mutations = new ArrayList<Mutation>();
+ mutations.add(new Mutation(false, bytes("entry:foo"), invalid));
+ client.mutateRow(t, invalid, mutations);
+ System.out.println("FATAL: shouldn't get here");
+ System.exit(-1);
+ } catch (IOError e) {
+ System.out.println("expected error: " + e.message);
+ }
+
+ // Run a scanner on the rows we just created
+ ArrayList<byte[]> columnNames = new ArrayList<byte[]>();
+ columnNames.add(bytes("entry:"));
+
+ System.out.println("Starting scanner...");
+ int scanner = client.scannerOpen(t, bytes(""), columnNames);
+ try {
+ while (true) {
+ TRowResult entry = client.scannerGet(scanner);
+ printRow(entry);
+ }
+ } catch (NotFound nf) {
+ client.scannerClose(scanner);
+ System.out.println("Scanner finished");
+ }
+
+ //
+ // Run some operations on a bunch of rows
+ //
+ for (int i = 100; i >= 0; --i) {
+ // format row keys as "00000" to "00100"
+ NumberFormat nf = NumberFormat.getInstance();
+ nf.setMinimumIntegerDigits(5);
+ nf.setGroupingUsed(false);
+ byte[] row = bytes(nf.format(i));
+
+ mutations = new ArrayList<Mutation>();
+ mutations.add(new Mutation(false, bytes("unused:"), bytes("DELETE_ME")));
+ client.mutateRow(t, row, mutations);
+ printRow(client.getRow(t, row));
+ client.deleteAllRow(t, row);
+
+ mutations = new ArrayList<Mutation>();
+ mutations.add(new Mutation(false, bytes("entry:num"), bytes("0")));
+ mutations.add(new Mutation(false, bytes("entry:foo"), bytes("FOO")));
+ client.mutateRow(t, row, mutations);
+ printRow(client.getRow(t, row));
+
+ Mutation m = null;
+ mutations = new ArrayList<Mutation>();
+ m = new Mutation();
+ m.column = bytes("entry:foo");
+ m.isDelete = true;
+ mutations.add(m);
+ m = new Mutation();
+ m.column = bytes("entry:num");
+ m.value = bytes("-1");
+ mutations.add(m);
+ client.mutateRow(t, row, mutations);
+ printRow(client.getRow(t, row));
+
+ mutations = new ArrayList<Mutation>();
+ mutations.add(new Mutation(false, bytes("entry:num"), bytes(Integer.toString(i))));
+ mutations.add(new Mutation(false, bytes("entry:sqr"), bytes(Integer.toString(i * i))));
+ client.mutateRow(t, row, mutations);
+ printRow(client.getRow(t, row));
+
+ // sleep to force later timestamp
+ try {
+ Thread.sleep(50);
+ } catch (InterruptedException e) {
+ // no-op
+ }
+
+ mutations.clear();
+ m = new Mutation();
+ m.column = bytes("entry:num");
+ m.value = bytes("-999");
+ mutations.add(m);
+      m = new Mutation();
+      m.column = bytes("entry:sqr");
+      m.isDelete = true;
+      mutations.add(m);
+      client.mutateRowTs(t, row, mutations, 1); // shouldn't override latest
+ printRow(client.getRow(t, row));
+
+ List<TCell> versions = client.getVer(t, row, bytes("entry:num"), 10);
+ printVersions(row, versions);
+ if (versions.size() != 4) {
+ System.out.println("FATAL: wrong # of versions");
+ System.exit(-1);
+ }
+
+ try {
+ client.get(t, row, bytes("entry:foo"));
+ System.out.println("FATAL: shouldn't get here");
+ System.exit(-1);
+ } catch (NotFound nf2) {
+ // blank
+ }
+
+ System.out.println("");
+ }
+
+ // scan all rows/columnNames
+
+ columnNames.clear();
+ for (ColumnDescriptor col2 : client.getColumnDescriptors(t).values()) {
+ System.out.println("column with name: " + new String(col2.name));
+ System.out.println(col2.toString());
+ columnNames.add((utf8(col2.name) + ":").getBytes());
+ }
+
+ System.out.println("Starting scanner...");
+ scanner = client.scannerOpenWithStop(t, bytes("00020"), bytes("00040"),
+ columnNames);
+ try {
+ while (true) {
+ TRowResult entry = client.scannerGet(scanner);
+ printRow(entry);
+ }
+ } catch (NotFound nf) {
+ client.scannerClose(scanner);
+ System.out.println("Scanner finished");
+ }
+
+ transport.close();
+ }
+
+ private final void printVersions(byte[] row, List<TCell> versions) {
+ StringBuilder rowStr = new StringBuilder();
+ for (TCell cell : versions) {
+ rowStr.append(utf8(cell.value));
+ rowStr.append("; ");
+ }
+ System.out.println("row: " + utf8(row) + ", values: " + rowStr);
+ }
+
+ private final void printRow(TRowResult rowResult) {
+ // copy values into a TreeMap to get them in sorted order
+
+ TreeMap<String,TCell> sorted = new TreeMap<String,TCell>();
+ for (Map.Entry<byte[], TCell> column : rowResult.columns.entrySet()) {
+ sorted.put(utf8(column.getKey()), column.getValue());
+ }
+
+ StringBuilder rowStr = new StringBuilder();
+ for (SortedMap.Entry<String, TCell> entry : sorted.entrySet()) {
+ rowStr.append(entry.getKey());
+ rowStr.append(" => ");
+ rowStr.append(utf8(entry.getValue().value));
+ rowStr.append("; ");
+ }
+ System.out.println("row: " + utf8(rowResult.row) + ", cols: " + rowStr);
+ }
+}
diff --git a/src/examples/thrift/DemoClient.php b/src/examples/thrift/DemoClient.php
new file mode 100644
index 0000000..b5ea551
--- /dev/null
+++ b/src/examples/thrift/DemoClient.php
@@ -0,0 +1,277 @@
+<?php
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+# Instructions:
+# 1. Run Thrift to generate the php module HBase
+# thrift -php ../../../src/java/org/apache/hadoop/hbase/thrift/Hbase.thrift
+# 2. Modify the import string below to point to {$THRIFT_HOME}/lib/php/src.
+# 3. Execute {php DemoClient.php}. Note that you must use php5 or higher.
+# 4. See {$THRIFT_HOME}/lib/php/README for additional help.
+
+# Change this to match your thrift root
+$GLOBALS['THRIFT_ROOT'] = '/Users/irubin/Thrift/thrift-20080411p1/lib/php/src';
+
+require_once( $GLOBALS['THRIFT_ROOT'].'/Thrift.php' );
+
+require_once( $GLOBALS['THRIFT_ROOT'].'/transport/TSocket.php' );
+require_once( $GLOBALS['THRIFT_ROOT'].'/transport/TBufferedTransport.php' );
+require_once( $GLOBALS['THRIFT_ROOT'].'/protocol/TBinaryProtocol.php' );
+
+# According to the thrift documentation, compiled PHP thrift libraries should
+# reside under the THRIFT_ROOT/packages directory. If these compiled libraries
+# are not present in this directory, move them there from gen-php/.
+require_once( $GLOBALS['THRIFT_ROOT'].'/packages/Hbase/Hbase.php' );
+
+function printRow( $rowresult ) {
+ echo( "row: {$rowresult->row}, cols: \n" );
+ $values = $rowresult->columns;
+ asort( $values );
+ foreach ( $values as $k=>$v ) {
+ echo( " {$k} => {$v->value}\n" );
+ }
+}
+
+$socket = new TSocket( 'localhost', 9090 );
+$socket->setSendTimeout( 10000 ); // Ten seconds (too long for production, but this is just a demo ;)
+$socket->setRecvTimeout( 20000 ); // Twenty seconds
+$transport = new TBufferedTransport( $socket );
+$protocol = new TBinaryProtocol( $transport );
+$client = new HbaseClient( $protocol );
+
+$transport->open();
+
+$t = 'demo_table';
+
+?><html>
+<head>
+<title>DemoClient</title>
+</head>
+<body>
+<pre>
+<?php
+
+#
+# Scan all tables, look for the demo table and delete it.
+#
+echo( "scanning tables...\n" );
+$tables = $client->getTableNames();
+sort( $tables );
+foreach ( $tables as $name ) {
+ echo( " found: {$name}\n" );
+ if ( $name == $t ) {
+ if ($client->isTableEnabled( $name )) {
+ echo( " disabling table: {$name}\n");
+ $client->disableTable( $name );
+ }
+ echo( " deleting table: {$name}\n" );
+ $client->deleteTable( $name );
+ }
+}
+
+#
+# Create the demo table with two column families, entry: and unused:
+#
+$columns = array(
+ new ColumnDescriptor( array(
+ 'name' => 'entry:',
+ 'maxVersions' => 10
+ ) ),
+ new ColumnDescriptor( array(
+ 'name' => 'unused:'
+ ) )
+);
+
+echo( "creating table: {$t}\n" );
+try {
+ $client->createTable( $t, $columns );
+} catch ( AlreadyExists $ae ) {
+ echo( "WARN: {$ae->message}\n" );
+}
+
+echo( "column families in {$t}:\n" );
+$descriptors = $client->getColumnDescriptors( $t );
+asort( $descriptors );
+foreach ( $descriptors as $col ) {
+ echo( " column: {$col->name}, maxVer: {$col->maxVersions}\n" );
+}
+
+#
+# Test UTF-8 handling
+#
+$invalid = "foo-\xfc\xa1\xa1\xa1\xa1\xa1";
+$valid = "foo-\xE7\x94\x9F\xE3\x83\x93\xE3\x83\xBC\xE3\x83\xAB";
+
+# non-utf8 is fine for data
+$mutations = array(
+ new Mutation( array(
+ 'column' => 'entry:foo',
+ 'value' => $invalid
+ ) ),
+);
+$client->mutateRow( $t, "foo", $mutations );
+
+# try empty strings
+$mutations = array(
+ new Mutation( array(
+ 'column' => 'entry:',
+ 'value' => ""
+ ) ),
+);
+$client->mutateRow( $t, "", $mutations );
+
+# this row name is valid utf8
+$mutations = array(
+ new Mutation( array(
+ 'column' => 'entry:foo',
+ 'value' => $valid
+ ) ),
+);
+$client->mutateRow( $t, $valid, $mutations );
+
+# non-utf8 is not allowed in row names
+try {
+ $mutations = array(
+ new Mutation( array(
+ 'column' => 'entry:foo',
+ 'value' => $invalid
+ ) ),
+ );
+ $client->mutateRow( $t, $invalid, $mutations );
+ throw new Exception( "shouldn't get here!" );
+} catch ( IOError $e ) {
+ echo( "expected error: {$e->message}\n" );
+}
+
+# Run a scanner on the rows we just created
+echo( "Starting scanner...\n" );
+$scanner = $client->scannerOpen( $t, "", array( "entry:" ) );
+try {
+ while (true) printRow( $client->scannerGet( $scanner ) );
+} catch ( NotFound $nf ) {
+ $client->scannerClose( $scanner );
+ echo( "Scanner finished\n" );
+}
+
+#
+# Run some operations on a bunch of rows.
+#
+for ($e=100; $e>=0; $e--) {
+
+ # format row keys as "00000" to "00100"
+ $row = str_pad( $e, 5, '0', STR_PAD_LEFT );
+
+ $mutations = array(
+ new Mutation( array(
+ 'column' => 'unused:',
+ 'value' => "DELETE_ME"
+ ) ),
+ );
+ $client->mutateRow( $t, $row, $mutations);
+ printRow( $client->getRow( $t, $row ));
+ $client->deleteAllRow( $t, $row );
+
+ $mutations = array(
+ new Mutation( array(
+ 'column' => 'entry:num',
+ 'value' => "0"
+ ) ),
+ new Mutation( array(
+ 'column' => 'entry:foo',
+ 'value' => "FOO"
+ ) ),
+ );
+ $client->mutateRow( $t, $row, $mutations );
+ printRow( $client->getRow( $t, $row ));
+
+ $mutations = array(
+ new Mutation( array(
+ 'column' => 'entry:foo',
+ 'isDelete' => 1
+ ) ),
+ new Mutation( array(
+ 'column' => 'entry:num',
+ 'value' => '-1'
+ ) ),
+ );
+ $client->mutateRow( $t, $row, $mutations );
+ printRow( $client->getRow( $t, $row ) );
+
+ $mutations = array(
+ new Mutation( array(
+ 'column' => "entry:num",
+ 'value' => $e
+ ) ),
+ new Mutation( array(
+ 'column' => "entry:sqr",
+ 'value' => $e * $e
+ ) ),
+ );
+ $client->mutateRow( $t, $row, $mutations );
+ printRow( $client->getRow( $t, $row ));
+
+ $mutations = array(
+ new Mutation( array(
+ 'column' => 'entry:num',
+ 'value' => '-999'
+ ) ),
+ new Mutation( array(
+ 'column' => 'entry:sqr',
+ 'isDelete' => 1
+ ) ),
+ );
+ $client->mutateRowTs( $t, $row, $mutations, 1 ); # shouldn't override latest
+ printRow( $client->getRow( $t, $row ) );
+
+ $versions = $client->getVer( $t, $row, "entry:num", 10 );
+ echo( "row: {$row}, values: \n" );
+ foreach ( $versions as $v ) echo( " {$v->value};\n" );
+
+ try {
+ $client->get( $t, $row, "entry:foo");
+ throw new Exception ( "shouldn't get here! " );
+ } catch ( NotFound $nf ) {
+ # blank
+ }
+
+}
+
+$columns = array();
+foreach ( $client->getColumnDescriptors($t) as $col=>$desc ) {
+ echo("column with name: {$desc->name}\n");
+ $columns[] = $desc->name.":";
+}
+
+echo( "Starting scanner...\n" );
+$scanner = $client->scannerOpenWithStop( $t, "00020", "00040", $columns );
+try {
+ while (true) printRow( $client->scannerGet( $scanner ) );
+} catch ( NotFound $nf ) {
+ $client->scannerClose( $scanner );
+ echo( "Scanner finished\n" );
+}
+
+$transport->close();
+
+?>
+</pre>
+</body>
+</html>
+
diff --git a/src/examples/thrift/DemoClient.py b/src/examples/thrift/DemoClient.py
new file mode 100755
index 0000000..0fb22b6
--- /dev/null
+++ b/src/examples/thrift/DemoClient.py
@@ -0,0 +1,213 @@
+#!/usr/bin/python
+'''Copyright 2008 The Apache Software Foundation
+
+ Licensed to the Apache Software Foundation (ASF) under one
+ or more contributor license agreements. See the NOTICE file
+ distributed with this work for additional information
+ regarding copyright ownership. The ASF licenses this file
+ to you under the Apache License, Version 2.0 (the
+ "License"); you may not use this file except in compliance
+ with the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+'''
+# Instructions:
+# 1. Run Thrift to generate the python module HBase
+# thrift --gen py ../../../src/java/org/apache/hadoop/hbase/thrift/Hbase.thrift
+# 2. Create a directory of your choosing that contains:
+# a. This file (DemoClient.py).
+# b. The directory gen-py/hbase (generated by instruction step 1).
+# c. The directory {$THRIFT_HOME}/lib/py/build/lib.{YOUR_SYSTEM}/thrift.
+#    Or, modify the import statements below such that this file can access the
+#    directories from steps 2b and 2c.
+# 3. Execute {python DemoClient.py}.
+
+import sys
+import time
+
+from thrift import Thrift
+from thrift.transport import TSocket, TTransport
+from thrift.protocol import TBinaryProtocol
+from hbase import ttypes
+from hbase.Hbase import Client, ColumnDescriptor, Mutation
+
+def printVersions(row, versions):
+ print "row: " + row + ", values: ",
+ for cell in versions:
+ print cell.value + "; ",
+ print
+
+def printRow(entry):
+ print "row: " + entry.row + ", cols:",
+ for k in sorted(entry.columns):
+ print k + " => " + entry.columns[k].value,
+ print
+
+# Make socket
+transport = TSocket.TSocket('localhost', 9090)
+
+# Buffering is critical. Raw sockets are very slow
+transport = TTransport.TBufferedTransport(transport)
+
+# Wrap in a protocol
+protocol = TBinaryProtocol.TBinaryProtocol(transport)
+
+# Create a client to use the protocol encoder
+client = Client(protocol)
+
+# Connect!
+transport.open()
+
+t = "demo_table"
+
+#
+# Scan all tables, look for the demo table and delete it.
+#
+print "scanning tables..."
+for table in client.getTableNames():
+ print " found: %s" %(table)
+ if table == t:
+ if client.isTableEnabled(table):
+ print " disabling table: %s" %(t)
+ client.disableTable(table)
+ print " deleting table: %s" %(t)
+ client.deleteTable(table)
+
+columns = []
+col = ColumnDescriptor()
+col.name = 'entry:'
+col.maxVersions = 10
+columns.append(col)
+col = ColumnDescriptor()
+col.name = 'unused:'
+columns.append(col)
+
+try:
+ print "creating table: %s" %(t)
+ client.createTable(t, columns)
+except ttypes.AlreadyExists, ae:
+ print "WARN: " + ae.message
+
+cols = client.getColumnDescriptors(t)
+print "column families in %s" %(t)
+for col_name in cols.keys():
+ col = cols[col_name]
+ print " column: %s, maxVer: %d" % (col.name, col.maxVersions)
+#
+# Test UTF-8 handling
+#
+invalid = "foo-\xfc\xa1\xa1\xa1\xa1\xa1"
+valid = "foo-\xE7\x94\x9F\xE3\x83\x93\xE3\x83\xBC\xE3\x83\xAB";
+
+# non-utf8 is fine for data
+mutations = [Mutation({"column":"entry:foo", "value":invalid})]
+client.mutateRow(t, "foo", mutations)
+
+# try empty strings
+mutations = [Mutation({"column":"entry:", "value":""})]
+client.mutateRow(t, "", mutations)
+
+# this row name is valid utf8
+mutations = [Mutation({"column":"entry:foo", "value":valid})]
+client.mutateRow(t, valid, mutations)
+
+# non-utf8 is not allowed in row names
+try:
+ mutations = [Mutation({"column":"entry:foo", "value":invalid})]
+ client.mutateRow(t, invalid, mutations)
+except ttypes.IOError, e:
+ print 'expected exception: %s' %(e.message)
+
+# Run a scanner on the rows we just created
+print "Starting scanner..."
+scanner = client.scannerOpen(t, "", ["entry:"])
+try:
+ while 1:
+ printRow(client.scannerGet(scanner))
+except ttypes.NotFound, e:
+  client.scannerClose(scanner)
+  print "Scanner finished"
+
+#
+# Run some operations on a bunch of rows.
+#
+for e in range(100, -1, -1):
+ # format row keys as "00000" to "00100"
+ row = "%0.5d" % (e)
+
+ mutations = [Mutation({"column":"unused:", "value":"DELETE_ME"})]
+ client.mutateRow(t, row, mutations)
+ printRow(client.getRow(t, row))
+ client.deleteAllRow(t, row)
+
+ mutations = [Mutation({"column":"entry:num", "value":"0"}),
+ Mutation({"column":"entry:foo", "value":"FOO"})]
+ client.mutateRow(t, row, mutations)
+ printRow(client.getRow(t, row));
+
+ mutations = []
+ m = Mutation()
+ m.column = "entry:foo"
+ m.isDelete = 1
+ mutations.append(m)
+ m = Mutation()
+ m.column = "entry:num"
+ m.value = "-1"
+ mutations.append(m)
+ client.mutateRow(t, row, mutations)
+ printRow(client.getRow(t, row))
+
+ mutations = [Mutation({"column":"entry:num", "value":str(e)}),
+ Mutation({"column":"entry:sqr", "value":str(e*e)})]
+ client.mutateRow(t, row, mutations)
+ printRow(client.getRow(t, row));
+
+ time.sleep(0.05)
+
+ mutations = []
+ m = Mutation()
+ m.column = "entry:num"
+ m.value = "-999"
+ mutations.append(m)
+ m = Mutation()
+ m.column = "entry:sqr"
+ m.isDelete = 1
+ mutations.append(m)
+ client.mutateRowTs(t, row, mutations, 1) # shouldn't override latest
+ printRow(client.getRow(t, row))
+
+ versions = client.getVer(t, row, "entry:num", 10)
+ printVersions(row, versions)
+ if len(versions) != 4:
+ print("FATAL: wrong # of versions")
+ sys.exit(-1)
+
+ try:
+ client.get(t, row, "entry:foo")
+    raise Exception("shouldn't get here!")
+ except ttypes.NotFound, e:
+ pass
+
+ print
+
+columnNames = []
+for (col, desc) in client.getColumnDescriptors(t).items():
+ print "column with name: "+desc.name
+ print desc
+ columnNames.append(desc.name+":")
+
+print "Starting scanner..."
+scanner = client.scannerOpenWithStop(t, "00020", "00040", columnNames)
+try:
+ while 1:
+ printRow(client.scannerGet(scanner))
+except ttypes.NotFound:
+ client.scannerClose(scanner)
+ print "Scanner finished"
+
+transport.close()
diff --git a/src/examples/thrift/DemoClient.rb b/src/examples/thrift/DemoClient.rb
new file mode 100644
index 0000000..84f8818
--- /dev/null
+++ b/src/examples/thrift/DemoClient.rb
@@ -0,0 +1,245 @@
+#!/usr/bin/ruby
+
+# Copyright 2008 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Instructions:
+# 1. Run Thrift to generate the ruby module HBase
+# thrift --gen rb ../../../src/java/org/apache/hadoop/hbase/thrift/Hbase.thrift
+# 2. Modify the import string below to point to {$THRIFT_HOME}/lib/rb/lib.
+# 3. Execute {ruby DemoClient.rb}.
+
+# You will need to modify this import string:
+$:.push('~/Thrift/thrift-20080411p1/lib/rb/lib')
+$:.push('./gen-rb')
+
+require 'thrift/transport/tsocket'
+require 'thrift/protocol/tbinaryprotocol'
+
+require 'Hbase'
+
+def printRow(rowresult)
+ print "row: #{rowresult.row}, cols: "
+ rowresult.columns.sort.each do |k,v|
+ print "#{k} => #{v.value}; "
+ end
+ puts ""
+end
+
+transport = TBufferedTransport.new(TSocket.new("localhost", 9090))
+protocol = TBinaryProtocol.new(transport)
+client = Apache::Hadoop::Hbase::Thrift::Hbase::Client.new(protocol)
+
+transport.open()
+
+t = "demo_table"
+
+#
+# Scan all tables, look for the demo table and delete it.
+#
+puts "scanning tables..."
+client.getTableNames().sort.each do |name|
+ puts " found: #{name}"
+ if (name == t)
+ if (client.isTableEnabled(name))
+ puts " disabling table: #{name}"
+ client.disableTable(name)
+ end
+ puts " deleting table: #{name}"
+ client.deleteTable(name)
+ end
+end
+
+#
+# Create the demo table with two column families, entry: and unused:
+#
+columns = []
+col = Apache::Hadoop::Hbase::Thrift::ColumnDescriptor.new
+col.name = "entry:"
+col.maxVersions = 10
+columns << col;
+col = Apache::Hadoop::Hbase::Thrift::ColumnDescriptor.new
+col.name = "unused:"
+columns << col;
+
+puts "creating table: #{t}"
+begin
+ client.createTable(t, columns)
+rescue Apache::Hadoop::Hbase::Thrift::AlreadyExists => ae
+ puts "WARN: #{ae.message}"
+end
+
+puts "column families in #{t}: "
+client.getColumnDescriptors(t).sort.each do |key, col|
+ puts " column: #{col.name}, maxVer: #{col.maxVersions}"
+end
+
+#
+# Test UTF-8 handling
+#
+invalid = "foo-\xfc\xa1\xa1\xa1\xa1\xa1"
+valid = "foo-\xE7\x94\x9F\xE3\x83\x93\xE3\x83\xBC\xE3\x83\xAB";
+
+# non-utf8 is fine for data
+mutations = []
+m = Apache::Hadoop::Hbase::Thrift::Mutation.new
+m.column = "entry:foo"
+m.value = invalid
+mutations << m
+client.mutateRow(t, "foo", mutations)
+
+# try empty strings
+mutations = []
+m = Apache::Hadoop::Hbase::Thrift::Mutation.new
+m.column = "entry:"
+m.value = ""
+mutations << m
+client.mutateRow(t, "", mutations)
+
+# this row name is valid utf8
+mutations = []
+m = Apache::Hadoop::Hbase::Thrift::Mutation.new
+m.column = "entry:foo"
+m.value = valid
+mutations << m
+client.mutateRow(t, valid, mutations)
+
+# non-utf8 is not allowed in row names
+begin
+ mutations = []
+ m = Apache::Hadoop::Hbase::Thrift::Mutation.new
+ m.column = "entry:foo"
+ m.value = invalid
+ mutations << m
+ client.mutateRow(t, invalid, mutations)
+ raise "shouldn't get here!"
+rescue Apache::Hadoop::Hbase::Thrift::IOError => e
+ puts "expected error: #{e.message}"
+end
+
+# Run a scanner on the rows we just created
+puts "Starting scanner..."
+scanner = client.scannerOpen(t, "", ["entry:"])
+begin
+ while (true)
+ printRow(client.scannerGet(scanner))
+ end
+rescue Apache::Hadoop::Hbase::Thrift::NotFound => nf
+ client.scannerClose(scanner)
+ puts "Scanner finished"
+end
+
+#
+# Run some operations on a bunch of rows.
+#
+(0..100).to_a.reverse.each do |e|
+ # format row keys as "00000" to "00100"
+ row = format("%0.5d", e)
+
+ mutations = []
+ m = Apache::Hadoop::Hbase::Thrift::Mutation.new
+ m.column = "unused:"
+ m.value = "DELETE_ME"
+ mutations << m
+ client.mutateRow(t, row, mutations)
+ printRow(client.getRow(t, row))
+ client.deleteAllRow(t, row)
+
+ mutations = []
+ m = Apache::Hadoop::Hbase::Thrift::Mutation.new
+ m.column = "entry:num"
+ m.value = "0"
+ mutations << m
+ m = Apache::Hadoop::Hbase::Thrift::Mutation.new
+ m.column = "entry:foo"
+ m.value = "FOO"
+ mutations << m
+ client.mutateRow(t, row, mutations)
+ printRow(client.getRow(t, row))
+
+ mutations = []
+ m = Apache::Hadoop::Hbase::Thrift::Mutation.new
+ m.column = "entry:foo"
+ m.isDelete = 1
+ mutations << m
+ m = Apache::Hadoop::Hbase::Thrift::Mutation.new
+ m.column = "entry:num"
+ m.value = "-1"
+ mutations << m
+ client.mutateRow(t, row, mutations)
+ printRow(client.getRow(t, row));
+
+ mutations = []
+ m = Apache::Hadoop::Hbase::Thrift::Mutation.new
+ m.column = "entry:num"
+ m.value = e.to_s
+ mutations << m
+ m = Apache::Hadoop::Hbase::Thrift::Mutation.new
+ m.column = "entry:sqr"
+ m.value = (e*e).to_s
+ mutations << m
+ client.mutateRow(t, row, mutations)
+ printRow(client.getRow(t, row))
+
+ mutations = []
+ m = Apache::Hadoop::Hbase::Thrift::Mutation.new
+ m.column = "entry:num"
+ m.value = "-999"
+ mutations << m
+ m = Apache::Hadoop::Hbase::Thrift::Mutation.new
+ m.column = "entry:sqr"
+ m.isDelete = 1
+ mutations << m
+ client.mutateRowTs(t, row, mutations, 1) # shouldn't override latest
+ printRow(client.getRow(t, row));
+
+ versions = client.getVer(t, row, "entry:num", 10)
+ print "row: #{row}, values: "
+ versions.each do |v|
+ print "#{v.value}; "
+ end
+ puts ""
+
+ begin
+ client.get(t, row, "entry:foo")
+ raise "shouldn't get here!"
+ rescue Apache::Hadoop::Hbase::Thrift::NotFound => nf
+ # blank
+ end
+
+ puts ""
+end
+
+columns = []
+client.getColumnDescriptors(t).each do |col, desc|
+ puts "column with name: #{desc.name}"
+ columns << desc.name + ":"
+end
+
+puts "Starting scanner..."
+scanner = client.scannerOpenWithStop(t, "00020", "00040", columns)
+begin
+ while (true)
+ printRow(client.scannerGet(scanner))
+ end
+rescue Apache::Hadoop::Hbase::Thrift::NotFound => nf
+ client.scannerClose(scanner)
+ puts "Scanner finished"
+end
+
+transport.close()
diff --git a/src/examples/thrift/Makefile b/src/examples/thrift/Makefile
new file mode 100644
index 0000000..691a1e9
--- /dev/null
+++ b/src/examples/thrift/Makefile
@@ -0,0 +1,35 @@
+# Copyright 2008 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Makefile for C++ Hbase Thrift DemoClient
+# NOTE: run 'thrift -cpp Hbase.thrift' first
+
+THRIFT_DIR = /usr/local/include/thrift
+LIB_DIR = /usr/local/lib
+
+GEN_SRC = ./gen-cpp/Hbase.cpp \
+ ./gen-cpp/Hbase_types.cpp \
+ ./gen-cpp/Hbase_constants.cpp
+
+default: DemoClient
+
+DemoClient: DemoClient.cpp
+ g++ -o DemoClient -I${THRIFT_DIR} -I./gen-cpp -L${LIB_DIR} -lthrift DemoClient.cpp ${GEN_SRC}
+
+clean:
+ rm -rf DemoClient
diff --git a/src/examples/thrift/README.txt b/src/examples/thrift/README.txt
new file mode 100644
index 0000000..c742f8d
--- /dev/null
+++ b/src/examples/thrift/README.txt
@@ -0,0 +1,16 @@
+Hbase Thrift Client Examples
+============================
+
+Included in this directory are sample clients of the HBase ThriftServer. They
+all perform the same actions and are implemented in C++, Java, Ruby, PHP, and
+Python.
+
+To compile and run these clients, you will first need to install the thrift package
+(from http://developers.facebook.com/thrift/) and then run thrift to generate
+the language files:
+
+thrift --gen cpp --gen java --gen rb --gen py -php \
+ ../../../src/java/org/apache/hadoop/hbase/thrift/Hbase.thrift
+
+See the individual DemoClient test files for more specific instructions on
+running each test.
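+
+For example, building and running the C++ client might look like the following
+(a sketch: it assumes the thrift compiler and libthrift are installed, and that
+an HBase ThriftServer is listening on localhost:9090 as assumed by
+DemoClient.cpp and the Makefile in this directory):
+
+  thrift --gen cpp ../../../src/java/org/apache/hadoop/hbase/thrift/Hbase.thrift
+  make
+  ./DemoClient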
diff --git a/src/examples/uploaders/hbrep/HBaseConnection.py b/src/examples/uploaders/hbrep/HBaseConnection.py
new file mode 100644
index 0000000..d8006b2
--- /dev/null
+++ b/src/examples/uploaders/hbrep/HBaseConnection.py
@@ -0,0 +1,39 @@
+import sys, os
+
+from Hbase.ttypes import *
+from Hbase import Hbase
+
+from thrift import Thrift
+from thrift.transport import TSocket, TTransport
+from thrift.protocol import TBinaryProtocol
+
+class HBaseConnection:
+ def __init__(self, hostname, port):
+ # Make socket
+ self.transport = TSocket.TSocket(hostname, port)
+ # Buffering is critical. Raw sockets are very slow
+ self.transport = TTransport.TBufferedTransport(self.transport)
+ # Wrap in a protocol
+ self.protocol = TBinaryProtocol.TBinaryProtocol(self.transport)
+ # Create a client to use the protocol encoder
+ self.client = Hbase.Client(self.protocol)
+
+ def connect(self):
+ self.transport.open()
+
+ def disconnect(self):
+ self.transport.close()
+
+ def validate_column_descriptors(self, table_name, column_descriptors):
+ hbase_families = self.client.getColumnDescriptors(table_name)
+ for col_desc in column_descriptors:
+ family, column = col_desc.split(":")
+ if not family in hbase_families:
+ raise Exception("Invalid column descriptor \"%s\" for hbase table \"%s\"" % (col_desc,table_name))
+
+ def validate_table_name(self, table_name):
+ if not table_name in self.client.getTableNames():
+ raise Exception("hbase table '%s' not found." % (table_name))
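+
+# Example usage (a sketch; assumes an HBase ThriftServer on localhost:9090 and
+# an existing table named "demo_table", e.g. as created by the thrift
+# DemoClient examples):
+#
+#   conn = HBaseConnection("localhost", 9090)
+#   conn.connect()
+#   print conn.client.getTableNames()
+#   conn.validate_table_name("demo_table")
+#   conn.disconnect()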
+
+
+
\ No newline at end of file
diff --git a/src/examples/uploaders/hbrep/HBaseConsumer.py b/src/examples/uploaders/hbrep/HBaseConsumer.py
new file mode 100644
index 0000000..175f331
--- /dev/null
+++ b/src/examples/uploaders/hbrep/HBaseConsumer.py
@@ -0,0 +1,90 @@
+import sys, os, pgq, skytools, ConfigParser
+
+from thrift import Thrift
+from thrift.transport import TSocket, TTransport
+from thrift.protocol import TBinaryProtocol
+
+from HBaseConnection import *
+import tablemapping
+
+INSERT = 'I'
+UPDATE = 'U'
+DELETE = 'D'
+
+class HBaseConsumer(pgq.Consumer):
+ """HBaseConsumer is a pgq.Consumer that sends processed events to hbase as mutations."""
+
+ def __init__(self, service_name, args):
+ pgq.Consumer.__init__(self, service_name, "postgresql_db", args)
+
+ config_file = self.args[0]
+ if len(self.args) < 2:
+ print "need table names"
+ sys.exit(1)
+ else:
+ self.table_names = self.args[1:]
+
+ #just to check this option exists
+ self.cf.get("postgresql_db")
+
+ self.max_batch_size = int(self.cf.get("max_batch_size", "10000"))
+ self.hbase_hostname = self.cf.get("hbase_hostname", "localhost")
+ self.hbase_port = int(self.cf.get("hbase_port", "9090"))
+ self.row_limit = int(self.cf.get("bootstrap_row_limit", 0))
+ self.table_mappings = tablemapping.load_table_mappings(config_file, self.table_names)
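+
+    # A sketch of the consumer-specific options read above (values are
+    # hypothetical; the usual pgq/skytools settings, such as the queue name,
+    # are also required, and the ini section is assumed to be named after the
+    # service, "HBaseReplic"). Per-table mappings are loaded separately via
+    # tablemapping.load_table_mappings():
+    #
+    #   [HBaseReplic]
+    #   postgresql_db = dbname=source_db host=127.0.0.1
+    #   max_batch_size = 10000
+    #   hbase_hostname = localhost
+    #   hbase_port = 9090
+    #   bootstrap_row_limit = 0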
+
+ def process_batch(self, source_db, batch_id, event_list):
+ try:
+ self.log.debug("processing batch %s" % (batch_id))
+ hbase = HBaseConnection(self.hbase_hostname, self.hbase_port)
+ try:
+ self.log.debug("Connecting to HBase")
+ hbase.connect()
+
+ i = 0L
+ for event in event_list:
+ i = i+1
+ self.process_event(event, hbase)
+ print "%i events processed" % (i)
+
+ except Exception, e:
+ #self.log.info(e)
+ sys.exit(e)
+
+ finally:
+ hbase.disconnect()
+
+ def process_event(self, event, hbase):
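+    # Translate one pgq event (insert/update/delete on a mapped source table)
+    # into a batch of mutations on the mapped hbase table, apply it, and mark
+    # the event done.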
+ if event.ev_extra1 in self.table_mappings:
+ table_mapping = self.table_mappings[event.ev_extra1]
+ else:
+ self.log.info("table name not found in config, skipping event")
+ return
+ #hbase.validate_table_name(table_mapping.hbase_table_name)
+ #hbase.validate_column_descriptors(table_mapping.hbase_table_name, table_mapping.hbase_column_descriptors)
+ event_data = skytools.db_urldecode(event.data)
+ event_type = event.type.split(':')[0]
+
+ batch = BatchMutation()
+ batch.row = table_mapping.hbase_row_prefix + str(event_data[table_mapping.psql_key_column])
+
+ batch.mutations = []
+ for psql_column, hbase_column in zip(table_mapping.psql_columns, table_mapping.hbase_column_descriptors):
+ if event_type == INSERT or event_type == UPDATE:
+ m = Mutation()
+ m.column = hbase_column
+ m.value = str(event_data[psql_column])
+ elif event_type == DELETE:
+ # delete this column entry
+ m = Mutation()
+ m.isDelete = True
+ m.column = hbase_column
+ else:
+ raise Exception("Invalid event type: %s, event data was: %s" % (event_type, str(event_data)))
+ batch.mutations.append(m)
+ hbase.client.mutateRow(table_mapping.hbase_table_name, batch.row, batch.mutations)
+ event.tag_done()
+
+if __name__ == '__main__':
+ script = HBaseConsumer("HBaseReplic",sys.argv[1:])
+ script.start()
diff --git a/src/examples/uploaders/hbrep/Hbase/Hbase-remote b/src/examples/uploaders/hbrep/Hbase/Hbase-remote
new file mode 100755
index 0000000..2ec9e95
--- /dev/null
+++ b/src/examples/uploaders/hbrep/Hbase/Hbase-remote
@@ -0,0 +1,247 @@
+#!/usr/bin/env python
+#
+# Autogenerated by Thrift
+#
+# DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+#
+
+import sys
+import pprint
+from urlparse import urlparse
+from thrift.transport import TTransport
+from thrift.transport import TSocket
+from thrift.transport import THttpClient
+from thrift.protocol import TBinaryProtocol
+
+import Hbase
+from ttypes import *
+
+if len(sys.argv) <= 1 or sys.argv[1] == '--help':
+ print ''
+ print 'Usage: ' + sys.argv[0] + ' [-h host:port] [-u url] [-f[ramed]] function [arg1 [arg2...]]'
+ print ''
+ print 'Functions:'
+ print ' getTableNames()'
+ print ' getColumnDescriptors(Text tableName)'
+ print ' getTableRegions(Text tableName)'
+ print ' void createTable(Text tableName, columnFamilies)'
+ print ' void deleteTable(Text tableName)'
+ print ' Bytes get(Text tableName, Text row, Text column)'
+ print ' getVer(Text tableName, Text row, Text column, i32 numVersions)'
+ print ' getVerTs(Text tableName, Text row, Text column, i64 timestamp, i32 numVersions)'
+ print ' getRow(Text tableName, Text row)'
+ print ' getRowTs(Text tableName, Text row, i64 timestamp)'
+ print ' void put(Text tableName, Text row, Text column, Bytes value)'
+ print ' void mutateRow(Text tableName, Text row, mutations)'
+ print ' void mutateRowTs(Text tableName, Text row, mutations, i64 timestamp)'
+ print ' void mutateRows(Text tableName, rowBatches)'
+ print ' void mutateRowsTs(Text tableName, rowBatches, i64 timestamp)'
+ print ' void deleteAll(Text tableName, Text row, Text column)'
+ print ' void deleteAllTs(Text tableName, Text row, Text column, i64 timestamp)'
+ print ' void deleteAllRow(Text tableName, Text row)'
+ print ' void deleteAllRowTs(Text tableName, Text row, i64 timestamp)'
+ print ' ScannerID scannerOpen(Text tableName, Text startRow, columns)'
+ print ' ScannerID scannerOpenWithStop(Text tableName, Text startRow, Text stopRow, columns)'
+ print ' ScannerID scannerOpenTs(Text tableName, Text startRow, columns, i64 timestamp)'
+ print ' ScannerID scannerOpenWithStopTs(Text tableName, Text startRow, Text stopRow, columns, i64 timestamp)'
+ print ' ScanEntry scannerGet(ScannerID id)'
+ print ' void scannerClose(ScannerID id)'
+ print ''
+ sys.exit(0)
+
+pp = pprint.PrettyPrinter(indent = 2)
+host = 'localhost'
+port = 9090
+uri = ''
+framed = False
+http = False
+argi = 1
+
+if sys.argv[argi] == '-h':
+ parts = sys.argv[argi+1].split(':')
+ host = parts[0]
+ port = int(parts[1])
+ argi += 2
+
+if sys.argv[argi] == '-u':
+ url = urlparse(sys.argv[argi+1])
+ parts = url[1].split(':')
+ host = parts[0]
+ if len(parts) > 1:
+ port = int(parts[1])
+ else:
+ port = 80
+ uri = url[2]
+ http = True
+ argi += 2
+
+if sys.argv[argi] == '-f' or sys.argv[argi] == '-framed':
+ framed = True
+ argi += 1
+
+cmd = sys.argv[argi]
+args = sys.argv[argi+1:]
+
+if http:
+ transport = THttpClient.THttpClient(host, port, uri)
+else:
+ socket = TSocket.TSocket(host, port)
+ if framed:
+ transport = TTransport.TFramedTransport(socket)
+ else:
+ transport = TTransport.TBufferedTransport(socket)
+protocol = TBinaryProtocol.TBinaryProtocol(transport)
+client = Hbase.Client(protocol)
+transport.open()
+
+if cmd == 'getTableNames':
+ if len(args) != 0:
+ print 'getTableNames requires 0 args'
+ sys.exit(1)
+ pp.pprint(client.getTableNames())
+
+elif cmd == 'getColumnDescriptors':
+ if len(args) != 1:
+ print 'getColumnDescriptors requires 1 args'
+ sys.exit(1)
+ pp.pprint(client.getColumnDescriptors(eval(args[0]),))
+
+elif cmd == 'getTableRegions':
+ if len(args) != 1:
+ print 'getTableRegions requires 1 args'
+ sys.exit(1)
+ pp.pprint(client.getTableRegions(eval(args[0]),))
+
+elif cmd == 'createTable':
+ if len(args) != 2:
+ print 'createTable requires 2 args'
+ sys.exit(1)
+ pp.pprint(client.createTable(eval(args[0]),eval(args[1]),))
+
+elif cmd == 'deleteTable':
+ if len(args) != 1:
+ print 'deleteTable requires 1 args'
+ sys.exit(1)
+ pp.pprint(client.deleteTable(eval(args[0]),))
+
+elif cmd == 'get':
+ if len(args) != 3:
+ print 'get requires 3 args'
+ sys.exit(1)
+ pp.pprint(client.get(eval(args[0]),eval(args[1]),eval(args[2]),))
+
+elif cmd == 'getVer':
+ if len(args) != 4:
+ print 'getVer requires 4 args'
+ sys.exit(1)
+ pp.pprint(client.getVer(eval(args[0]),eval(args[1]),eval(args[2]),eval(args[3]),))
+
+elif cmd == 'getVerTs':
+ if len(args) != 5:
+ print 'getVerTs requires 5 args'
+ sys.exit(1)
+ pp.pprint(client.getVerTs(eval(args[0]),eval(args[1]),eval(args[2]),eval(args[3]),eval(args[4]),))
+
+elif cmd == 'getRow':
+ if len(args) != 2:
+ print 'getRow requires 2 args'
+ sys.exit(1)
+ pp.pprint(client.getRow(eval(args[0]),eval(args[1]),))
+
+elif cmd == 'getRowTs':
+ if len(args) != 3:
+ print 'getRowTs requires 3 args'
+ sys.exit(1)
+ pp.pprint(client.getRowTs(eval(args[0]),eval(args[1]),eval(args[2]),))
+
+elif cmd == 'put':
+ if len(args) != 4:
+ print 'put requires 4 args'
+ sys.exit(1)
+ pp.pprint(client.put(eval(args[0]),eval(args[1]),eval(args[2]),eval(args[3]),))
+
+elif cmd == 'mutateRow':
+ if len(args) != 3:
+ print 'mutateRow requires 3 args'
+ sys.exit(1)
+ pp.pprint(client.mutateRow(eval(args[0]),eval(args[1]),eval(args[2]),))
+
+elif cmd == 'mutateRowTs':
+ if len(args) != 4:
+ print 'mutateRowTs requires 4 args'
+ sys.exit(1)
+ pp.pprint(client.mutateRowTs(eval(args[0]),eval(args[1]),eval(args[2]),eval(args[3]),))
+
+elif cmd == 'mutateRows':
+ if len(args) != 2:
+ print 'mutateRows requires 2 args'
+ sys.exit(1)
+ pp.pprint(client.mutateRows(eval(args[0]),eval(args[1]),))
+
+elif cmd == 'mutateRowsTs':
+ if len(args) != 3:
+ print 'mutateRowsTs requires 3 args'
+ sys.exit(1)
+ pp.pprint(client.mutateRowsTs(eval(args[0]),eval(args[1]),eval(args[2]),))
+
+elif cmd == 'deleteAll':
+ if len(args) != 3:
+ print 'deleteAll requires 3 args'
+ sys.exit(1)
+ pp.pprint(client.deleteAll(eval(args[0]),eval(args[1]),eval(args[2]),))
+
+elif cmd == 'deleteAllTs':
+ if len(args) != 4:
+ print 'deleteAllTs requires 4 args'
+ sys.exit(1)
+ pp.pprint(client.deleteAllTs(eval(args[0]),eval(args[1]),eval(args[2]),eval(args[3]),))
+
+elif cmd == 'deleteAllRow':
+ if len(args) != 2:
+ print 'deleteAllRow requires 2 args'
+ sys.exit(1)
+ pp.pprint(client.deleteAllRow(eval(args[0]),eval(args[1]),))
+
+elif cmd == 'deleteAllRowTs':
+ if len(args) != 3:
+ print 'deleteAllRowTs requires 3 args'
+ sys.exit(1)
+ pp.pprint(client.deleteAllRowTs(eval(args[0]),eval(args[1]),eval(args[2]),))
+
+elif cmd == 'scannerOpen':
+ if len(args) != 3:
+ print 'scannerOpen requires 3 args'
+ sys.exit(1)
+ pp.pprint(client.scannerOpen(eval(args[0]),eval(args[1]),eval(args[2]),))
+
+elif cmd == 'scannerOpenWithStop':
+ if len(args) != 4:
+ print 'scannerOpenWithStop requires 4 args'
+ sys.exit(1)
+ pp.pprint(client.scannerOpenWithStop(eval(args[0]),eval(args[1]),eval(args[2]),eval(args[3]),))
+
+elif cmd == 'scannerOpenTs':
+ if len(args) != 4:
+ print 'scannerOpenTs requires 4 args'
+ sys.exit(1)
+ pp.pprint(client.scannerOpenTs(eval(args[0]),eval(args[1]),eval(args[2]),eval(args[3]),))
+
+elif cmd == 'scannerOpenWithStopTs':
+ if len(args) != 5:
+ print 'scannerOpenWithStopTs requires 5 args'
+ sys.exit(1)
+ pp.pprint(client.scannerOpenWithStopTs(eval(args[0]),eval(args[1]),eval(args[2]),eval(args[3]),eval(args[4]),))
+
+elif cmd == 'scannerGet':
+ if len(args) != 1:
+ print 'scannerGet requires 1 args'
+ sys.exit(1)
+ pp.pprint(client.scannerGet(eval(args[0]),))
+
+elif cmd == 'scannerClose':
+ if len(args) != 1:
+ print 'scannerClose requires 1 args'
+ sys.exit(1)
+ pp.pprint(client.scannerClose(eval(args[0]),))
+
+transport.close()
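
A minimal usage sketch (editor's addition, not part of the original patch): this is the same wiring the remote script above performs with its defaults of localhost:9090, a buffered socket transport, and the binary protocol. The host, port, and table name are placeholder assumptions, and the import path depends on where the generated Hbase module lands in your tree.

from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
from Hbase import Hbase  # generated module from this patch; adjust the import to your layout

# Connect the way the command-line wrapper above does by default.
socket = TSocket.TSocket('localhost', 9090)
transport = TTransport.TBufferedTransport(socket)  # the script uses TFramedTransport when -framed is given
protocol = TBinaryProtocol.TBinaryProtocol(transport)
client = Hbase.Client(protocol)
transport.open()

# A couple of read-only calls from the generated interface.
print client.getTableNames()                  # list of table name strings
print client.getColumnDescriptors('mytable')  # dict of column family name -> ColumnDescriptor

transport.close()
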
diff --git a/src/examples/uploaders/hbrep/Hbase/Hbase.py b/src/examples/uploaders/hbrep/Hbase/Hbase.py
new file mode 100644
index 0000000..0ce842f
--- /dev/null
+++ b/src/examples/uploaders/hbrep/Hbase/Hbase.py
@@ -0,0 +1,5153 @@
+#
+# Autogenerated by Thrift
+#
+# DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+#
+
+from thrift.Thrift import *
+from ttypes import *
+from thrift.Thrift import TProcessor
+from thrift.transport import TTransport
+from thrift.protocol import TBinaryProtocol
+try:
+ from thrift.protocol import fastbinary
+except:
+ fastbinary = None
+
+
+class Iface:
+ def getTableNames(self, ):
+ pass
+
+ def getColumnDescriptors(self, tableName):
+ pass
+
+ def getTableRegions(self, tableName):
+ pass
+
+ def createTable(self, tableName, columnFamilies):
+ pass
+
+ def deleteTable(self, tableName):
+ pass
+
+ def get(self, tableName, row, column):
+ pass
+
+ def getVer(self, tableName, row, column, numVersions):
+ pass
+
+ def getVerTs(self, tableName, row, column, timestamp, numVersions):
+ pass
+
+ def getRow(self, tableName, row):
+ pass
+
+ def getRowTs(self, tableName, row, timestamp):
+ pass
+
+ def put(self, tableName, row, column, value):
+ pass
+
+ def mutateRow(self, tableName, row, mutations):
+ pass
+
+ def mutateRowTs(self, tableName, row, mutations, timestamp):
+ pass
+
+ def mutateRows(self, tableName, rowBatches):
+ pass
+
+ def mutateRowsTs(self, tableName, rowBatches, timestamp):
+ pass
+
+ def deleteAll(self, tableName, row, column):
+ pass
+
+ def deleteAllTs(self, tableName, row, column, timestamp):
+ pass
+
+ def deleteAllRow(self, tableName, row):
+ pass
+
+ def deleteAllRowTs(self, tableName, row, timestamp):
+ pass
+
+ def scannerOpen(self, tableName, startRow, columns):
+ pass
+
+ def scannerOpenWithStop(self, tableName, startRow, stopRow, columns):
+ pass
+
+ def scannerOpenTs(self, tableName, startRow, columns, timestamp):
+ pass
+
+ def scannerOpenWithStopTs(self, tableName, startRow, stopRow, columns, timestamp):
+ pass
+
+ def scannerGet(self, id):
+ pass
+
+ def scannerClose(self, id):
+ pass
+
+
+class Client(Iface):
+ def __init__(self, iprot, oprot=None):
+ self._iprot = self._oprot = iprot
+ if oprot != None:
+ self._oprot = oprot
+ self._seqid = 0
+
+ def getTableNames(self, ):
+ self.send_getTableNames()
+ return self.recv_getTableNames()
+
+ def send_getTableNames(self, ):
+ self._oprot.writeMessageBegin('getTableNames', TMessageType.CALL, self._seqid)
+ args = getTableNames_args()
+ args.write(self._oprot)
+ self._oprot.writeMessageEnd()
+ self._oprot.trans.flush()
+
+ def recv_getTableNames(self, ):
+ (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+ if mtype == TMessageType.EXCEPTION:
+ x = TApplicationException()
+ x.read(self._iprot)
+ self._iprot.readMessageEnd()
+ raise x
+ result = getTableNames_result()
+ result.read(self._iprot)
+ self._iprot.readMessageEnd()
+ if result.success != None:
+ return result.success
+ if result.io != None:
+ raise result.io
+ raise TApplicationException(TApplicationException.MISSING_RESULT, "getTableNames failed: unknown result");
+
+ def getColumnDescriptors(self, tableName):
+ self.send_getColumnDescriptors(tableName)
+ return self.recv_getColumnDescriptors()
+
+ def send_getColumnDescriptors(self, tableName):
+ self._oprot.writeMessageBegin('getColumnDescriptors', TMessageType.CALL, self._seqid)
+ args = getColumnDescriptors_args()
+ args.tableName = tableName
+ args.write(self._oprot)
+ self._oprot.writeMessageEnd()
+ self._oprot.trans.flush()
+
+ def recv_getColumnDescriptors(self, ):
+ (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+ if mtype == TMessageType.EXCEPTION:
+ x = TApplicationException()
+ x.read(self._iprot)
+ self._iprot.readMessageEnd()
+ raise x
+ result = getColumnDescriptors_result()
+ result.read(self._iprot)
+ self._iprot.readMessageEnd()
+ if result.success != None:
+ return result.success
+ if result.io != None:
+ raise result.io
+ raise TApplicationException(TApplicationException.MISSING_RESULT, "getColumnDescriptors failed: unknown result");
+
+ def getTableRegions(self, tableName):
+ self.send_getTableRegions(tableName)
+ return self.recv_getTableRegions()
+
+ def send_getTableRegions(self, tableName):
+ self._oprot.writeMessageBegin('getTableRegions', TMessageType.CALL, self._seqid)
+ args = getTableRegions_args()
+ args.tableName = tableName
+ args.write(self._oprot)
+ self._oprot.writeMessageEnd()
+ self._oprot.trans.flush()
+
+ def recv_getTableRegions(self, ):
+ (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+ if mtype == TMessageType.EXCEPTION:
+ x = TApplicationException()
+ x.read(self._iprot)
+ self._iprot.readMessageEnd()
+ raise x
+ result = getTableRegions_result()
+ result.read(self._iprot)
+ self._iprot.readMessageEnd()
+ if result.success != None:
+ return result.success
+ if result.io != None:
+ raise result.io
+ raise TApplicationException(TApplicationException.MISSING_RESULT, "getTableRegions failed: unknown result");
+
+ def createTable(self, tableName, columnFamilies):
+ self.send_createTable(tableName, columnFamilies)
+ self.recv_createTable()
+
+ def send_createTable(self, tableName, columnFamilies):
+ self._oprot.writeMessageBegin('createTable', TMessageType.CALL, self._seqid)
+ args = createTable_args()
+ args.tableName = tableName
+ args.columnFamilies = columnFamilies
+ args.write(self._oprot)
+ self._oprot.writeMessageEnd()
+ self._oprot.trans.flush()
+
+ def recv_createTable(self, ):
+ (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+ if mtype == TMessageType.EXCEPTION:
+ x = TApplicationException()
+ x.read(self._iprot)
+ self._iprot.readMessageEnd()
+ raise x
+ result = createTable_result()
+ result.read(self._iprot)
+ self._iprot.readMessageEnd()
+ if result.io != None:
+ raise result.io
+ if result.ia != None:
+ raise result.ia
+ if result.exist != None:
+ raise result.exist
+ return
+
+ def deleteTable(self, tableName):
+ self.send_deleteTable(tableName)
+ self.recv_deleteTable()
+
+ def send_deleteTable(self, tableName):
+ self._oprot.writeMessageBegin('deleteTable', TMessageType.CALL, self._seqid)
+ args = deleteTable_args()
+ args.tableName = tableName
+ args.write(self._oprot)
+ self._oprot.writeMessageEnd()
+ self._oprot.trans.flush()
+
+ def recv_deleteTable(self, ):
+ (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+ if mtype == TMessageType.EXCEPTION:
+ x = TApplicationException()
+ x.read(self._iprot)
+ self._iprot.readMessageEnd()
+ raise x
+ result = deleteTable_result()
+ result.read(self._iprot)
+ self._iprot.readMessageEnd()
+ if result.io != None:
+ raise result.io
+ if result.nf != None:
+ raise result.nf
+ return
+
+ def get(self, tableName, row, column):
+ self.send_get(tableName, row, column)
+ return self.recv_get()
+
+ def send_get(self, tableName, row, column):
+ self._oprot.writeMessageBegin('get', TMessageType.CALL, self._seqid)
+ args = get_args()
+ args.tableName = tableName
+ args.row = row
+ args.column = column
+ args.write(self._oprot)
+ self._oprot.writeMessageEnd()
+ self._oprot.trans.flush()
+
+ def recv_get(self, ):
+ (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+ if mtype == TMessageType.EXCEPTION:
+ x = TApplicationException()
+ x.read(self._iprot)
+ self._iprot.readMessageEnd()
+ raise x
+ result = get_result()
+ result.read(self._iprot)
+ self._iprot.readMessageEnd()
+ if result.success != None:
+ return result.success
+ if result.io != None:
+ raise result.io
+ if result.nf != None:
+ raise result.nf
+ raise TApplicationException(TApplicationException.MISSING_RESULT, "get failed: unknown result");
+
+ def getVer(self, tableName, row, column, numVersions):
+ self.send_getVer(tableName, row, column, numVersions)
+ return self.recv_getVer()
+
+ def send_getVer(self, tableName, row, column, numVersions):
+ self._oprot.writeMessageBegin('getVer', TMessageType.CALL, self._seqid)
+ args = getVer_args()
+ args.tableName = tableName
+ args.row = row
+ args.column = column
+ args.numVersions = numVersions
+ args.write(self._oprot)
+ self._oprot.writeMessageEnd()
+ self._oprot.trans.flush()
+
+ def recv_getVer(self, ):
+ (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+ if mtype == TMessageType.EXCEPTION:
+ x = TApplicationException()
+ x.read(self._iprot)
+ self._iprot.readMessageEnd()
+ raise x
+ result = getVer_result()
+ result.read(self._iprot)
+ self._iprot.readMessageEnd()
+ if result.success != None:
+ return result.success
+ if result.io != None:
+ raise result.io
+ if result.nf != None:
+ raise result.nf
+ raise TApplicationException(TApplicationException.MISSING_RESULT, "getVer failed: unknown result");
+
+ def getVerTs(self, tableName, row, column, timestamp, numVersions):
+ self.send_getVerTs(tableName, row, column, timestamp, numVersions)
+ return self.recv_getVerTs()
+
+ def send_getVerTs(self, tableName, row, column, timestamp, numVersions):
+ self._oprot.writeMessageBegin('getVerTs', TMessageType.CALL, self._seqid)
+ args = getVerTs_args()
+ args.tableName = tableName
+ args.row = row
+ args.column = column
+ args.timestamp = timestamp
+ args.numVersions = numVersions
+ args.write(self._oprot)
+ self._oprot.writeMessageEnd()
+ self._oprot.trans.flush()
+
+ def recv_getVerTs(self, ):
+ (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+ if mtype == TMessageType.EXCEPTION:
+ x = TApplicationException()
+ x.read(self._iprot)
+ self._iprot.readMessageEnd()
+ raise x
+ result = getVerTs_result()
+ result.read(self._iprot)
+ self._iprot.readMessageEnd()
+ if result.success != None:
+ return result.success
+ if result.io != None:
+ raise result.io
+ if result.nf != None:
+ raise result.nf
+ raise TApplicationException(TApplicationException.MISSING_RESULT, "getVerTs failed: unknown result");
+
+ def getRow(self, tableName, row):
+ self.send_getRow(tableName, row)
+ return self.recv_getRow()
+
+ def send_getRow(self, tableName, row):
+ self._oprot.writeMessageBegin('getRow', TMessageType.CALL, self._seqid)
+ args = getRow_args()
+ args.tableName = tableName
+ args.row = row
+ args.write(self._oprot)
+ self._oprot.writeMessageEnd()
+ self._oprot.trans.flush()
+
+ def recv_getRow(self, ):
+ (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+ if mtype == TMessageType.EXCEPTION:
+ x = TApplicationException()
+ x.read(self._iprot)
+ self._iprot.readMessageEnd()
+ raise x
+ result = getRow_result()
+ result.read(self._iprot)
+ self._iprot.readMessageEnd()
+ if result.success != None:
+ return result.success
+ if result.io != None:
+ raise result.io
+ raise TApplicationException(TApplicationException.MISSING_RESULT, "getRow failed: unknown result");
+
+ def getRowTs(self, tableName, row, timestamp):
+ self.send_getRowTs(tableName, row, timestamp)
+ return self.recv_getRowTs()
+
+ def send_getRowTs(self, tableName, row, timestamp):
+ self._oprot.writeMessageBegin('getRowTs', TMessageType.CALL, self._seqid)
+ args = getRowTs_args()
+ args.tableName = tableName
+ args.row = row
+ args.timestamp = timestamp
+ args.write(self._oprot)
+ self._oprot.writeMessageEnd()
+ self._oprot.trans.flush()
+
+ def recv_getRowTs(self, ):
+ (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+ if mtype == TMessageType.EXCEPTION:
+ x = TApplicationException()
+ x.read(self._iprot)
+ self._iprot.readMessageEnd()
+ raise x
+ result = getRowTs_result()
+ result.read(self._iprot)
+ self._iprot.readMessageEnd()
+ if result.success != None:
+ return result.success
+ if result.io != None:
+ raise result.io
+ raise TApplicationException(TApplicationException.MISSING_RESULT, "getRowTs failed: unknown result");
+
+ def put(self, tableName, row, column, value):
+ self.send_put(tableName, row, column, value)
+ self.recv_put()
+
+ def send_put(self, tableName, row, column, value):
+ self._oprot.writeMessageBegin('put', TMessageType.CALL, self._seqid)
+ args = put_args()
+ args.tableName = tableName
+ args.row = row
+ args.column = column
+ args.value = value
+ args.write(self._oprot)
+ self._oprot.writeMessageEnd()
+ self._oprot.trans.flush()
+
+ def recv_put(self, ):
+ (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+ if mtype == TMessageType.EXCEPTION:
+ x = TApplicationException()
+ x.read(self._iprot)
+ self._iprot.readMessageEnd()
+ raise x
+ result = put_result()
+ result.read(self._iprot)
+ self._iprot.readMessageEnd()
+ if result.io != None:
+ raise result.io
+ if result.ia != None:
+ raise result.ia
+ return
+
+ def mutateRow(self, tableName, row, mutations):
+ self.send_mutateRow(tableName, row, mutations)
+ self.recv_mutateRow()
+
+ def send_mutateRow(self, tableName, row, mutations):
+ self._oprot.writeMessageBegin('mutateRow', TMessageType.CALL, self._seqid)
+ args = mutateRow_args()
+ args.tableName = tableName
+ args.row = row
+ args.mutations = mutations
+ args.write(self._oprot)
+ self._oprot.writeMessageEnd()
+ self._oprot.trans.flush()
+
+ def recv_mutateRow(self, ):
+ (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+ if mtype == TMessageType.EXCEPTION:
+ x = TApplicationException()
+ x.read(self._iprot)
+ self._iprot.readMessageEnd()
+ raise x
+ result = mutateRow_result()
+ result.read(self._iprot)
+ self._iprot.readMessageEnd()
+ if result.io != None:
+ raise result.io
+ if result.ia != None:
+ raise result.ia
+ return
+
+ def mutateRowTs(self, tableName, row, mutations, timestamp):
+ self.send_mutateRowTs(tableName, row, mutations, timestamp)
+ self.recv_mutateRowTs()
+
+ def send_mutateRowTs(self, tableName, row, mutations, timestamp):
+ self._oprot.writeMessageBegin('mutateRowTs', TMessageType.CALL, self._seqid)
+ args = mutateRowTs_args()
+ args.tableName = tableName
+ args.row = row
+ args.mutations = mutations
+ args.timestamp = timestamp
+ args.write(self._oprot)
+ self._oprot.writeMessageEnd()
+ self._oprot.trans.flush()
+
+ def recv_mutateRowTs(self, ):
+ (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+ if mtype == TMessageType.EXCEPTION:
+ x = TApplicationException()
+ x.read(self._iprot)
+ self._iprot.readMessageEnd()
+ raise x
+ result = mutateRowTs_result()
+ result.read(self._iprot)
+ self._iprot.readMessageEnd()
+ if result.io != None:
+ raise result.io
+ if result.ia != None:
+ raise result.ia
+ return
+
+ def mutateRows(self, tableName, rowBatches):
+ self.send_mutateRows(tableName, rowBatches)
+ self.recv_mutateRows()
+
+ def send_mutateRows(self, tableName, rowBatches):
+ self._oprot.writeMessageBegin('mutateRows', TMessageType.CALL, self._seqid)
+ args = mutateRows_args()
+ args.tableName = tableName
+ args.rowBatches = rowBatches
+ args.write(self._oprot)
+ self._oprot.writeMessageEnd()
+ self._oprot.trans.flush()
+
+ def recv_mutateRows(self, ):
+ (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+ if mtype == TMessageType.EXCEPTION:
+ x = TApplicationException()
+ x.read(self._iprot)
+ self._iprot.readMessageEnd()
+ raise x
+ result = mutateRows_result()
+ result.read(self._iprot)
+ self._iprot.readMessageEnd()
+ if result.io != None:
+ raise result.io
+ if result.ia != None:
+ raise result.ia
+ return
+
+ def mutateRowsTs(self, tableName, rowBatches, timestamp):
+ self.send_mutateRowsTs(tableName, rowBatches, timestamp)
+ self.recv_mutateRowsTs()
+
+ def send_mutateRowsTs(self, tableName, rowBatches, timestamp):
+ self._oprot.writeMessageBegin('mutateRowsTs', TMessageType.CALL, self._seqid)
+ args = mutateRowsTs_args()
+ args.tableName = tableName
+ args.rowBatches = rowBatches
+ args.timestamp = timestamp
+ args.write(self._oprot)
+ self._oprot.writeMessageEnd()
+ self._oprot.trans.flush()
+
+ def recv_mutateRowsTs(self, ):
+ (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+ if mtype == TMessageType.EXCEPTION:
+ x = TApplicationException()
+ x.read(self._iprot)
+ self._iprot.readMessageEnd()
+ raise x
+ result = mutateRowsTs_result()
+ result.read(self._iprot)
+ self._iprot.readMessageEnd()
+ if result.io != None:
+ raise result.io
+ if result.ia != None:
+ raise result.ia
+ return
+
+ def deleteAll(self, tableName, row, column):
+ self.send_deleteAll(tableName, row, column)
+ self.recv_deleteAll()
+
+ def send_deleteAll(self, tableName, row, column):
+ self._oprot.writeMessageBegin('deleteAll', TMessageType.CALL, self._seqid)
+ args = deleteAll_args()
+ args.tableName = tableName
+ args.row = row
+ args.column = column
+ args.write(self._oprot)
+ self._oprot.writeMessageEnd()
+ self._oprot.trans.flush()
+
+ def recv_deleteAll(self, ):
+ (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+ if mtype == TMessageType.EXCEPTION:
+ x = TApplicationException()
+ x.read(self._iprot)
+ self._iprot.readMessageEnd()
+ raise x
+ result = deleteAll_result()
+ result.read(self._iprot)
+ self._iprot.readMessageEnd()
+ if result.io != None:
+ raise result.io
+ return
+
+ def deleteAllTs(self, tableName, row, column, timestamp):
+ self.send_deleteAllTs(tableName, row, column, timestamp)
+ self.recv_deleteAllTs()
+
+ def send_deleteAllTs(self, tableName, row, column, timestamp):
+ self._oprot.writeMessageBegin('deleteAllTs', TMessageType.CALL, self._seqid)
+ args = deleteAllTs_args()
+ args.tableName = tableName
+ args.row = row
+ args.column = column
+ args.timestamp = timestamp
+ args.write(self._oprot)
+ self._oprot.writeMessageEnd()
+ self._oprot.trans.flush()
+
+ def recv_deleteAllTs(self, ):
+ (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+ if mtype == TMessageType.EXCEPTION:
+ x = TApplicationException()
+ x.read(self._iprot)
+ self._iprot.readMessageEnd()
+ raise x
+ result = deleteAllTs_result()
+ result.read(self._iprot)
+ self._iprot.readMessageEnd()
+ if result.io != None:
+ raise result.io
+ return
+
+ def deleteAllRow(self, tableName, row):
+ self.send_deleteAllRow(tableName, row)
+ self.recv_deleteAllRow()
+
+ def send_deleteAllRow(self, tableName, row):
+ self._oprot.writeMessageBegin('deleteAllRow', TMessageType.CALL, self._seqid)
+ args = deleteAllRow_args()
+ args.tableName = tableName
+ args.row = row
+ args.write(self._oprot)
+ self._oprot.writeMessageEnd()
+ self._oprot.trans.flush()
+
+ def recv_deleteAllRow(self, ):
+ (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+ if mtype == TMessageType.EXCEPTION:
+ x = TApplicationException()
+ x.read(self._iprot)
+ self._iprot.readMessageEnd()
+ raise x
+ result = deleteAllRow_result()
+ result.read(self._iprot)
+ self._iprot.readMessageEnd()
+ if result.io != None:
+ raise result.io
+ return
+
+ def deleteAllRowTs(self, tableName, row, timestamp):
+ self.send_deleteAllRowTs(tableName, row, timestamp)
+ self.recv_deleteAllRowTs()
+
+ def send_deleteAllRowTs(self, tableName, row, timestamp):
+ self._oprot.writeMessageBegin('deleteAllRowTs', TMessageType.CALL, self._seqid)
+ args = deleteAllRowTs_args()
+ args.tableName = tableName
+ args.row = row
+ args.timestamp = timestamp
+ args.write(self._oprot)
+ self._oprot.writeMessageEnd()
+ self._oprot.trans.flush()
+
+ def recv_deleteAllRowTs(self, ):
+ (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+ if mtype == TMessageType.EXCEPTION:
+ x = TApplicationException()
+ x.read(self._iprot)
+ self._iprot.readMessageEnd()
+ raise x
+ result = deleteAllRowTs_result()
+ result.read(self._iprot)
+ self._iprot.readMessageEnd()
+ if result.io != None:
+ raise result.io
+ return
+
+ def scannerOpen(self, tableName, startRow, columns):
+ self.send_scannerOpen(tableName, startRow, columns)
+ return self.recv_scannerOpen()
+
+ def send_scannerOpen(self, tableName, startRow, columns):
+ self._oprot.writeMessageBegin('scannerOpen', TMessageType.CALL, self._seqid)
+ args = scannerOpen_args()
+ args.tableName = tableName
+ args.startRow = startRow
+ args.columns = columns
+ args.write(self._oprot)
+ self._oprot.writeMessageEnd()
+ self._oprot.trans.flush()
+
+ def recv_scannerOpen(self, ):
+ (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+ if mtype == TMessageType.EXCEPTION:
+ x = TApplicationException()
+ x.read(self._iprot)
+ self._iprot.readMessageEnd()
+ raise x
+ result = scannerOpen_result()
+ result.read(self._iprot)
+ self._iprot.readMessageEnd()
+ if result.success != None:
+ return result.success
+ if result.io != None:
+ raise result.io
+ raise TApplicationException(TApplicationException.MISSING_RESULT, "scannerOpen failed: unknown result");
+
+ def scannerOpenWithStop(self, tableName, startRow, stopRow, columns):
+ self.send_scannerOpenWithStop(tableName, startRow, stopRow, columns)
+ return self.recv_scannerOpenWithStop()
+
+ def send_scannerOpenWithStop(self, tableName, startRow, stopRow, columns):
+ self._oprot.writeMessageBegin('scannerOpenWithStop', TMessageType.CALL, self._seqid)
+ args = scannerOpenWithStop_args()
+ args.tableName = tableName
+ args.startRow = startRow
+ args.stopRow = stopRow
+ args.columns = columns
+ args.write(self._oprot)
+ self._oprot.writeMessageEnd()
+ self._oprot.trans.flush()
+
+ def recv_scannerOpenWithStop(self, ):
+ (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+ if mtype == TMessageType.EXCEPTION:
+ x = TApplicationException()
+ x.read(self._iprot)
+ self._iprot.readMessageEnd()
+ raise x
+ result = scannerOpenWithStop_result()
+ result.read(self._iprot)
+ self._iprot.readMessageEnd()
+ if result.success != None:
+ return result.success
+ if result.io != None:
+ raise result.io
+ raise TApplicationException(TApplicationException.MISSING_RESULT, "scannerOpenWithStop failed: unknown result");
+
+ def scannerOpenTs(self, tableName, startRow, columns, timestamp):
+ self.send_scannerOpenTs(tableName, startRow, columns, timestamp)
+ return self.recv_scannerOpenTs()
+
+ def send_scannerOpenTs(self, tableName, startRow, columns, timestamp):
+ self._oprot.writeMessageBegin('scannerOpenTs', TMessageType.CALL, self._seqid)
+ args = scannerOpenTs_args()
+ args.tableName = tableName
+ args.startRow = startRow
+ args.columns = columns
+ args.timestamp = timestamp
+ args.write(self._oprot)
+ self._oprot.writeMessageEnd()
+ self._oprot.trans.flush()
+
+ def recv_scannerOpenTs(self, ):
+ (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+ if mtype == TMessageType.EXCEPTION:
+ x = TApplicationException()
+ x.read(self._iprot)
+ self._iprot.readMessageEnd()
+ raise x
+ result = scannerOpenTs_result()
+ result.read(self._iprot)
+ self._iprot.readMessageEnd()
+ if result.success != None:
+ return result.success
+ if result.io != None:
+ raise result.io
+ raise TApplicationException(TApplicationException.MISSING_RESULT, "scannerOpenTs failed: unknown result");
+
+ def scannerOpenWithStopTs(self, tableName, startRow, stopRow, columns, timestamp):
+ self.send_scannerOpenWithStopTs(tableName, startRow, stopRow, columns, timestamp)
+ return self.recv_scannerOpenWithStopTs()
+
+ def send_scannerOpenWithStopTs(self, tableName, startRow, stopRow, columns, timestamp):
+ self._oprot.writeMessageBegin('scannerOpenWithStopTs', TMessageType.CALL, self._seqid)
+ args = scannerOpenWithStopTs_args()
+ args.tableName = tableName
+ args.startRow = startRow
+ args.stopRow = stopRow
+ args.columns = columns
+ args.timestamp = timestamp
+ args.write(self._oprot)
+ self._oprot.writeMessageEnd()
+ self._oprot.trans.flush()
+
+ def recv_scannerOpenWithStopTs(self, ):
+ (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+ if mtype == TMessageType.EXCEPTION:
+ x = TApplicationException()
+ x.read(self._iprot)
+ self._iprot.readMessageEnd()
+ raise x
+ result = scannerOpenWithStopTs_result()
+ result.read(self._iprot)
+ self._iprot.readMessageEnd()
+ if result.success != None:
+ return result.success
+ if result.io != None:
+ raise result.io
+ raise TApplicationException(TApplicationException.MISSING_RESULT, "scannerOpenWithStopTs failed: unknown result");
+
+ def scannerGet(self, id):
+ self.send_scannerGet(id)
+ return self.recv_scannerGet()
+
+ def send_scannerGet(self, id):
+ self._oprot.writeMessageBegin('scannerGet', TMessageType.CALL, self._seqid)
+ args = scannerGet_args()
+ args.id = id
+ args.write(self._oprot)
+ self._oprot.writeMessageEnd()
+ self._oprot.trans.flush()
+
+ def recv_scannerGet(self, ):
+ (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+ if mtype == TMessageType.EXCEPTION:
+ x = TApplicationException()
+ x.read(self._iprot)
+ self._iprot.readMessageEnd()
+ raise x
+ result = scannerGet_result()
+ result.read(self._iprot)
+ self._iprot.readMessageEnd()
+ if result.success != None:
+ return result.success
+ if result.io != None:
+ raise result.io
+ if result.ia != None:
+ raise result.ia
+ if result.nf != None:
+ raise result.nf
+ raise TApplicationException(TApplicationException.MISSING_RESULT, "scannerGet failed: unknown result");
+
+ def scannerClose(self, id):
+ self.send_scannerClose(id)
+ self.recv_scannerClose()
+
+ def send_scannerClose(self, id):
+ self._oprot.writeMessageBegin('scannerClose', TMessageType.CALL, self._seqid)
+ args = scannerClose_args()
+ args.id = id
+ args.write(self._oprot)
+ self._oprot.writeMessageEnd()
+ self._oprot.trans.flush()
+
+ def recv_scannerClose(self, ):
+ (fname, mtype, rseqid) = self._iprot.readMessageBegin()
+ if mtype == TMessageType.EXCEPTION:
+ x = TApplicationException()
+ x.read(self._iprot)
+ self._iprot.readMessageEnd()
+ raise x
+ result = scannerClose_result()
+ result.read(self._iprot)
+ self._iprot.readMessageEnd()
+ if result.io != None:
+ raise result.io
+ if result.ia != None:
+ raise result.ia
+ return
+
+
+class Processor(Iface, TProcessor):
+ def __init__(self, handler):
+ self._handler = handler
+ self._processMap = {}
+ self._processMap["getTableNames"] = Processor.process_getTableNames
+ self._processMap["getColumnDescriptors"] = Processor.process_getColumnDescriptors
+ self._processMap["getTableRegions"] = Processor.process_getTableRegions
+ self._processMap["createTable"] = Processor.process_createTable
+ self._processMap["deleteTable"] = Processor.process_deleteTable
+ self._processMap["get"] = Processor.process_get
+ self._processMap["getVer"] = Processor.process_getVer
+ self._processMap["getVerTs"] = Processor.process_getVerTs
+ self._processMap["getRow"] = Processor.process_getRow
+ self._processMap["getRowTs"] = Processor.process_getRowTs
+ self._processMap["put"] = Processor.process_put
+ self._processMap["mutateRow"] = Processor.process_mutateRow
+ self._processMap["mutateRowTs"] = Processor.process_mutateRowTs
+ self._processMap["mutateRows"] = Processor.process_mutateRows
+ self._processMap["mutateRowsTs"] = Processor.process_mutateRowsTs
+ self._processMap["deleteAll"] = Processor.process_deleteAll
+ self._processMap["deleteAllTs"] = Processor.process_deleteAllTs
+ self._processMap["deleteAllRow"] = Processor.process_deleteAllRow
+ self._processMap["deleteAllRowTs"] = Processor.process_deleteAllRowTs
+ self._processMap["scannerOpen"] = Processor.process_scannerOpen
+ self._processMap["scannerOpenWithStop"] = Processor.process_scannerOpenWithStop
+ self._processMap["scannerOpenTs"] = Processor.process_scannerOpenTs
+ self._processMap["scannerOpenWithStopTs"] = Processor.process_scannerOpenWithStopTs
+ self._processMap["scannerGet"] = Processor.process_scannerGet
+ self._processMap["scannerClose"] = Processor.process_scannerClose
+
+ def process(self, iprot, oprot):
+ (name, type, seqid) = iprot.readMessageBegin()
+ if name not in self._processMap:
+ iprot.skip(TType.STRUCT)
+ iprot.readMessageEnd()
+ x = TApplicationException(TApplicationException.UNKNOWN_METHOD, 'Unknown function %s' % (name))
+ oprot.writeMessageBegin(name, TMessageType.EXCEPTION, seqid)
+ x.write(oprot)
+ oprot.writeMessageEnd()
+ oprot.trans.flush()
+ return
+ else:
+ self._processMap[name](self, seqid, iprot, oprot)
+ return True
+
+ def process_getTableNames(self, seqid, iprot, oprot):
+ args = getTableNames_args()
+ args.read(iprot)
+ iprot.readMessageEnd()
+ result = getTableNames_result()
+ try:
+ result.success = self._handler.getTableNames()
+ except IOError, io:
+ result.io = io
+ oprot.writeMessageBegin("getTableNames", TMessageType.REPLY, seqid)
+ result.write(oprot)
+ oprot.writeMessageEnd()
+ oprot.trans.flush()
+
+ def process_getColumnDescriptors(self, seqid, iprot, oprot):
+ args = getColumnDescriptors_args()
+ args.read(iprot)
+ iprot.readMessageEnd()
+ result = getColumnDescriptors_result()
+ try:
+ result.success = self._handler.getColumnDescriptors(args.tableName)
+ except IOError, io:
+ result.io = io
+ oprot.writeMessageBegin("getColumnDescriptors", TMessageType.REPLY, seqid)
+ result.write(oprot)
+ oprot.writeMessageEnd()
+ oprot.trans.flush()
+
+ def process_getTableRegions(self, seqid, iprot, oprot):
+ args = getTableRegions_args()
+ args.read(iprot)
+ iprot.readMessageEnd()
+ result = getTableRegions_result()
+ try:
+ result.success = self._handler.getTableRegions(args.tableName)
+ except IOError, io:
+ result.io = io
+ oprot.writeMessageBegin("getTableRegions", TMessageType.REPLY, seqid)
+ result.write(oprot)
+ oprot.writeMessageEnd()
+ oprot.trans.flush()
+
+ def process_createTable(self, seqid, iprot, oprot):
+ args = createTable_args()
+ args.read(iprot)
+ iprot.readMessageEnd()
+ result = createTable_result()
+ try:
+ self._handler.createTable(args.tableName, args.columnFamilies)
+ except IOError, io:
+ result.io = io
+ except IllegalArgument, ia:
+ result.ia = ia
+ except AlreadyExists, exist:
+ result.exist = exist
+ oprot.writeMessageBegin("createTable", TMessageType.REPLY, seqid)
+ result.write(oprot)
+ oprot.writeMessageEnd()
+ oprot.trans.flush()
+
+ def process_deleteTable(self, seqid, iprot, oprot):
+ args = deleteTable_args()
+ args.read(iprot)
+ iprot.readMessageEnd()
+ result = deleteTable_result()
+ try:
+ self._handler.deleteTable(args.tableName)
+ except IOError, io:
+ result.io = io
+ except NotFound, nf:
+ result.nf = nf
+ oprot.writeMessageBegin("deleteTable", TMessageType.REPLY, seqid)
+ result.write(oprot)
+ oprot.writeMessageEnd()
+ oprot.trans.flush()
+
+ def process_get(self, seqid, iprot, oprot):
+ args = get_args()
+ args.read(iprot)
+ iprot.readMessageEnd()
+ result = get_result()
+ try:
+ result.success = self._handler.get(args.tableName, args.row, args.column)
+ except IOError, io:
+ result.io = io
+ except NotFound, nf:
+ result.nf = nf
+ oprot.writeMessageBegin("get", TMessageType.REPLY, seqid)
+ result.write(oprot)
+ oprot.writeMessageEnd()
+ oprot.trans.flush()
+
+ def process_getVer(self, seqid, iprot, oprot):
+ args = getVer_args()
+ args.read(iprot)
+ iprot.readMessageEnd()
+ result = getVer_result()
+ try:
+ result.success = self._handler.getVer(args.tableName, args.row, args.column, args.numVersions)
+ except IOError, io:
+ result.io = io
+ except NotFound, nf:
+ result.nf = nf
+ oprot.writeMessageBegin("getVer", TMessageType.REPLY, seqid)
+ result.write(oprot)
+ oprot.writeMessageEnd()
+ oprot.trans.flush()
+
+ def process_getVerTs(self, seqid, iprot, oprot):
+ args = getVerTs_args()
+ args.read(iprot)
+ iprot.readMessageEnd()
+ result = getVerTs_result()
+ try:
+ result.success = self._handler.getVerTs(args.tableName, args.row, args.column, args.timestamp, args.numVersions)
+ except IOError, io:
+ result.io = io
+ except NotFound, nf:
+ result.nf = nf
+ oprot.writeMessageBegin("getVerTs", TMessageType.REPLY, seqid)
+ result.write(oprot)
+ oprot.writeMessageEnd()
+ oprot.trans.flush()
+
+ def process_getRow(self, seqid, iprot, oprot):
+ args = getRow_args()
+ args.read(iprot)
+ iprot.readMessageEnd()
+ result = getRow_result()
+ try:
+ result.success = self._handler.getRow(args.tableName, args.row)
+ except IOError, io:
+ result.io = io
+ oprot.writeMessageBegin("getRow", TMessageType.REPLY, seqid)
+ result.write(oprot)
+ oprot.writeMessageEnd()
+ oprot.trans.flush()
+
+ def process_getRowTs(self, seqid, iprot, oprot):
+ args = getRowTs_args()
+ args.read(iprot)
+ iprot.readMessageEnd()
+ result = getRowTs_result()
+ try:
+ result.success = self._handler.getRowTs(args.tableName, args.row, args.timestamp)
+ except IOError, io:
+ result.io = io
+ oprot.writeMessageBegin("getRowTs", TMessageType.REPLY, seqid)
+ result.write(oprot)
+ oprot.writeMessageEnd()
+ oprot.trans.flush()
+
+ def process_put(self, seqid, iprot, oprot):
+ args = put_args()
+ args.read(iprot)
+ iprot.readMessageEnd()
+ result = put_result()
+ try:
+ self._handler.put(args.tableName, args.row, args.column, args.value)
+ except IOError, io:
+ result.io = io
+ except IllegalArgument, ia:
+ result.ia = ia
+ oprot.writeMessageBegin("put", TMessageType.REPLY, seqid)
+ result.write(oprot)
+ oprot.writeMessageEnd()
+ oprot.trans.flush()
+
+ def process_mutateRow(self, seqid, iprot, oprot):
+ args = mutateRow_args()
+ args.read(iprot)
+ iprot.readMessageEnd()
+ result = mutateRow_result()
+ try:
+ self._handler.mutateRow(args.tableName, args.row, args.mutations)
+ except IOError, io:
+ result.io = io
+ except IllegalArgument, ia:
+ result.ia = ia
+ oprot.writeMessageBegin("mutateRow", TMessageType.REPLY, seqid)
+ result.write(oprot)
+ oprot.writeMessageEnd()
+ oprot.trans.flush()
+
+ def process_mutateRowTs(self, seqid, iprot, oprot):
+ args = mutateRowTs_args()
+ args.read(iprot)
+ iprot.readMessageEnd()
+ result = mutateRowTs_result()
+ try:
+ self._handler.mutateRowTs(args.tableName, args.row, args.mutations, args.timestamp)
+ except IOError, io:
+ result.io = io
+ except IllegalArgument, ia:
+ result.ia = ia
+ oprot.writeMessageBegin("mutateRowTs", TMessageType.REPLY, seqid)
+ result.write(oprot)
+ oprot.writeMessageEnd()
+ oprot.trans.flush()
+
+ def process_mutateRows(self, seqid, iprot, oprot):
+ args = mutateRows_args()
+ args.read(iprot)
+ iprot.readMessageEnd()
+ result = mutateRows_result()
+ try:
+ self._handler.mutateRows(args.tableName, args.rowBatches)
+ except IOError, io:
+ result.io = io
+ except IllegalArgument, ia:
+ result.ia = ia
+ oprot.writeMessageBegin("mutateRows", TMessageType.REPLY, seqid)
+ result.write(oprot)
+ oprot.writeMessageEnd()
+ oprot.trans.flush()
+
+ def process_mutateRowsTs(self, seqid, iprot, oprot):
+ args = mutateRowsTs_args()
+ args.read(iprot)
+ iprot.readMessageEnd()
+ result = mutateRowsTs_result()
+ try:
+ self._handler.mutateRowsTs(args.tableName, args.rowBatches, args.timestamp)
+ except IOError, io:
+ result.io = io
+ except IllegalArgument, ia:
+ result.ia = ia
+ oprot.writeMessageBegin("mutateRowsTs", TMessageType.REPLY, seqid)
+ result.write(oprot)
+ oprot.writeMessageEnd()
+ oprot.trans.flush()
+
+ def process_deleteAll(self, seqid, iprot, oprot):
+ args = deleteAll_args()
+ args.read(iprot)
+ iprot.readMessageEnd()
+ result = deleteAll_result()
+ try:
+ self._handler.deleteAll(args.tableName, args.row, args.column)
+ except IOError, io:
+ result.io = io
+ oprot.writeMessageBegin("deleteAll", TMessageType.REPLY, seqid)
+ result.write(oprot)
+ oprot.writeMessageEnd()
+ oprot.trans.flush()
+
+ def process_deleteAllTs(self, seqid, iprot, oprot):
+ args = deleteAllTs_args()
+ args.read(iprot)
+ iprot.readMessageEnd()
+ result = deleteAllTs_result()
+ try:
+ self._handler.deleteAllTs(args.tableName, args.row, args.column, args.timestamp)
+ except IOError, io:
+ result.io = io
+ oprot.writeMessageBegin("deleteAllTs", TMessageType.REPLY, seqid)
+ result.write(oprot)
+ oprot.writeMessageEnd()
+ oprot.trans.flush()
+
+ def process_deleteAllRow(self, seqid, iprot, oprot):
+ args = deleteAllRow_args()
+ args.read(iprot)
+ iprot.readMessageEnd()
+ result = deleteAllRow_result()
+ try:
+ self._handler.deleteAllRow(args.tableName, args.row)
+ except IOError, io:
+ result.io = io
+ oprot.writeMessageBegin("deleteAllRow", TMessageType.REPLY, seqid)
+ result.write(oprot)
+ oprot.writeMessageEnd()
+ oprot.trans.flush()
+
+ def process_deleteAllRowTs(self, seqid, iprot, oprot):
+ args = deleteAllRowTs_args()
+ args.read(iprot)
+ iprot.readMessageEnd()
+ result = deleteAllRowTs_result()
+ try:
+ self._handler.deleteAllRowTs(args.tableName, args.row, args.timestamp)
+ except IOError, io:
+ result.io = io
+ oprot.writeMessageBegin("deleteAllRowTs", TMessageType.REPLY, seqid)
+ result.write(oprot)
+ oprot.writeMessageEnd()
+ oprot.trans.flush()
+
+ def process_scannerOpen(self, seqid, iprot, oprot):
+ args = scannerOpen_args()
+ args.read(iprot)
+ iprot.readMessageEnd()
+ result = scannerOpen_result()
+ try:
+ result.success = self._handler.scannerOpen(args.tableName, args.startRow, args.columns)
+ except IOError, io:
+ result.io = io
+ oprot.writeMessageBegin("scannerOpen", TMessageType.REPLY, seqid)
+ result.write(oprot)
+ oprot.writeMessageEnd()
+ oprot.trans.flush()
+
+ def process_scannerOpenWithStop(self, seqid, iprot, oprot):
+ args = scannerOpenWithStop_args()
+ args.read(iprot)
+ iprot.readMessageEnd()
+ result = scannerOpenWithStop_result()
+ try:
+ result.success = self._handler.scannerOpenWithStop(args.tableName, args.startRow, args.stopRow, args.columns)
+ except IOError, io:
+ result.io = io
+ oprot.writeMessageBegin("scannerOpenWithStop", TMessageType.REPLY, seqid)
+ result.write(oprot)
+ oprot.writeMessageEnd()
+ oprot.trans.flush()
+
+ def process_scannerOpenTs(self, seqid, iprot, oprot):
+ args = scannerOpenTs_args()
+ args.read(iprot)
+ iprot.readMessageEnd()
+ result = scannerOpenTs_result()
+ try:
+ result.success = self._handler.scannerOpenTs(args.tableName, args.startRow, args.columns, args.timestamp)
+ except IOError, io:
+ result.io = io
+ oprot.writeMessageBegin("scannerOpenTs", TMessageType.REPLY, seqid)
+ result.write(oprot)
+ oprot.writeMessageEnd()
+ oprot.trans.flush()
+
+ def process_scannerOpenWithStopTs(self, seqid, iprot, oprot):
+ args = scannerOpenWithStopTs_args()
+ args.read(iprot)
+ iprot.readMessageEnd()
+ result = scannerOpenWithStopTs_result()
+ try:
+ result.success = self._handler.scannerOpenWithStopTs(args.tableName, args.startRow, args.stopRow, args.columns, args.timestamp)
+ except IOError, io:
+ result.io = io
+ oprot.writeMessageBegin("scannerOpenWithStopTs", TMessageType.REPLY, seqid)
+ result.write(oprot)
+ oprot.writeMessageEnd()
+ oprot.trans.flush()
+
+ def process_scannerGet(self, seqid, iprot, oprot):
+ args = scannerGet_args()
+ args.read(iprot)
+ iprot.readMessageEnd()
+ result = scannerGet_result()
+ try:
+ result.success = self._handler.scannerGet(args.id)
+ except IOError, io:
+ result.io = io
+ except IllegalArgument, ia:
+ result.ia = ia
+ except NotFound, nf:
+ result.nf = nf
+ oprot.writeMessageBegin("scannerGet", TMessageType.REPLY, seqid)
+ result.write(oprot)
+ oprot.writeMessageEnd()
+ oprot.trans.flush()
+
+ def process_scannerClose(self, seqid, iprot, oprot):
+ args = scannerClose_args()
+ args.read(iprot)
+ iprot.readMessageEnd()
+ result = scannerClose_result()
+ try:
+ self._handler.scannerClose(args.id)
+ except IOError, io:
+ result.io = io
+ except IllegalArgument, ia:
+ result.ia = ia
+ oprot.writeMessageBegin("scannerClose", TMessageType.REPLY, seqid)
+ result.write(oprot)
+ oprot.writeMessageEnd()
+ oprot.trans.flush()
+
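
For orientation only, a hedged sketch of how a generated Processor like the one above is conventionally attached to a handler and served with the stock Thrift Python server classes; HBase's actual gateway is the Java ThriftServer, so the stub handler and port below are purely hypothetical, and a real handler would have to implement every method of Iface.

from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
from thrift.server import TServer

class StubHandler(Iface):
    # Hypothetical handler: only getTableNames is stubbed here.
    def getTableNames(self):
        return []

processor = Processor(StubHandler())
server_transport = TSocket.TServerSocket(port=9090)
tfactory = TTransport.TBufferedTransportFactory()
pfactory = TBinaryProtocol.TBinaryProtocolFactory()

# TSimpleServer handles one connection at a time; adequate for a sketch.
TServer.TSimpleServer(processor, server_transport, tfactory, pfactory).serve()
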
+
+# HELPER FUNCTIONS AND STRUCTURES
+
+class getTableNames_args:
+
+ thrift_spec = (
+ )
+
+ def __init__(self, d=None):
+ pass
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('getTableNames_args')
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class getTableNames_result:
+
+ thrift_spec = (
+ (0, TType.LIST, 'success', (TType.STRING,None), None, ), # 0
+ (1, TType.STRUCT, 'io', (IOError, IOError.thrift_spec), None, ), # 1
+ )
+
+ def __init__(self, d=None):
+ self.success = None
+ self.io = None
+ if isinstance(d, dict):
+ if 'success' in d:
+ self.success = d['success']
+ if 'io' in d:
+ self.io = d['io']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 0:
+ if ftype == TType.LIST:
+ self.success = []
+ (_etype19, _size16) = iprot.readListBegin()
+ for _i20 in xrange(_size16):
+ _elem21 = iprot.readString();
+ self.success.append(_elem21)
+ iprot.readListEnd()
+ else:
+ iprot.skip(ftype)
+ elif fid == 1:
+ if ftype == TType.STRUCT:
+ self.io = IOError()
+ self.io.read(iprot)
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('getTableNames_result')
+ if self.success != None:
+ oprot.writeFieldBegin('success', TType.LIST, 0)
+ oprot.writeListBegin(TType.STRING, len(self.success))
+ for iter22 in self.success:
+ oprot.writeString(iter22)
+ oprot.writeListEnd()
+ oprot.writeFieldEnd()
+ if self.io != None:
+ oprot.writeFieldBegin('io', TType.STRUCT, 1)
+ self.io.write(oprot)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class getColumnDescriptors_args:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRING, 'tableName', None, None, ), # 1
+ )
+
+ def __init__(self, d=None):
+ self.tableName = None
+ if isinstance(d, dict):
+ if 'tableName' in d:
+ self.tableName = d['tableName']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRING:
+ self.tableName = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('getColumnDescriptors_args')
+ if self.tableName != None:
+ oprot.writeFieldBegin('tableName', TType.STRING, 1)
+ oprot.writeString(self.tableName)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class getColumnDescriptors_result:
+
+ thrift_spec = (
+ (0, TType.MAP, 'success', (TType.STRING,None,TType.STRUCT,(ColumnDescriptor, ColumnDescriptor.thrift_spec)), None, ), # 0
+ (1, TType.STRUCT, 'io', (IOError, IOError.thrift_spec), None, ), # 1
+ )
+
+ def __init__(self, d=None):
+ self.success = None
+ self.io = None
+ if isinstance(d, dict):
+ if 'success' in d:
+ self.success = d['success']
+ if 'io' in d:
+ self.io = d['io']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 0:
+ if ftype == TType.MAP:
+ self.success = {}
+ (_ktype24, _vtype25, _size23 ) = iprot.readMapBegin()
+ for _i27 in xrange(_size23):
+ _key28 = iprot.readString();
+ _val29 = ColumnDescriptor()
+ _val29.read(iprot)
+ self.success[_key28] = _val29
+ iprot.readMapEnd()
+ else:
+ iprot.skip(ftype)
+ elif fid == 1:
+ if ftype == TType.STRUCT:
+ self.io = IOError()
+ self.io.read(iprot)
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('getColumnDescriptors_result')
+ if self.success != None:
+ oprot.writeFieldBegin('success', TType.MAP, 0)
+ oprot.writeMapBegin(TType.STRING, TType.STRUCT, len(self.success))
+ for kiter30,viter31 in self.success.items():
+ oprot.writeString(kiter30)
+ viter31.write(oprot)
+ oprot.writeMapEnd()
+ oprot.writeFieldEnd()
+ if self.io != None:
+ oprot.writeFieldBegin('io', TType.STRUCT, 1)
+ self.io.write(oprot)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class getTableRegions_args:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRING, 'tableName', None, None, ), # 1
+ )
+
+ def __init__(self, d=None):
+ self.tableName = None
+ if isinstance(d, dict):
+ if 'tableName' in d:
+ self.tableName = d['tableName']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRING:
+ self.tableName = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('getTableRegions_args')
+ if self.tableName != None:
+ oprot.writeFieldBegin('tableName', TType.STRING, 1)
+ oprot.writeString(self.tableName)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class getTableRegions_result:
+
+ thrift_spec = (
+ (0, TType.LIST, 'success', (TType.STRUCT,(RegionDescriptor, RegionDescriptor.thrift_spec)), None, ), # 0
+ (1, TType.STRUCT, 'io', (IOError, IOError.thrift_spec), None, ), # 1
+ )
+
+ def __init__(self, d=None):
+ self.success = None
+ self.io = None
+ if isinstance(d, dict):
+ if 'success' in d:
+ self.success = d['success']
+ if 'io' in d:
+ self.io = d['io']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 0:
+ if ftype == TType.LIST:
+ self.success = []
+ (_etype35, _size32) = iprot.readListBegin()
+ for _i36 in xrange(_size32):
+ _elem37 = RegionDescriptor()
+ _elem37.read(iprot)
+ self.success.append(_elem37)
+ iprot.readListEnd()
+ else:
+ iprot.skip(ftype)
+ elif fid == 1:
+ if ftype == TType.STRUCT:
+ self.io = IOError()
+ self.io.read(iprot)
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('getTableRegions_result')
+ if self.success != None:
+ oprot.writeFieldBegin('success', TType.LIST, 0)
+ oprot.writeListBegin(TType.STRUCT, len(self.success))
+ for iter38 in self.success:
+ iter38.write(oprot)
+ oprot.writeListEnd()
+ oprot.writeFieldEnd()
+ if self.io != None:
+ oprot.writeFieldBegin('io', TType.STRUCT, 1)
+ self.io.write(oprot)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class createTable_args:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRING, 'tableName', None, None, ), # 1
+ (2, TType.LIST, 'columnFamilies', (TType.STRUCT,(ColumnDescriptor, ColumnDescriptor.thrift_spec)), None, ), # 2
+ )
+
+ def __init__(self, d=None):
+ self.tableName = None
+ self.columnFamilies = None
+ if isinstance(d, dict):
+ if 'tableName' in d:
+ self.tableName = d['tableName']
+ if 'columnFamilies' in d:
+ self.columnFamilies = d['columnFamilies']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRING:
+ self.tableName = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.LIST:
+ self.columnFamilies = []
+ (_etype42, _size39) = iprot.readListBegin()
+ for _i43 in xrange(_size39):
+ _elem44 = ColumnDescriptor()
+ _elem44.read(iprot)
+ self.columnFamilies.append(_elem44)
+ iprot.readListEnd()
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('createTable_args')
+ if self.tableName != None:
+ oprot.writeFieldBegin('tableName', TType.STRING, 1)
+ oprot.writeString(self.tableName)
+ oprot.writeFieldEnd()
+ if self.columnFamilies != None:
+ oprot.writeFieldBegin('columnFamilies', TType.LIST, 2)
+ oprot.writeListBegin(TType.STRUCT, len(self.columnFamilies))
+ for iter45 in self.columnFamilies:
+ iter45.write(oprot)
+ oprot.writeListEnd()
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class createTable_result:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRUCT, 'io', (IOError, IOError.thrift_spec), None, ), # 1
+ (2, TType.STRUCT, 'ia', (IllegalArgument, IllegalArgument.thrift_spec), None, ), # 2
+ (3, TType.STRUCT, 'exist', (AlreadyExists, AlreadyExists.thrift_spec), None, ), # 3
+ )
+
+ def __init__(self, d=None):
+ self.io = None
+ self.ia = None
+ self.exist = None
+ if isinstance(d, dict):
+ if 'io' in d:
+ self.io = d['io']
+ if 'ia' in d:
+ self.ia = d['ia']
+ if 'exist' in d:
+ self.exist = d['exist']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRUCT:
+ self.io = IOError()
+ self.io.read(iprot)
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.STRUCT:
+ self.ia = IllegalArgument()
+ self.ia.read(iprot)
+ else:
+ iprot.skip(ftype)
+ elif fid == 3:
+ if ftype == TType.STRUCT:
+ self.exist = AlreadyExists()
+ self.exist.read(iprot)
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('createTable_result')
+ if self.io != None:
+ oprot.writeFieldBegin('io', TType.STRUCT, 1)
+ self.io.write(oprot)
+ oprot.writeFieldEnd()
+ if self.ia != None:
+ oprot.writeFieldBegin('ia', TType.STRUCT, 2)
+ self.ia.write(oprot)
+ oprot.writeFieldEnd()
+ if self.exist != None:
+ oprot.writeFieldBegin('exist', TType.STRUCT, 3)
+ self.exist.write(oprot)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
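+# Usage sketch (illustrative only, not part of the generated interface): with a
+# connected Hbase.Client instance -- assumed here to be named `client` and built
+# over a buffered transport with TBinaryProtocol -- a table matching the
+# createTable_args/createTable_result pair above could be created via, e.g.:
+#
+#   families = [ColumnDescriptor({'name': 'entry:'})]
+#   client.createTable('demo-table', families)
+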
+class deleteTable_args:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRING, 'tableName', None, None, ), # 1
+ )
+
+ def __init__(self, d=None):
+ self.tableName = None
+ if isinstance(d, dict):
+ if 'tableName' in d:
+ self.tableName = d['tableName']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRING:
+ self.tableName = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('deleteTable_args')
+ if self.tableName != None:
+ oprot.writeFieldBegin('tableName', TType.STRING, 1)
+ oprot.writeString(self.tableName)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class deleteTable_result:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRUCT, 'io', (IOError, IOError.thrift_spec), None, ), # 1
+ (2, TType.STRUCT, 'nf', (NotFound, NotFound.thrift_spec), None, ), # 2
+ )
+
+ def __init__(self, d=None):
+ self.io = None
+ self.nf = None
+ if isinstance(d, dict):
+ if 'io' in d:
+ self.io = d['io']
+ if 'nf' in d:
+ self.nf = d['nf']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRUCT:
+ self.io = IOError()
+ self.io.read(iprot)
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.STRUCT:
+ self.nf = NotFound()
+ self.nf.read(iprot)
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('deleteTable_result')
+ if self.io != None:
+ oprot.writeFieldBegin('io', TType.STRUCT, 1)
+ self.io.write(oprot)
+ oprot.writeFieldEnd()
+ if self.nf != None:
+ oprot.writeFieldBegin('nf', TType.STRUCT, 2)
+ self.nf.write(oprot)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class get_args:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRING, 'tableName', None, None, ), # 1
+ (2, TType.STRING, 'row', None, None, ), # 2
+ (3, TType.STRING, 'column', None, None, ), # 3
+ )
+
+ def __init__(self, d=None):
+ self.tableName = None
+ self.row = None
+ self.column = None
+ if isinstance(d, dict):
+ if 'tableName' in d:
+ self.tableName = d['tableName']
+ if 'row' in d:
+ self.row = d['row']
+ if 'column' in d:
+ self.column = d['column']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRING:
+ self.tableName = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.STRING:
+ self.row = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 3:
+ if ftype == TType.STRING:
+ self.column = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('get_args')
+ if self.tableName != None:
+ oprot.writeFieldBegin('tableName', TType.STRING, 1)
+ oprot.writeString(self.tableName)
+ oprot.writeFieldEnd()
+ if self.row != None:
+ oprot.writeFieldBegin('row', TType.STRING, 2)
+ oprot.writeString(self.row)
+ oprot.writeFieldEnd()
+ if self.column != None:
+ oprot.writeFieldBegin('column', TType.STRING, 3)
+ oprot.writeString(self.column)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class get_result:
+
+ thrift_spec = (
+ (0, TType.STRING, 'success', None, None, ), # 0
+ (1, TType.STRUCT, 'io', (IOError, IOError.thrift_spec), None, ), # 1
+ (2, TType.STRUCT, 'nf', (NotFound, NotFound.thrift_spec), None, ), # 2
+ )
+
+ def __init__(self, d=None):
+ self.success = None
+ self.io = None
+ self.nf = None
+ if isinstance(d, dict):
+ if 'success' in d:
+ self.success = d['success']
+ if 'io' in d:
+ self.io = d['io']
+ if 'nf' in d:
+ self.nf = d['nf']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 0:
+ if ftype == TType.STRING:
+ self.success = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 1:
+ if ftype == TType.STRUCT:
+ self.io = IOError()
+ self.io.read(iprot)
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.STRUCT:
+ self.nf = NotFound()
+ self.nf.read(iprot)
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('get_result')
+ if self.success != None:
+ oprot.writeFieldBegin('success', TType.STRING, 0)
+ oprot.writeString(self.success)
+ oprot.writeFieldEnd()
+ if self.io != None:
+ oprot.writeFieldBegin('io', TType.STRUCT, 1)
+ self.io.write(oprot)
+ oprot.writeFieldEnd()
+ if self.nf != None:
+ oprot.writeFieldBegin('nf', TType.STRUCT, 2)
+ self.nf.write(oprot)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
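+# Usage sketch (illustrative only, not part of the generated interface): given
+# an assumed connected `client`, the get_args/get_result pair above maps to a
+# single-cell read whose success field is the latest stored value, e.g.:
+#
+#   value = client.get('demo-table', 'row1', 'entry:attr')
+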
+class getVer_args:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRING, 'tableName', None, None, ), # 1
+ (2, TType.STRING, 'row', None, None, ), # 2
+ (3, TType.STRING, 'column', None, None, ), # 3
+ (4, TType.I32, 'numVersions', None, None, ), # 4
+ )
+
+ def __init__(self, d=None):
+ self.tableName = None
+ self.row = None
+ self.column = None
+ self.numVersions = None
+ if isinstance(d, dict):
+ if 'tableName' in d:
+ self.tableName = d['tableName']
+ if 'row' in d:
+ self.row = d['row']
+ if 'column' in d:
+ self.column = d['column']
+ if 'numVersions' in d:
+ self.numVersions = d['numVersions']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRING:
+ self.tableName = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.STRING:
+ self.row = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 3:
+ if ftype == TType.STRING:
+ self.column = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 4:
+ if ftype == TType.I32:
+ self.numVersions = iprot.readI32();
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('getVer_args')
+ if self.tableName != None:
+ oprot.writeFieldBegin('tableName', TType.STRING, 1)
+ oprot.writeString(self.tableName)
+ oprot.writeFieldEnd()
+ if self.row != None:
+ oprot.writeFieldBegin('row', TType.STRING, 2)
+ oprot.writeString(self.row)
+ oprot.writeFieldEnd()
+ if self.column != None:
+ oprot.writeFieldBegin('column', TType.STRING, 3)
+ oprot.writeString(self.column)
+ oprot.writeFieldEnd()
+ if self.numVersions != None:
+ oprot.writeFieldBegin('numVersions', TType.I32, 4)
+ oprot.writeI32(self.numVersions)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class getVer_result:
+
+ thrift_spec = (
+ (0, TType.LIST, 'success', (TType.STRING,None), None, ), # 0
+ (1, TType.STRUCT, 'io', (IOError, IOError.thrift_spec), None, ), # 1
+ (2, TType.STRUCT, 'nf', (NotFound, NotFound.thrift_spec), None, ), # 2
+ )
+
+ def __init__(self, d=None):
+ self.success = None
+ self.io = None
+ self.nf = None
+ if isinstance(d, dict):
+ if 'success' in d:
+ self.success = d['success']
+ if 'io' in d:
+ self.io = d['io']
+ if 'nf' in d:
+ self.nf = d['nf']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 0:
+ if ftype == TType.LIST:
+ self.success = []
+ (_etype49, _size46) = iprot.readListBegin()
+ for _i50 in xrange(_size46):
+ _elem51 = iprot.readString();
+ self.success.append(_elem51)
+ iprot.readListEnd()
+ else:
+ iprot.skip(ftype)
+ elif fid == 1:
+ if ftype == TType.STRUCT:
+ self.io = IOError()
+ self.io.read(iprot)
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.STRUCT:
+ self.nf = NotFound()
+ self.nf.read(iprot)
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('getVer_result')
+ if self.success != None:
+ oprot.writeFieldBegin('success', TType.LIST, 0)
+ oprot.writeListBegin(TType.STRING, len(self.success))
+ for iter52 in self.success:
+ oprot.writeString(iter52)
+ oprot.writeListEnd()
+ oprot.writeFieldEnd()
+ if self.io != None:
+ oprot.writeFieldBegin('io', TType.STRUCT, 1)
+ self.io.write(oprot)
+ oprot.writeFieldEnd()
+ if self.nf != None:
+ oprot.writeFieldBegin('nf', TType.STRUCT, 2)
+ self.nf.write(oprot)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class getVerTs_args:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRING, 'tableName', None, None, ), # 1
+ (2, TType.STRING, 'row', None, None, ), # 2
+ (3, TType.STRING, 'column', None, None, ), # 3
+ (4, TType.I64, 'timestamp', None, None, ), # 4
+ (5, TType.I32, 'numVersions', None, None, ), # 5
+ )
+
+ def __init__(self, d=None):
+ self.tableName = None
+ self.row = None
+ self.column = None
+ self.timestamp = None
+ self.numVersions = None
+ if isinstance(d, dict):
+ if 'tableName' in d:
+ self.tableName = d['tableName']
+ if 'row' in d:
+ self.row = d['row']
+ if 'column' in d:
+ self.column = d['column']
+ if 'timestamp' in d:
+ self.timestamp = d['timestamp']
+ if 'numVersions' in d:
+ self.numVersions = d['numVersions']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRING:
+ self.tableName = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.STRING:
+ self.row = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 3:
+ if ftype == TType.STRING:
+ self.column = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 4:
+ if ftype == TType.I64:
+ self.timestamp = iprot.readI64();
+ else:
+ iprot.skip(ftype)
+ elif fid == 5:
+ if ftype == TType.I32:
+ self.numVersions = iprot.readI32();
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('getVerTs_args')
+ if self.tableName != None:
+ oprot.writeFieldBegin('tableName', TType.STRING, 1)
+ oprot.writeString(self.tableName)
+ oprot.writeFieldEnd()
+ if self.row != None:
+ oprot.writeFieldBegin('row', TType.STRING, 2)
+ oprot.writeString(self.row)
+ oprot.writeFieldEnd()
+ if self.column != None:
+ oprot.writeFieldBegin('column', TType.STRING, 3)
+ oprot.writeString(self.column)
+ oprot.writeFieldEnd()
+ if self.timestamp != None:
+ oprot.writeFieldBegin('timestamp', TType.I64, 4)
+ oprot.writeI64(self.timestamp)
+ oprot.writeFieldEnd()
+ if self.numVersions != None:
+ oprot.writeFieldBegin('numVersions', TType.I32, 5)
+ oprot.writeI32(self.numVersions)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class getVerTs_result:
+
+ thrift_spec = (
+ (0, TType.LIST, 'success', (TType.STRING,None), None, ), # 0
+ (1, TType.STRUCT, 'io', (IOError, IOError.thrift_spec), None, ), # 1
+ (2, TType.STRUCT, 'nf', (NotFound, NotFound.thrift_spec), None, ), # 2
+ )
+
+ def __init__(self, d=None):
+ self.success = None
+ self.io = None
+ self.nf = None
+ if isinstance(d, dict):
+ if 'success' in d:
+ self.success = d['success']
+ if 'io' in d:
+ self.io = d['io']
+ if 'nf' in d:
+ self.nf = d['nf']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 0:
+ if ftype == TType.LIST:
+ self.success = []
+ (_etype56, _size53) = iprot.readListBegin()
+ for _i57 in xrange(_size53):
+ _elem58 = iprot.readString();
+ self.success.append(_elem58)
+ iprot.readListEnd()
+ else:
+ iprot.skip(ftype)
+ elif fid == 1:
+ if ftype == TType.STRUCT:
+ self.io = IOError()
+ self.io.read(iprot)
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.STRUCT:
+ self.nf = NotFound()
+ self.nf.read(iprot)
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('getVerTs_result')
+ if self.success != None:
+ oprot.writeFieldBegin('success', TType.LIST, 0)
+ oprot.writeListBegin(TType.STRING, len(self.success))
+ for iter59 in self.success:
+ oprot.writeString(iter59)
+ oprot.writeListEnd()
+ oprot.writeFieldEnd()
+ if self.io != None:
+ oprot.writeFieldBegin('io', TType.STRUCT, 1)
+ self.io.write(oprot)
+ oprot.writeFieldEnd()
+ if self.nf != None:
+ oprot.writeFieldBegin('nf', TType.STRUCT, 2)
+ self.nf.write(oprot)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class getRow_args:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRING, 'tableName', None, None, ), # 1
+ (2, TType.STRING, 'row', None, None, ), # 2
+ )
+
+ def __init__(self, d=None):
+ self.tableName = None
+ self.row = None
+ if isinstance(d, dict):
+ if 'tableName' in d:
+ self.tableName = d['tableName']
+ if 'row' in d:
+ self.row = d['row']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRING:
+ self.tableName = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.STRING:
+ self.row = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('getRow_args')
+ if self.tableName != None:
+ oprot.writeFieldBegin('tableName', TType.STRING, 1)
+ oprot.writeString(self.tableName)
+ oprot.writeFieldEnd()
+ if self.row != None:
+ oprot.writeFieldBegin('row', TType.STRING, 2)
+ oprot.writeString(self.row)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class getRow_result:
+
+ thrift_spec = (
+ (0, TType.MAP, 'success', (TType.STRING,None,TType.STRING,None), None, ), # 0
+ (1, TType.STRUCT, 'io', (IOError, IOError.thrift_spec), None, ), # 1
+ )
+
+ def __init__(self, d=None):
+ self.success = None
+ self.io = None
+ if isinstance(d, dict):
+ if 'success' in d:
+ self.success = d['success']
+ if 'io' in d:
+ self.io = d['io']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 0:
+ if ftype == TType.MAP:
+ self.success = {}
+ (_ktype61, _vtype62, _size60 ) = iprot.readMapBegin()
+ for _i64 in xrange(_size60):
+ _key65 = iprot.readString();
+ _val66 = iprot.readString();
+ self.success[_key65] = _val66
+ iprot.readMapEnd()
+ else:
+ iprot.skip(ftype)
+ elif fid == 1:
+ if ftype == TType.STRUCT:
+ self.io = IOError()
+ self.io.read(iprot)
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('getRow_result')
+ if self.success != None:
+ oprot.writeFieldBegin('success', TType.MAP, 0)
+ oprot.writeMapBegin(TType.STRING, TType.STRING, len(self.success))
+ for kiter67,viter68 in self.success.items():
+ oprot.writeString(kiter67)
+ oprot.writeString(viter68)
+ oprot.writeMapEnd()
+ oprot.writeFieldEnd()
+ if self.io != None:
+ oprot.writeFieldBegin('io', TType.STRUCT, 1)
+ self.io.write(oprot)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
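+# Usage sketch (illustrative only, not part of the generated interface): per the
+# MAP success field above, getRow returns the whole row as a column-name ->
+# value dict; with an assumed connected `client`:
+#
+#   row = client.getRow('demo-table', 'row1')
+#   for column, value in row.items():
+#     print column, value
+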
+class getRowTs_args:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRING, 'tableName', None, None, ), # 1
+ (2, TType.STRING, 'row', None, None, ), # 2
+ (3, TType.I64, 'timestamp', None, None, ), # 3
+ )
+
+ def __init__(self, d=None):
+ self.tableName = None
+ self.row = None
+ self.timestamp = None
+ if isinstance(d, dict):
+ if 'tableName' in d:
+ self.tableName = d['tableName']
+ if 'row' in d:
+ self.row = d['row']
+ if 'timestamp' in d:
+ self.timestamp = d['timestamp']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRING:
+ self.tableName = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.STRING:
+ self.row = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 3:
+ if ftype == TType.I64:
+ self.timestamp = iprot.readI64();
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('getRowTs_args')
+ if self.tableName != None:
+ oprot.writeFieldBegin('tableName', TType.STRING, 1)
+ oprot.writeString(self.tableName)
+ oprot.writeFieldEnd()
+ if self.row != None:
+ oprot.writeFieldBegin('row', TType.STRING, 2)
+ oprot.writeString(self.row)
+ oprot.writeFieldEnd()
+ if self.timestamp != None:
+ oprot.writeFieldBegin('timestamp', TType.I64, 3)
+ oprot.writeI64(self.timestamp)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class getRowTs_result:
+
+ thrift_spec = (
+ (0, TType.MAP, 'success', (TType.STRING,None,TType.STRING,None), None, ), # 0
+ (1, TType.STRUCT, 'io', (IOError, IOError.thrift_spec), None, ), # 1
+ )
+
+ def __init__(self, d=None):
+ self.success = None
+ self.io = None
+ if isinstance(d, dict):
+ if 'success' in d:
+ self.success = d['success']
+ if 'io' in d:
+ self.io = d['io']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 0:
+ if ftype == TType.MAP:
+ self.success = {}
+ (_ktype70, _vtype71, _size69 ) = iprot.readMapBegin()
+ for _i73 in xrange(_size69):
+ _key74 = iprot.readString();
+ _val75 = iprot.readString();
+ self.success[_key74] = _val75
+ iprot.readMapEnd()
+ else:
+ iprot.skip(ftype)
+ elif fid == 1:
+ if ftype == TType.STRUCT:
+ self.io = IOError()
+ self.io.read(iprot)
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('getRowTs_result')
+ if self.success != None:
+ oprot.writeFieldBegin('success', TType.MAP, 0)
+ oprot.writeMapBegin(TType.STRING, TType.STRING, len(self.success))
+ for kiter76,viter77 in self.success.items():
+ oprot.writeString(kiter76)
+ oprot.writeString(viter77)
+ oprot.writeMapEnd()
+ oprot.writeFieldEnd()
+ if self.io != None:
+ oprot.writeFieldBegin('io', TType.STRUCT, 1)
+ self.io.write(oprot)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class put_args:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRING, 'tableName', None, None, ), # 1
+ (2, TType.STRING, 'row', None, None, ), # 2
+ (3, TType.STRING, 'column', None, None, ), # 3
+ (4, TType.STRING, 'value', None, None, ), # 4
+ )
+
+ def __init__(self, d=None):
+ self.tableName = None
+ self.row = None
+ self.column = None
+ self.value = None
+ if isinstance(d, dict):
+ if 'tableName' in d:
+ self.tableName = d['tableName']
+ if 'row' in d:
+ self.row = d['row']
+ if 'column' in d:
+ self.column = d['column']
+ if 'value' in d:
+ self.value = d['value']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRING:
+ self.tableName = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.STRING:
+ self.row = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 3:
+ if ftype == TType.STRING:
+ self.column = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 4:
+ if ftype == TType.STRING:
+ self.value = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('put_args')
+ if self.tableName != None:
+ oprot.writeFieldBegin('tableName', TType.STRING, 1)
+ oprot.writeString(self.tableName)
+ oprot.writeFieldEnd()
+ if self.row != None:
+ oprot.writeFieldBegin('row', TType.STRING, 2)
+ oprot.writeString(self.row)
+ oprot.writeFieldEnd()
+ if self.column != None:
+ oprot.writeFieldBegin('column', TType.STRING, 3)
+ oprot.writeString(self.column)
+ oprot.writeFieldEnd()
+ if self.value != None:
+ oprot.writeFieldBegin('value', TType.STRING, 4)
+ oprot.writeString(self.value)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class put_result:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRUCT, 'io', (IOError, IOError.thrift_spec), None, ), # 1
+ (2, TType.STRUCT, 'ia', (IllegalArgument, IllegalArgument.thrift_spec), None, ), # 2
+ )
+
+ def __init__(self, d=None):
+ self.io = None
+ self.ia = None
+ if isinstance(d, dict):
+ if 'io' in d:
+ self.io = d['io']
+ if 'ia' in d:
+ self.ia = d['ia']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRUCT:
+ self.io = IOError()
+ self.io.read(iprot)
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.STRUCT:
+ self.ia = IllegalArgument()
+ self.ia.read(iprot)
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('put_result')
+ if self.io != None:
+ oprot.writeFieldBegin('io', TType.STRUCT, 1)
+ self.io.write(oprot)
+ oprot.writeFieldEnd()
+ if self.ia != None:
+ oprot.writeFieldBegin('ia', TType.STRUCT, 2)
+ self.ia.write(oprot)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class mutateRow_args:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRING, 'tableName', None, None, ), # 1
+ (2, TType.STRING, 'row', None, None, ), # 2
+ (3, TType.LIST, 'mutations', (TType.STRUCT,(Mutation, Mutation.thrift_spec)), None, ), # 3
+ )
+
+ def __init__(self, d=None):
+ self.tableName = None
+ self.row = None
+ self.mutations = None
+ if isinstance(d, dict):
+ if 'tableName' in d:
+ self.tableName = d['tableName']
+ if 'row' in d:
+ self.row = d['row']
+ if 'mutations' in d:
+ self.mutations = d['mutations']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRING:
+ self.tableName = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.STRING:
+ self.row = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 3:
+ if ftype == TType.LIST:
+ self.mutations = []
+ (_etype81, _size78) = iprot.readListBegin()
+ for _i82 in xrange(_size78):
+ _elem83 = Mutation()
+ _elem83.read(iprot)
+ self.mutations.append(_elem83)
+ iprot.readListEnd()
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('mutateRow_args')
+ if self.tableName != None:
+ oprot.writeFieldBegin('tableName', TType.STRING, 1)
+ oprot.writeString(self.tableName)
+ oprot.writeFieldEnd()
+ if self.row != None:
+ oprot.writeFieldBegin('row', TType.STRING, 2)
+ oprot.writeString(self.row)
+ oprot.writeFieldEnd()
+ if self.mutations != None:
+ oprot.writeFieldBegin('mutations', TType.LIST, 3)
+ oprot.writeListBegin(TType.STRUCT, len(self.mutations))
+ for iter84 in self.mutations:
+ iter84.write(oprot)
+ oprot.writeListEnd()
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class mutateRow_result:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRUCT, 'io', (IOError, IOError.thrift_spec), None, ), # 1
+ (2, TType.STRUCT, 'ia', (IllegalArgument, IllegalArgument.thrift_spec), None, ), # 2
+ )
+
+ def __init__(self, d=None):
+ self.io = None
+ self.ia = None
+ if isinstance(d, dict):
+ if 'io' in d:
+ self.io = d['io']
+ if 'ia' in d:
+ self.ia = d['ia']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRUCT:
+ self.io = IOError()
+ self.io.read(iprot)
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.STRUCT:
+ self.ia = IllegalArgument()
+ self.ia.read(iprot)
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('mutateRow_result')
+ if self.io != None:
+ oprot.writeFieldBegin('io', TType.STRUCT, 1)
+ self.io.write(oprot)
+ oprot.writeFieldEnd()
+ if self.ia != None:
+ oprot.writeFieldBegin('ia', TType.STRUCT, 2)
+ self.ia.write(oprot)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
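+# Usage sketch (illustrative only, not part of the generated interface): a row
+# is updated by sending a list of Mutation structs, matching the mutations LIST
+# field above; with an assumed connected `client`:
+#
+#   mutations = [Mutation({'column': 'entry:attr', 'value': 'hello'})]
+#   client.mutateRow('demo-table', 'row1', mutations)
+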
+class mutateRowTs_args:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRING, 'tableName', None, None, ), # 1
+ (2, TType.STRING, 'row', None, None, ), # 2
+ (3, TType.LIST, 'mutations', (TType.STRUCT,(Mutation, Mutation.thrift_spec)), None, ), # 3
+ (4, TType.I64, 'timestamp', None, None, ), # 4
+ )
+
+ def __init__(self, d=None):
+ self.tableName = None
+ self.row = None
+ self.mutations = None
+ self.timestamp = None
+ if isinstance(d, dict):
+ if 'tableName' in d:
+ self.tableName = d['tableName']
+ if 'row' in d:
+ self.row = d['row']
+ if 'mutations' in d:
+ self.mutations = d['mutations']
+ if 'timestamp' in d:
+ self.timestamp = d['timestamp']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRING:
+ self.tableName = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.STRING:
+ self.row = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 3:
+ if ftype == TType.LIST:
+ self.mutations = []
+ (_etype88, _size85) = iprot.readListBegin()
+ for _i89 in xrange(_size85):
+ _elem90 = Mutation()
+ _elem90.read(iprot)
+ self.mutations.append(_elem90)
+ iprot.readListEnd()
+ else:
+ iprot.skip(ftype)
+ elif fid == 4:
+ if ftype == TType.I64:
+ self.timestamp = iprot.readI64();
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('mutateRowTs_args')
+ if self.tableName != None:
+ oprot.writeFieldBegin('tableName', TType.STRING, 1)
+ oprot.writeString(self.tableName)
+ oprot.writeFieldEnd()
+ if self.row != None:
+ oprot.writeFieldBegin('row', TType.STRING, 2)
+ oprot.writeString(self.row)
+ oprot.writeFieldEnd()
+ if self.mutations != None:
+ oprot.writeFieldBegin('mutations', TType.LIST, 3)
+ oprot.writeListBegin(TType.STRUCT, len(self.mutations))
+ for iter91 in self.mutations:
+ iter91.write(oprot)
+ oprot.writeListEnd()
+ oprot.writeFieldEnd()
+ if self.timestamp != None:
+ oprot.writeFieldBegin('timestamp', TType.I64, 4)
+ oprot.writeI64(self.timestamp)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class mutateRowTs_result:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRUCT, 'io', (IOError, IOError.thrift_spec), None, ), # 1
+ (2, TType.STRUCT, 'ia', (IllegalArgument, IllegalArgument.thrift_spec), None, ), # 2
+ )
+
+ def __init__(self, d=None):
+ self.io = None
+ self.ia = None
+ if isinstance(d, dict):
+ if 'io' in d:
+ self.io = d['io']
+ if 'ia' in d:
+ self.ia = d['ia']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRUCT:
+ self.io = IOError()
+ self.io.read(iprot)
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.STRUCT:
+ self.ia = IllegalArgument()
+ self.ia.read(iprot)
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('mutateRowTs_result')
+ if self.io != None:
+ oprot.writeFieldBegin('io', TType.STRUCT, 1)
+ self.io.write(oprot)
+ oprot.writeFieldEnd()
+ if self.ia != None:
+ oprot.writeFieldBegin('ia', TType.STRUCT, 2)
+ self.ia.write(oprot)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class mutateRows_args:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRING, 'tableName', None, None, ), # 1
+ (2, TType.LIST, 'rowBatches', (TType.STRUCT,(BatchMutation, BatchMutation.thrift_spec)), None, ), # 2
+ )
+
+ def __init__(self, d=None):
+ self.tableName = None
+ self.rowBatches = None
+ if isinstance(d, dict):
+ if 'tableName' in d:
+ self.tableName = d['tableName']
+ if 'rowBatches' in d:
+ self.rowBatches = d['rowBatches']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRING:
+ self.tableName = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.LIST:
+ self.rowBatches = []
+ (_etype95, _size92) = iprot.readListBegin()
+ for _i96 in xrange(_size92):
+ _elem97 = BatchMutation()
+ _elem97.read(iprot)
+ self.rowBatches.append(_elem97)
+ iprot.readListEnd()
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('mutateRows_args')
+ if self.tableName != None:
+ oprot.writeFieldBegin('tableName', TType.STRING, 1)
+ oprot.writeString(self.tableName)
+ oprot.writeFieldEnd()
+ if self.rowBatches != None:
+ oprot.writeFieldBegin('rowBatches', TType.LIST, 2)
+ oprot.writeListBegin(TType.STRUCT, len(self.rowBatches))
+ for iter98 in self.rowBatches:
+ iter98.write(oprot)
+ oprot.writeListEnd()
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class mutateRows_result:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRUCT, 'io', (IOError, IOError.thrift_spec), None, ), # 1
+ (2, TType.STRUCT, 'ia', (IllegalArgument, IllegalArgument.thrift_spec), None, ), # 2
+ )
+
+ def __init__(self, d=None):
+ self.io = None
+ self.ia = None
+ if isinstance(d, dict):
+ if 'io' in d:
+ self.io = d['io']
+ if 'ia' in d:
+ self.ia = d['ia']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRUCT:
+ self.io = IOError()
+ self.io.read(iprot)
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.STRUCT:
+ self.ia = IllegalArgument()
+ self.ia.read(iprot)
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('mutateRows_result')
+ if self.io != None:
+ oprot.writeFieldBegin('io', TType.STRUCT, 1)
+ self.io.write(oprot)
+ oprot.writeFieldEnd()
+ if self.ia != None:
+ oprot.writeFieldBegin('ia', TType.STRUCT, 2)
+ self.ia.write(oprot)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class mutateRowsTs_args:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRING, 'tableName', None, None, ), # 1
+ (2, TType.LIST, 'rowBatches', (TType.STRUCT,(BatchMutation, BatchMutation.thrift_spec)), None, ), # 2
+ (3, TType.I64, 'timestamp', None, None, ), # 3
+ )
+
+ def __init__(self, d=None):
+ self.tableName = None
+ self.rowBatches = None
+ self.timestamp = None
+ if isinstance(d, dict):
+ if 'tableName' in d:
+ self.tableName = d['tableName']
+ if 'rowBatches' in d:
+ self.rowBatches = d['rowBatches']
+ if 'timestamp' in d:
+ self.timestamp = d['timestamp']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRING:
+ self.tableName = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.LIST:
+ self.rowBatches = []
+ (_etype102, _size99) = iprot.readListBegin()
+ for _i103 in xrange(_size99):
+ _elem104 = BatchMutation()
+ _elem104.read(iprot)
+ self.rowBatches.append(_elem104)
+ iprot.readListEnd()
+ else:
+ iprot.skip(ftype)
+ elif fid == 3:
+ if ftype == TType.I64:
+ self.timestamp = iprot.readI64();
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('mutateRowsTs_args')
+ if self.tableName != None:
+ oprot.writeFieldBegin('tableName', TType.STRING, 1)
+ oprot.writeString(self.tableName)
+ oprot.writeFieldEnd()
+ if self.rowBatches != None:
+ oprot.writeFieldBegin('rowBatches', TType.LIST, 2)
+ oprot.writeListBegin(TType.STRUCT, len(self.rowBatches))
+ for iter105 in self.rowBatches:
+ iter105.write(oprot)
+ oprot.writeListEnd()
+ oprot.writeFieldEnd()
+ if self.timestamp != None:
+ oprot.writeFieldBegin('timestamp', TType.I64, 3)
+ oprot.writeI64(self.timestamp)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class mutateRowsTs_result:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRUCT, 'io', (IOError, IOError.thrift_spec), None, ), # 1
+ (2, TType.STRUCT, 'ia', (IllegalArgument, IllegalArgument.thrift_spec), None, ), # 2
+ )
+
+ def __init__(self, d=None):
+ self.io = None
+ self.ia = None
+ if isinstance(d, dict):
+ if 'io' in d:
+ self.io = d['io']
+ if 'ia' in d:
+ self.ia = d['ia']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRUCT:
+ self.io = IOError()
+ self.io.read(iprot)
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.STRUCT:
+ self.ia = IllegalArgument()
+ self.ia.read(iprot)
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('mutateRowsTs_result')
+ if self.io != None:
+ oprot.writeFieldBegin('io', TType.STRUCT, 1)
+ self.io.write(oprot)
+ oprot.writeFieldEnd()
+ if self.ia != None:
+ oprot.writeFieldBegin('ia', TType.STRUCT, 2)
+ self.ia.write(oprot)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class deleteAll_args:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRING, 'tableName', None, None, ), # 1
+ (2, TType.STRING, 'row', None, None, ), # 2
+ (3, TType.STRING, 'column', None, None, ), # 3
+ )
+
+ def __init__(self, d=None):
+ self.tableName = None
+ self.row = None
+ self.column = None
+ if isinstance(d, dict):
+ if 'tableName' in d:
+ self.tableName = d['tableName']
+ if 'row' in d:
+ self.row = d['row']
+ if 'column' in d:
+ self.column = d['column']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRING:
+ self.tableName = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.STRING:
+ self.row = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 3:
+ if ftype == TType.STRING:
+ self.column = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('deleteAll_args')
+ if self.tableName != None:
+ oprot.writeFieldBegin('tableName', TType.STRING, 1)
+ oprot.writeString(self.tableName)
+ oprot.writeFieldEnd()
+ if self.row != None:
+ oprot.writeFieldBegin('row', TType.STRING, 2)
+ oprot.writeString(self.row)
+ oprot.writeFieldEnd()
+ if self.column != None:
+ oprot.writeFieldBegin('column', TType.STRING, 3)
+ oprot.writeString(self.column)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class deleteAll_result:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRUCT, 'io', (IOError, IOError.thrift_spec), None, ), # 1
+ )
+
+ def __init__(self, d=None):
+ self.io = None
+ if isinstance(d, dict):
+ if 'io' in d:
+ self.io = d['io']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRUCT:
+ self.io = IOError()
+ self.io.read(iprot)
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('deleteAll_result')
+ if self.io != None:
+ oprot.writeFieldBegin('io', TType.STRUCT, 1)
+ self.io.write(oprot)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
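+# Usage sketch (illustrative only, not part of the generated interface):
+# deleteAll removes every stored version of one cell; with an assumed connected
+# `client`:
+#
+#   client.deleteAll('demo-table', 'row1', 'entry:attr')
+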
+class deleteAllTs_args:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRING, 'tableName', None, None, ), # 1
+ (2, TType.STRING, 'row', None, None, ), # 2
+ (3, TType.STRING, 'column', None, None, ), # 3
+ (4, TType.I64, 'timestamp', None, None, ), # 4
+ )
+
+ def __init__(self, d=None):
+ self.tableName = None
+ self.row = None
+ self.column = None
+ self.timestamp = None
+ if isinstance(d, dict):
+ if 'tableName' in d:
+ self.tableName = d['tableName']
+ if 'row' in d:
+ self.row = d['row']
+ if 'column' in d:
+ self.column = d['column']
+ if 'timestamp' in d:
+ self.timestamp = d['timestamp']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRING:
+ self.tableName = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.STRING:
+ self.row = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 3:
+ if ftype == TType.STRING:
+ self.column = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 4:
+ if ftype == TType.I64:
+ self.timestamp = iprot.readI64();
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('deleteAllTs_args')
+ if self.tableName != None:
+ oprot.writeFieldBegin('tableName', TType.STRING, 1)
+ oprot.writeString(self.tableName)
+ oprot.writeFieldEnd()
+ if self.row != None:
+ oprot.writeFieldBegin('row', TType.STRING, 2)
+ oprot.writeString(self.row)
+ oprot.writeFieldEnd()
+ if self.column != None:
+ oprot.writeFieldBegin('column', TType.STRING, 3)
+ oprot.writeString(self.column)
+ oprot.writeFieldEnd()
+ if self.timestamp != None:
+ oprot.writeFieldBegin('timestamp', TType.I64, 4)
+ oprot.writeI64(self.timestamp)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class deleteAllTs_result:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRUCT, 'io', (IOError, IOError.thrift_spec), None, ), # 1
+ )
+
+ def __init__(self, d=None):
+ self.io = None
+ if isinstance(d, dict):
+ if 'io' in d:
+ self.io = d['io']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRUCT:
+ self.io = IOError()
+ self.io.read(iprot)
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('deleteAllTs_result')
+ if self.io != None:
+ oprot.writeFieldBegin('io', TType.STRUCT, 1)
+ self.io.write(oprot)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class deleteAllRow_args:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRING, 'tableName', None, None, ), # 1
+ (2, TType.STRING, 'row', None, None, ), # 2
+ )
+
+ def __init__(self, d=None):
+ self.tableName = None
+ self.row = None
+ if isinstance(d, dict):
+ if 'tableName' in d:
+ self.tableName = d['tableName']
+ if 'row' in d:
+ self.row = d['row']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRING:
+ self.tableName = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.STRING:
+ self.row = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('deleteAllRow_args')
+ if self.tableName != None:
+ oprot.writeFieldBegin('tableName', TType.STRING, 1)
+ oprot.writeString(self.tableName)
+ oprot.writeFieldEnd()
+ if self.row != None:
+ oprot.writeFieldBegin('row', TType.STRING, 2)
+ oprot.writeString(self.row)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class deleteAllRow_result:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRUCT, 'io', (IOError, IOError.thrift_spec), None, ), # 1
+ )
+
+ def __init__(self, d=None):
+ self.io = None
+ if isinstance(d, dict):
+ if 'io' in d:
+ self.io = d['io']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRUCT:
+ self.io = IOError()
+ self.io.read(iprot)
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('deleteAllRow_result')
+ if self.io != None:
+ oprot.writeFieldBegin('io', TType.STRUCT, 1)
+ self.io.write(oprot)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class deleteAllRowTs_args:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRING, 'tableName', None, None, ), # 1
+ (2, TType.STRING, 'row', None, None, ), # 2
+ (3, TType.I64, 'timestamp', None, None, ), # 3
+ )
+
+ def __init__(self, d=None):
+ self.tableName = None
+ self.row = None
+ self.timestamp = None
+ if isinstance(d, dict):
+ if 'tableName' in d:
+ self.tableName = d['tableName']
+ if 'row' in d:
+ self.row = d['row']
+ if 'timestamp' in d:
+ self.timestamp = d['timestamp']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRING:
+ self.tableName = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.STRING:
+ self.row = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 3:
+ if ftype == TType.I64:
+ self.timestamp = iprot.readI64();
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('deleteAllRowTs_args')
+ if self.tableName != None:
+ oprot.writeFieldBegin('tableName', TType.STRING, 1)
+ oprot.writeString(self.tableName)
+ oprot.writeFieldEnd()
+ if self.row != None:
+ oprot.writeFieldBegin('row', TType.STRING, 2)
+ oprot.writeString(self.row)
+ oprot.writeFieldEnd()
+ if self.timestamp != None:
+ oprot.writeFieldBegin('timestamp', TType.I64, 3)
+ oprot.writeI64(self.timestamp)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class deleteAllRowTs_result:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRUCT, 'io', (IOError, IOError.thrift_spec), None, ), # 1
+ )
+
+ def __init__(self, d=None):
+ self.io = None
+ if isinstance(d, dict):
+ if 'io' in d:
+ self.io = d['io']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRUCT:
+ self.io = IOError()
+ self.io.read(iprot)
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('deleteAllRowTs_result')
+ if self.io != None:
+ oprot.writeFieldBegin('io', TType.STRUCT, 1)
+ self.io.write(oprot)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class scannerOpen_args:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRING, 'tableName', None, None, ), # 1
+ (2, TType.STRING, 'startRow', None, None, ), # 2
+ (3, TType.LIST, 'columns', (TType.STRING,None), None, ), # 3
+ )
+
+ def __init__(self, d=None):
+ self.tableName = None
+ self.startRow = None
+ self.columns = None
+ if isinstance(d, dict):
+ if 'tableName' in d:
+ self.tableName = d['tableName']
+ if 'startRow' in d:
+ self.startRow = d['startRow']
+ if 'columns' in d:
+ self.columns = d['columns']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRING:
+ self.tableName = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.STRING:
+ self.startRow = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 3:
+ if ftype == TType.LIST:
+ self.columns = []
+ (_etype109, _size106) = iprot.readListBegin()
+ for _i110 in xrange(_size106):
+ _elem111 = iprot.readString();
+ self.columns.append(_elem111)
+ iprot.readListEnd()
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('scannerOpen_args')
+ if self.tableName != None:
+ oprot.writeFieldBegin('tableName', TType.STRING, 1)
+ oprot.writeString(self.tableName)
+ oprot.writeFieldEnd()
+ if self.startRow != None:
+ oprot.writeFieldBegin('startRow', TType.STRING, 2)
+ oprot.writeString(self.startRow)
+ oprot.writeFieldEnd()
+ if self.columns != None:
+ oprot.writeFieldBegin('columns', TType.LIST, 3)
+ oprot.writeListBegin(TType.STRING, len(self.columns))
+ for iter112 in self.columns:
+ oprot.writeString(iter112)
+ oprot.writeListEnd()
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class scannerOpen_result:
+
+ thrift_spec = (
+ (0, TType.I32, 'success', None, None, ), # 0
+ (1, TType.STRUCT, 'io', (IOError, IOError.thrift_spec), None, ), # 1
+ )
+
+ def __init__(self, d=None):
+ self.success = None
+ self.io = None
+ if isinstance(d, dict):
+ if 'success' in d:
+ self.success = d['success']
+ if 'io' in d:
+ self.io = d['io']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 0:
+ if ftype == TType.I32:
+ self.success = iprot.readI32();
+ else:
+ iprot.skip(ftype)
+ elif fid == 1:
+ if ftype == TType.STRUCT:
+ self.io = IOError()
+ self.io.read(iprot)
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('scannerOpen_result')
+ if self.success != None:
+ oprot.writeFieldBegin('success', TType.I32, 0)
+ oprot.writeI32(self.success)
+ oprot.writeFieldEnd()
+ if self.io != None:
+ oprot.writeFieldBegin('io', TType.STRUCT, 1)
+ self.io.write(oprot)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class scannerOpenWithStop_args:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRING, 'tableName', None, None, ), # 1
+ (2, TType.STRING, 'startRow', None, None, ), # 2
+ (3, TType.STRING, 'stopRow', None, None, ), # 3
+ (4, TType.LIST, 'columns', (TType.STRING,None), None, ), # 4
+ )
+
+ def __init__(self, d=None):
+ self.tableName = None
+ self.startRow = None
+ self.stopRow = None
+ self.columns = None
+ if isinstance(d, dict):
+ if 'tableName' in d:
+ self.tableName = d['tableName']
+ if 'startRow' in d:
+ self.startRow = d['startRow']
+ if 'stopRow' in d:
+ self.stopRow = d['stopRow']
+ if 'columns' in d:
+ self.columns = d['columns']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRING:
+ self.tableName = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.STRING:
+ self.startRow = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 3:
+ if ftype == TType.STRING:
+ self.stopRow = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 4:
+ if ftype == TType.LIST:
+ self.columns = []
+ (_etype116, _size113) = iprot.readListBegin()
+ for _i117 in xrange(_size113):
+ _elem118 = iprot.readString();
+ self.columns.append(_elem118)
+ iprot.readListEnd()
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('scannerOpenWithStop_args')
+ if self.tableName != None:
+ oprot.writeFieldBegin('tableName', TType.STRING, 1)
+ oprot.writeString(self.tableName)
+ oprot.writeFieldEnd()
+ if self.startRow != None:
+ oprot.writeFieldBegin('startRow', TType.STRING, 2)
+ oprot.writeString(self.startRow)
+ oprot.writeFieldEnd()
+ if self.stopRow != None:
+ oprot.writeFieldBegin('stopRow', TType.STRING, 3)
+ oprot.writeString(self.stopRow)
+ oprot.writeFieldEnd()
+ if self.columns != None:
+ oprot.writeFieldBegin('columns', TType.LIST, 4)
+ oprot.writeListBegin(TType.STRING, len(self.columns))
+ for iter119 in self.columns:
+ oprot.writeString(iter119)
+ oprot.writeListEnd()
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class scannerOpenWithStop_result:
+
+ thrift_spec = (
+ (0, TType.I32, 'success', None, None, ), # 0
+ (1, TType.STRUCT, 'io', (IOError, IOError.thrift_spec), None, ), # 1
+ )
+
+ def __init__(self, d=None):
+ self.success = None
+ self.io = None
+ if isinstance(d, dict):
+ if 'success' in d:
+ self.success = d['success']
+ if 'io' in d:
+ self.io = d['io']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 0:
+ if ftype == TType.I32:
+ self.success = iprot.readI32();
+ else:
+ iprot.skip(ftype)
+ elif fid == 1:
+ if ftype == TType.STRUCT:
+ self.io = IOError()
+ self.io.read(iprot)
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('scannerOpenWithStop_result')
+ if self.success != None:
+ oprot.writeFieldBegin('success', TType.I32, 0)
+ oprot.writeI32(self.success)
+ oprot.writeFieldEnd()
+ if self.io != None:
+ oprot.writeFieldBegin('io', TType.STRUCT, 1)
+ self.io.write(oprot)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class scannerOpenTs_args:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRING, 'tableName', None, None, ), # 1
+ (2, TType.STRING, 'startRow', None, None, ), # 2
+ (3, TType.LIST, 'columns', (TType.STRING,None), None, ), # 3
+ (4, TType.I64, 'timestamp', None, None, ), # 4
+ )
+
+ def __init__(self, d=None):
+ self.tableName = None
+ self.startRow = None
+ self.columns = None
+ self.timestamp = None
+ if isinstance(d, dict):
+ if 'tableName' in d:
+ self.tableName = d['tableName']
+ if 'startRow' in d:
+ self.startRow = d['startRow']
+ if 'columns' in d:
+ self.columns = d['columns']
+ if 'timestamp' in d:
+ self.timestamp = d['timestamp']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRING:
+ self.tableName = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.STRING:
+ self.startRow = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 3:
+ if ftype == TType.LIST:
+ self.columns = []
+ (_etype123, _size120) = iprot.readListBegin()
+ for _i124 in xrange(_size120):
+ _elem125 = iprot.readString();
+ self.columns.append(_elem125)
+ iprot.readListEnd()
+ else:
+ iprot.skip(ftype)
+ elif fid == 4:
+ if ftype == TType.I64:
+ self.timestamp = iprot.readI64();
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('scannerOpenTs_args')
+ if self.tableName != None:
+ oprot.writeFieldBegin('tableName', TType.STRING, 1)
+ oprot.writeString(self.tableName)
+ oprot.writeFieldEnd()
+ if self.startRow != None:
+ oprot.writeFieldBegin('startRow', TType.STRING, 2)
+ oprot.writeString(self.startRow)
+ oprot.writeFieldEnd()
+ if self.columns != None:
+ oprot.writeFieldBegin('columns', TType.LIST, 3)
+ oprot.writeListBegin(TType.STRING, len(self.columns))
+ for iter126 in self.columns:
+ oprot.writeString(iter126)
+ oprot.writeListEnd()
+ oprot.writeFieldEnd()
+ if self.timestamp != None:
+ oprot.writeFieldBegin('timestamp', TType.I64, 4)
+ oprot.writeI64(self.timestamp)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class scannerOpenTs_result:
+
+ thrift_spec = (
+ (0, TType.I32, 'success', None, None, ), # 0
+ (1, TType.STRUCT, 'io', (IOError, IOError.thrift_spec), None, ), # 1
+ )
+
+ def __init__(self, d=None):
+ self.success = None
+ self.io = None
+ if isinstance(d, dict):
+ if 'success' in d:
+ self.success = d['success']
+ if 'io' in d:
+ self.io = d['io']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 0:
+ if ftype == TType.I32:
+ self.success = iprot.readI32();
+ else:
+ iprot.skip(ftype)
+ elif fid == 1:
+ if ftype == TType.STRUCT:
+ self.io = IOError()
+ self.io.read(iprot)
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('scannerOpenTs_result')
+ if self.success != None:
+ oprot.writeFieldBegin('success', TType.I32, 0)
+ oprot.writeI32(self.success)
+ oprot.writeFieldEnd()
+ if self.io != None:
+ oprot.writeFieldBegin('io', TType.STRUCT, 1)
+ self.io.write(oprot)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class scannerOpenWithStopTs_args:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRING, 'tableName', None, None, ), # 1
+ (2, TType.STRING, 'startRow', None, None, ), # 2
+ (3, TType.STRING, 'stopRow', None, None, ), # 3
+ (4, TType.LIST, 'columns', (TType.STRING,None), None, ), # 4
+ (5, TType.I64, 'timestamp', None, None, ), # 5
+ )
+
+ def __init__(self, d=None):
+ self.tableName = None
+ self.startRow = None
+ self.stopRow = None
+ self.columns = None
+ self.timestamp = None
+ if isinstance(d, dict):
+ if 'tableName' in d:
+ self.tableName = d['tableName']
+ if 'startRow' in d:
+ self.startRow = d['startRow']
+ if 'stopRow' in d:
+ self.stopRow = d['stopRow']
+ if 'columns' in d:
+ self.columns = d['columns']
+ if 'timestamp' in d:
+ self.timestamp = d['timestamp']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRING:
+ self.tableName = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.STRING:
+ self.startRow = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 3:
+ if ftype == TType.STRING:
+ self.stopRow = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 4:
+ if ftype == TType.LIST:
+ self.columns = []
+ (_etype130, _size127) = iprot.readListBegin()
+ for _i131 in xrange(_size127):
+ _elem132 = iprot.readString();
+ self.columns.append(_elem132)
+ iprot.readListEnd()
+ else:
+ iprot.skip(ftype)
+ elif fid == 5:
+ if ftype == TType.I64:
+ self.timestamp = iprot.readI64();
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('scannerOpenWithStopTs_args')
+ if self.tableName != None:
+ oprot.writeFieldBegin('tableName', TType.STRING, 1)
+ oprot.writeString(self.tableName)
+ oprot.writeFieldEnd()
+ if self.startRow != None:
+ oprot.writeFieldBegin('startRow', TType.STRING, 2)
+ oprot.writeString(self.startRow)
+ oprot.writeFieldEnd()
+ if self.stopRow != None:
+ oprot.writeFieldBegin('stopRow', TType.STRING, 3)
+ oprot.writeString(self.stopRow)
+ oprot.writeFieldEnd()
+ if self.columns != None:
+ oprot.writeFieldBegin('columns', TType.LIST, 4)
+ oprot.writeListBegin(TType.STRING, len(self.columns))
+ for iter133 in self.columns:
+ oprot.writeString(iter133)
+ oprot.writeListEnd()
+ oprot.writeFieldEnd()
+ if self.timestamp != None:
+ oprot.writeFieldBegin('timestamp', TType.I64, 5)
+ oprot.writeI64(self.timestamp)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class scannerOpenWithStopTs_result:
+
+ thrift_spec = (
+ (0, TType.I32, 'success', None, None, ), # 0
+ (1, TType.STRUCT, 'io', (IOError, IOError.thrift_spec), None, ), # 1
+ )
+
+ def __init__(self, d=None):
+ self.success = None
+ self.io = None
+ if isinstance(d, dict):
+ if 'success' in d:
+ self.success = d['success']
+ if 'io' in d:
+ self.io = d['io']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 0:
+ if ftype == TType.I32:
+ self.success = iprot.readI32();
+ else:
+ iprot.skip(ftype)
+ elif fid == 1:
+ if ftype == TType.STRUCT:
+ self.io = IOError()
+ self.io.read(iprot)
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('scannerOpenWithStopTs_result')
+ if self.success != None:
+ oprot.writeFieldBegin('success', TType.I32, 0)
+ oprot.writeI32(self.success)
+ oprot.writeFieldEnd()
+ if self.io != None:
+ oprot.writeFieldBegin('io', TType.STRUCT, 1)
+ self.io.write(oprot)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class scannerGet_args:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.I32, 'id', None, None, ), # 1
+ )
+
+ def __init__(self, d=None):
+ self.id = None
+ if isinstance(d, dict):
+ if 'id' in d:
+ self.id = d['id']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.I32:
+ self.id = iprot.readI32();
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('scannerGet_args')
+ if self.id != None:
+ oprot.writeFieldBegin('id', TType.I32, 1)
+ oprot.writeI32(self.id)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class scannerGet_result:
+
+ thrift_spec = (
+ (0, TType.STRUCT, 'success', (ScanEntry, ScanEntry.thrift_spec), None, ), # 0
+ (1, TType.STRUCT, 'io', (IOError, IOError.thrift_spec), None, ), # 1
+ (2, TType.STRUCT, 'ia', (IllegalArgument, IllegalArgument.thrift_spec), None, ), # 2
+ (3, TType.STRUCT, 'nf', (NotFound, NotFound.thrift_spec), None, ), # 3
+ )
+
+ def __init__(self, d=None):
+ self.success = None
+ self.io = None
+ self.ia = None
+ self.nf = None
+ if isinstance(d, dict):
+ if 'success' in d:
+ self.success = d['success']
+ if 'io' in d:
+ self.io = d['io']
+ if 'ia' in d:
+ self.ia = d['ia']
+ if 'nf' in d:
+ self.nf = d['nf']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 0:
+ if ftype == TType.STRUCT:
+ self.success = ScanEntry()
+ self.success.read(iprot)
+ else:
+ iprot.skip(ftype)
+ elif fid == 1:
+ if ftype == TType.STRUCT:
+ self.io = IOError()
+ self.io.read(iprot)
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.STRUCT:
+ self.ia = IllegalArgument()
+ self.ia.read(iprot)
+ else:
+ iprot.skip(ftype)
+ elif fid == 3:
+ if ftype == TType.STRUCT:
+ self.nf = NotFound()
+ self.nf.read(iprot)
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('scannerGet_result')
+ if self.success != None:
+ oprot.writeFieldBegin('success', TType.STRUCT, 0)
+ self.success.write(oprot)
+ oprot.writeFieldEnd()
+ if self.io != None:
+ oprot.writeFieldBegin('io', TType.STRUCT, 1)
+ self.io.write(oprot)
+ oprot.writeFieldEnd()
+ if self.ia != None:
+ oprot.writeFieldBegin('ia', TType.STRUCT, 2)
+ self.ia.write(oprot)
+ oprot.writeFieldEnd()
+ if self.nf != None:
+ oprot.writeFieldBegin('nf', TType.STRUCT, 3)
+ self.nf.write(oprot)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class scannerClose_args:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.I32, 'id', None, None, ), # 1
+ )
+
+ def __init__(self, d=None):
+ self.id = None
+ if isinstance(d, dict):
+ if 'id' in d:
+ self.id = d['id']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.I32:
+ self.id = iprot.readI32();
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('scannerClose_args')
+ if self.id != None:
+ oprot.writeFieldBegin('id', TType.I32, 1)
+ oprot.writeI32(self.id)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class scannerClose_result:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRUCT, 'io', (IOError, IOError.thrift_spec), None, ), # 1
+ (2, TType.STRUCT, 'ia', (IllegalArgument, IllegalArgument.thrift_spec), None, ), # 2
+ )
+
+ def __init__(self, d=None):
+ self.io = None
+ self.ia = None
+ if isinstance(d, dict):
+ if 'io' in d:
+ self.io = d['io']
+ if 'ia' in d:
+ self.ia = d['ia']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRUCT:
+ self.io = IOError()
+ self.io.read(iprot)
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.STRUCT:
+ self.ia = IllegalArgument()
+ self.ia.read(iprot)
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('scannerClose_result')
+ if self.io != None:
+ oprot.writeFieldBegin('io', TType.STRUCT, 1)
+ self.io.write(oprot)
+ oprot.writeFieldEnd()
+ if self.ia != None:
+ oprot.writeFieldBegin('ia', TType.STRUCT, 2)
+ self.ia.write(oprot)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+
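The _args/_result wrappers above are plumbing that the Thrift runtime drives from the generated Hbase.Client; callers never construct them directly. As a rough illustrative sketch of how the scanner RPCs declared above (scannerOpen, scannerGet, scannerClose) might be exercised, assuming the standard Thrift Python runtime, a Thrift gateway on localhost:9090, and a hypothetical table 'test_table' with an 'entry:' column family:

from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
from Hbase import Hbase
from Hbase.ttypes import NotFound

# Connect to the HBase Thrift gateway (host and port are assumptions).
transport = TTransport.TBufferedTransport(TSocket.TSocket('localhost', 9090))
protocol = TBinaryProtocol.TBinaryProtocol(transport)
client = Hbase.Client(protocol)
transport.open()

# Open a scanner from the first row over a single column family.
scanner_id = client.scannerOpen('test_table', '', ['entry:'])
try:
    while True:
        entry = client.scannerGet(scanner_id)  # returns a ScanEntry
        print entry.row, entry.columns
except NotFound:
    pass  # this pre-0.20 API signals scanner exhaustion with NotFound
finally:
    client.scannerClose(scanner_id)
    transport.close()

The table name, column family, and endpoint are placeholders; the call shapes follow directly from the scannerOpen_args, scannerGet_result, and scannerClose_args definitions in this file.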
diff --git a/src/examples/uploaders/hbrep/Hbase/__init__.py b/src/examples/uploaders/hbrep/Hbase/__init__.py
new file mode 100644
index 0000000..31dc15c
--- /dev/null
+++ b/src/examples/uploaders/hbrep/Hbase/__init__.py
@@ -0,0 +1 @@
+__all__ = ['ttypes', 'constants', 'Hbase']
diff --git a/src/examples/uploaders/hbrep/Hbase/constants.py b/src/examples/uploaders/hbrep/Hbase/constants.py
new file mode 100644
index 0000000..2f17ec3
--- /dev/null
+++ b/src/examples/uploaders/hbrep/Hbase/constants.py
@@ -0,0 +1,9 @@
+#
+# Autogenerated by Thrift
+#
+# DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+#
+
+from thrift.Thrift import *
+from ttypes import *
+
diff --git a/src/examples/uploaders/hbrep/Hbase/ttypes.py b/src/examples/uploaders/hbrep/Hbase/ttypes.py
new file mode 100644
index 0000000..96df804
--- /dev/null
+++ b/src/examples/uploaders/hbrep/Hbase/ttypes.py
@@ -0,0 +1,708 @@
+#
+# Autogenerated by Thrift
+#
+# DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+#
+
+from thrift.Thrift import *
+
+from thrift.transport import TTransport
+from thrift.protocol import TBinaryProtocol
+try:
+ from thrift.protocol import fastbinary
+except:
+ fastbinary = None
+
+
+class ColumnDescriptor:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRING, 'name', None, None, ), # 1
+ (2, TType.I32, 'maxVersions', None, None, ), # 2
+ (3, TType.STRING, 'compression', None, None, ), # 3
+ (4, TType.BOOL, 'inMemory', None, None, ), # 4
+ (5, TType.I32, 'maxValueLength', None, None, ), # 5
+ (6, TType.STRING, 'bloomFilterType', None, None, ), # 6
+ (7, TType.I32, 'bloomFilterVectorSize', None, None, ), # 7
+ (8, TType.I32, 'bloomFilterNbHashes', None, None, ), # 8
+ (9, TType.BOOL, 'blockCacheEnabled', None, None, ), # 9
+ (10, TType.I32, 'timeToLive', None, None, ), # 10
+ )
+
+ def __init__(self, d=None):
+ self.name = None
+ self.maxVersions = 3
+ self.compression = 'NONE'
+ self.inMemory = False
+ self.maxValueLength = 2147483647
+ self.bloomFilterType = 'NONE'
+ self.bloomFilterVectorSize = 0
+ self.bloomFilterNbHashes = 0
+ self.blockCacheEnabled = False
+ self.timeToLive = -1
+ if isinstance(d, dict):
+ if 'name' in d:
+ self.name = d['name']
+ if 'maxVersions' in d:
+ self.maxVersions = d['maxVersions']
+ if 'compression' in d:
+ self.compression = d['compression']
+ if 'inMemory' in d:
+ self.inMemory = d['inMemory']
+ if 'maxValueLength' in d:
+ self.maxValueLength = d['maxValueLength']
+ if 'bloomFilterType' in d:
+ self.bloomFilterType = d['bloomFilterType']
+ if 'bloomFilterVectorSize' in d:
+ self.bloomFilterVectorSize = d['bloomFilterVectorSize']
+ if 'bloomFilterNbHashes' in d:
+ self.bloomFilterNbHashes = d['bloomFilterNbHashes']
+ if 'blockCacheEnabled' in d:
+ self.blockCacheEnabled = d['blockCacheEnabled']
+ if 'timeToLive' in d:
+ self.timeToLive = d['timeToLive']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRING:
+ self.name = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.I32:
+ self.maxVersions = iprot.readI32();
+ else:
+ iprot.skip(ftype)
+ elif fid == 3:
+ if ftype == TType.STRING:
+ self.compression = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 4:
+ if ftype == TType.BOOL:
+ self.inMemory = iprot.readBool();
+ else:
+ iprot.skip(ftype)
+ elif fid == 5:
+ if ftype == TType.I32:
+ self.maxValueLength = iprot.readI32();
+ else:
+ iprot.skip(ftype)
+ elif fid == 6:
+ if ftype == TType.STRING:
+ self.bloomFilterType = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 7:
+ if ftype == TType.I32:
+ self.bloomFilterVectorSize = iprot.readI32();
+ else:
+ iprot.skip(ftype)
+ elif fid == 8:
+ if ftype == TType.I32:
+ self.bloomFilterNbHashes = iprot.readI32();
+ else:
+ iprot.skip(ftype)
+ elif fid == 9:
+ if ftype == TType.BOOL:
+ self.blockCacheEnabled = iprot.readBool();
+ else:
+ iprot.skip(ftype)
+ elif fid == 10:
+ if ftype == TType.I32:
+ self.timeToLive = iprot.readI32();
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('ColumnDescriptor')
+ if self.name != None:
+ oprot.writeFieldBegin('name', TType.STRING, 1)
+ oprot.writeString(self.name)
+ oprot.writeFieldEnd()
+ if self.maxVersions != None:
+ oprot.writeFieldBegin('maxVersions', TType.I32, 2)
+ oprot.writeI32(self.maxVersions)
+ oprot.writeFieldEnd()
+ if self.compression != None:
+ oprot.writeFieldBegin('compression', TType.STRING, 3)
+ oprot.writeString(self.compression)
+ oprot.writeFieldEnd()
+ if self.inMemory != None:
+ oprot.writeFieldBegin('inMemory', TType.BOOL, 4)
+ oprot.writeBool(self.inMemory)
+ oprot.writeFieldEnd()
+ if self.maxValueLength != None:
+ oprot.writeFieldBegin('maxValueLength', TType.I32, 5)
+ oprot.writeI32(self.maxValueLength)
+ oprot.writeFieldEnd()
+ if self.bloomFilterType != None:
+ oprot.writeFieldBegin('bloomFilterType', TType.STRING, 6)
+ oprot.writeString(self.bloomFilterType)
+ oprot.writeFieldEnd()
+ if self.bloomFilterVectorSize != None:
+ oprot.writeFieldBegin('bloomFilterVectorSize', TType.I32, 7)
+ oprot.writeI32(self.bloomFilterVectorSize)
+ oprot.writeFieldEnd()
+ if self.bloomFilterNbHashes != None:
+ oprot.writeFieldBegin('bloomFilterNbHashes', TType.I32, 8)
+ oprot.writeI32(self.bloomFilterNbHashes)
+ oprot.writeFieldEnd()
+ if self.blockCacheEnabled != None:
+ oprot.writeFieldBegin('blockCacheEnabled', TType.BOOL, 9)
+ oprot.writeBool(self.blockCacheEnabled)
+ oprot.writeFieldEnd()
+ if self.timeToLive != None:
+ oprot.writeFieldBegin('timeToLive', TType.I32, 10)
+ oprot.writeI32(self.timeToLive)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class RegionDescriptor:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRING, 'startKey', None, None, ), # 1
+ )
+
+ def __init__(self, d=None):
+ self.startKey = None
+ if isinstance(d, dict):
+ if 'startKey' in d:
+ self.startKey = d['startKey']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRING:
+ self.startKey = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('RegionDescriptor')
+ if self.startKey != None:
+ oprot.writeFieldBegin('startKey', TType.STRING, 1)
+ oprot.writeString(self.startKey)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class Mutation:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.BOOL, 'isDelete', None, None, ), # 1
+ (2, TType.STRING, 'column', None, None, ), # 2
+ (3, TType.STRING, 'value', None, None, ), # 3
+ )
+
+ def __init__(self, d=None):
+ self.isDelete = False
+ self.column = None
+ self.value = None
+ if isinstance(d, dict):
+ if 'isDelete' in d:
+ self.isDelete = d['isDelete']
+ if 'column' in d:
+ self.column = d['column']
+ if 'value' in d:
+ self.value = d['value']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.BOOL:
+ self.isDelete = iprot.readBool();
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.STRING:
+ self.column = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 3:
+ if ftype == TType.STRING:
+ self.value = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('Mutation')
+ if self.isDelete != None:
+ oprot.writeFieldBegin('isDelete', TType.BOOL, 1)
+ oprot.writeBool(self.isDelete)
+ oprot.writeFieldEnd()
+ if self.column != None:
+ oprot.writeFieldBegin('column', TType.STRING, 2)
+ oprot.writeString(self.column)
+ oprot.writeFieldEnd()
+ if self.value != None:
+ oprot.writeFieldBegin('value', TType.STRING, 3)
+ oprot.writeString(self.value)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class BatchMutation:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRING, 'row', None, None, ), # 1
+ (2, TType.LIST, 'mutations', (TType.STRUCT,(Mutation, Mutation.thrift_spec)), None, ), # 2
+ )
+
+ def __init__(self, d=None):
+ self.row = None
+ self.mutations = None
+ if isinstance(d, dict):
+ if 'row' in d:
+ self.row = d['row']
+ if 'mutations' in d:
+ self.mutations = d['mutations']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRING:
+ self.row = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.LIST:
+ self.mutations = []
+ (_etype3, _size0) = iprot.readListBegin()
+ for _i4 in xrange(_size0):
+ _elem5 = Mutation()
+ _elem5.read(iprot)
+ self.mutations.append(_elem5)
+ iprot.readListEnd()
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('BatchMutation')
+ if self.row != None:
+ oprot.writeFieldBegin('row', TType.STRING, 1)
+ oprot.writeString(self.row)
+ oprot.writeFieldEnd()
+ if self.mutations != None:
+ oprot.writeFieldBegin('mutations', TType.LIST, 2)
+ oprot.writeListBegin(TType.STRUCT, len(self.mutations))
+ for iter6 in self.mutations:
+ iter6.write(oprot)
+ oprot.writeListEnd()
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class ScanEntry:
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRING, 'row', None, None, ), # 1
+ (2, TType.MAP, 'columns', (TType.STRING,None,TType.STRING,None), None, ), # 2
+ )
+
+ def __init__(self, d=None):
+ self.row = None
+ self.columns = None
+ if isinstance(d, dict):
+ if 'row' in d:
+ self.row = d['row']
+ if 'columns' in d:
+ self.columns = d['columns']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRING:
+ self.row = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ elif fid == 2:
+ if ftype == TType.MAP:
+ self.columns = {}
+ (_ktype8, _vtype9, _size7 ) = iprot.readMapBegin()
+ for _i11 in xrange(_size7):
+ _key12 = iprot.readString();
+ _val13 = iprot.readString();
+ self.columns[_key12] = _val13
+ iprot.readMapEnd()
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('ScanEntry')
+ if self.row != None:
+ oprot.writeFieldBegin('row', TType.STRING, 1)
+ oprot.writeString(self.row)
+ oprot.writeFieldEnd()
+ if self.columns != None:
+ oprot.writeFieldBegin('columns', TType.MAP, 2)
+ oprot.writeMapBegin(TType.STRING, TType.STRING, len(self.columns))
+ for kiter14,viter15 in self.columns.items():
+ oprot.writeString(kiter14)
+ oprot.writeString(viter15)
+ oprot.writeMapEnd()
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class IOError(Exception):
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRING, 'message', None, None, ), # 1
+ )
+
+ def __init__(self, d=None):
+ self.message = None
+ if isinstance(d, dict):
+ if 'message' in d:
+ self.message = d['message']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRING:
+ self.message = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('IOError')
+ if self.message != None:
+ oprot.writeFieldBegin('message', TType.STRING, 1)
+ oprot.writeString(self.message)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class IllegalArgument(Exception):
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRING, 'message', None, None, ), # 1
+ )
+
+ def __init__(self, d=None):
+ self.message = None
+ if isinstance(d, dict):
+ if 'message' in d:
+ self.message = d['message']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRING:
+ self.message = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('IllegalArgument')
+ if self.message != None:
+ oprot.writeFieldBegin('message', TType.STRING, 1)
+ oprot.writeString(self.message)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class NotFound(Exception):
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRING, 'message', None, None, ), # 1
+ )
+
+ def __init__(self, d=None):
+ self.message = None
+ if isinstance(d, dict):
+ if 'message' in d:
+ self.message = d['message']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRING:
+ self.message = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('NotFound')
+ if self.message != None:
+ oprot.writeFieldBegin('message', TType.STRING, 1)
+ oprot.writeString(self.message)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
+class AlreadyExists(Exception):
+
+ thrift_spec = (
+ None, # 0
+ (1, TType.STRING, 'message', None, None, ), # 1
+ )
+
+ def __init__(self, d=None):
+ self.message = None
+ if isinstance(d, dict):
+ if 'message' in d:
+ self.message = d['message']
+
+ def read(self, iprot):
+ if iprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None and fastbinary is not None:
+ fastbinary.decode_binary(self, iprot.trans, (self.__class__, self.thrift_spec))
+ return
+ iprot.readStructBegin()
+ while True:
+ (fname, ftype, fid) = iprot.readFieldBegin()
+ if ftype == TType.STOP:
+ break
+ if fid == 1:
+ if ftype == TType.STRING:
+ self.message = iprot.readString();
+ else:
+ iprot.skip(ftype)
+ else:
+ iprot.skip(ftype)
+ iprot.readFieldEnd()
+ iprot.readStructEnd()
+
+ def write(self, oprot):
+ if oprot.__class__ == TBinaryProtocol.TBinaryProtocolAccelerated and self.thrift_spec is not None and fastbinary is not None:
+ oprot.trans.write(fastbinary.encode_binary(self, (self.__class__, self.thrift_spec)))
+ return
+ oprot.writeStructBegin('AlreadyExists')
+ if self.message != None:
+ oprot.writeFieldBegin('message', TType.STRING, 1)
+ oprot.writeString(self.message)
+ oprot.writeFieldEnd()
+ oprot.writeFieldStop()
+ oprot.writeStructEnd()
+
+ def __str__(self):
+ return str(self.__dict__)
+
+ def __repr__(self):
+ return repr(self.__dict__)
+
+ def __eq__(self, other):
+ return isinstance(other, self.__class__) and self.__dict__ == other.__dict__
+
+ def __ne__(self, other):
+ return not (self == other)
+
diff --git a/src/examples/uploaders/hbrep/README b/src/examples/uploaders/hbrep/README
new file mode 100644
index 0000000..6cfab16
--- /dev/null
+++ b/src/examples/uploaders/hbrep/README
@@ -0,0 +1,107 @@
+hbrep is a tool for replicating data from postgresql tables to hbase tables.
+
+Dependencies:
+ - python 2.4
+ - hbase 0.2.0
+ - skytools 2.1.7
+ - postgresql
+
+It has two main functions:
+ - bootstrap, which loads all existing data from the specified columns of a table
+ - play, which processes incoming insert, update and delete events and applies them to hbase.
+
+Example usage:
+Install the triggers:
+ ./hbrep.py hbrep.ini install schema1.table1 schema2.table2
+Now that future updates are queuing, bootstrap the tables:
+ ./hbrep.py hbrep.ini bootstrap schema1.table1 schema2.table2
+Start the pgq ticker:
+ pgqadm.py pgq.ini ticker
+Run the queue consumer:
+ ./hbrep.py hbrep.ini play schema1.table1 schema2.table2
+
+
+More details follow.
+
+
+All functions require an ini file (say hbrep.ini) with an HBaseReplic section, plus one table mapping section for each postgresql table you wish to replicate. Note that each table mapping section name should match the name of the postgresql table.
+
+eg. ini file:
+####################
+[HBaseReplic]
+job_name = hbase_replic_job
+logfile = %(job_name)s.log
+pidfile = %(job_name)s.pid
+postgresql_db = dbname=source_database user=dbuser
+pgq_queue_name = hbase_replic_queue
+hbase_hostname = localhost
+hbase_port = 9090
+# If omitted, default is 10000
+max_batch_size = 10000
+# file to use when copying a table; if omitted, a column select will be done instead.
+bootstrap_tmpfile = tabledump.dat
+
+# For each table mapping, there must be the same number of psql_columns as hbase_column_descriptors
+[public.users]
+psql_schema = public
+psql_table_name = users
+psql_key_column = user_id
+psql_columns = dob
+hbase_table_name = stuff
+hbase_column_descriptors = users:dob
+hbase_row_prefix = user_id:
+####################
+
+Bootstrapping:
+To bootstrap the public.users table from postgresql to hbase, run:
+
+ ./hbrep.py hbrep.ini bootstrap public.users
+
+You can specify multiple tables as arguments.
+
+
+Play:
+This mode uses pgq from the skytools package to create and manage event queues on postgresql.
+You need to have pgq installed on the database you are replicating.
+
+With a pgq.ini file like this:
+####################
+[pgqadm]
+job_name = sourcedb_ticker
+db = dbname=source_database user=dbuser
+# how often to run maintenance [minutes]
+maint_delay_min = 1
+# how often to check for activity [secs]
+loop_delay = 0.2
+logfile = %(job_name)s.log
+pidfile = %(job_name)s.pid
+use_skylog = 0
+####################
+
+You install pgq on the database with:
+
+ pgqadm.py pgq.ini install
+
+Next you install hbrep.
+
+ hbrep.py hbrep.ini install public.users
+
+This creates a queue using pgq, which in this case will be called hbase_replic_queue. It also registers the hbrep consumer (called HBaseReplic) with that queue. Finally, it creates a trigger on each specified table that adds an event for every insert, update or delete.
+
+Start the pgq event ticker:
+
+ pgqadm.py pgq.ini ticker
+
+Finally, run the hbrep consumer:
+ ./hbrep.py hbrep.ini play public.users
+
+Now any inserts, updates or deletes on the postgresql users table will be processed and sent to the
+hbase table.
+
+
+Uninstall:
+You can remove the triggers from a table with:
+ ./hbrep.py hbrep.ini uninstall public.users
+
+
+
diff --git a/src/examples/uploaders/hbrep/__init__.py b/src/examples/uploaders/hbrep/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/src/examples/uploaders/hbrep/__init__.py
diff --git a/src/examples/uploaders/hbrep/bootstrap.py b/src/examples/uploaders/hbrep/bootstrap.py
new file mode 100644
index 0000000..f65a66d
--- /dev/null
+++ b/src/examples/uploaders/hbrep/bootstrap.py
@@ -0,0 +1,190 @@
+import sys, os
+
+import pgq, pgq.producer
+import skytools
+
+from HBaseConnection import *
+import tablemapping
+
+class HBaseBootstrap(skytools.DBScript):
+ """Bootstrapping script for loading columns from a table in postgresql to hbase."""
+
+ def __init__(self, service_name, args):
+ # This will process any options eg -k -v -d
+ skytools.DBScript.__init__(self, service_name, args)
+
+ config_file = self.args[0]
+ if len(self.args) < 2:
+ print "need table names"
+ sys.exit(1)
+ else:
+ self.table_names = self.args[1:]
+
+ #just to check this option exists
+ self.cf.get("postgresql_db")
+
+ self.max_batch_size = int(self.cf.get("max_batch_size", "10000"))
+ self.hbase_hostname = self.cf.get("hbase_hostname", "localhost")
+ self.hbase_port = int(self.cf.get("hbase_port", "9090"))
+ self.table_mappings = tablemapping.load_table_mappings(config_file, self.table_names)
+
+ def startup(self):
+ # make sure the script loops only once.
+ self.set_single_loop(1)
+ self.log.info("Starting " + self.job_name)
+
+ def work(self):
+ for t in self.table_names:
+ self.bootstrap_table(t)
+
+ def bootstrap_table(self, table_name):
+ try:
+ self.log.info("Bootstrapping table %s" % table_name)
+ hbase = HBaseConnection(self.hbase_hostname, self.hbase_port)
+ try:
+ table_mapping = self.table_mappings[table_name]
+
+ self.log.debug("Connecting to HBase")
+ hbase.connect()
+
+ # Fetch postgresql cursor
+ self.log.debug("Getting postgresql cursor")
+ db = self.get_database("postgresql_db")
+ curs = db.cursor()
+
+ hbase.validate_table_name(table_mapping.hbase_table_name)
+ hbase.validate_column_descriptors(table_mapping.hbase_table_name, table_mapping.hbase_column_descriptors)
+
+ try:
+ dump_file = self.cf.get("bootstrap_tmpfile")
+ except:
+ dump_file = None
+
+ if dump_file != None:
+ row_source = CopiedRows(self.log, curs, dump_file)
+ else:
+ row_source = SelectedRows(self.log, curs)
+
+ table_name = table_mapping.psql_schema+"."+table_mapping.psql_table_name
+ # we are careful to make sure that the first column will be the key.
+ column_list = [table_mapping.psql_key_column] + table_mapping.psql_columns
+
+ # Load the rows either via a select or via a table copy to file.
+ # Either way, it does not load it all into memory.
+ # copy is faster, but may incorrectly handle data with tabs in it.
+ row_source.load_rows(table_name, column_list)
+
+ # max number of rows to fetch at once
+ batch_size = self.max_batch_size
+ total_rows = 0L
+
+ self.log.debug("Starting puts to hbase")
+ rows = row_source.get_rows(batch_size)
+ while rows != []:
+ batches = []
+ for row in rows:
+ batches.append(self.createRowBatch(table_mapping, row))
+
+ hbase.client.mutateRows(table_mapping.hbase_table_name, batches)
+ total_rows = total_rows + len(batches)
+ self.log.debug("total rows put = %d" % (total_rows))
+ # get next batch of rows
+ rows = row_source.get_rows(batch_size)
+
+ self.log.info("total rows put = %d" % (total_rows))
+ self.log.info("Bootstrapping table %s complete" % table_name)
+
+
+ except Exception, e:
+ #self.log.info(e)
+ sys.exit(e)
+
+ finally:
+ hbase.disconnect()
+
+ def createRowBatch(self, table_mapping, row):
+ batch = BatchMutation()
+ batch.row = table_mapping.hbase_row_prefix + str(row[0])
+ batch.mutations = []
+ for column, value in zip(table_mapping.hbase_column_descriptors, row[1:]):
+ if value != 'NULL' and value != None:
+ m = Mutation()
+ m.column = column
+ m.value = str(value)
+ batch.mutations.append(m)
+ return batch
+
+
+## Helper classes to fetch rows from a select, or from a table dumped by copy
+
+class RowSource:
+ """ Base class for fetching rows from somewhere. """
+
+ def __init__(self, log):
+ self.log = log
+
+ def make_column_str(self, column_list):
+ i = 0
+ while i < len(column_list):
+ column_list[i] = '"%s"' % column_list[i]
+ i += 1
+ return ",".join(column_list)
+
+
+class CopiedRows(RowSource):
+ """
+  Class for fetching rows from a postgresql database;
+  rows are first dumped to a file using COPY.
+ """
+ def __init__(self, log, curs, dump_file):
+ RowSource.__init__(self, log)
+ self.dump_file = dump_file
+ # Set DBAPI-2.0 cursor
+ self.curs = curs
+
+ def load_rows(self, table_name, column_list):
+ columns = self.make_column_str(column_list)
+ self.log.debug("starting dump to file:%s. table:%s. columns:%s" % (self.dump_file, table_name, columns))
+ dump_out = open(self.dump_file, 'w')
+ self.curs.copy_to(dump_out, table_name + "(%s)" % columns, '\t', 'NULL')
+ dump_out.close()
+ self.log.debug("table %s dump complete" % table_name)
+
+ self.dump_in = open(self.dump_file, 'r')
+
+ def get_rows(self, no_of_rows):
+ rows = []
+ if not self.dump_in.closed:
+ for line in self.dump_in:
+ rows.append(line.split())
+ if len(rows) >= no_of_rows:
+ break
+ if rows == []:
+ self.dump_in.close()
+ return rows
+
+
+class SelectedRows(RowSource):
+ """
+  Class for fetching rows from a postgresql database;
+  rows are fetched via a select on the entire table.
+ """
+ def __init__(self, log, curs):
+ RowSource.__init__(self, log)
+ # Set DBAPI-2.0 cursor
+ self.curs = curs
+
+ def load_rows(self, table_name, column_list):
+ columns = self.make_column_str(column_list)
+ q = "SELECT %s FROM %s" % (columns,table_name)
+ self.log.debug("Executing query %s" % q)
+ self.curs.execute(q)
+ self.log.debug("query finished")
+
+ def get_rows(self, no_of_rows):
+ return self.curs.fetchmany(no_of_rows)
+
+
+if __name__ == '__main__':
+ bootstrap = HBaseBootstrap("HBaseReplic",sys.argv[1:])
+ bootstrap.start()
diff --git a/src/examples/uploaders/hbrep/hbrep.ini b/src/examples/uploaders/hbrep/hbrep.ini
new file mode 100644
index 0000000..0839897
--- /dev/null
+++ b/src/examples/uploaders/hbrep/hbrep.ini
@@ -0,0 +1,22 @@
+[HBaseReplic]
+job_name = hbase_replic_job
+logfile = %(job_name)s.log
+pidfile = %(job_name)s.pid
+postgresql_db = dbname=source_database user=dbuser
+pgq_queue_name = hbase_replic_queue
+hbase_hostname = localhost
+hbase_port = 9090
+# If omitted, default is 10000
+max_batch_size = 10000
+# file to use when copying a table; if omitted, a column select will be done instead.
+bootstrap_tmpfile = tabledump.dat
+
+# For each table mapping, there must be the same number of psql_columns as hbase_column_descriptors
+[public.users]
+psql_schema = public
+psql_table_name = users
+psql_key_column = user_id
+psql_columns = dob
+hbase_table_name = stuff
+hbase_column_descriptors = users:dob
+hbase_row_prefix = user_id:
diff --git a/src/examples/uploaders/hbrep/hbrep.py b/src/examples/uploaders/hbrep/hbrep.py
new file mode 100755
index 0000000..665387f
--- /dev/null
+++ b/src/examples/uploaders/hbrep/hbrep.py
@@ -0,0 +1,126 @@
+#!/usr/bin/env python
+import sys, os
+
+import pgq, pgq.producer
+import skytools, skytools._pyquoting
+
+from bootstrap import HBaseBootstrap
+from HBaseConsumer import HBaseConsumer
+
+command_usage = """
+%prog [options] inifile command [tablenames]
+
+commands:
+ play Run event consumer to update specified tables with hbase.
+  bootstrap  Bootstrap the specified tables into hbase.
+  install  Set up the pgq queue and install a trigger on each table.
+ uninstall Remove the triggers from each specified table.
+"""
+
+class HBaseReplic(skytools.DBScript):
+ def __init__(self, service_name, args):
+ try:
+ self.run_script = 0
+
+ # This will process any options eg -k -v -d
+ skytools.DBScript.__init__(self, service_name, args)
+
+ self.config_file = self.args[0]
+
+ if len(self.args) < 2:
+ self.print_usage()
+ print "need command"
+ sys.exit(0)
+ cmd = self.args[1]
+
+ if not cmd in ["play","bootstrap","install", "uninstall"]:
+ self.print_usage()
+ print "unknown command"
+ sys.exit(0)
+
+ if len(self.args) < 3:
+ self.print_usage()
+ print "need table names"
+ sys.exit(0)
+ else:
+ self.table_names = self.args[2:]
+
+ if cmd == "play":
+ self.run_script = HBaseConsumer(service_name, [self.config_file] + self.table_names)
+ elif cmd == "bootstrap":
+ self.run_script = HBaseBootstrap(service_name, [self.config_file] + self.table_names)
+ elif cmd == "install":
+ self.work = self.do_install
+ elif cmd == "uninstall":
+ self.work = self.do_uninstall
+
+ except Exception, e:
+ sys.exit(e)
+
+ def print_usage(self):
+ print "Usage: " + command_usage
+
+ def init_optparse(self, parser=None):
+ p = skytools.DBScript.init_optparse(self, parser)
+ p.set_usage(command_usage.strip())
+ return p
+
+ def start(self):
+ if self.run_script:
+ self.run_script.start()
+ else:
+ skytools.DBScript.start(self)
+
+ def startup(self):
+ # make sure the script loops only once.
+ self.set_single_loop(1)
+
+ def do_install(self):
+ try:
+ queue_name = self.cf.get("pgq_queue_name")
+ consumer = self.job_name
+
+ self.log.info('Creating queue: %s' % queue_name)
+ self.exec_sql("select pgq.create_queue(%s)", [queue_name])
+
+ self.log.info('Registering consumer %s on queue %s' % (consumer, queue_name))
+ self.exec_sql("select pgq.register_consumer(%s, %s)", [queue_name, consumer])
+
+ for table_name in self.table_names:
+ self.log.info('Creating trigger hbase_replic on table %s' % (table_name))
+ q = """
+ CREATE TRIGGER hbase_replic
+ AFTER INSERT OR UPDATE OR DELETE
+ ON %s
+ FOR EACH ROW
+ EXECUTE PROCEDURE pgq.logutriga('%s')"""
+ self.exec_sql(q % (table_name, queue_name), [])
+ except Exception, e:
+ sys.exit(e)
+
+ def do_uninstall(self):
+ try:
+ queue_name = self.cf.get("pgq_queue_name")
+ consumer = "HBaseReplic"
+
+ #self.log.info('Unregistering consumer %s on queue %s' % (consumer, queue_name))
+ #self.exec_sql("select pgq.unregister_consumer(%s, %s)", [queue_name, consumer])
+
+ for table_name in self.table_names:
+ self.log.info('Dropping trigger hbase_replic on table %s' % (table_name))
+ q = "DROP TRIGGER hbase_replic ON %s" % table_name
+ self.exec_sql(q, [])
+
+ except Exception, e:
+ sys.exit(e)
+
+ def exec_sql(self, q, args):
+ self.log.debug(q)
+ db = self.get_database('postgresql_db')
+ curs = db.cursor()
+ curs.execute(q, args)
+ db.commit()
+
+if __name__ == '__main__':
+ script = HBaseReplic("HBaseReplic",sys.argv[1:])
+ script.start()
diff --git a/src/examples/uploaders/hbrep/pgq.ini b/src/examples/uploaders/hbrep/pgq.ini
new file mode 100644
index 0000000..d11b5dd
--- /dev/null
+++ b/src/examples/uploaders/hbrep/pgq.ini
@@ -0,0 +1,10 @@
+[pgqadm]
+job_name = sourcedb_ticker
+db = dbname=source_database user=dbuser
+# how often to run maintenance [minutes]
+maint_delay_min = 1
+# how often to check for activity [secs]
+loop_delay = 0.2
+logfile = %(job_name)s.log
+pidfile = %(job_name)s.pid
+use_skylog = 0
diff --git a/src/examples/uploaders/hbrep/tablemapping.py b/src/examples/uploaders/hbrep/tablemapping.py
new file mode 100644
index 0000000..d85cbfb
--- /dev/null
+++ b/src/examples/uploaders/hbrep/tablemapping.py
@@ -0,0 +1,33 @@
+import sys, os
+from skytools.config import *
+
+PSQL_SCHEMA = "psql_schema"
+PSQL_TABLENAME = "psql_table_name"
+PSQL_KEYCOL = "psql_key_column"
+PSQL_COLUMNS = "psql_columns"
+HBASE_TABLENAME = "hbase_table_name"
+HBASE_COLUMNDESCS = "hbase_column_descriptors"
+HBASE_ROWPREFIX = "hbase_row_prefix"
+
+def load_table_mappings(config_file, table_names):
+ table_mappings = {}
+ for table_name in table_names:
+ conf = Config(table_name, config_file)
+ table_mappings[table_name] = PSqlHBaseTableMapping(conf)
+ return table_mappings
+
+class PSqlHBaseTableMapping:
+  # conf can be anything with a get function, e.g. a dictionary
+ def __init__(self, conf):
+ self.psql_schema = conf.get(PSQL_SCHEMA)
+ self.psql_table_name = conf.get(PSQL_TABLENAME)
+ self.psql_key_column = conf.get(PSQL_KEYCOL)
+ self.psql_columns = conf.get(PSQL_COLUMNS).split()
+ self.hbase_table_name = conf.get(HBASE_TABLENAME)
+ self.hbase_column_descriptors = conf.get(HBASE_COLUMNDESCS).split()
+ self.hbase_row_prefix = conf.get(HBASE_ROWPREFIX)
+
+ if len(self.psql_columns) != len(self.hbase_column_descriptors):
+ raise Exception("psql_columns and hbase_column_descriptors must have same length")
+
+
diff --git a/src/java/org/apache/hadoop/hbase/Chore.java b/src/java/org/apache/hadoop/hbase/Chore.java
new file mode 100644
index 0000000..06a97d7
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/Chore.java
@@ -0,0 +1,106 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.util.Sleeper;
+
+/**
+ * Chore is a task performed on a period in hbase. The chore is run in its own
+ * thread. This abstract base class provides the while loop and sleeping
+ * facility. If an unhandled exception occurs, the thread's exit is logged.
+ * Implementers just need to check whether there is work to be done and, if
+ * so, do it. It is the base of most of the chore threads in hbase.
+ *
+ * Don't subclass Chore if the task relies on being woken up for something to
+ * do, such as an entry being added to a queue, etc.
+ */
+public abstract class Chore extends Thread {
+ private final Log LOG = LogFactory.getLog(this.getClass());
+ private final Sleeper sleeper;
+ protected volatile AtomicBoolean stop;
+
+ /**
+ * @param p Period at which we should run. Will be adjusted appropriately
+ * should we find work and it takes time to complete.
+   * @param s When this flag is set to true, this thread will clean up and exit
+ * cleanly.
+ */
+ public Chore(final int p, final AtomicBoolean s) {
+ super();
+ this.sleeper = new Sleeper(p, s);
+ this.stop = s;
+ }
+
+ /**
+ * @see java.lang.Thread#run()
+ */
+ @Override
+ public void run() {
+ try {
+ boolean initialChoreComplete = false;
+ while (!this.stop.get()) {
+ long startTime = System.currentTimeMillis();
+ try {
+ if (!initialChoreComplete) {
+ initialChoreComplete = initialChore();
+ } else {
+ chore();
+ }
+ } catch (Exception e) {
+ LOG.error("Caught exception", e);
+ if (this.stop.get()) {
+ continue;
+ }
+ }
+ this.sleeper.sleep(startTime);
+ }
+ } catch (Throwable t) {
+ LOG.fatal("Caught error. Starting shutdown.", t);
+ this.stop.set(true);
+ } finally {
+ LOG.info(getName() + " exiting");
+ }
+ }
+
+ /**
+ * Override to run a task before we start looping.
+ * @return true if initial chore was successful
+ */
+ protected boolean initialChore() {
+ // Default does nothing.
+ return true;
+ }
+
+ /**
+ * Look for chores. If any found, do them else just return.
+ */
+ protected abstract void chore();
+
+ /**
+ * Sleep for period.
+ */
+ protected void sleep() {
+ this.sleeper.sleep();
+ }
+}
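As a rough illustration of the subclass contract described in the Javadoc above, here is a minimal sketch; it is not part of this patch, and the class name, period and stop flag are illustrative only.

    import java.util.concurrent.atomic.AtomicBoolean;
    import org.apache.hadoop.hbase.Chore;

    // Hypothetical chore: wakes every `period` ms, checks for work, sleeps again.
    public class ExampleChore extends Chore {
      public ExampleChore(final int period, final AtomicBoolean stop) {
        super(period, stop);
      }

      @Override
      protected void chore() {
        // Check whether there is work to be done and, if so, do it.
        // Returning with nothing to do simply sleeps until the next period.
      }
    }

    // Usage sketch: new ExampleChore(10 * 1000, stopFlag).start();
    // Setting stopFlag to true makes the thread exit cleanly.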
diff --git a/src/java/org/apache/hadoop/hbase/ColumnNameParseException.java b/src/java/org/apache/hadoop/hbase/ColumnNameParseException.java
new file mode 100644
index 0000000..943c1e3
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/ColumnNameParseException.java
@@ -0,0 +1,40 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+/**
+ * Thrown if there is an issue with the passed column name.
+ */
+public class ColumnNameParseException extends DoNotRetryIOException {
+
+ private static final long serialVersionUID = -2897373353949942302L;
+
+ /** default constructor */
+ public ColumnNameParseException() {
+ super();
+ }
+
+ /**
+ * @param message
+ */
+ public ColumnNameParseException(String message) {
+ super(message);
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/DoNotRetryIOException.java b/src/java/org/apache/hadoop/hbase/DoNotRetryIOException.java
new file mode 100644
index 0000000..f976f52
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/DoNotRetryIOException.java
@@ -0,0 +1,45 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+
+/**
+ * Subclass if exception is not meant to be retried: e.g.
+ * {@link UnknownScannerException}
+ */
+public class DoNotRetryIOException extends IOException {
+
+ private static final long serialVersionUID = 1197446454511704139L;
+
+ /**
+ * default constructor
+ */
+ public DoNotRetryIOException() {
+ super();
+ }
+
+ /**
+ * @param message
+ */
+ public DoNotRetryIOException(String message) {
+ super(message);
+ }
+}
\ No newline at end of file
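To show the intent of DoNotRetryIOException, here is a minimal caller-side sketch; it is not part of this patch, and the helper name, retry count and Callable are hypothetical. Plain IOExceptions are treated as transient and retried, while DoNotRetryIOException subclasses are rethrown immediately.

    import java.io.IOException;
    import java.util.concurrent.Callable;
    import org.apache.hadoop.hbase.DoNotRetryIOException;

    public class RetrySketch {
      // Hypothetical helper: retries transient IOExceptions, never retries
      // DoNotRetryIOException (attempts is assumed to be >= 1).
      static <T> T withRetries(Callable<T> call, int attempts) throws Exception {
        IOException last = null;
        for (int i = 0; i < attempts; i++) {
          try {
            return call.call();
          } catch (DoNotRetryIOException e) {
            throw e;            // not meant to be retried, e.g. UnknownScannerException
          } catch (IOException e) {
            last = e;           // treat as transient and try again
          }
        }
        throw last;             // retries exhausted
      }
    }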
diff --git a/src/java/org/apache/hadoop/hbase/DroppedSnapshotException.java b/src/java/org/apache/hadoop/hbase/DroppedSnapshotException.java
new file mode 100644
index 0000000..5ddfd0b
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/DroppedSnapshotException.java
@@ -0,0 +1,41 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+import java.io.IOException;
+
+
+/**
+ * Thrown during flush if there is the possibility that snapshot content was not
+ * properly persisted into store files. Response should include replay of hlog content.
+ */
+public class DroppedSnapshotException extends IOException {
+
+ private static final long serialVersionUID = -5463156580831677374L;
+
+ /**
+ * @param msg
+ */
+ public DroppedSnapshotException(String msg) {
+ super(msg);
+ }
+
+ /**
+ * default constructor
+ */
+ public DroppedSnapshotException() {
+ super();
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/HBaseConfiguration.java b/src/java/org/apache/hadoop/hbase/HBaseConfiguration.java
new file mode 100644
index 0000000..4b54c36
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/HBaseConfiguration.java
@@ -0,0 +1,70 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.util.Iterator;
+import java.util.Map.Entry;
+
+import org.apache.hadoop.conf.Configuration;
+
+/**
+ * Adds HBase configuration files to a Configuration
+ */
+public class HBaseConfiguration extends Configuration {
+ /** constructor */
+ public HBaseConfiguration() {
+ super();
+ addHbaseResources();
+ }
+
+ /**
+ * Create a clone of passed configuration.
+ * @param c Configuration to clone.
+ */
+ public HBaseConfiguration(final Configuration c) {
+ this();
+ for (Entry<String, String>e: c) {
+ set(e.getKey(), e.getValue());
+ }
+ }
+
+ private void addHbaseResources() {
+ addResource("hbase-default.xml");
+ addResource("hbase-site.xml");
+ }
+
+ /**
+   * Returns the hash code value for this HBaseConfiguration. The hash code of an
+   * HBaseConfiguration is defined by the xor of the hash codes of its entries.
+ *
+ * @see Configuration#iterator() How the entries are obtained.
+ */
+ @Override
+ public int hashCode() {
+ int hash = 0;
+
+ Iterator<Entry<String, String>> propertyIterator = this.iterator();
+ while (propertyIterator.hasNext()) {
+ hash ^= propertyIterator.next().hashCode();
+ }
+ return hash;
+ }
+
+}
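A minimal usage sketch, not part of this patch; it assumes hbase-default.xml and hbase-site.xml are available on the classpath. The no-arg constructor picks up the hbase resources, the copy constructor clones an existing Configuration, and configurations with identical entries hash the same, per hashCode() above.

    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ConfSketch {
      public static void main(String[] args) {
        // Loads hbase-default.xml then hbase-site.xml on top of Hadoop defaults.
        HBaseConfiguration conf = new HBaseConfiguration();
        // Clone of an existing Configuration (here, of conf itself).
        HBaseConfiguration copy = new HBaseConfiguration(conf);
        // Identical property sets give identical hash codes (xor of entry hashes).
        System.out.println(conf.hashCode() == copy.hashCode());   // expected: true
      }
    }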
diff --git a/src/java/org/apache/hadoop/hbase/HColumnDescriptor.java b/src/java/org/apache/hadoop/hbase/HColumnDescriptor.java
new file mode 100644
index 0000000..9553c6e
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/HColumnDescriptor.java
@@ -0,0 +1,691 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.io.hfile.Compression;
+import org.apache.hadoop.hbase.io.hfile.HFile;
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+import org.apache.hadoop.hbase.rest.serializer.IRestSerializer;
+import org.apache.hadoop.hbase.rest.serializer.ISerializable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.WritableComparable;
+
+import agilejson.TOJSON;
+
+/**
+ * An HColumnDescriptor contains information about a column family such as the
+ * number of versions, compression settings, etc.
+ *
+ * It is used as input when creating a table or adding a column. Once set, the
+ * parameters that specify a column cannot be changed without deleting the
+ * column and recreating it. If there is data stored in the column, it will be
+ * deleted when the column is deleted.
+ */
+public class HColumnDescriptor implements ISerializable, WritableComparable<HColumnDescriptor> {
+ // For future backward compatibility
+
+  // Version 3 was when column names became byte arrays and when we picked up
+ // Time-to-live feature. Version 4 was when we moved to byte arrays, HBASE-82.
+ // Version 5 was when bloom filter descriptors were removed.
+ // Version 6 adds metadata as a map where keys and values are byte[].
+ private static final byte COLUMN_DESCRIPTOR_VERSION = (byte)7;
+
+ /**
+ * The type of compression.
+ * @see org.apache.hadoop.io.SequenceFile.Writer
+ * @deprecated Compression now means which compression library
+   * rather than 'what' to compress. See {@link Compression.Algorithm}
+ */
+ @Deprecated
+ public static enum CompressionType {
+ /** Do not compress records. */
+ NONE,
+ /** Compress values only, each separately. */
+ RECORD,
+ /** Compress sequences of records together in blocks. */
+ BLOCK
+ }
+
+ public static final String COMPRESSION = "COMPRESSION";
+ public static final String BLOCKCACHE = "BLOCKCACHE";
+ public static final String BLOCKSIZE = "BLOCKSIZE";
+ public static final String LENGTH = "LENGTH";
+ public static final String TTL = "TTL";
+ public static final String BLOOMFILTER = "BLOOMFILTER";
+ public static final String FOREVER = "FOREVER";
+ public static final String MAPFILE_INDEX_INTERVAL =
+ "MAPFILE_INDEX_INTERVAL";
+
+ /**
+ * Default compression type.
+ */
+ public static final String DEFAULT_COMPRESSION =
+ Compression.Algorithm.NONE.getName();
+
+ /**
+ * Default number of versions of a record to keep.
+ */
+ public static final int DEFAULT_VERSIONS = 3;
+
+ /**
+ * Default maximum cell length.
+ */
+ public static final int DEFAULT_LENGTH = Integer.MAX_VALUE;
+ /** Default maximum cell length as an Integer. */
+ public static final Integer DEFAULT_LENGTH_INTEGER =
+ Integer.valueOf(DEFAULT_LENGTH);
+
+ /*
+   * Cache the HCD value here.
+   * Question: is it OK to cache, since a new HCD is created on re-enable?
+ */
+ private volatile Integer maxValueLength = null;
+
+ /*
+   * Cache the HCD value here.
+   * Question: is it OK to cache, since a new HCD is created on re-enable?
+ */
+ private volatile Integer blocksize = null;
+
+ /**
+ * Default setting for whether to serve from memory or not.
+ */
+ public static final boolean DEFAULT_IN_MEMORY = false;
+
+ /**
+ * Default setting for whether to use a block cache or not.
+ */
+ public static final boolean DEFAULT_BLOCKCACHE = false;
+
+ /**
+   * Default size of blocks in files stored to the filesystem. Use smaller blocks for
+   * faster random access at the expense of larger indices (more memory consumption).
+ */
+ public static final int DEFAULT_BLOCKSIZE = HFile.DEFAULT_BLOCKSIZE;
+
+ /**
+ * Default setting for whether or not to use bloomfilters.
+ */
+ public static final boolean DEFAULT_BLOOMFILTER = false;
+
+ /**
+ * Default time to live of cell contents.
+ */
+ public static final int DEFAULT_TTL = HConstants.FOREVER;
+
+ // Column family name
+ private byte [] name;
+
+ // Column metadata
+ protected Map<ImmutableBytesWritable,ImmutableBytesWritable> values =
+ new HashMap<ImmutableBytesWritable,ImmutableBytesWritable>();
+
+ /*
+ * Cache the max versions rather than calculate it every time.
+ */
+ private int cachedMaxVersions = -1;
+
+ /**
+ * Default constructor. Must be present for Writable.
+ */
+ public HColumnDescriptor() {
+ this.name = null;
+ }
+
+ /**
+ * Construct a column descriptor specifying only the family name
+ * The other attributes are defaulted.
+ *
+ * @param familyName Column family name. Must be 'printable' -- digit or
+   * letter -- and end in a <code>:</code>
+ */
+ public HColumnDescriptor(final String familyName) {
+ this(Bytes.toBytes(familyName));
+ }
+
+ /**
+ * Construct a column descriptor specifying only the family name
+ * The other attributes are defaulted.
+ *
+ * @param familyName Column family name. Must be 'printable' -- digit or
+   * letter -- and end in a <code>:</code>
+ */
+ public HColumnDescriptor(final byte [] familyName) {
+ this (familyName == null || familyName.length <= 0?
+ HConstants.EMPTY_BYTE_ARRAY: familyName, DEFAULT_VERSIONS,
+ DEFAULT_COMPRESSION, DEFAULT_IN_MEMORY, DEFAULT_BLOCKCACHE,
+ Integer.MAX_VALUE, DEFAULT_TTL, false);
+ }
+
+ /**
+ * Constructor.
+ * Makes a deep copy of the supplied descriptor.
+ * Can make a modifiable descriptor from an UnmodifyableHColumnDescriptor.
+ * @param desc The descriptor.
+ */
+ public HColumnDescriptor(HColumnDescriptor desc) {
+ super();
+ this.name = desc.name.clone();
+ for (Map.Entry<ImmutableBytesWritable, ImmutableBytesWritable> e:
+ desc.values.entrySet()) {
+ this.values.put(e.getKey(), e.getValue());
+ }
+ }
+
+ /**
+ * Constructor
+ * @param familyName Column family name. Must be 'printable' -- digit or
+   * letter -- and end in a <code>:</code>
+ * @param maxVersions Maximum number of versions to keep
+ * @param compression Compression type
+ * @param inMemory If true, column data should be kept in an HRegionServer's
+ * cache
+ * @param blockCacheEnabled If true, MapFile blocks should be cached
+ * @param maxValueLength Restrict values to <= this value
+ * @param timeToLive Time-to-live of cell contents, in seconds
+ * (use HConstants.FOREVER for unlimited TTL)
+ * @param bloomFilter Enable the specified bloom filter for this column
+ *
+ * @throws IllegalArgumentException if passed a family name that is made of
+ * other than 'word' characters: i.e. <code>[a-zA-Z_0-9]</code> and does not
+ * end in a <code>:</code>
+ * @throws IllegalArgumentException if the number of versions is <= 0
+ */
+ public HColumnDescriptor(final byte [] familyName, final int maxVersions,
+ final String compression, final boolean inMemory,
+ final boolean blockCacheEnabled, final int maxValueLength,
+ final int timeToLive, final boolean bloomFilter) {
+ this(familyName, maxVersions, compression, inMemory, blockCacheEnabled,
+ DEFAULT_BLOCKSIZE, maxValueLength, timeToLive, bloomFilter);
+ }
+
+ /**
+ * Constructor
+ * @param familyName Column family name. Must be 'printable' -- digit or
+   * letter -- and end in a <code>:</code>
+ * @param maxVersions Maximum number of versions to keep
+ * @param compression Compression type
+ * @param inMemory If true, column data should be kept in an HRegionServer's
+ * cache
+ * @param blockCacheEnabled If true, MapFile blocks should be cached
+ * @param blocksize
+ * @param maxValueLength Restrict values to <= this value
+ * @param timeToLive Time-to-live of cell contents, in seconds
+ * (use HConstants.FOREVER for unlimited TTL)
+ * @param bloomFilter Enable the specified bloom filter for this column
+ *
+ * @throws IllegalArgumentException if passed a family name that is made of
+ * other than 'word' characters: i.e. <code>[a-zA-Z_0-9]</code> and does not
+ * end in a <code>:</code>
+ * @throws IllegalArgumentException if the number of versions is <= 0
+ */
+ public HColumnDescriptor(final byte [] familyName, final int maxVersions,
+ final String compression, final boolean inMemory,
+ final boolean blockCacheEnabled, final int blocksize,
+ final int maxValueLength,
+ final int timeToLive, final boolean bloomFilter) {
+ isLegalFamilyName(familyName);
+ this.name = stripColon(familyName);
+ if (maxVersions <= 0) {
+ // TODO: Allow maxVersion of 0 to be the way you say "Keep all versions".
+ // Until there is support, consider 0 or < 0 -- a configuration error.
+ throw new IllegalArgumentException("Maximum versions must be positive");
+ }
+ setMaxVersions(maxVersions);
+ setInMemory(inMemory);
+ setBlockCacheEnabled(blockCacheEnabled);
+ setMaxValueLength(maxValueLength);
+ setTimeToLive(timeToLive);
+ setCompressionType(Compression.Algorithm.
+ valueOf(compression.toUpperCase()));
+ setBloomfilter(bloomFilter);
+ setBlocksize(blocksize);
+ }
+
+ private static byte [] stripColon(final byte [] n) {
+ byte [] result = new byte [n.length - 1];
+ // Have the stored family name be absent the colon delimiter
+ System.arraycopy(n, 0, result, 0, n.length - 1);
+ return result;
+ }
+
+ /**
+ * @param b Family name.
+ * @return <code>b</code>
+ * @throws IllegalArgumentException If not null and not a legitimate family
+ * name: i.e. 'printable' and ends in a ':' (Null passes are allowed because
+ * <code>b</code> can be null when deserializing). Cannot start with a '.'
+ * either.
+ */
+ public static byte [] isLegalFamilyName(final byte [] b) {
+ if (b == null) {
+ return b;
+ }
+ if (b[b.length - 1] != ':') {
+ throw new IllegalArgumentException("Family names must end in a colon: " +
+ Bytes.toString(b));
+ }
+ if (b[0] == '.') {
+ throw new IllegalArgumentException("Family names cannot start with a " +
+ "period: " + Bytes.toString(b));
+ }
+ for (int i = 0; i < (b.length - 1); i++) {
+ if (Character.isISOControl(b[i])) {
+ throw new IllegalArgumentException("Illegal character <" + b[i] +
+ ">. Family names cannot contain control characters: " +
+ Bytes.toString(b));
+ }
+ }
+ return b;
+ }
+
+ /**
+ * @return Name of this column family
+ */
+ public byte [] getName() {
+ return name;
+ }
+
+ /**
+ * @return Name of this column family with colon as required by client API
+ */
+ @TOJSON(fieldName = "name", base64=true)
+ public byte [] getNameWithColon() {
+ return HStoreKey.addDelimiter(this.name);
+ }
+
+ /**
+ * @return Name of this column family
+ */
+ public String getNameAsString() {
+ return Bytes.toString(this.name);
+ }
+
+ /**
+ * @param key The key.
+ * @return The value.
+ */
+ public byte[] getValue(byte[] key) {
+ ImmutableBytesWritable ibw = values.get(new ImmutableBytesWritable(key));
+ if (ibw == null)
+ return null;
+ return ibw.get();
+ }
+
+ /**
+ * @param key The key.
+ * @return The value as a string.
+ */
+ public String getValue(String key) {
+ byte[] value = getValue(Bytes.toBytes(key));
+ if (value == null)
+ return null;
+ return Bytes.toString(value);
+ }
+
+ /**
+ * @return All values.
+ */
+ public Map<ImmutableBytesWritable,ImmutableBytesWritable> getValues() {
+ return Collections.unmodifiableMap(values);
+ }
+
+ /**
+ * @param key The key.
+ * @param value The value.
+ */
+ public void setValue(byte[] key, byte[] value) {
+ values.put(new ImmutableBytesWritable(key),
+ new ImmutableBytesWritable(value));
+ }
+
+ /**
+ * @param key The key.
+ * @param value The value.
+ */
+ public void setValue(String key, String value) {
+ setValue(Bytes.toBytes(key), Bytes.toBytes(value));
+ }
+
+ /** @return compression type being used for the column family */
+ @TOJSON
+ public Compression.Algorithm getCompression() {
+ return Compression.Algorithm.valueOf(getValue(COMPRESSION));
+ }
+
+ /** @return maximum number of versions */
+ @TOJSON
+ public synchronized int getMaxVersions() {
+ if (this.cachedMaxVersions == -1) {
+ String value = getValue(HConstants.VERSIONS);
+ this.cachedMaxVersions = (value != null)?
+ Integer.valueOf(value).intValue(): DEFAULT_VERSIONS;
+ }
+ return this.cachedMaxVersions;
+ }
+
+ /**
+ * @param maxVersions maximum number of versions
+ */
+ public void setMaxVersions(int maxVersions) {
+ setValue(HConstants.VERSIONS, Integer.toString(maxVersions));
+ }
+
+ /**
+ * @return Blocksize.
+ */
+ @TOJSON
+ public synchronized int getBlocksize() {
+ if (this.blocksize == null) {
+ String value = getValue(BLOCKSIZE);
+ this.blocksize = (value != null)?
+ Integer.decode(value): Integer.valueOf(DEFAULT_BLOCKSIZE);
+ }
+ return this.blocksize.intValue();
+ }
+
+ /**
+ * @param s
+ */
+ public void setBlocksize(int s) {
+ setValue(BLOCKSIZE, Integer.toString(s));
+ this.blocksize = null;
+ }
+
+ /**
+ * @return Compression type setting.
+ */
+ @TOJSON
+ public Compression.Algorithm getCompressionType() {
+ return getCompression();
+ }
+
+ /**
+ * Compression types supported in hbase.
+ * LZO is not bundled as part of the hbase distribution.
+ * See <a href="http://wiki.apache.org/hadoop/UsingLzoCompression">LZO Compression</a>
+ * for how to enable it.
+ * @param type Compression type setting.
+ */
+ public void setCompressionType(Compression.Algorithm type) {
+ String compressionType;
+ switch (type) {
+ case LZO: compressionType = "LZO"; break;
+ case GZ: compressionType = "GZ"; break;
+ default: compressionType = "NONE"; break;
+ }
+ setValue(COMPRESSION, compressionType);
+ }
+
+ /**
+   * @return True if we are to keep all values in the HRegionServer cache.
+ */
+ @TOJSON(prefixLength = 2)
+ public boolean isInMemory() {
+ String value = getValue(HConstants.IN_MEMORY);
+ if (value != null)
+ return Boolean.valueOf(value).booleanValue();
+ return DEFAULT_IN_MEMORY;
+ }
+
+ /**
+ * @param inMemory True if we are to keep all values in the HRegionServer
+ * cache
+ */
+ public void setInMemory(boolean inMemory) {
+ setValue(HConstants.IN_MEMORY, Boolean.toString(inMemory));
+ }
+
+ /**
+ * @return Maximum value length.
+ */
+ @TOJSON
+ public synchronized int getMaxValueLength() {
+ if (this.maxValueLength == null) {
+ String value = getValue(LENGTH);
+ this.maxValueLength = (value != null)?
+ Integer.decode(value): DEFAULT_LENGTH_INTEGER;
+ }
+ return this.maxValueLength.intValue();
+ }
+
+ /**
+ * @param maxLength Maximum value length.
+ */
+ public void setMaxValueLength(int maxLength) {
+ setValue(LENGTH, Integer.toString(maxLength));
+ this.maxValueLength = null;
+ }
+
+ /**
+ * @return Time-to-live of cell contents, in seconds.
+ */
+ @TOJSON
+ public int getTimeToLive() {
+ String value = getValue(TTL);
+ return (value != null)? Integer.valueOf(value).intValue(): DEFAULT_TTL;
+ }
+
+ /**
+ * @param timeToLive Time-to-live of cell contents, in seconds.
+ */
+ public void setTimeToLive(int timeToLive) {
+ setValue(TTL, Integer.toString(timeToLive));
+ }
+
+ /**
+ * @return True if MapFile blocks should be cached.
+ */
+ @TOJSON(prefixLength = 2)
+ public boolean isBlockCacheEnabled() {
+ String value = getValue(BLOCKCACHE);
+ if (value != null)
+ return Boolean.valueOf(value).booleanValue();
+ return DEFAULT_BLOCKCACHE;
+ }
+
+ /**
+ * @param blockCacheEnabled True if MapFile blocks should be cached.
+ */
+ public void setBlockCacheEnabled(boolean blockCacheEnabled) {
+ setValue(BLOCKCACHE, Boolean.toString(blockCacheEnabled));
+ }
+
+ /**
+ * @return true if a bloom filter is enabled
+ */
+ @TOJSON(prefixLength = 2)
+ public boolean isBloomfilter() {
+ String value = getValue(BLOOMFILTER);
+ if (value != null)
+ return Boolean.valueOf(value).booleanValue();
+ return DEFAULT_BLOOMFILTER;
+ }
+
+ /**
+ * @param onOff Enable/Disable bloom filter
+ */
+ public void setBloomfilter(final boolean onOff) {
+ setValue(BLOOMFILTER, Boolean.toString(onOff));
+ }
+
+ /**
+ * @param interval The number of entries that are added to the store MapFile before
+ * an index entry is added.
+ */
+ public void setMapFileIndexInterval(int interval) {
+ setValue(MAPFILE_INDEX_INTERVAL, Integer.toString(interval));
+ }
+
+ /**
+ * @see java.lang.Object#toString()
+ */
+ @Override
+ public String toString() {
+ StringBuffer s = new StringBuffer();
+ s.append('{');
+ s.append(HConstants.NAME);
+ s.append(" => '");
+ s.append(Bytes.toString(name));
+ s.append("'");
+ for (Map.Entry<ImmutableBytesWritable, ImmutableBytesWritable> e:
+ values.entrySet()) {
+ String key = Bytes.toString(e.getKey().get());
+ String value = Bytes.toString(e.getValue().get());
+ if (key != null && key.toUpperCase().equals(BLOOMFILTER)) {
+        // Don't emit bloomfilter. It's not working.
+ continue;
+ }
+ s.append(", ");
+ s.append(key);
+ s.append(" => '");
+ s.append(value);
+ s.append("'");
+ }
+ s.append('}');
+ return s.toString();
+ }
+
+ /**
+ * @see java.lang.Object#equals(java.lang.Object)
+ */
+ @Override
+ public boolean equals(Object obj) {
+ if (this == obj) {
+ return true;
+ }
+ if (obj == null) {
+ return false;
+ }
+ if (!(obj instanceof HColumnDescriptor)) {
+ return false;
+ }
+ return compareTo((HColumnDescriptor)obj) == 0;
+ }
+
+ /**
+ * @see java.lang.Object#hashCode()
+ */
+ @Override
+ public int hashCode() {
+ int result = Bytes.hashCode(this.name);
+ result ^= Byte.valueOf(COLUMN_DESCRIPTOR_VERSION).hashCode();
+ result ^= values.hashCode();
+ return result;
+ }
+
+ // Writable
+
+ public void readFields(DataInput in) throws IOException {
+ int version = in.readByte();
+ if (version < 6) {
+ if (version <= 2) {
+ Text t = new Text();
+ t.readFields(in);
+ this.name = t.getBytes();
+ if (HStoreKey.getFamilyDelimiterIndex(this.name) > 0) {
+ this.name = stripColon(this.name);
+ }
+ } else {
+ this.name = Bytes.readByteArray(in);
+ }
+ this.values.clear();
+ setMaxVersions(in.readInt());
+ int ordinal = in.readInt();
+ setCompressionType(Compression.Algorithm.values()[ordinal]);
+ setInMemory(in.readBoolean());
+ setMaxValueLength(in.readInt());
+ setBloomfilter(in.readBoolean());
+ if (isBloomfilter() && version < 5) {
+ // If a bloomFilter is enabled and the column descriptor is less than
+ // version 5, we need to skip over it to read the rest of the column
+ // descriptor. There are no BloomFilterDescriptors written to disk for
+ // column descriptors with a version number >= 5
+ throw new UnsupportedClassVersionError(this.getClass().getName() +
+ " does not support backward compatibility with versions older " +
+ "than version 5");
+ }
+ if (version > 1) {
+ setBlockCacheEnabled(in.readBoolean());
+ }
+ if (version > 2) {
+ setTimeToLive(in.readInt());
+ }
+ } else {
+ // version 7+
+ this.name = Bytes.readByteArray(in);
+ this.values.clear();
+ int numValues = in.readInt();
+ for (int i = 0; i < numValues; i++) {
+ ImmutableBytesWritable key = new ImmutableBytesWritable();
+ ImmutableBytesWritable value = new ImmutableBytesWritable();
+ key.readFields(in);
+ value.readFields(in);
+ values.put(key, value);
+ }
+ if (version == 6) {
+ // Convert old values.
+ setValue(COMPRESSION, Compression.Algorithm.NONE.getName());
+ }
+ }
+ }
+
+ public void write(DataOutput out) throws IOException {
+ out.writeByte(COLUMN_DESCRIPTOR_VERSION);
+ Bytes.writeByteArray(out, this.name);
+ out.writeInt(values.size());
+ for (Map.Entry<ImmutableBytesWritable, ImmutableBytesWritable> e:
+ values.entrySet()) {
+ e.getKey().write(out);
+ e.getValue().write(out);
+ }
+ }
+
+ // Comparable
+
+ public int compareTo(HColumnDescriptor o) {
+ int result = Bytes.compareTo(this.name, o.getName());
+ if (result == 0) {
+ // punt on comparison for ordering, just calculate difference
+ result = this.values.hashCode() - o.values.hashCode();
+ if (result < 0)
+ result = -1;
+ else if (result > 0)
+ result = 1;
+ }
+ return result;
+ }
+
+ /* (non-Javadoc)
+ * @see org.apache.hadoop.hbase.rest.xml.IOutputXML#toXML()
+ */
+ public void restSerialize(IRestSerializer serializer) throws HBaseRestException {
+ serializer.serializeColumnDescriptor(this);
+ }
+}
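A short sketch of how a column family descriptor might be built with this class; it is not part of this patch, and the family name and settings are illustrative. The family name must end in a colon, and the other attributes are stored in the values map via the setters.

    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.io.hfile.Compression;

    public class FamilySketch {
      public static void main(String[] args) {
        // Family names must end in ':' and may not contain control characters.
        HColumnDescriptor family = new HColumnDescriptor("info:");
        family.setMaxVersions(1);                            // keep only the latest cell
        family.setCompressionType(Compression.Algorithm.GZ); // stored under COMPRESSION
        family.setBlocksize(64 * 1024);                      // hfile block size in bytes
        family.setTimeToLive(HConstants.FOREVER);            // cells never expire
        System.out.println(family);                          // {NAME => 'info', ...}
      }
    }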
diff --git a/src/java/org/apache/hadoop/hbase/HConstants.java b/src/java/org/apache/hadoop/hbase/HConstants.java
new file mode 100644
index 0000000..2762d91
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/HConstants.java
@@ -0,0 +1,288 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * HConstants holds a bunch of HBase-related constants
+ */
+public interface HConstants {
+
+ /** long constant for zero */
+ static final Long ZERO_L = Long.valueOf(0L);
+
+ //TODO: NINES is only used in HBaseAdmin and HConnectionManager. Move to client
+ // package and change visibility to default
+ static final String NINES = "99999999999999";
+ //TODO: ZEROS is only used in HConnectionManager and MetaScanner. Move to
+ // client package and change visibility to default
+ static final String ZEROES = "00000000000000";
+
+ // For migration
+
+ /** name of version file */
+ static final String VERSION_FILE_NAME = "hbase.version";
+
+ /**
+ * Current version of file system.
+ * Version 4 supports only one kind of bloom filter.
+ * Version 5 changes versions in catalog table regions.
+ * Version 6 enables blockcaching on catalog tables.
+ */
+ public static final String FILE_SYSTEM_VERSION = "6";
+
+ // Configuration parameters
+
+ // TODO: URL for hbase master like hdfs URLs with host and port.
+ // Like jdbc URLs? URLs could be used to refer to table cells?
+ // jdbc:mysql://[host][,failoverhost...][:port]/[database]
+ // jdbc:mysql://[host][,failoverhost...][:port]/[database][?propertyName1][=propertyValue1][&propertyName2][=propertyValue2]...
+
+ // Key into HBaseConfiguration for the hbase.master address.
+ // TODO: Support 'local': i.e. default of all running in single
+ // process. Same for regionserver. TODO: Is having HBase homed
+ // on port 60k OK?
+
+ /** Parameter name for master address */
+ static final String MASTER_ADDRESS = "hbase.master";
+
+ /** Parameter name for master host name. */
+ static final String MASTER_HOST_NAME = "hbase.master.hostname";
+
+ /** default host address */
+ static final String DEFAULT_HOST = "0.0.0.0";
+
+ /** default port that the master listens on */
+ static final int DEFAULT_MASTER_PORT = 60000;
+
+ /** default port for master web api */
+ static final int DEFAULT_MASTER_INFOPORT = 60010;
+
+ /** Name of ZooKeeper config file in conf/ directory. */
+ static final String ZOOKEEPER_CONFIG_NAME = "zoo.cfg";
+
+ /** Parameter name for number of times to retry writes to ZooKeeper. */
+ static final String ZOOKEEPER_RETRIES = "zookeeper.retries";
+ /** Default number of times to retry writes to ZooKeeper. */
+ static final int DEFAULT_ZOOKEEPER_RETRIES = 5;
+
+ /** Parameter name for ZooKeeper pause between retries. In milliseconds. */
+ static final String ZOOKEEPER_PAUSE = "zookeeper.pause";
+ /** Default ZooKeeper pause value. In milliseconds. */
+ static final int DEFAULT_ZOOKEEPER_PAUSE = 2 * 1000;
+
+ /** Parameter name for hbase.regionserver address. */
+ static final String REGIONSERVER_ADDRESS = "hbase.regionserver";
+
+ /** Default region server address */
+ static final String DEFAULT_REGIONSERVER_ADDRESS = DEFAULT_HOST + ":60020";
+
+ /** default port for region server web api */
+ static final int DEFAULT_REGIONSERVER_INFOPORT = 60030;
+
+ /** Parameter name for what region server interface to use. */
+ static final String REGION_SERVER_CLASS = "hbase.regionserver.class";
+
+ /** Parameter name for what region server implementation to use. */
+ static final String REGION_SERVER_IMPL= "hbase.regionserver.impl";
+
+ /** Default region server interface class name. */
+ static final String DEFAULT_REGION_SERVER_CLASS = HRegionInterface.class.getName();
+
+ /** Parameter name for how often threads should wake up */
+ static final String THREAD_WAKE_FREQUENCY = "hbase.server.thread.wakefrequency";
+
+  /** Parameter name for how often a region should perform a major compaction */
+ static final String MAJOR_COMPACTION_PERIOD = "hbase.hregion.majorcompaction";
+
+ /** Parameter name for HBase instance root directory */
+ static final String HBASE_DIR = "hbase.rootdir";
+
+ /** Used to construct the name of the log directory for a region server
+   * Use '.' as a special character to separate the log files from table data */
+ static final String HREGION_LOGDIR_NAME = ".logs";
+
+ /** Name of old log file for reconstruction */
+ static final String HREGION_OLDLOGFILE_NAME = "oldlogfile.log";
+
+ /** Used to construct the name of the compaction directory during compaction */
+ static final String HREGION_COMPACTIONDIR_NAME = "compaction.dir";
+
+ /** Default maximum file size */
+ static final long DEFAULT_MAX_FILE_SIZE = 256 * 1024 * 1024;
+
+ /** Default size of a reservation block */
+ static final int DEFAULT_SIZE_RESERVATION_BLOCK = 1024 * 1024 * 5;
+
+ // Always store the location of the root table's HRegion.
+ // This HRegion is never split.
+
+  // region name = table + startkey + regionid. This is the row key.
+  // Each row in the root and meta tables describes exactly 1 region.
+  // Do we ever need to know all the information that we are storing?
+
+  // Note that the name of the root table starts with "-" and the name of the
+  // meta table starts with ".". Why? It's a trick. It turns out that when we
+  // store region names in memory, we use a SortedMap. Since "-" sorts before
+  // "." (and since no other table name can start with either of these
+  // characters), the root region will always be the first entry in such a Map,
+ // followed by all the meta regions (which will be ordered by their starting
+ // row key as well), followed by all user tables. So when the Master is
+ // choosing regions to assign, it will always choose the root region first,
+ // followed by the meta regions, followed by user regions. Since the root
+ // and meta regions always need to be on-line, this ensures that they will
+ // be the first to be reassigned if the server(s) they are being served by
+ // should go down.
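+  // For illustration only (example code, not part of this file): the ordering
+  // described above can be seen with a plain SortedMap; the keys here are
+  // made-up region names.
+  //   SortedMap<String, String> regions = new TreeMap<String, String>();
+  //   regions.put("-ROOT-,,0", "root");
+  //   regions.put(".META.,,1", "meta");
+  //   regions.put("usertable,,2", "user");
+  //   regions.firstKey();  // "-ROOT-,,0" -- root first, then meta, then user tables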
+
+ /** The root table's name.*/
+ static final byte [] ROOT_TABLE_NAME = Bytes.toBytes("-ROOT-");
+
+ /** The META table's name. */
+ static final byte [] META_TABLE_NAME = Bytes.toBytes(".META.");
+
+ /** delimiter used between portions of a region name */
+ public static final int META_ROW_DELIMITER = ',';
+
+ // Defines for the column names used in both ROOT and META HBase 'meta' tables.
+
+ /** The ROOT and META column family (string) */
+ static final String COLUMN_FAMILY_STR = "info:";
+
+ /** The META historian column family (string) */
+ static final String COLUMN_FAMILY_HISTORIAN_STR = "historian:";
+
+ /** The ROOT and META column family */
+ static final byte [] COLUMN_FAMILY = Bytes.toBytes(COLUMN_FAMILY_STR);
+
+ /** The META historian column family */
+ static final byte [] COLUMN_FAMILY_HISTORIAN = Bytes.toBytes(COLUMN_FAMILY_HISTORIAN_STR);
+
+ /** Array of meta column names */
+ static final byte[][] COLUMN_FAMILY_ARRAY = new byte[][] {COLUMN_FAMILY};
+
+ /** ROOT/META column family member - contains HRegionInfo */
+ static final byte [] COL_REGIONINFO =
+ Bytes.toBytes(COLUMN_FAMILY_STR + "regioninfo");
+
+ /** Array of column - contains HRegionInfo */
+ static final byte[][] COL_REGIONINFO_ARRAY = new byte[][] {COL_REGIONINFO};
+
+ /** ROOT/META column family member - contains HServerAddress.toString() */
+ static final byte[] COL_SERVER = Bytes.toBytes(COLUMN_FAMILY_STR + "server");
+
+ /** ROOT/META column family member - contains server start code (a long) */
+ static final byte [] COL_STARTCODE =
+ Bytes.toBytes(COLUMN_FAMILY_STR + "serverstartcode");
+
+ /** the lower half of a split region */
+ static final byte [] COL_SPLITA = Bytes.toBytes(COLUMN_FAMILY_STR + "splitA");
+
+ /** the upper half of a split region */
+ static final byte [] COL_SPLITB = Bytes.toBytes(COLUMN_FAMILY_STR + "splitB");
+
+ /** All the columns in the catalog -ROOT- and .META. tables.
+ */
+ static final byte[][] ALL_META_COLUMNS = {COL_REGIONINFO, COL_SERVER,
+ COL_STARTCODE, COL_SPLITA, COL_SPLITB};
+
+ // Other constants
+
+ /**
+ * An empty instance.
+ */
+ static final byte [] EMPTY_BYTE_ARRAY = new byte [0];
+
+ /**
+ * Used by scanners, etc when they want to start at the beginning of a region
+ */
+ static final byte [] EMPTY_START_ROW = EMPTY_BYTE_ARRAY;
+
+ /**
+ * Last row in a table.
+ */
+ static final byte [] EMPTY_END_ROW = EMPTY_START_ROW;
+
+ /**
+ * Used by scanners and others when they're trying to detect the end of a
+ * table
+ */
+ static final byte [] LAST_ROW = EMPTY_BYTE_ARRAY;
+
+ /**
+ * Max length a row can have because of the limitation in TFile.
+ */
+ static final int MAX_ROW_LENGTH = 1024*64;
+
+ /** When we encode strings, we always specify UTF8 encoding */
+ static final String UTF8_ENCODING = "UTF-8";
+
+ /**
+ * Timestamp to use when we want to refer to the latest cell.
+ * This is the timestamp sent by clients when no timestamp is specified on
+ * commit.
+ */
+ static final long LATEST_TIMESTAMP = Long.MAX_VALUE;
+
+ /**
+ * Define for 'return-all-versions'.
+ */
+ static final int ALL_VERSIONS = Integer.MAX_VALUE;
+
+ /**
+ * Unlimited time-to-live.
+ */
+ static final int FOREVER = -1;
+
+ public static final int WEEK_IN_SECONDS = 7 * 24 * 3600;
+
+ //TODO: HBASE_CLIENT_RETRIES_NUMBER_KEY is only used by TestMigrate. Move it
+ // there.
+ public static final String HBASE_CLIENT_RETRIES_NUMBER_KEY =
+ "hbase.client.retries.number";
+
+  //TODO: although the following are referenced widely to format strings for
+  //      the shell, they really aren't a part of the public API. It would be
+  //      nice if we could put them somewhere where they did not need to be
+  //      public. They could have package visibility.
+ static final String NAME = "NAME";
+ static final String VERSIONS = "VERSIONS";
+ static final String IN_MEMORY = "IN_MEMORY";
+
+ /**
+ * This is a retry backoff multiplier table similar to the BSD TCP syn
+ * backoff table, a bit more aggressive than simple exponential backoff.
+ */
+ public static int RETRY_BACKOFF[] = { 1, 1, 1, 2, 2, 4, 4, 8, 16, 32 };
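+  // For illustration only (example code, not part of this file): a caller
+  // would typically scale a base pause by the backoff entry for the current
+  // attempt; "pause" and "tries" below are assumed local values.
+  //   long pause = 2000; // assumed base pause in milliseconds
+  //   long sleepMs = pause * RETRY_BACKOFF[Math.min(tries, RETRY_BACKOFF.length - 1)];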
+
+ /** modifyTable op for replacing the table descriptor */
+ public static final int MODIFY_TABLE_SET_HTD = 1;
+ /** modifyTable op for forcing a split */
+ public static final int MODIFY_TABLE_SPLIT = 2;
+ /** modifyTable op for forcing a compaction */
+ public static final int MODIFY_TABLE_COMPACT = 3;
+
+ // Messages client can send master.
+ public static final int MODIFY_CLOSE_REGION = MODIFY_TABLE_COMPACT + 1;
+
+ public static final int MODIFY_TABLE_FLUSH = MODIFY_CLOSE_REGION + 1;
+ public static final int MODIFY_TABLE_MAJOR_COMPACT = MODIFY_TABLE_FLUSH + 1;
+}
diff --git a/src/java/org/apache/hadoop/hbase/HMerge.java b/src/java/org/apache/hadoop/hbase/HMerge.java
new file mode 100644
index 0000000..ca87d16
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/HMerge.java
@@ -0,0 +1,391 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.NoSuchElementException;
+import java.util.Random;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.client.HConnection;
+import org.apache.hadoop.hbase.client.HConnectionManager;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Scanner;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.regionserver.HLog;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.InternalScanner;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Writables;
+
+/**
+ * A non-instantiable class that has a static method capable of compacting
+ * a table by merging adjacent regions.
+ */
+class HMerge implements HConstants {
+ static final Log LOG = LogFactory.getLog(HMerge.class);
+ static final Random rand = new Random();
+
+ /*
+ * Not instantiable
+ */
+ private HMerge() {
+ super();
+ }
+
+ /**
+ * Scans the table and merges two adjacent regions if they are small. This
+ * only happens when a lot of rows are deleted.
+ *
+ * When merging the META region, the HBase instance must be offline.
+ * When merging a normal table, the HBase instance must be online, but the
+ * table must be disabled.
+ *
+ * @param conf - configuration object for HBase
+ * @param fs - FileSystem where regions reside
+ * @param tableName - Table to be compacted
+ * @throws IOException
+ */
+ public static void merge(HBaseConfiguration conf, FileSystem fs,
+ final byte [] tableName)
+ throws IOException {
+ HConnection connection = HConnectionManager.getConnection(conf);
+ boolean masterIsRunning = connection.isMasterRunning();
+ HConnectionManager.deleteConnectionInfo(conf, false);
+ if (Bytes.equals(tableName, META_TABLE_NAME)) {
+ if (masterIsRunning) {
+ throw new IllegalStateException(
+ "Can not compact META table if instance is on-line");
+ }
+ new OfflineMerger(conf, fs).process();
+ } else {
+ if(!masterIsRunning) {
+ throw new IllegalStateException(
+ "HBase instance must be running to merge a normal table");
+ }
+ new OnlineMerger(conf, fs, tableName).process();
+ }
+ }
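+  // For illustration only: a sketch of invoking the merge method above for a
+  // disabled user table ("mytable" is a made-up name).
+  //   HBaseConfiguration conf = new HBaseConfiguration();
+  //   FileSystem fs = FileSystem.get(conf);
+  //   HMerge.merge(conf, fs, Bytes.toBytes("mytable"));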
+
+ private static abstract class Merger {
+ protected final HBaseConfiguration conf;
+ protected final FileSystem fs;
+ protected final Path tabledir;
+ protected final HLog hlog;
+ private final long maxFilesize;
+
+
+ protected Merger(HBaseConfiguration conf, FileSystem fs,
+ final byte [] tableName)
+ throws IOException {
+ this.conf = conf;
+ this.fs = fs;
+ this.maxFilesize =
+ conf.getLong("hbase.hregion.max.filesize", DEFAULT_MAX_FILE_SIZE);
+
+ this.tabledir = new Path(
+ fs.makeQualified(new Path(conf.get(HBASE_DIR))),
+ Bytes.toString(tableName)
+ );
+ Path logdir = new Path(tabledir, "merge_" + System.currentTimeMillis() +
+ HREGION_LOGDIR_NAME);
+ this.hlog =
+ new HLog(fs, logdir, conf, null);
+ }
+
+ void process() throws IOException {
+ try {
+ for(HRegionInfo[] regionsToMerge = next();
+ regionsToMerge != null;
+ regionsToMerge = next()) {
+ if (!merge(regionsToMerge)) {
+ return;
+ }
+ }
+ } finally {
+ try {
+ hlog.closeAndDelete();
+
+ } catch(IOException e) {
+ LOG.error(e);
+ }
+ }
+ }
+
+ protected boolean merge(final HRegionInfo[] info) throws IOException {
+ if(info.length < 2) {
+ LOG.info("only one region - nothing to merge");
+ return false;
+ }
+
+ HRegion currentRegion = null;
+ long currentSize = 0;
+ HRegion nextRegion = null;
+ long nextSize = 0;
+ for (int i = 0; i < info.length - 1; i++) {
+ if (currentRegion == null) {
+ currentRegion =
+ new HRegion(tabledir, hlog, fs, conf, info[i], null);
+ currentRegion.initialize(null, null);
+ currentSize = currentRegion.getLargestHStoreSize();
+ }
+ nextRegion =
+ new HRegion(tabledir, hlog, fs, conf, info[i + 1], null);
+ nextRegion.initialize(null, null);
+ nextSize = nextRegion.getLargestHStoreSize();
+
+ if ((currentSize + nextSize) <= (maxFilesize / 2)) {
+ // We merge two adjacent regions if their total size is less than
+ // one half of the desired maximum size
+ LOG.info("merging regions " + Bytes.toString(currentRegion.getRegionName())
+ + " and " + Bytes.toString(nextRegion.getRegionName()));
+ HRegion mergedRegion =
+ HRegion.mergeAdjacent(currentRegion, nextRegion);
+ updateMeta(currentRegion.getRegionName(), nextRegion.getRegionName(),
+ mergedRegion);
+ break;
+ }
+ LOG.info("not merging regions " + Bytes.toString(currentRegion.getRegionName())
+ + " and " + Bytes.toString(nextRegion.getRegionName()));
+ currentRegion.close();
+ currentRegion = nextRegion;
+ currentSize = nextSize;
+ }
+ if(currentRegion != null) {
+ currentRegion.close();
+ }
+ return true;
+ }
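+    // For illustration only: with the default hbase.hregion.max.filesize of
+    // 256MB (DEFAULT_MAX_FILE_SIZE), the test above merges two adjacent
+    // regions only when their combined largest-store sizes total at most 128MB.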
+
+ protected abstract HRegionInfo[] next() throws IOException;
+
+ protected abstract void updateMeta(final byte [] oldRegion1,
+ final byte [] oldRegion2, HRegion newRegion)
+ throws IOException;
+
+ }
+
+ /** Instantiated to compact a normal user table */
+ private static class OnlineMerger extends Merger {
+ private final byte [] tableName;
+ private final HTable table;
+ private final Scanner metaScanner;
+ private HRegionInfo latestRegion;
+
+ OnlineMerger(HBaseConfiguration conf, FileSystem fs,
+ final byte [] tableName)
+ throws IOException {
+ super(conf, fs, tableName);
+ this.tableName = tableName;
+ this.table = new HTable(conf, META_TABLE_NAME);
+ this.metaScanner = table.getScanner(COL_REGIONINFO_ARRAY, tableName);
+ this.latestRegion = null;
+ }
+
+ private HRegionInfo nextRegion() throws IOException {
+ try {
+ RowResult results = getMetaRow();
+ if (results == null) {
+ return null;
+ }
+ Cell regionInfo = results.get(COL_REGIONINFO);
+ if (regionInfo == null || regionInfo.getValue().length == 0) {
+ throw new NoSuchElementException("meta region entry missing " +
+ Bytes.toString(COL_REGIONINFO));
+ }
+ HRegionInfo region = Writables.getHRegionInfo(regionInfo.getValue());
+ if (!Bytes.equals(region.getTableDesc().getName(), this.tableName)) {
+ return null;
+ }
+ checkOfflined(region);
+ return region;
+ } catch (IOException e) {
+ e = RemoteExceptionHandler.checkIOException(e);
+ LOG.error("meta scanner error", e);
+ metaScanner.close();
+ throw e;
+ }
+ }
+
+ protected void checkOfflined(final HRegionInfo hri)
+ throws TableNotDisabledException {
+ if (!hri.isOffline()) {
+ throw new TableNotDisabledException("Region " +
+ hri.getRegionNameAsString() + " is not disabled");
+ }
+ }
+
+ /*
+     * Check that the current row has an HRegionInfo.  Skip to the next row if
+     * the HRI is empty.
+     * @return A Map of the row content, else null if we are off the end.
+ * @throws IOException
+ */
+ private RowResult getMetaRow() throws IOException {
+ RowResult currentRow = metaScanner.next();
+ boolean foundResult = false;
+ while (currentRow != null) {
+ LOG.info("Row: <" + Bytes.toString(currentRow.getRow()) + ">");
+ Cell regionInfo = currentRow.get(COL_REGIONINFO);
+ if (regionInfo == null || regionInfo.getValue().length == 0) {
+ currentRow = metaScanner.next();
+ continue;
+ }
+ foundResult = true;
+ break;
+ }
+ return foundResult ? currentRow : null;
+ }
+
+ @Override
+ protected HRegionInfo[] next() throws IOException {
+ List<HRegionInfo> regions = new ArrayList<HRegionInfo>();
+ if(latestRegion == null) {
+ latestRegion = nextRegion();
+ }
+ if(latestRegion != null) {
+ regions.add(latestRegion);
+ }
+ latestRegion = nextRegion();
+ if(latestRegion != null) {
+ regions.add(latestRegion);
+ }
+ return regions.toArray(new HRegionInfo[regions.size()]);
+ }
+
+ @Override
+ protected void updateMeta(final byte [] oldRegion1,
+ final byte [] oldRegion2,
+ HRegion newRegion)
+ throws IOException {
+ byte[][] regionsToDelete = {oldRegion1, oldRegion2};
+ for (int r = 0; r < regionsToDelete.length; r++) {
+ if(Bytes.equals(regionsToDelete[r], latestRegion.getRegionName())) {
+ latestRegion = null;
+ }
+ table.deleteAll(regionsToDelete[r]);
+ if(LOG.isDebugEnabled()) {
+ LOG.debug("updated columns in row: " + Bytes.toString(regionsToDelete[r]));
+ }
+ }
+ newRegion.getRegionInfo().setOffline(true);
+
+ BatchUpdate update = new BatchUpdate(newRegion.getRegionName());
+ update.put(COL_REGIONINFO,
+ Writables.getBytes(newRegion.getRegionInfo()));
+ table.commit(update);
+
+ if(LOG.isDebugEnabled()) {
+ LOG.debug("updated columns in row: "
+ + Bytes.toString(newRegion.getRegionName()));
+ }
+ }
+ }
+
+ /** Instantiated to compact the meta region */
+ private static class OfflineMerger extends Merger {
+ private final List<HRegionInfo> metaRegions = new ArrayList<HRegionInfo>();
+ private final HRegion root;
+
+ OfflineMerger(HBaseConfiguration conf, FileSystem fs)
+ throws IOException {
+
+ super(conf, fs, META_TABLE_NAME);
+
+ Path rootTableDir = HTableDescriptor.getTableDir(
+ fs.makeQualified(new Path(conf.get(HBASE_DIR))),
+ ROOT_TABLE_NAME);
+
+ // Scan root region to find all the meta regions
+
+ root = new HRegion(rootTableDir, hlog, fs, conf,
+ HRegionInfo.ROOT_REGIONINFO, null);
+ root.initialize(null, null);
+
+ InternalScanner rootScanner =
+ root.getScanner(COL_REGIONINFO_ARRAY, HConstants.EMPTY_START_ROW,
+ HConstants.LATEST_TIMESTAMP, null);
+
+ try {
+ List<KeyValue> results = new ArrayList<KeyValue>();
+ while(rootScanner.next(results)) {
+ for(KeyValue kv: results) {
+ HRegionInfo info = Writables.getHRegionInfoOrNull(kv.getValue());
+ if (info != null) {
+ metaRegions.add(info);
+ }
+ }
+ }
+ } finally {
+ rootScanner.close();
+ try {
+ root.close();
+
+ } catch(IOException e) {
+ LOG.error(e);
+ }
+ }
+ }
+
+ @Override
+ protected HRegionInfo[] next() {
+ HRegionInfo[] results = null;
+ if (metaRegions.size() > 0) {
+ results = metaRegions.toArray(new HRegionInfo[metaRegions.size()]);
+ metaRegions.clear();
+ }
+ return results;
+ }
+
+ @Override
+ protected void updateMeta(final byte [] oldRegion1,
+ final byte [] oldRegion2, HRegion newRegion)
+ throws IOException {
+ byte[][] regionsToDelete = {oldRegion1, oldRegion2};
+ for(int r = 0; r < regionsToDelete.length; r++) {
+ BatchUpdate b = new BatchUpdate(regionsToDelete[r]);
+ b.delete(COL_REGIONINFO);
+ b.delete(COL_SERVER);
+ b.delete(COL_STARTCODE);
+ b.delete(COL_SPLITA);
+ b.delete(COL_SPLITB);
+ root.batchUpdate(b,null);
+
+ if(LOG.isDebugEnabled()) {
+ LOG.debug("updated columns in row: " + Bytes.toString(regionsToDelete[r]));
+ }
+ }
+ HRegionInfo newInfo = newRegion.getRegionInfo();
+ newInfo.setOffline(true);
+ BatchUpdate b = new BatchUpdate(newRegion.getRegionName());
+ b.put(COL_REGIONINFO, Writables.getBytes(newInfo));
+ root.batchUpdate(b,null);
+ if(LOG.isDebugEnabled()) {
+ LOG.debug("updated columns in row: " + Bytes.toString(newRegion.getRegionName()));
+ }
+ }
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/HMsg.java b/src/java/org/apache/hadoop/hbase/HMsg.java
new file mode 100644
index 0000000..11c7a3d
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/HMsg.java
@@ -0,0 +1,264 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.Writable;
+
+/**
+ * HMsg is for communicating instructions between the HMaster and the
+ * HRegionServers.
+ *
+ * Most of the time the messages are simple but some messages are accompanied
+ * by the region affected.  HMsg may also carry an optional message.
+ */
+public class HMsg implements Writable {
+ /**
+ * Message types sent between master and regionservers
+ */
+ public static enum Type {
+ /** null message */
+ MSG_NONE,
+
+ // Message types sent from master to region server
+ /** Start serving the specified region */
+ MSG_REGION_OPEN,
+
+ /** Stop serving the specified region */
+ MSG_REGION_CLOSE,
+
+ /** Split the specified region */
+ MSG_REGION_SPLIT,
+
+ /** Compact the specified region */
+ MSG_REGION_COMPACT,
+
+ /** Region server is unknown to master. Restart */
+ MSG_CALL_SERVER_STARTUP,
+
+ /** Master tells region server to stop */
+ MSG_REGIONSERVER_STOP,
+
+ /** Stop serving the specified region and don't report back that it's
+ * closed
+ */
+ MSG_REGION_CLOSE_WITHOUT_REPORT,
+
+ /** Stop serving user regions */
+ MSG_REGIONSERVER_QUIESCE,
+
+ // Message types sent from the region server to the master
+ /** region server is now serving the specified region */
+ MSG_REPORT_OPEN,
+
+ /** region server is no longer serving the specified region */
+ MSG_REPORT_CLOSE,
+
+ /** region server is processing open request */
+ MSG_REPORT_PROCESS_OPEN,
+
+ /**
+ * Region server split the region associated with this message.
+ *
+ * Note that this message is immediately followed by two MSG_REPORT_OPEN
+ * messages, one for each of the new regions resulting from the split
+ */
+ MSG_REPORT_SPLIT,
+
+ /**
+ * Region server is shutting down
+ *
+ * Note that this message is followed by MSG_REPORT_CLOSE messages for each
+ * region the region server was serving, unless it was told to quiesce.
+ */
+ MSG_REPORT_EXITING,
+
+ /** Region server has closed all user regions but is still serving meta
+ * regions
+ */
+ MSG_REPORT_QUIESCED,
+
+ /**
+ * Flush
+ */
+ MSG_REGION_FLUSH,
+
+ /**
+ * Run Major Compaction
+ */
+ MSG_REGION_MAJOR_COMPACT,
+ }
+
+ private Type type = null;
+ private HRegionInfo info = null;
+ private byte[] message = null;
+
+ /** Default constructor. Used during deserialization */
+ public HMsg() {
+ this(Type.MSG_NONE);
+ }
+
+ /**
+   * Construct a message of the specified type with an empty HRegionInfo
+ * @param type Message type
+ */
+ public HMsg(final HMsg.Type type) {
+ this(type, new HRegionInfo(), null);
+ }
+
+ /**
+   * Construct a message of the specified type with the specified HRegionInfo
+ * @param type Message type
+ * @param hri Region to which message <code>type</code> applies
+ */
+ public HMsg(final HMsg.Type type, final HRegionInfo hri) {
+ this(type, hri, null);
+ }
+
+ /**
+   * Construct a message of the specified type with the specified HRegionInfo
+   * and an optional message
+ *
+ * @param type Message type
+ * @param hri Region to which message <code>type</code> applies. Cannot be
+   * null.  If no region info is associated, use the other constructor.
+ * @param msg Optional message (Stringified exception, etc.)
+ */
+ public HMsg(final HMsg.Type type, final HRegionInfo hri, final byte[] msg) {
+ if (type == null) {
+ throw new NullPointerException("Message type cannot be null");
+ }
+ this.type = type;
+ if (hri == null) {
+ throw new NullPointerException("Region cannot be null");
+ }
+ this.info = hri;
+ this.message = msg;
+ }
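+  // For illustration only: a close-region directive carrying a reason might be
+  // constructed as below ("hri" is an assumed, pre-existing HRegionInfo).
+  //   HMsg msg = new HMsg(HMsg.Type.MSG_REGION_CLOSE, hri,
+  //     Bytes.toBytes("Forced close for rebalancing"));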
+
+ /**
+ * @return Region info or null if none associated with this message type.
+ */
+ public HRegionInfo getRegionInfo() {
+ return this.info;
+ }
+
+ /** @return the type of message */
+ public Type getType() {
+ return this.type;
+ }
+
+ /**
+ * @param other Message type to compare to
+ * @return True if we are of same message type as <code>other</code>
+ */
+ public boolean isType(final HMsg.Type other) {
+ return this.type.equals(other);
+ }
+
+  /** @return the optional message, or null if none */
+ public byte[] getMessage() {
+ return this.message;
+ }
+
+ /**
+ * @see java.lang.Object#toString()
+ */
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder();
+ sb.append(this.type.toString());
+ // If null or empty region, don't bother printing it out.
+ if (this.info != null && this.info.getRegionName().length > 0) {
+ sb.append(": ");
+ sb.append(this.info.toString());
+ }
+ if (this.message != null && this.message.length > 0) {
+ sb.append(": " + Bytes.toString(this.message));
+ }
+ return sb.toString();
+ }
+
+ /**
+ * @see java.lang.Object#equals(java.lang.Object)
+ */
+ @Override
+ public boolean equals(Object obj) {
+ if (this == obj) {
+ return true;
+ }
+ if (obj == null) {
+ return false;
+ }
+ if (getClass() != obj.getClass()) {
+ return false;
+ }
+ HMsg that = (HMsg)obj;
+    return this.type.equals(that.type) &&
+      ((this.info != null)? this.info.equals(that.info):
+        that.info == null);
+ }
+
+ /**
+ * @see java.lang.Object#hashCode()
+ */
+ @Override
+ public int hashCode() {
+ int result = this.type.hashCode();
+ if (this.info != null) {
+ result ^= this.info.hashCode();
+ }
+ return result;
+ }
+
+  //////////////////////////////////////////////////////////////////////////////
+  // Writable
+  //////////////////////////////////////////////////////////////////////////////
+
+ /**
+ * @see org.apache.hadoop.io.Writable#write(java.io.DataOutput)
+ */
+ public void write(DataOutput out) throws IOException {
+ out.writeInt(this.type.ordinal());
+ this.info.write(out);
+ if (this.message == null || this.message.length == 0) {
+ out.writeBoolean(false);
+ } else {
+ out.writeBoolean(true);
+ Bytes.writeByteArray(out, this.message);
+ }
+ }
+
+ /**
+ * @see org.apache.hadoop.io.Writable#readFields(java.io.DataInput)
+ */
+ public void readFields(DataInput in) throws IOException {
+ int ordinal = in.readInt();
+ this.type = HMsg.Type.values()[ordinal];
+ this.info.readFields(in);
+ boolean hasMessage = in.readBoolean();
+ if (hasMessage) {
+ this.message = Bytes.readByteArray(in);
+ }
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/HRegionInfo.java b/src/java/org/apache/hadoop/hbase/HRegionInfo.java
new file mode 100644
index 0000000..11bae89
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/HRegionInfo.java
@@ -0,0 +1,477 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Arrays;
+
+import org.apache.hadoop.hbase.KeyValue.KVComparator;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.JenkinsHash;
+import org.apache.hadoop.io.VersionedWritable;
+import org.apache.hadoop.io.WritableComparable;
+
+/**
+ * HRegion information.
+ * Contains HRegion id, start and end keys, a reference to this
+ * HRegion's table descriptor, etc.
+ */
+public class HRegionInfo extends VersionedWritable implements WritableComparable<HRegionInfo>{
+ private static final byte VERSION = 0;
+
+ /**
+ * @param regionName
+ * @return the encodedName
+ */
+ public static int encodeRegionName(final byte [] regionName) {
+ return Math.abs(JenkinsHash.getInstance().hash(regionName, regionName.length, 0));
+ }
+
+ /** delimiter used between portions of a region name */
+ public static final int DELIMITER = ',';
+
+ /** HRegionInfo for root region */
+ public static final HRegionInfo ROOT_REGIONINFO =
+ new HRegionInfo(0L, HTableDescriptor.ROOT_TABLEDESC);
+
+ /** HRegionInfo for first meta region */
+ public static final HRegionInfo FIRST_META_REGIONINFO =
+ new HRegionInfo(1L, HTableDescriptor.META_TABLEDESC);
+
+ private byte [] endKey = HConstants.EMPTY_BYTE_ARRAY;
+ private boolean offLine = false;
+ private long regionId = -1;
+ private transient byte [] regionName = HConstants.EMPTY_BYTE_ARRAY;
+ private String regionNameStr = "";
+ private boolean split = false;
+ private byte [] startKey = HConstants.EMPTY_BYTE_ARRAY;
+ protected HTableDescriptor tableDesc = null;
+ private int hashCode = -1;
+ //TODO: Move NO_HASH to HStoreFile which is really the only place it is used.
+ public static final int NO_HASH = -1;
+ private volatile int encodedName = NO_HASH;
+ private boolean splitRequest = false;
+
+ private void setHashCode() {
+ int result = Arrays.hashCode(this.regionName);
+ result ^= this.regionId;
+ result ^= Arrays.hashCode(this.startKey);
+ result ^= Arrays.hashCode(this.endKey);
+ result ^= Boolean.valueOf(this.offLine).hashCode();
+ result ^= this.tableDesc.hashCode();
+ this.hashCode = result;
+ }
+
+ /**
+ * Private constructor used constructing HRegionInfo for the catalog root and
+ * first meta regions
+ */
+ private HRegionInfo(long regionId, HTableDescriptor tableDesc) {
+ super();
+ this.regionId = regionId;
+ this.tableDesc = tableDesc;
+ this.regionName = createRegionName(tableDesc.getName(), null, regionId);
+ this.regionNameStr = Bytes.toString(this.regionName);
+ setHashCode();
+ }
+
+ /** Default constructor - creates empty object */
+ public HRegionInfo() {
+ super();
+ this.tableDesc = new HTableDescriptor();
+ }
+
+ /**
+ * Construct HRegionInfo with explicit parameters
+ *
+ * @param tableDesc the table descriptor
+ * @param startKey first key in region
+ * @param endKey end of key range
+ * @throws IllegalArgumentException
+ */
+ public HRegionInfo(final HTableDescriptor tableDesc, final byte [] startKey,
+ final byte [] endKey)
+ throws IllegalArgumentException {
+ this(tableDesc, startKey, endKey, false);
+ }
+
+ /**
+ * Construct HRegionInfo with explicit parameters
+ *
+ * @param tableDesc the table descriptor
+ * @param startKey first key in region
+ * @param endKey end of key range
+ * @param split true if this region has split and we have daughter regions
+   * that may or may not hold references to this region.
+ * @throws IllegalArgumentException
+ */
+ public HRegionInfo(HTableDescriptor tableDesc, final byte [] startKey,
+ final byte [] endKey, final boolean split)
+ throws IllegalArgumentException {
+ this(tableDesc, startKey, endKey, split, System.currentTimeMillis());
+ }
+
+ /**
+ * Construct HRegionInfo with explicit parameters
+ *
+ * @param tableDesc the table descriptor
+ * @param startKey first key in region
+ * @param endKey end of key range
+ * @param split true if this region has split and we have daughter regions
+   * that may or may not hold references to this region.
+ * @param regionid Region id to use.
+ * @throws IllegalArgumentException
+ */
+ public HRegionInfo(HTableDescriptor tableDesc, final byte [] startKey,
+ final byte [] endKey, final boolean split, final long regionid)
+ throws IllegalArgumentException {
+ super();
+ if (tableDesc == null) {
+ throw new IllegalArgumentException("tableDesc cannot be null");
+ }
+ this.offLine = false;
+ this.regionId = regionid;
+ this.regionName = createRegionName(tableDesc.getName(), startKey, regionId);
+ this.regionNameStr = Bytes.toString(this.regionName);
+ this.split = split;
+ this.endKey = endKey == null? HConstants.EMPTY_END_ROW: endKey.clone();
+ this.startKey = startKey == null?
+ HConstants.EMPTY_START_ROW: startKey.clone();
+ this.tableDesc = tableDesc;
+ setHashCode();
+ }
+
+ /**
+   * Construct a copy of another HRegionInfo
+ *
+ * @param other
+ */
+ public HRegionInfo(HRegionInfo other) {
+ super();
+ this.endKey = other.getEndKey();
+ this.offLine = other.isOffline();
+ this.regionId = other.getRegionId();
+ this.regionName = other.getRegionName();
+ this.regionNameStr = Bytes.toString(this.regionName);
+ this.split = other.isSplit();
+ this.startKey = other.getStartKey();
+ this.tableDesc = other.getTableDesc();
+ this.hashCode = other.hashCode();
+ this.encodedName = other.getEncodedName();
+ }
+
+ private static byte [] createRegionName(final byte [] tableName,
+ final byte [] startKey, final long regionid) {
+ return createRegionName(tableName, startKey, Long.toString(regionid));
+ }
+
+ /**
+ * Make a region name of passed parameters.
+ * @param tableName
+ * @param startKey Can be null
+ * @param id Region id.
+ * @return Region name made of passed tableName, startKey and id
+ */
+ public static byte [] createRegionName(final byte [] tableName,
+ final byte [] startKey, final String id) {
+ return createRegionName(tableName, startKey, Bytes.toBytes(id));
+ }
+ /**
+ * Make a region name of passed parameters.
+ * @param tableName
+ * @param startKey Can be null
+ * @param id Region id
+ * @return Region name made of passed tableName, startKey and id
+ */
+ public static byte [] createRegionName(final byte [] tableName,
+ final byte [] startKey, final byte [] id) {
+ byte [] b = new byte [tableName.length + 2 + id.length +
+ (startKey == null? 0: startKey.length)];
+ int offset = tableName.length;
+ System.arraycopy(tableName, 0, b, 0, offset);
+ b[offset++] = DELIMITER;
+ if (startKey != null && startKey.length > 0) {
+ System.arraycopy(startKey, 0, b, offset, startKey.length);
+ offset += startKey.length;
+ }
+ b[offset++] = DELIMITER;
+ System.arraycopy(id, 0, b, offset, id.length);
+ return b;
+ }
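+  // For illustration only: with made-up inputs, the method above joins the
+  // parts with the ',' delimiter, e.g.
+  //   createRegionName(Bytes.toBytes("mytable"), Bytes.toBytes("row1"), "1234")
+  //   returns the bytes of "mytable,row1,1234"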
+
+ /**
+ * Separate elements of a regionName.
+ * @param regionName
+ * @return Array of byte[] containing tableName, startKey and id
+ * @throws IOException
+ */
+ public static byte [][] parseRegionName(final byte [] regionName)
+ throws IOException {
+ int offset = -1;
+ for (int i = 0; i < regionName.length; i++) {
+ if (regionName[i] == DELIMITER) {
+ offset = i;
+ break;
+ }
+ }
+ if(offset == -1) throw new IOException("Invalid regionName format");
+ byte [] tableName = new byte[offset];
+ System.arraycopy(regionName, 0, tableName, 0, offset);
+ offset = -1;
+ for (int i = regionName.length - 1; i > 0; i--) {
+ if(regionName[i] == DELIMITER) {
+ offset = i;
+ break;
+ }
+ }
+ if(offset == -1) throw new IOException("Invalid regionName format");
+ byte [] startKey = HConstants.EMPTY_BYTE_ARRAY;
+ if(offset != tableName.length + 1) {
+ startKey = new byte[offset - tableName.length - 1];
+ System.arraycopy(regionName, tableName.length + 1, startKey, 0,
+ offset - tableName.length - 1);
+ }
+ byte [] id = new byte[regionName.length - offset - 1];
+ System.arraycopy(regionName, offset + 1, id, 0,
+ regionName.length - offset - 1);
+ byte [][] elements = new byte[3][];
+ elements[0] = tableName;
+ elements[1] = startKey;
+ elements[2] = id;
+ return elements;
+ }
+
+ /** @return the endKey */
+ public byte [] getEndKey(){
+ return endKey;
+ }
+
+ /** @return the regionId */
+ public long getRegionId(){
+ return regionId;
+ }
+
+ /**
+ * @return the regionName as an array of bytes.
+ * @see #getRegionNameAsString()
+ */
+ public byte [] getRegionName(){
+ return regionName;
+ }
+
+ /**
+ * @return Region name as a String for use in logging, etc.
+ */
+ public String getRegionNameAsString() {
+ return this.regionNameStr;
+ }
+
+ /** @return the encoded region name */
+ public synchronized int getEncodedName() {
+ if (this.encodedName == NO_HASH) {
+ this.encodedName = encodeRegionName(this.regionName);
+ }
+ return this.encodedName;
+ }
+
+ /** @return the startKey */
+ public byte [] getStartKey(){
+ return startKey;
+ }
+
+ /** @return the tableDesc */
+ public HTableDescriptor getTableDesc(){
+ return tableDesc;
+ }
+
+ /**
+ * @param newDesc new table descriptor to use
+ */
+ public void setTableDesc(HTableDescriptor newDesc) {
+ this.tableDesc = newDesc;
+ }
+
+ /** @return true if this is the root region */
+ public boolean isRootRegion() {
+ return this.tableDesc.isRootRegion();
+ }
+
+ /** @return true if this is the meta table */
+ public boolean isMetaTable() {
+ return this.tableDesc.isMetaTable();
+ }
+
+ /** @return true if this region is a meta region */
+ public boolean isMetaRegion() {
+ return this.tableDesc.isMetaRegion();
+ }
+
+ /**
+ * @return True if has been split and has daughters.
+ */
+ public boolean isSplit() {
+ return this.split;
+ }
+
+ /**
+ * @param split set split status
+ */
+ public void setSplit(boolean split) {
+ this.split = split;
+ }
+
+ /**
+ * @return True if this region is offline.
+ */
+ public boolean isOffline() {
+ return this.offLine;
+ }
+
+ /**
+   * @param offLine set the online/offline status
+ */
+ public void setOffline(boolean offLine) {
+ this.offLine = offLine;
+ }
+
+ /**
+ * @see java.lang.Object#toString()
+ */
+ @Override
+ public String toString() {
+ return "REGION => {" + HConstants.NAME + " => '" +
+ this.regionNameStr +
+ "', STARTKEY => '" +
+ Bytes.toString(this.startKey) + "', ENDKEY => '" +
+ Bytes.toString(this.endKey) +
+ "', ENCODED => " + getEncodedName() + "," +
+ (isOffline()? " OFFLINE => true,": "") +
+ (isSplit()? " SPLIT => true,": "") +
+ " TABLE => {" + this.tableDesc.toString() + "}";
+ }
+
+ /**
+ * @see java.lang.Object#equals(java.lang.Object)
+ */
+ @Override
+ public boolean equals(Object o) {
+ if (this == o) {
+ return true;
+ }
+ if (o == null) {
+ return false;
+ }
+ if (!(o instanceof HRegionInfo)) {
+ return false;
+ }
+ return this.compareTo((HRegionInfo)o) == 0;
+ }
+
+ /**
+ * @see java.lang.Object#hashCode()
+ */
+ @Override
+ public int hashCode() {
+ return this.hashCode;
+ }
+
+ /** @return the object version number */
+ @Override
+ public byte getVersion() {
+ return VERSION;
+ }
+
+ //
+ // Writable
+ //
+
+ @Override
+ public void write(DataOutput out) throws IOException {
+ super.write(out);
+ Bytes.writeByteArray(out, endKey);
+ out.writeBoolean(offLine);
+ out.writeLong(regionId);
+ Bytes.writeByteArray(out, regionName);
+ out.writeBoolean(split);
+ Bytes.writeByteArray(out, startKey);
+ tableDesc.write(out);
+ out.writeInt(hashCode);
+ }
+
+ @Override
+ public void readFields(DataInput in) throws IOException {
+ super.readFields(in);
+ this.endKey = Bytes.readByteArray(in);
+ this.offLine = in.readBoolean();
+ this.regionId = in.readLong();
+ this.regionName = Bytes.readByteArray(in);
+ this.regionNameStr = Bytes.toString(this.regionName);
+ this.split = in.readBoolean();
+ this.startKey = Bytes.readByteArray(in);
+ this.tableDesc.readFields(in);
+ this.hashCode = in.readInt();
+ }
+
+ //
+ // Comparable
+ //
+
+ public int compareTo(HRegionInfo o) {
+ if (o == null) {
+ return 1;
+ }
+
+ // Are regions of same table?
+ int result = this.tableDesc.compareTo(o.tableDesc);
+ if (result != 0) {
+ return result;
+ }
+
+ // Compare start keys.
+ result = Bytes.compareTo(this.startKey, o.startKey);
+ if (result != 0) {
+ return result;
+ }
+
+ // Compare end keys.
+ return Bytes.compareTo(this.endKey, o.endKey);
+ }
+
+ /**
+ * For internal use in forcing splits ahead of file size limit.
+ * @param b
+ * @return previous value
+ */
+ public boolean shouldSplit(boolean b) {
+ boolean old = this.splitRequest;
+ this.splitRequest = b;
+ return old;
+ }
+
+ /**
+ * @return Comparator to use comparing {@link KeyValue}s.
+ */
+ public KVComparator getComparator() {
+ return isRootRegion()? KeyValue.ROOT_COMPARATOR: isMetaRegion()?
+ KeyValue.META_COMPARATOR: KeyValue.COMPARATOR;
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/HRegionLocation.java b/src/java/org/apache/hadoop/hbase/HRegionLocation.java
new file mode 100644
index 0000000..6be0cff
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/HRegionLocation.java
@@ -0,0 +1,98 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+/**
+ * Contains the HRegionInfo for the region and the HServerAddress for the
+ * HRegionServer serving the region
+ */
+public class HRegionLocation implements Comparable<HRegionLocation> {
+ private HRegionInfo regionInfo;
+ private HServerAddress serverAddress;
+
+ /**
+ * Constructor
+ *
+ * @param regionInfo the HRegionInfo for the region
+ * @param serverAddress the HServerAddress for the region server
+ */
+ public HRegionLocation(HRegionInfo regionInfo, HServerAddress serverAddress) {
+ this.regionInfo = regionInfo;
+ this.serverAddress = serverAddress;
+ }
+
+ /**
+ * @see java.lang.Object#toString()
+ */
+ @Override
+ public String toString() {
+ return "address: " + this.serverAddress.toString() + ", regioninfo: " +
+ this.regionInfo;
+ }
+
+ /**
+ * @see java.lang.Object#equals(java.lang.Object)
+ */
+ @Override
+ public boolean equals(Object o) {
+ if (this == o) {
+ return true;
+ }
+ if (o == null) {
+ return false;
+ }
+ if (!(o instanceof HRegionLocation)) {
+ return false;
+ }
+ return this.compareTo((HRegionLocation)o) == 0;
+ }
+
+ /**
+ * @see java.lang.Object#hashCode()
+ */
+ @Override
+ public int hashCode() {
+ int result = this.regionInfo.hashCode();
+ result ^= this.serverAddress.hashCode();
+ return result;
+ }
+
+ /** @return HRegionInfo */
+ public HRegionInfo getRegionInfo(){
+ return regionInfo;
+ }
+
+ /** @return HServerAddress */
+ public HServerAddress getServerAddress(){
+ return serverAddress;
+ }
+
+ //
+ // Comparable
+ //
+
+ public int compareTo(HRegionLocation o) {
+ int result = this.regionInfo.compareTo(o.regionInfo);
+ if(result == 0) {
+ result = this.serverAddress.compareTo(o.serverAddress);
+ }
+ return result;
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/HServerAddress.java b/src/java/org/apache/hadoop/hbase/HServerAddress.java
new file mode 100644
index 0000000..eb54b13
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/HServerAddress.java
@@ -0,0 +1,187 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import org.apache.hadoop.io.*;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.net.InetSocketAddress;
+
+/**
+ * HServerAddress is a "label" for an HBase server that combines the host
+ * name and port number.
+ */
+public class HServerAddress implements WritableComparable<HServerAddress> {
+ private InetSocketAddress address;
+ String stringValue;
+
+ /** Empty constructor, used for Writable */
+ public HServerAddress() {
+ this.address = null;
+ this.stringValue = null;
+ }
+
+ /**
+ * Construct a HServerAddress from an InetSocketAddress
+ * @param address InetSocketAddress of server
+ */
+ public HServerAddress(InetSocketAddress address) {
+ this.address = address;
+ this.stringValue = address.getAddress().getHostAddress() + ":" +
+ address.getPort();
+ }
+
+ /**
+ * Construct a HServerAddress from a string of the form hostname:port
+ *
+ * @param hostAndPort format 'hostname:port'
+ */
+ public HServerAddress(String hostAndPort) {
+ int colonIndex = hostAndPort.lastIndexOf(':');
+ if(colonIndex < 0) {
+ throw new IllegalArgumentException("Not a host:port pair: " + hostAndPort);
+ }
+ String host = hostAndPort.substring(0, colonIndex);
+ int port =
+ Integer.valueOf(hostAndPort.substring(colonIndex + 1)).intValue();
+ this.address = new InetSocketAddress(host, port);
+ this.stringValue = hostAndPort;
+ }
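+  // For illustration only (host name is made up): the constructor above splits
+  // on the last ':' so, e.g., new HServerAddress("rs1.example.com:60020")
+  // yields host "rs1.example.com" and port 60020.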
+
+ /**
+ * Construct a HServerAddress from hostname, port number
+ * @param bindAddress host name
+ * @param port port number
+ */
+ public HServerAddress(String bindAddress, int port) {
+ this.address = new InetSocketAddress(bindAddress, port);
+ this.stringValue = bindAddress + ":" + port;
+ }
+
+ /**
+ * Construct a HServerAddress from another HServerAddress
+ *
+ * @param other the HServerAddress to copy from
+ */
+ public HServerAddress(HServerAddress other) {
+ String bindAddress = other.getBindAddress();
+ int port = other.getPort();
+ address = new InetSocketAddress(bindAddress, port);
+ stringValue = bindAddress + ":" + port;
+ }
+
+ /** @return bind address */
+ public String getBindAddress() {
+ return address.getAddress().getHostAddress();
+ }
+
+ /** @return port number */
+ public int getPort() {
+ return address.getPort();
+ }
+
+ /** @return host name */
+ public String getHostname() {
+ return address.getHostName();
+ }
+
+ /** @return the InetSocketAddress */
+ public InetSocketAddress getInetSocketAddress() {
+ return address;
+ }
+
+ /**
+ * @see java.lang.Object#toString()
+ */
+ @Override
+ public String toString() {
+ return (stringValue == null ? "" : stringValue);
+ }
+
+ /**
+ * @see java.lang.Object#equals(java.lang.Object)
+ */
+ @Override
+ public boolean equals(Object o) {
+ if (this == o) {
+ return true;
+ }
+ if (o == null) {
+ return false;
+ }
+ if (getClass() != o.getClass()) {
+ return false;
+ }
+ return this.compareTo((HServerAddress)o) == 0;
+ }
+
+ /**
+ * @see java.lang.Object#hashCode()
+ */
+ @Override
+ public int hashCode() {
+ int result = this.address.hashCode();
+ result ^= this.stringValue.hashCode();
+ return result;
+ }
+
+ //
+ // Writable
+ //
+
+ public void readFields(DataInput in) throws IOException {
+ String bindAddress = in.readUTF();
+ int port = in.readInt();
+
+ if(bindAddress == null || bindAddress.length() == 0) {
+ address = null;
+ stringValue = null;
+
+ } else {
+ address = new InetSocketAddress(bindAddress, port);
+ stringValue = bindAddress + ":" + port;
+ }
+ }
+
+ public void write(DataOutput out) throws IOException {
+ if(address == null) {
+ out.writeUTF("");
+ out.writeInt(0);
+
+ } else {
+ out.writeUTF(address.getAddress().getHostAddress());
+ out.writeInt(address.getPort());
+ }
+ }
+
+ //
+ // Comparable
+ //
+
+ public int compareTo(HServerAddress o) {
+    // Addresses rendered as Strings may not compare equal even though they
+    // refer to the same server; one address may have the hostname resolved
+    // while the other only has the IP.
+ if (this.address.equals(o.address)) return 0;
+ return this.toString().compareTo(o.toString());
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/HServerInfo.java b/src/java/org/apache/hadoop/hbase/HServerInfo.java
new file mode 100644
index 0000000..1c005e1
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/HServerInfo.java
@@ -0,0 +1,254 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.apache.hadoop.io.WritableComparable;
+
+
+/**
+ * HServerInfo contains metainfo about an HRegionServer.  Currently it
+ * carries the server address, start code, info port, host name and
+ * load statistics.
+ */
+public class HServerInfo implements WritableComparable<HServerInfo> {
+ private HServerAddress serverAddress;
+ private long startCode;
+ private HServerLoad load;
+ private int infoPort;
+ private transient volatile String serverName = null;
+ private String name;
+
+ /** default constructor - used by Writable */
+ public HServerInfo() {
+ this(new HServerAddress(), 0,
+ HConstants.DEFAULT_REGIONSERVER_INFOPORT, "default name");
+ }
+
+ /**
+ * Constructor
+ * @param serverAddress
+ * @param startCode
+   * @param infoPort Port the info server is listening on.
+   * @param name Host name of the server.
+ */
+ public HServerInfo(HServerAddress serverAddress, long startCode,
+ final int infoPort, String name) {
+ this.serverAddress = serverAddress;
+ this.startCode = startCode;
+ this.load = new HServerLoad();
+ this.infoPort = infoPort;
+ this.name = name;
+ }
+
+ /**
+ * Construct a new object using another as input (like a copy constructor)
+ * @param other
+ */
+ public HServerInfo(HServerInfo other) {
+ this.serverAddress = new HServerAddress(other.getServerAddress());
+ this.startCode = other.getStartCode();
+ this.load = other.getLoad();
+ this.infoPort = other.getInfoPort();
+ this.name = other.getName();
+ }
+
+ /**
+ * @return the load
+ */
+ public HServerLoad getLoad() {
+ return load;
+ }
+
+ /**
+ * @param load the load to set
+ */
+ public void setLoad(HServerLoad load) {
+ this.load = load;
+ }
+
+ /** @return the server address */
+ public synchronized HServerAddress getServerAddress() {
+ return new HServerAddress(serverAddress);
+ }
+
+ /**
+ * Change the server address.
+ * @param serverAddress New server address
+ */
+ public synchronized void setServerAddress(HServerAddress serverAddress) {
+ this.serverAddress = serverAddress;
+ this.serverName = null;
+ }
+
+ /** @return the server start code */
+ public synchronized long getStartCode() {
+ return startCode;
+ }
+
+ /**
+ * @return Port the info server is listening on.
+ */
+ public int getInfoPort() {
+ return this.infoPort;
+ }
+
+ /**
+ * @param infoPort - new port of info server
+ */
+ public void setInfoPort(int infoPort) {
+ this.infoPort = infoPort;
+ }
+
+ /**
+ * @param startCode the startCode to set
+ */
+ public synchronized void setStartCode(long startCode) {
+ this.startCode = startCode;
+ this.serverName = null;
+ }
+
+ /**
+ * @return the server name in the form hostname_startcode_port
+ */
+ public synchronized String getServerName() {
+ if (this.serverName == null) {
+ this.serverName = getServerName(this.serverAddress, this.startCode);
+ }
+ return this.serverName;
+ }
+
+ /**
+ * Get the hostname of the server
+ * @return hostname
+ */
+ public String getName() {
+ return name;
+ }
+
+ /**
+ * Set the hostname of the server
+ * @param name hostname
+ */
+ public void setName(String name) {
+ this.name = name;
+ }
+
+ /**
+ * @see java.lang.Object#toString()
+ */
+ @Override
+ public String toString() {
+ return "address: " + this.serverAddress + ", startcode: " + this.startCode
+ + ", load: (" + this.load.toString() + ")";
+ }
+
+ /**
+ * @see java.lang.Object#equals(java.lang.Object)
+ */
+ @Override
+ public boolean equals(Object obj) {
+ if (this == obj) {
+ return true;
+ }
+ if (obj == null) {
+ return false;
+ }
+ if (getClass() != obj.getClass()) {
+ return false;
+ }
+ return compareTo((HServerInfo)obj) == 0;
+ }
+
+ /**
+ * @see java.lang.Object#hashCode()
+ */
+ @Override
+ public int hashCode() {
+ return this.getServerName().hashCode();
+ }
+
+
+ // Writable
+
+ public void readFields(DataInput in) throws IOException {
+ this.serverAddress.readFields(in);
+ this.startCode = in.readLong();
+ this.load.readFields(in);
+ this.infoPort = in.readInt();
+ this.name = in.readUTF();
+ }
+
+ public void write(DataOutput out) throws IOException {
+ this.serverAddress.write(out);
+ out.writeLong(this.startCode);
+ this.load.write(out);
+ out.writeInt(this.infoPort);
+ out.writeUTF(name);
+ }
+
+ public int compareTo(HServerInfo o) {
+ return this.getServerName().compareTo(o.getServerName());
+ }
+
+ /**
+ * @param info
+ * @return the server name in the form hostname_startcode_port
+ */
+ public static String getServerName(HServerInfo info) {
+ return getServerName(info.getServerAddress(), info.getStartCode());
+ }
+
+ /**
+ * @param serverAddress in the form hostname:port
+ * @param startCode
+ * @return the server name in the form hostname_startcode_port
+ */
+ public static String getServerName(String serverAddress, long startCode) {
+ String name = null;
+ if (serverAddress != null) {
+ HServerAddress address = new HServerAddress(serverAddress);
+ name = getServerName(address.getHostname(), address.getPort(), startCode);
+ }
+ return name;
+ }
+
+ /**
+ * @param address
+ * @param startCode
+ * @return the server name in the form hostname_startcode_port
+ */
+ public static String getServerName(HServerAddress address, long startCode) {
+ return getServerName(address.getHostname(), address.getPort(), startCode);
+ }
+
+ private static String getServerName(String hostName, int port, long startCode) {
+ StringBuilder name = new StringBuilder(hostName);
+ name.append("_");
+ name.append(startCode);
+ name.append("_");
+ name.append(port);
+ return name.toString();
+ }
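+  // For illustration only (arguments are made up): the helper above produces
+  //   getServerName("rs1.example.com", 60020, 1234567890L)
+  //     -> "rs1.example.com_1234567890_60020"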
+}
diff --git a/src/java/org/apache/hadoop/hbase/HServerLoad.java b/src/java/org/apache/hadoop/hbase/HServerLoad.java
new file mode 100644
index 0000000..f1bae42
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/HServerLoad.java
@@ -0,0 +1,437 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+
+import org.apache.hadoop.hbase.util.Strings;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+
+/**
+ * This class encapsulates metrics for determining the load on an HRegionServer
+ */
+public class HServerLoad implements WritableComparable<HServerLoad> {
+ /** number of regions */
+ // could just use regionLoad.size() but master.RegionManager likes to play
+ // around with this value while passing HServerLoad objects around during
+ // balancer calculations
+ private int numberOfRegions;
+ /** number of requests since last report */
+ private int numberOfRequests;
+ /** the amount of used heap, in MB */
+ private int usedHeapMB;
+ /** the maximum allowable size of the heap, in MB */
+ private int maxHeapMB;
+ /** per-region load metrics */
+ private ArrayList<RegionLoad> regionLoad = new ArrayList<RegionLoad>();
+
+ /**
+ * Encapsulates per-region loading metrics.
+ */
+ public static class RegionLoad implements Writable {
+ /** the region name */
+ private byte[] name;
+ /** the number of stores for the region */
+ private int stores;
+ /** the number of storefiles for the region */
+ private int storefiles;
+ /** the current size of the memcache for the region, in MB */
+ private int memcacheSizeMB;
+ /** the current total size of storefile indexes for the region, in MB */
+ private int storefileIndexSizeMB;
+
+ /**
+ * Constructor, for Writable
+ */
+ public RegionLoad() {
+ super();
+ }
+
+ /**
+ * @param name
+ * @param stores
+ * @param storefiles
+ * @param memcacheSizeMB
+ * @param storefileIndexSizeMB
+ */
+ public RegionLoad(final byte[] name, final int stores,
+ final int storefiles, final int memcacheSizeMB,
+ final int storefileIndexSizeMB) {
+ this.name = name;
+ this.stores = stores;
+ this.storefiles = storefiles;
+ this.memcacheSizeMB = memcacheSizeMB;
+ this.storefileIndexSizeMB = storefileIndexSizeMB;
+ }
+
+ // Getters
+
+ /**
+ * @return the region name
+ */
+ public byte[] getName() {
+ return name;
+ }
+
+ /**
+ * @return the number of stores
+ */
+ public int getStores() {
+ return stores;
+ }
+
+ /**
+ * @return the number of storefiles
+ */
+ public int getStorefiles() {
+ return storefiles;
+ }
+
+ /**
+ * @return the memcache size, in MB
+ */
+ public int getMemcacheSizeMB() {
+ return memcacheSizeMB;
+ }
+
+ /**
+ * @return the approximate size of storefile indexes on the heap, in MB
+ */
+ public int getStorefileIndexSizeMB() {
+ return storefileIndexSizeMB;
+ }
+
+ // Setters
+
+ /**
+ * @param name the region name
+ */
+ public void setName(byte[] name) {
+ this.name = name;
+ }
+
+ /**
+ * @param stores the number of stores
+ */
+ public void setStores(int stores) {
+ this.stores = stores;
+ }
+
+ /**
+ * @param storefiles the number of storefiles
+ */
+ public void setStorefiles(int storefiles) {
+ this.storefiles = storefiles;
+ }
+
+ /**
+ * @param memcacheSizeMB the memcache size, in MB
+ */
+ public void setMemcacheSizeMB(int memcacheSizeMB) {
+ this.memcacheSizeMB = memcacheSizeMB;
+ }
+
+ /**
+ * @param storefileIndexSizeMB the approximate size of storefile indexes
+ * on the heap, in MB
+ */
+ public void setStorefileIndexSizeMB(int storefileIndexSizeMB) {
+ this.storefileIndexSizeMB = storefileIndexSizeMB;
+ }
+
+ // Writable
+ public void readFields(DataInput in) throws IOException {
+ int namelen = in.readInt();
+ this.name = new byte[namelen];
+ in.readFully(this.name);
+ this.stores = in.readInt();
+ this.storefiles = in.readInt();
+ this.memcacheSizeMB = in.readInt();
+ this.storefileIndexSizeMB = in.readInt();
+ }
+
+ public void write(DataOutput out) throws IOException {
+ out.writeInt(name.length);
+ out.write(name);
+ out.writeInt(stores);
+ out.writeInt(storefiles);
+ out.writeInt(memcacheSizeMB);
+ out.writeInt(storefileIndexSizeMB);
+ }
+
+ /**
+ * @see java.lang.Object#toString()
+ */
+ @Override
+ public String toString() {
+ StringBuilder sb = Strings.appendKeyValue(new StringBuilder(), "stores",
+ Integer.valueOf(this.stores));
+ sb = Strings.appendKeyValue(sb, "storefiles",
+ Integer.valueOf(this.storefiles));
+ sb = Strings.appendKeyValue(sb, "memcacheSize",
+ Integer.valueOf(this.memcacheSizeMB));
+ sb = Strings.appendKeyValue(sb, "storefileIndexSize",
+ Integer.valueOf(this.storefileIndexSizeMB));
+ return sb.toString();
+ }
+ }
+
+ /*
+ * TODO: Other metrics that might be considered when the master is actually
+ * doing load balancing instead of merely trying to decide where to assign
+ * a region:
+ * <ul>
+ * <li># of CPUs, heap size (to determine the "class" of machine). For
+ * now, we consider them to be homogeneous.</li>
+ * <li>#requests per region (Map<{String|HRegionInfo}, Integer>)</li>
+ * <li>#compactions and/or #splits (churn)</li>
+ * <li>server death rate (maybe there is something wrong with this server)</li>
+ * </ul>
+ */
+
+ /** default constructor (used by Writable) */
+ public HServerLoad() {
+ super();
+ }
+
+ /**
+ * Constructor
+ * @param numberOfRequests
+ * @param usedHeapMB
+ * @param maxHeapMB
+ */
+ public HServerLoad(final int numberOfRequests, final int usedHeapMB,
+ final int maxHeapMB) {
+ this.numberOfRequests = numberOfRequests;
+ this.usedHeapMB = usedHeapMB;
+ this.maxHeapMB = maxHeapMB;
+ }
+
+ /**
+ * Constructor
+ * @param hsl the template HServerLoad
+ */
+ public HServerLoad(final HServerLoad hsl) {
+ this(hsl.numberOfRequests, hsl.usedHeapMB, hsl.maxHeapMB);
+ this.regionLoad.addAll(hsl.regionLoad);
+ }
+
+ /**
+ * Originally, this method factored in the effect of requests going to the
+ * server as well. However, this does not interact very well with the current
+ * region rebalancing code, which only factors number of regions. For the
+ * interim, until we can figure out how to make rebalancing use all the info
+ * available, we're just going to make load purely the number of regions.
+ *
+ * @return load factor for this server
+ */
+ public int getLoad() {
+ // int load = numberOfRequests == 0 ? 1 : numberOfRequests;
+ // load *= numberOfRegions == 0 ? 1 : numberOfRegions;
+ // return load;
+ return numberOfRegions;
+ }
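+
+ /**
+ * Illustrative sketch only (hypothetical helper, not in the original HBase
+ * source): with the simplified load metric above, ordering between two
+ * servers depends purely on region count, regardless of request rate or
+ * heap usage.
+ */
+ static boolean demoRegionCountOrdering() {
+ HServerLoad busy = new HServerLoad(500, 100, 1000); // many requests
+ busy.setNumberOfRegions(10);
+ HServerLoad quiet = new HServerLoad(5, 100, 1000); // few requests
+ quiet.setNumberOfRegions(12);
+ // busy compares as the lighter server despite its higher request count
+ return busy.compareTo(quiet) < 0; // true: 10 regions < 12 regions
+ }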
+
+ /**
+ * @see java.lang.Object#toString()
+ */
+ @Override
+ public String toString() {
+ return toString(1);
+ }
+
+ /**
+ * Returns toString() with the number of requests divided by the message
+ * interval in seconds
+ * @param msgInterval
+ * @return The load as a String
+ */
+ public String toString(int msgInterval) {
+ StringBuilder sb = new StringBuilder();
+ sb = Strings.appendKeyValue(sb, "requests",
+ Integer.valueOf(numberOfRequests/msgInterval));
+ sb = Strings.appendKeyValue(sb, "regions",
+ Integer.valueOf(numberOfRegions));
+ sb = Strings.appendKeyValue(sb, "usedHeap",
+ Integer.valueOf(this.usedHeapMB));
+ sb = Strings.appendKeyValue(sb, "maxHeap", Integer.valueOf(maxHeapMB));
+ return sb.toString();
+ }
+
+ /**
+ * @see java.lang.Object#equals(java.lang.Object)
+ */
+ @Override
+ public boolean equals(Object o) {
+ if (this == o) {
+ return true;
+ }
+ if (o == null) {
+ return false;
+ }
+ if (getClass() != o.getClass()) {
+ return false;
+ }
+ return compareTo((HServerLoad)o) == 0;
+ }
+
+ /**
+ * @see java.lang.Object#hashCode()
+ */
+ @Override
+ public int hashCode() {
+ int result = Integer.valueOf(numberOfRequests).hashCode();
+ result ^= Integer.valueOf(numberOfRegions).hashCode();
+ return result;
+ }
+
+ // Getters
+
+ /**
+ * @return the numberOfRegions
+ */
+ public int getNumberOfRegions() {
+ return numberOfRegions;
+ }
+
+ /**
+ * @return the numberOfRequests
+ */
+ public int getNumberOfRequests() {
+ return numberOfRequests;
+ }
+
+ /**
+ * @return Count of storefiles on this regionserver
+ */
+ public int getStorefiles() {
+ int count = 0;
+ for (RegionLoad info: regionLoad)
+ count += info.getStorefiles();
+ return count;
+ }
+
+ /**
+ * @return Size of memcaches in MB
+ */
+ public int getMemcacheSizeInMB() {
+ int count = 0;
+ for (RegionLoad info: regionLoad)
+ count += info.getMemcacheSizeMB();
+ return count;
+ }
+
+ /**
+ * @return Size of store file indexes in MB
+ */
+ public int getStorefileIndexSizeInMB() {
+ int count = 0;
+ for (RegionLoad info: regionLoad)
+ count += info.getStorefileIndexSizeMB();
+ return count;
+ }
+
+ // Setters
+
+ /**
+ * @param numberOfRegions the number of regions
+ */
+ public void setNumberOfRegions(int numberOfRegions) {
+ this.numberOfRegions = numberOfRegions;
+ }
+
+ /**
+ * @param numberOfRequests the number of requests to set
+ */
+ public void setNumberOfRequests(int numberOfRequests) {
+ this.numberOfRequests = numberOfRequests;
+ }
+
+ /**
+ * @param usedHeapMB the amount of heap in use, in MB
+ */
+ public void setUsedHeapMB(int usedHeapMB) {
+ this.usedHeapMB = usedHeapMB;
+ }
+
+ /**
+ * @param maxHeapMB the maximum allowable heap size, in MB
+ */
+ public void setMaxHeapMB(int maxHeapMB) {
+ this.maxHeapMB = maxHeapMB;
+ }
+
+ /**
+ * @param load Instance of HServerLoad
+ */
+ public void addRegionInfo(final HServerLoad.RegionLoad load) {
+ this.numberOfRegions++;
+ this.regionLoad.add(load);
+ }
+
+ /**
+ * @param name
+ * @param stores
+ * @param storefiles
+ * @param memcacheSizeMB
+ * @param storefileIndexSizeMB
+ * @deprecated Use {@link #addRegionInfo(RegionLoad)}
+ */
+ @Deprecated
+ public void addRegionInfo(final byte[] name, final int stores,
+ final int storefiles, final int memcacheSizeMB,
+ final int storefileIndexSizeMB) {
+ this.regionLoad.add(new HServerLoad.RegionLoad(name, stores, storefiles,
+ memcacheSizeMB, storefileIndexSizeMB));
+ }
+
+ // Writable
+
+ public void readFields(DataInput in) throws IOException {
+ numberOfRequests = in.readInt();
+ usedHeapMB = in.readInt();
+ maxHeapMB = in.readInt();
+ numberOfRegions = in.readInt();
+ for (int i = 0; i < numberOfRegions; i++) {
+ RegionLoad rl = new RegionLoad();
+ rl.readFields(in);
+ regionLoad.add(rl);
+ }
+ }
+
+ public void write(DataOutput out) throws IOException {
+ out.writeInt(numberOfRequests);
+ out.writeInt(usedHeapMB);
+ out.writeInt(maxHeapMB);
+ out.writeInt(numberOfRegions);
+ for (int i = 0; i < numberOfRegions; i++)
+ regionLoad.get(i).write(out);
+ }
+
+ // Comparable
+
+ public int compareTo(HServerLoad o) {
+ return this.getLoad() - o.getLoad();
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/HStoreKey.java b/src/java/org/apache/hadoop/hbase/HStoreKey.java
new file mode 100644
index 0000000..88b9415
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/HStoreKey.java
@@ -0,0 +1,1116 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.nio.ByteBuffer;
+
+import org.apache.hadoop.hbase.io.HeapSize;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Writables;
+import org.apache.hadoop.io.RawComparator;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.io.WritableComparator;
+import org.apache.hadoop.io.WritableUtils;
+
+/**
+ * A Key for a stored row.
+ * @deprecated Replaced by {@link KeyValue}.
+ */
+public class HStoreKey implements WritableComparable<HStoreKey>, HeapSize {
+ /**
+ * Colon character in UTF-8
+ */
+ public static final char COLUMN_FAMILY_DELIMITER = ':';
+
+ /**
+ * Estimated size tax paid for each instance of HSK. Estimate based on
+ * study of jhat and jprofiler numbers.
+ */
+ // jprofiler reports a shallow size of 48 bytes. Add to that the cost of the
+ // two byte arrays and then something for the HRI hosting.
+ public static final int ESTIMATED_HEAP_TAX = 48;
+
+ private byte [] row = HConstants.EMPTY_BYTE_ARRAY;
+ private byte [] column = HConstants.EMPTY_BYTE_ARRAY;
+ private long timestamp = Long.MAX_VALUE;
+
+ private static final HStoreKey.StoreKeyComparator PLAIN_COMPARATOR =
+ new HStoreKey.StoreKeyComparator();
+ private static final HStoreKey.StoreKeyComparator META_COMPARATOR =
+ new HStoreKey.MetaStoreKeyComparator();
+ private static final HStoreKey.StoreKeyComparator ROOT_COMPARATOR =
+ new HStoreKey.RootStoreKeyComparator();
+
+ /** Default constructor used in conjunction with Writable interface */
+ public HStoreKey() {
+ super();
+ }
+
+ /**
+ * Create an HStoreKey specifying only the row.
+ * The column defaults to the empty string and the timestamp defaults to
+ * Long.MAX_VALUE.
+ *
+ * @param row - row key
+ */
+ public HStoreKey(final byte [] row) {
+ this(row, Long.MAX_VALUE);
+ }
+
+ /**
+ * Create an HStoreKey specifying only the row.
+ * The column defaults to the empty string and the timestamp defaults to
+ * Long.MAX_VALUE.
+ *
+ * @param row - row key
+ */
+ public HStoreKey(final String row) {
+ this(Bytes.toBytes(row), Long.MAX_VALUE);
+ }
+
+ /**
+ * Create an HStoreKey specifying the row and timestamp.
+ * The column defaults to the empty string.
+ *
+ * @param row row key
+ * @param timestamp timestamp value
+ */
+ public HStoreKey(final byte [] row, final long timestamp) {
+ this(row, HConstants.EMPTY_BYTE_ARRAY, timestamp);
+ }
+
+ /**
+ * Create an HStoreKey specifying the row and column names.
+ * The timestamp defaults to LATEST_TIMESTAMP.
+ *
+ * @param row row key
+ * @param column column key
+ */
+ public HStoreKey(final String row, final String column) {
+ this(row, column, HConstants.LATEST_TIMESTAMP);
+ }
+
+ /**
+ * Create an HStoreKey specifying the row and column names.
+ * The timestamp defaults to LATEST_TIMESTAMP.
+ *
+ * @param row row key
+ * @param column column key
+ */
+ public HStoreKey(final byte [] row, final byte [] column) {
+ this(row, column, HConstants.LATEST_TIMESTAMP);
+ }
+
+ /**
+ * Create an HStoreKey specifying all the fields
+ * Does not make copies of the passed byte arrays. Presumes the passed
+ * arrays immutable.
+ * @param row row key
+ * @param column column key
+ * @param timestamp timestamp value
+ */
+ public HStoreKey(final String row, final String column, final long timestamp) {
+ this (Bytes.toBytes(row), Bytes.toBytes(column), timestamp);
+ }
+
+ /**
+ * Create an HStoreKey specifying all the fields.
+ * Does not make copies of the passed byte arrays. Presumes the passed
+ * arrays immutable.
+ * @param row row key
+ * @param column column key
+ * @param timestamp timestamp value
+ */
+ public HStoreKey(final byte [] row, final byte [] column, final long timestamp) {
+ // Note: does not copy the passed arrays.
+ this.row = row;
+ this.column = column;
+ this.timestamp = timestamp;
+ }
+
+ /**
+ * Constructs a new HStoreKey from another
+ *
+ * @param other the source key
+ */
+ public HStoreKey(final HStoreKey other) {
+ this(other.getRow(), other.getColumn(), other.getTimestamp());
+ }
+
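+ /**
+ * Constructs an HStoreKey from a serialized key held in the passed buffer.
+ * @param bb ByteBuffer containing a serialized HStoreKey, as produced by
+ * {@link #getBytes(HStoreKey)}
+ */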
+ public HStoreKey(final ByteBuffer bb) {
+ this(getRow(bb), getColumn(bb), getTimestamp(bb));
+ }
+
+ /**
+ * Change the value of the row key
+ *
+ * @param newrow new row key value
+ */
+ public void setRow(final byte [] newrow) {
+ this.row = newrow;
+ }
+
+ /**
+ * Change the value of the column in this key
+ *
+ * @param c new column family value
+ */
+ public void setColumn(final byte [] c) {
+ this.column = c;
+ }
+
+ /**
+ * Change the value of the timestamp field
+ *
+ * @param timestamp new timestamp value
+ */
+ public void setVersion(final long timestamp) {
+ this.timestamp = timestamp;
+ }
+
+ /**
+ * Set the value of this HStoreKey from the supplied key
+ *
+ * @param k key value to copy
+ */
+ public void set(final HStoreKey k) {
+ this.row = k.getRow();
+ this.column = k.getColumn();
+ this.timestamp = k.getTimestamp();
+ }
+
+ /** @return value of row key */
+ public byte [] getRow() {
+ return row;
+ }
+
+ /** @return value of column */
+ public byte [] getColumn() {
+ return this.column;
+ }
+
+ /** @return value of timestamp */
+ public long getTimestamp() {
+ return this.timestamp;
+ }
+
+ /**
+ * Compares the row and column of two keys
+ * @param other Key to compare against. Compares row and column.
+ * @return True if same row and column.
+ * @see #matchesWithoutColumn(HStoreKey)
+ * @see #matchesRowFamily(HStoreKey)
+ */
+ public boolean matchesRowCol(final HStoreKey other) {
+ return HStoreKey.equalsTwoRowKeys(getRow(), other.getRow()) &&
+ Bytes.equals(getColumn(), other.getColumn());
+ }
+
+ /**
+ * Compares the row and timestamp of two keys
+ *
+ * @param other Key to compare against. Compares row and timestamp.
+ *
+ * @return True if same row and this key's timestamp is greater than or equal
+ * to that of <code>other</code>
+ * @see #matchesRowCol(HStoreKey)
+ * @see #matchesRowFamily(HStoreKey)
+ */
+ public boolean matchesWithoutColumn(final HStoreKey other) {
+ return equalsTwoRowKeys(getRow(), other.getRow()) &&
+ getTimestamp() >= other.getTimestamp();
+ }
+
+ /**
+ * Compares the row and column family of two keys
+ *
+ * @param that Key to compare against. Compares row and column family
+ *
+ * @return true if same row and column family
+ * @see #matchesRowCol(HStoreKey)
+ * @see #matchesWithoutColumn(HStoreKey)
+ */
+ public boolean matchesRowFamily(final HStoreKey that) {
+ final int delimiterIndex = getFamilyDelimiterIndex(getColumn());
+ return equalsTwoRowKeys(getRow(), that.getRow()) &&
+ Bytes.compareTo(getColumn(), 0, delimiterIndex, that.getColumn(), 0,
+ delimiterIndex) == 0;
+ }
+
+ /**
+ * @see java.lang.Object#toString()
+ */
+ @Override
+ public String toString() {
+ return Bytes.toString(this.row) + "/" + Bytes.toString(this.column) + "/" +
+ timestamp;
+ }
+
+ /**
+ * @see java.lang.Object#equals(java.lang.Object)
+ */
+ @Override
+ public boolean equals(final Object obj) {
+ if (this == obj) {
+ return true;
+ }
+ if (obj == null) {
+ return false;
+ }
+ if (getClass() != obj.getClass()) {
+ return false;
+ }
+ final HStoreKey other = (HStoreKey)obj;
+ // Do a quick check.
+ if (this.row.length != other.row.length ||
+ this.column.length != other.column.length ||
+ this.timestamp != other.timestamp) {
+ return false;
+ }
+ return compareTo(other) == 0;
+ }
+
+ /**
+ * @see java.lang.Object#hashCode()
+ */
+ @Override
+ public int hashCode() {
+ int c = Bytes.hashCode(getRow());
+ c ^= Bytes.hashCode(getColumn());
+ c ^= getTimestamp();
+ return c;
+ }
+
+ // Comparable
+
+ /**
+ * @param o
+ * @return int
+ * @deprecated Use Comparators instead. This can give wrong results.
+ */
+ @Deprecated
+ public int compareTo(final HStoreKey o) {
+ return compareTo(this, o);
+ }
+
+ /**
+ * @param left
+ * @param right
+ * @return Standard compareTo result for <code>left</code> and <code>right</code>
+ * @deprecated Use Comparators instead. This can give wrong results because it
+ * does not take into account the special handling needed for meta and root rows.
+ */
+ @Deprecated
+ static int compareTo(final HStoreKey left, final HStoreKey right) {
+ // We can be passed null
+ if (left == null && right == null) return 0;
+ if (left == null) return -1;
+ if (right == null) return 1;
+
+ int result = Bytes.compareTo(left.getRow(), right.getRow());
+ if (result != 0) {
+ return result;
+ }
+ result = left.getColumn() == null && right.getColumn() == null? 0:
+ left.getColumn() == null && right.getColumn() != null? -1:
+ left.getColumn() != null && right.getColumn() == null? 1:
+ Bytes.compareTo(left.getColumn(), right.getColumn());
+ if (result != 0) {
+ return result;
+ }
+ // The below older timestamps sorting ahead of newer timestamps looks
+ // wrong but it is intentional. This way, newer timestamps are first
+ // found when we iterate over a memcache and newer versions are the
+ // first we trip over when reading from a store file.
+ if (left.getTimestamp() < right.getTimestamp()) {
+ result = 1;
+ } else if (left.getTimestamp() > right.getTimestamp()) {
+ result = -1;
+ }
+ return result;
+ }
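+
+ /**
+ * Illustrative sketch only (hypothetical helper, not in the original HBase
+ * source): it shows the intentional ordering described above -- for the same
+ * row and column, the key with the newer timestamp sorts first.
+ */
+ static boolean demoNewestTimestampSortsFirst() {
+ HStoreKey newer = new HStoreKey(Bytes.toBytes("row1"), Bytes.toBytes("info:a"), 2000L);
+ HStoreKey older = new HStoreKey(Bytes.toBytes("row1"), Bytes.toBytes("info:a"), 1000L);
+ // The recommended comparator reports the newer key as "smaller".
+ return new HStoreKeyComparator().compare(newer, older) < 0; // true
+ }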
+
+ /**
+ * @param column
+ * @return New byte array that holds <code>column</code> family prefix only
+ * (Does not include the colon DELIMITER).
+ * @throws ColumnNameParseException
+ * @see #parseColumn(byte[])
+ */
+ public static byte [] getFamily(final byte [] column)
+ throws ColumnNameParseException {
+ final int index = getFamilyDelimiterIndex(column);
+ if (index <= 0) {
+ throw new ColumnNameParseException("Missing ':' delimiter between " +
+ "column family and qualifier in the passed column name <" +
+ Bytes.toString(column) + ">");
+ }
+ final byte [] result = new byte[index];
+ System.arraycopy(column, 0, result, 0, index);
+ return result;
+ }
+
+ /**
+ * @param column
+ * @return Return hash of family portion of passed column.
+ */
+ public static Integer getFamilyMapKey(final byte [] column) {
+ final int index = getFamilyDelimiterIndex(column);
+ // If index is -1 (no delimiter found), presume the passed column is a
+ // family name without the colon delimiter
+ return Bytes.mapKey(column, index > 0? index: column.length);
+ }
+
+ /**
+ * @param family
+ * @param column
+ * @return True if <code>column</code> has a family of <code>family</code>.
+ */
+ public static boolean matchingFamily(final byte [] family,
+ final byte [] column) {
+ // Make sure index of the ':' is at same offset.
+ final int index = getFamilyDelimiterIndex(column);
+ if (index != family.length) {
+ return false;
+ }
+ return Bytes.compareTo(family, 0, index, column, 0, index) == 0;
+ }
+
+ /**
+ * @param family
+ * @return Return <code>family</code> plus the family delimiter.
+ */
+ public static byte [] addDelimiter(final byte [] family) {
+ // Manufacture key by adding delimiter to the passed in colFamily.
+ final byte [] familyPlusDelimiter = new byte [family.length + 1];
+ System.arraycopy(family, 0, familyPlusDelimiter, 0, family.length);
+ familyPlusDelimiter[family.length] = HStoreKey.COLUMN_FAMILY_DELIMITER;
+ return familyPlusDelimiter;
+ }
+
+ /**
+ * @param column
+ * @return New byte array that holds <code>column</code> qualifier suffix.
+ * @see #parseColumn(byte[])
+ */
+ public static byte [] getQualifier(final byte [] column) {
+ final int index = getFamilyDelimiterIndex(column);
+ final int len = column.length - (index + 1);
+ final byte [] result = new byte[len];
+ System.arraycopy(column, index + 1, result, 0, len);
+ return result;
+ }
+
+ /**
+ * @param c Column name
+ * @return Return array of size two whose first element has the family
+ * prefix of passed column <code>c</code> and whose second element is the
+ * column qualifier.
+ * @throws ColumnNameParseException
+ */
+ public static byte [][] parseColumn(final byte [] c)
+ throws ColumnNameParseException {
+ final byte [][] result = new byte [2][];
+ final int index = getFamilyDelimiterIndex(c);
+ if (index == -1) {
+ throw new ColumnNameParseException("Impossible column name: " + Bytes.toString(c));
+ }
+ result[0] = new byte [index];
+ System.arraycopy(c, 0, result[0], 0, index);
+ final int len = c.length - (index + 1);
+ result[1] = new byte[len];
+ System.arraycopy(c, index + 1 /*Skip delimiter*/, result[1], 0,
+ len);
+ return result;
+ }
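+
+ /**
+ * Illustrative sketch only (hypothetical helper, not in the original HBase
+ * source): a column name is "family:qualifier", so parsing "info:name"
+ * yields the family "info" and the qualifier "name".
+ * @throws ColumnNameParseException if the example column were malformed
+ */
+ static boolean demoColumnParsing() throws ColumnNameParseException {
+ byte [] column = Bytes.toBytes("info:name");
+ byte [][] familyAndQualifier = parseColumn(column);
+ return Bytes.equals(familyAndQualifier[0], getFamily(column)) // "info"
+ && Bytes.equals(familyAndQualifier[1], getQualifier(column)) // "name"
+ && matchingFamily(Bytes.toBytes("info"), column); // true
+ }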
+
+ /**
+ * @param b
+ * @return Index of the family-qualifier colon delimiter character in passed
+ * buffer.
+ */
+ public static int getFamilyDelimiterIndex(final byte [] b) {
+ return getDelimiter(b, 0, b.length, COLUMN_FAMILY_DELIMITER);
+ }
+
+ private static int getRequiredDelimiterInReverse(final byte [] b,
+ final int offset, final int length, final int delimiter) {
+ int index = getDelimiterInReverse(b, offset, length, delimiter);
+ if (index < 0) {
+ throw new IllegalArgumentException("No " + delimiter + " in <" +
+ Bytes.toString(b) + ">" + ", length=" + length + ", offset=" + offset);
+ }
+ return index;
+ }
+
+ /*
+ * @param b
+ * @param delimiter
+ * @return Index of the first occurrence of <code>delimiter</code>, scanning
+ * forward from <code>offset</code>, or -1 if not found.
+ */
+ private static int getDelimiter(final byte [] b, int offset, final int length,
+ final int delimiter) {
+ if (b == null) {
+ throw new NullPointerException();
+ }
+ int result = -1;
+ for (int i = offset; i < length + offset; i++) {
+ if (b[i] == delimiter) {
+ result = i;
+ break;
+ }
+ }
+ return result;
+ }
+
+ /*
+ * @param b
+ * @param delimiter
+ * @return Index of <code>delimiter</code> scanning leftward from the end of
+ * the range, or -1 if not found.
+ */
+ private static int getDelimiterInReverse(final byte [] b, final int offset,
+ final int length, final int delimiter) {
+ if (b == null) {
+ throw new NullPointerException();
+ }
+ int result = -1;
+ for (int i = (offset + length) - 1; i >= offset; i--) {
+ if (b[i] == delimiter) {
+ result = i;
+ break;
+ }
+ }
+ return result;
+ }
+
+ /**
+ * Utility method to check if two row keys are equal.
+ * This is required because of the meta delimiters; it is a stop-gap.
+ * @param rowA
+ * @param rowB
+ * @return True if the two row keys are equal
+ */
+ public static boolean equalsTwoRowKeys(final byte[] rowA, final byte[] rowB) {
+ return ((rowA == null) && (rowB == null)) ? true:
+ (rowA == null) || (rowB == null) || (rowA.length != rowB.length) ? false:
+ Bytes.compareTo(rowA, rowB) == 0;
+ }
+
+ // Writable
+
+ public void write(final DataOutput out) throws IOException {
+ Bytes.writeByteArray(out, this.row);
+ Bytes.writeByteArray(out, this.column);
+ out.writeLong(timestamp);
+ }
+
+ public void readFields(final DataInput in) throws IOException {
+ this.row = Bytes.readByteArray(in);
+ this.column = Bytes.readByteArray(in);
+ this.timestamp = in.readLong();
+ }
+
+ /**
+ * @param hsk
+ * @return Size of this key in serialized bytes.
+ */
+ public static int getSerializedSize(final HStoreKey hsk) {
+ return getSerializedSize(hsk.getRow()) +
+ getSerializedSize(hsk.getColumn()) +
+ Bytes.SIZEOF_LONG;
+ }
+
+ /**
+ * @param b
+ * @return Length of the buffer once it has been serialized.
+ */
+ private static int getSerializedSize(final byte [] b) {
+ return b == null? 1: b.length + WritableUtils.getVIntSize(b.length);
+ }
+
+ public long heapSize() {
+ return getRow().length + Bytes.ESTIMATED_HEAP_TAX +
+ getColumn().length + Bytes.ESTIMATED_HEAP_TAX +
+ ESTIMATED_HEAP_TAX;
+ }
+
+ /**
+ * @return The bytes of this key gotten by running its
+ * {@link Writable#write(java.io.DataOutput)} method.
+ * @throws IOException
+ */
+ public byte [] getBytes() throws IOException {
+ return getBytes(this);
+ }
+
+ /**
+ * Return serialized <code>hsk</code> bytes.
+ * Note, this method's implementation has changed. Used to just return
+ * row and column. This is a customized version of
+ * {@link Writables#getBytes(Writable)}
+ * @param hsk Instance
+ * @return The bytes of <code>hsk</code> gotten by running its
+ * {@link Writable#write(java.io.DataOutput)} method.
+ * @throws IOException
+ */
+ public static byte [] getBytes(final HStoreKey hsk) throws IOException {
+ return getBytes(hsk.getRow(), hsk.getColumn(), hsk.getTimestamp());
+ }
+
+ /**
+ * @param row Can't be null
+ * @return Passed arguments as a serialized HSK.
+ * @throws IOException
+ */
+ public static byte [] getBytes(final byte [] row)
+ throws IOException {
+ return getBytes(row, null, HConstants.LATEST_TIMESTAMP);
+ }
+
+ /**
+ * @param row Can't be null
+ * @param column Can be null
+ * @param ts
+ * @return Passed arguments as a serialized HSK.
+ * @throws IOException
+ */
+ public static byte [] getBytes(final byte [] row, final byte [] column,
+ final long ts)
+ throws IOException {
+ // TODO: Get vint sizes as I calculate serialized size of hsk.
+ byte [] b = new byte [getSerializedSize(row) +
+ getSerializedSize(column) + Bytes.SIZEOF_LONG];
+ int offset = Bytes.writeByteArray(b, 0, row, 0, row.length);
+ byte [] c = column == null? HConstants.EMPTY_BYTE_ARRAY: column;
+ offset = Bytes.writeByteArray(b, offset, c, 0, c.length);
+ byte [] timestamp = Bytes.toBytes(ts);
+ System.arraycopy(timestamp, 0, b, offset, timestamp.length);
+ return b;
+ }
+
+ /**
+ * @param bb ByteBuffer that contains serialized HStoreKey
+ * @return Row
+ */
+ public static byte [] getRow(final ByteBuffer bb) {
+ byte firstByte = bb.get(0);
+ int vint = firstByte;
+ int vintWidth = WritableUtils.decodeVIntSize(firstByte);
+ if (vintWidth != 1) {
+ vint = getBigVint(vintWidth, firstByte, bb.array(), bb.arrayOffset());
+ }
+ byte [] b = new byte [vint];
+ System.arraycopy(bb.array(), bb.arrayOffset() + vintWidth, b, 0, vint);
+ return b;
+ }
+
+ /**
+ * @param bb ByteBuffer that contains serialized HStoreKey
+ * @return Column
+ */
+ public static byte [] getColumn(final ByteBuffer bb) {
+ // Skip over row.
+ int offset = skipVintdByteArray(bb, 0);
+ byte firstByte = bb.get(offset);
+ int vint = firstByte;
+ int vintWidth = WritableUtils.decodeVIntSize(firstByte);
+ if (vintWidth != 1) {
+ vint = getBigVint(vintWidth, firstByte, bb.array(),
+ bb.arrayOffset() + offset);
+ }
+ byte [] b = new byte [vint];
+ System.arraycopy(bb.array(), bb.arrayOffset() + offset + vintWidth, b, 0,
+ vint);
+ return b;
+ }
+
+ /**
+ * @param bb ByteBuffer that contains serialized HStoreKey
+ * @return Timestamp
+ */
+ public static long getTimestamp(final ByteBuffer bb) {
+ return bb.getLong(bb.limit() - Bytes.SIZEOF_LONG);
+ }
+
+ /*
+ * @param bb
+ * @param offset
+ * @return Amount to skip to get past a byte array that is preceded by a
+ * vint of how long it is.
+ */
+ private static int skipVintdByteArray(final ByteBuffer bb, final int offset) {
+ byte firstByte = bb.get(offset);
+ int vint = firstByte;
+ int vintWidth = WritableUtils.decodeVIntSize(firstByte);
+ if (vintWidth != 1) {
+ vint = getBigVint(vintWidth, firstByte, bb.array(),
+ bb.arrayOffset() + offset);
+ }
+ return vint + vintWidth + offset;
+ }
+
+ /*
+ * Vint is wider than one byte. Find out how much bigger it is.
+ * @param vintWidth
+ * @param firstByte
+ * @param buffer
+ * @param offset
+ * @return
+ */
+ static int getBigVint(final int vintWidth, final byte firstByte,
+ final byte [] buffer, final int offset) {
+ long i = 0;
+ for (int idx = 0; idx < vintWidth - 1; idx++) {
+ final byte b = buffer[offset + 1 + idx];
+ i = i << 8;
+ i = i | (b & 0xFF);
+ }
+ i = (WritableUtils.isNegativeVInt(firstByte) ? (i ^ -1L) : i);
+ if (i > Integer.MAX_VALUE) {
+ throw new IllegalArgumentException("Calculated vint too large");
+ }
+ return (int)i;
+ }
+
+ /**
+ * Create a store key.
+ * @param bb
+ * @return HStoreKey instance made of the passed <code>bb</code>.
+ * @throws IOException
+ */
+ public static HStoreKey create(final ByteBuffer bb)
+ throws IOException {
+ return HStoreKey.create(bb.array(), bb.arrayOffset(), bb.limit());
+ }
+
+ /**
+ * Create a store key.
+ * @param b Serialized HStoreKey; a byte array with a row only in it won't do.
+ * It must have all the vints denoting r/c/ts lengths.
+ * @return HStoreKey instance made of the passed <code>b</code>.
+ * @throws IOException
+ */
+ public static HStoreKey create(final byte [] b) throws IOException {
+ return create(b, 0, b.length);
+ }
+
+ /**
+ * Create a store key.
+ * @param b Serialized HStoreKey
+ * @param offset
+ * @param length
+ * @return HStoreKey instance made of the passed <code>b</code>.
+ * @throws IOException
+ */
+ public static HStoreKey create(final byte [] b, final int offset,
+ final int length)
+ throws IOException {
+ byte firstByte = b[offset];
+ int vint = firstByte;
+ int vintWidth = WritableUtils.decodeVIntSize(firstByte);
+ if (vintWidth != 1) {
+ vint = getBigVint(vintWidth, firstByte, b, offset);
+ }
+ byte [] row = new byte [vint];
+ System.arraycopy(b, offset + vintWidth,
+ row, 0, row.length);
+ // Skip over row.
+ int extraOffset = vint + vintWidth;
+ firstByte = b[offset + extraOffset];
+ vint = firstByte;
+ vintWidth = WritableUtils.decodeVIntSize(firstByte);
+ if (vintWidth != 1) {
+ vint = getBigVint(vintWidth, firstByte, b, offset + extraOffset);
+ }
+ byte [] column = new byte [vint];
+ System.arraycopy(b, offset + extraOffset + vintWidth,
+ column, 0, column.length);
+ // Skip over column
+ extraOffset += (vint + vintWidth);
+ return new HStoreKey(row, column, Bytes.toLong(b, offset + extraOffset));
+ }
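+
+ /**
+ * Illustrative sketch only (hypothetical helper, not in the original HBase
+ * source): getBytes and create round-trip a key, and the static ByteBuffer
+ * accessors read the same serialized form without deserializing it.
+ * @throws IOException
+ */
+ static boolean demoSerializationRoundTrip() throws IOException {
+ HStoreKey original = new HStoreKey(Bytes.toBytes("row1"),
+ Bytes.toBytes("info:a"), 1234L);
+ byte [] serialized = original.getBytes(); // vint+row, vint+column, 8-byte timestamp
+ HStoreKey copy = HStoreKey.create(serialized);
+ ByteBuffer bb = ByteBuffer.wrap(serialized);
+ return original.equals(copy)
+ && Bytes.equals(getRow(bb), original.getRow())
+ && getTimestamp(bb) == 1234L;
+ }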
+
+ /**
+ * Passed as comparator for memcache and for store files. See HBASE-868.
+ * Use this comparator for keys in the -ROOT- table.
+ */
+ public static class HStoreKeyRootComparator extends HStoreKeyMetaComparator {
+ @Override
+ protected int compareRows(byte [] left, int loffset, int llength,
+ byte [] right, int roffset, int rlength) {
+ return compareRootRows(left, loffset, llength, right, roffset, rlength);
+ }
+ }
+
+ /**
+ * Passed as comparator for memcache and for store files. See HBASE-868.
+ * Use this comparator for keys in the .META. table.
+ */
+ public static class HStoreKeyMetaComparator extends HStoreKeyComparator {
+ @Override
+ protected int compareRows(byte [] left, int loffset, int llength,
+ byte [] right, int roffset, int rlength) {
+ return compareMetaRows(left, loffset, llength, right, roffset, rlength);
+ }
+ }
+
+ /**
+ * Passed as comparator for memcache and for store files. See HBASE-868.
+ */
+ public static class HStoreKeyComparator extends WritableComparator {
+ public HStoreKeyComparator() {
+ super(HStoreKey.class);
+ }
+
+ @Override
+ @SuppressWarnings("unchecked")
+ public int compare(final WritableComparable l,
+ final WritableComparable r) {
+ HStoreKey left = (HStoreKey)l;
+ HStoreKey right = (HStoreKey)r;
+ // We can be passed null
+ if (left == null && right == null) return 0;
+ if (left == null) return -1;
+ if (right == null) return 1;
+
+ byte [] lrow = left.getRow();
+ byte [] rrow = right.getRow();
+ int result = compareRows(lrow, 0, lrow.length, rrow, 0, rrow.length);
+ if (result != 0) {
+ return result;
+ }
+ result = left.getColumn() == null && right.getColumn() == null? 0:
+ left.getColumn() == null ? -1:right.getColumn() == null? 1:
+ Bytes.compareTo(left.getColumn(), right.getColumn());
+ if (result != 0) {
+ return result;
+ }
+ // The below older timestamps sorting ahead of newer timestamps looks
+ // wrong but it is intentional. This way, newer timestamps are first
+ // found when we iterate over a memcache and newer versions are the
+ // first we trip over when reading from a store file.
+ if (left.getTimestamp() < right.getTimestamp()) {
+ result = 1;
+ } else if (left.getTimestamp() > right.getTimestamp()) {
+ result = -1;
+ }
+ return result; // are equal
+ }
+
+ protected int compareRows(final byte [] left, final int loffset,
+ final int llength, final byte [] right, final int roffset,
+ final int rlength) {
+ return Bytes.compareTo(left, loffset, llength, right, roffset, rlength);
+ }
+ }
+
+ /**
+ * StoreKeyComparator for the -ROOT- table.
+ */
+ public static class RootStoreKeyComparator
+ extends MetaStoreKeyComparator {
+ private static final long serialVersionUID = 1L;
+
+ @Override
+ public int compareRows(byte [] left, int loffset, int llength,
+ byte [] right, int roffset, int rlength) {
+ return compareRootRows(left, loffset, llength, right, roffset, rlength);
+ }
+ }
+
+ /**
+ * StoreKeyComparator for the .META. table.
+ */
+ public static class MetaStoreKeyComparator extends StoreKeyComparator {
+ @Override
+ public int compareRows(byte [] left, int loffset, int llength,
+ byte [] right, int roffset, int rlength) {
+ return compareMetaRows(left, loffset, llength, right, roffset, rlength);
+ }
+ }
+
+ /*
+ * @param left
+ * @param loffset
+ * @param llength
+ * @param right
+ * @param roffset
+ * @param rlength
+ * @return Result of comparing two rows from the -ROOT- table both of which
+ * are of the form .META.,(TABLE,REGIONNAME,REGIONID),REGIONID.
+ */
+ protected static int compareRootRows(byte [] left, int loffset, int llength,
+ byte [] right, int roffset, int rlength) {
+ // Rows look like this: .META.,ROW_FROM_META,RID
+ // System.out.println("ROOT " + Bytes.toString(left, loffset, llength) +
+ // "---" + Bytes.toString(right, roffset, rlength));
+ int lmetaOffsetPlusDelimiter = loffset + 7; // '.META.,'
+ int leftFarDelimiter = getDelimiterInReverse(left, lmetaOffsetPlusDelimiter,
+ llength - lmetaOffsetPlusDelimiter, HRegionInfo.DELIMITER);
+ int rmetaOffsetPlusDelimiter = roffset + 7; // '.META.,'
+ int rightFarDelimiter = getDelimiterInReverse(right,
+ rmetaOffsetPlusDelimiter, rlength - rmetaOffsetPlusDelimiter,
+ HRegionInfo.DELIMITER);
+ if (leftFarDelimiter < 0 && rightFarDelimiter >= 0) {
+ // Nothing between .META. and the region id. It's the first key.
+ return -1;
+ } else if (rightFarDelimiter < 0 && leftFarDelimiter >= 0) {
+ return 1;
+ } else if (leftFarDelimiter < 0 && rightFarDelimiter < 0) {
+ return 0;
+ }
+ int result = compareMetaRows(left, lmetaOffsetPlusDelimiter,
+ leftFarDelimiter - lmetaOffsetPlusDelimiter,
+ right, rmetaOffsetPlusDelimiter,
+ rightFarDelimiter - rmetaOffsetPlusDelimiter);
+ if (result != 0) {
+ return result;
+ }
+ // Compare last part of row, the rowid.
+ leftFarDelimiter++;
+ rightFarDelimiter++;
+ result = compareRowid(left, leftFarDelimiter, llength - leftFarDelimiter,
+ right, rightFarDelimiter, rlength - rightFarDelimiter);
+ return result;
+ }
+
+ /*
+ * @param left
+ * @param loffset
+ * @param llength
+ * @param right
+ * @param roffset
+ * @param rlength
+ * @return Result of comparing two rows from the .META. table both of which
+ * are of the form TABLE,REGIONNAME,REGIONID.
+ */
+ protected static int compareMetaRows(final byte[] left, final int loffset,
+ final int llength, final byte[] right, final int roffset,
+ final int rlength) {
+// System.out.println("META " + Bytes.toString(left, loffset, llength) +
+// "---" + Bytes.toString(right, roffset, rlength));
+ int leftDelimiter = getDelimiter(left, loffset, llength,
+ HRegionInfo.DELIMITER);
+ int rightDelimiter = getDelimiter(right, roffset, rlength,
+ HRegionInfo.DELIMITER);
+ if (leftDelimiter < 0 && rightDelimiter >= 0) {
+ // Nothing between the table name and the region id. It's the first key.
+ return -1;
+ } else if (rightDelimiter < 0 && leftDelimiter >= 0) {
+ return 1;
+ } else if (leftDelimiter < 0 && rightDelimiter < 0) {
+ return 0;
+ }
+ // Compare up to the delimiter
+ int result = Bytes.compareTo(left, loffset, leftDelimiter - loffset,
+ right, roffset, rightDelimiter - roffset);
+ if (result != 0) {
+ return result;
+ }
+ // Compare middle bit of the row.
+ // Move past delimiter
+ leftDelimiter++;
+ rightDelimiter++;
+ int leftFarDelimiter = getRequiredDelimiterInReverse(left, leftDelimiter,
+ llength - (leftDelimiter - loffset), HRegionInfo.DELIMITER);
+ int rightFarDelimiter = getRequiredDelimiterInReverse(right,
+ rightDelimiter, rlength - (rightDelimiter - roffset),
+ HRegionInfo.DELIMITER);
+ // Now compare the middle section of the row.
+ result = Bytes.compareTo(left, leftDelimiter,
+ leftFarDelimiter - leftDelimiter, right, rightDelimiter,
+ rightFarDelimiter - rightDelimiter);
+ if (result != 0) {
+ return result;
+ }
+ // Compare last part of row, the rowid.
+ leftFarDelimiter++;
+ rightFarDelimiter++;
+ result = compareRowid(left, leftFarDelimiter,
+ llength - (leftFarDelimiter - loffset),
+ right, rightFarDelimiter, rlength - (rightFarDelimiter - roffset));
+ return result;
+ }
+
+ private static int compareRowid(byte[] left, int loffset, int llength,
+ byte[] right, int roffset, int rlength) {
+ return Bytes.compareTo(left, loffset, llength, right, roffset, rlength);
+ }
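+
+ /**
+ * Illustrative sketch only (hypothetical helper, not in the original HBase
+ * source): .META. rows of the form TABLE,REGIONNAME,REGIONID are compared
+ * piecewise -- table, then region start key, then region id -- rather than
+ * as flat bytes.
+ */
+ static boolean demoMetaRowOrdering() {
+ byte [] left = Bytes.toBytes("t1,aaa,1234");
+ byte [] right = Bytes.toBytes("t1,bbb,1234");
+ MetaStoreKeyComparator comparator = new MetaStoreKeyComparator();
+ // Same table and region id; "aaa" sorts before "bbb" in the middle section.
+ return comparator.compareRows(left, 0, left.length, right, 0, right.length) < 0;
+ }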
+
+ /**
+ * RawComparator for plain HStoreKeys -- i.e. keys of ordinary tables, not of
+ * the catalog tables -ROOT- and .META. Compares at the byte level. Knows how
+ * to handle the vints that introduce the row and column in the HSK byte array
+ * representation. Adds
+ * {@link #compareRows(byte[], int, int, byte[], int, int)} to
+ * {@link RawComparator}.
+ */
+ public static class StoreKeyComparator implements RawComparator<byte []> {
+ public StoreKeyComparator() {
+ super();
+ }
+
+ public int compare(final byte[] b1, final byte[] b2) {
+ return compare(b1, 0, b1.length, b2, 0, b2.length);
+ }
+
+ public int compare(final byte [] b1, int o1, int l1,
+ final byte [] b2, int o2, int l2) {
+ // Below is byte compare without creating new objects. Its awkward but
+ // seems no way around getting vint width, value, and compare result any
+ // other way. The passed byte arrays, b1 and b2, have a vint, row, vint,
+ // column, timestamp in them. The byte array was written by the
+ // #write(DataOutputStream) method above. See it to better understand the
+ // below.
+
+ // Calculate vint and vint width for rows in b1 and b2.
+ byte firstByte1 = b1[o1];
+ int vint1 = firstByte1;
+ int vintWidth1 = WritableUtils.decodeVIntSize(firstByte1);
+ if (vintWidth1 != 1) {
+ vint1 = getBigVint(vintWidth1, firstByte1, b1, o1);
+ }
+ byte firstByte2 = b2[o2];
+ int vint2 = firstByte2;
+ int vintWidth2 = WritableUtils.decodeVIntSize(firstByte2);
+ if (vintWidth2 != 1) {
+ vint2 = getBigVint(vintWidth2, firstByte2, b2, o2);
+ }
+ // Compare the rows.
+ int result = compareRows(b1, o1 + vintWidth1, vint1,
+ b2, o2 + vintWidth2, vint2);
+ if (result != 0) {
+ return result;
+ }
+
+ // Update offsets and lengths so we are aligned on columns.
+ int diff1 = vintWidth1 + vint1;
+ o1 += diff1;
+ l1 -= diff1;
+ int diff2 = vintWidth2 + vint2;
+ o2 += diff2;
+ l2 -= diff2;
+ // Calculate vint and vint width for columns in b1 and b2.
+ firstByte1 = b1[o1];
+ vint1 = firstByte1;
+ vintWidth1 = WritableUtils.decodeVIntSize(firstByte1);
+ if (vintWidth1 != 1) {
+ vint1 = getBigVint(vintWidth1, firstByte1, b1, o1);
+ }
+ firstByte2 = b2[o2];
+ vint2 = firstByte2;
+ vintWidth2 = WritableUtils.decodeVIntSize(firstByte2);
+ if (vintWidth2 != 1) {
+ vint2 = getBigVint(vintWidth2, firstByte2, b2, o2);
+ }
+ // Compare columns.
+ // System.out.println("COL <" + Bytes.toString(b1, o1 + vintWidth1, vint1) +
+ // "> <" + Bytes.toString(b2, o2 + vintWidth2, vint2) + ">");
+ result = Bytes.compareTo(b1, o1 + vintWidth1, vint1,
+ b2, o2 + vintWidth2, vint2);
+ if (result != 0) {
+ return result;
+ }
+
+ // Update offsets and lengths.
+ diff1 = vintWidth1 + vint1;
+ o1 += diff1;
+ l1 -= diff1;
+ diff2 = vintWidth2 + vint2;
+ o2 += diff2;
+ l2 -= diff2;
+ // The below older timestamps sorting ahead of newer timestamps looks
+ // wrong but it is intentional. This way, newer timestamps are first
+ // found when we iterate over a memcache and newer versions are the
+ // first we trip over when reading from a store file.
+ for (int i = 0; i < l1; i++) {
+ int leftb = b1[o1 + i] & 0xff;
+ int rightb = b2[o2 + i] & 0xff;
+ if (leftb < rightb) {
+ return 1;
+ } else if (leftb > rightb) {
+ return -1;
+ }
+ }
+ return 0;
+ }
+
+ /**
+ * @param left
+ * @param right
+ * @return Result comparing rows.
+ */
+ public int compareRows(final byte [] left, final byte [] right) {
+ return compareRows(left, 0, left.length, right, 0, right.length);
+ }
+
+ /**
+ * @param left
+ * @param loffset
+ * @param llength
+ * @param right
+ * @param roffset
+ * @param rlength
+ * @return Result comparing rows.
+ */
+ public int compareRows(final byte [] left, final int loffset,
+ final int llength, final byte [] right, final int roffset, final int rlength) {
+ return Bytes.compareTo(left, loffset, llength, right, roffset, rlength);
+ }
+ }
+
+ /**
+ * @param hri
+ * @return Compatible comparator
+ */
+ public static WritableComparator getWritableComparator(final HRegionInfo hri) {
+ return hri.isRootRegion()?
+ new HStoreKey.HStoreKeyRootComparator(): hri.isMetaRegion()?
+ new HStoreKey.HStoreKeyMetaComparator():
+ new HStoreKey.HStoreKeyComparator();
+ }
+
+ /**
+ * @param hri
+ * @return Compatible raw comparator
+ */
+ public static StoreKeyComparator getRawComparator(final HRegionInfo hri) {
+ return hri.isRootRegion() ? ROOT_COMPARATOR :
+ hri.isMetaRegion() ? META_COMPARATOR : PLAIN_COMPARATOR;
+ }
+
+ /**
+ * @param tablename
+ * @return Compatible raw comparator
+ */
+ public static HStoreKey.StoreKeyComparator getComparator(final byte [] tablename) {
+ return Bytes.equals(HTableDescriptor.ROOT_TABLEDESC.getName(), tablename)?
+ ROOT_COMPARATOR:
+ (Bytes.equals(HTableDescriptor.META_TABLEDESC.getName(),tablename))?
+ META_COMPARATOR: PLAIN_COMPARATOR;
+ }
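+
+ /**
+ * Illustrative sketch only (hypothetical helper and table name, not in the
+ * original HBase source): the raw comparators work directly on the
+ * serialized form produced by getBytes, so for the same row and column the
+ * key with the newer timestamp sorts first without deserializing anything.
+ * @throws IOException
+ */
+ static boolean demoRawComparatorOrdering() throws IOException {
+ byte [] newer = getBytes(Bytes.toBytes("row1"), Bytes.toBytes("info:a"), 2000L);
+ byte [] older = getBytes(Bytes.toBytes("row1"), Bytes.toBytes("info:a"), 1000L);
+ StoreKeyComparator plain = getComparator(Bytes.toBytes("mytable"));
+ return plain.compare(newer, older) < 0; // newer key sorts ahead of the older
+ }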
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/HTableDescriptor.java b/src/java/org/apache/hadoop/hbase/HTableDescriptor.java
new file mode 100644
index 0000000..bec2bb1
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/HTableDescriptor.java
@@ -0,0 +1,710 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.TreeMap;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.client.tableindexed.IndexSpecification;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.io.hfile.Compression;
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+import org.apache.hadoop.hbase.rest.serializer.IRestSerializer;
+import org.apache.hadoop.hbase.rest.serializer.ISerializable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.WritableComparable;
+
+import agilejson.TOJSON;
+
+/**
+ * HTableDescriptor contains the name of an HTable, and its
+ * column families.
+ */
+public class HTableDescriptor implements WritableComparable<HTableDescriptor>, ISerializable {
+
+ // Changes prior to version 3 were not recorded here.
+ // Version 3 adds metadata as a map where keys and values are byte[].
+ // Version 4 adds indexes
+ public static final byte TABLE_DESCRIPTOR_VERSION = 4;
+
+ private byte [] name = HConstants.EMPTY_BYTE_ARRAY;
+ private String nameAsString = "";
+
+ // Table metadata
+ protected Map<ImmutableBytesWritable, ImmutableBytesWritable> values =
+ new HashMap<ImmutableBytesWritable, ImmutableBytesWritable>();
+
+ public static final String FAMILIES = "FAMILIES";
+ public static final ImmutableBytesWritable FAMILIES_KEY =
+ new ImmutableBytesWritable(Bytes.toBytes(FAMILIES));
+ public static final String MAX_FILESIZE = "MAX_FILESIZE";
+ public static final ImmutableBytesWritable MAX_FILESIZE_KEY =
+ new ImmutableBytesWritable(Bytes.toBytes(MAX_FILESIZE));
+ public static final String READONLY = "READONLY";
+ public static final ImmutableBytesWritable READONLY_KEY =
+ new ImmutableBytesWritable(Bytes.toBytes(READONLY));
+ public static final String MEMCACHE_FLUSHSIZE = "MEMCACHE_FLUSHSIZE";
+ public static final ImmutableBytesWritable MEMCACHE_FLUSHSIZE_KEY =
+ new ImmutableBytesWritable(Bytes.toBytes(MEMCACHE_FLUSHSIZE));
+ public static final String IS_ROOT = "IS_ROOT";
+ public static final ImmutableBytesWritable IS_ROOT_KEY =
+ new ImmutableBytesWritable(Bytes.toBytes(IS_ROOT));
+ public static final String IS_META = "IS_META";
+
+ public static final ImmutableBytesWritable IS_META_KEY =
+ new ImmutableBytesWritable(Bytes.toBytes(IS_META));
+
+
+ // The below are ugly but better than creating them each time till we
+ // replace booleans being saved as Strings with plain booleans. Need a
+ // migration script to do this. TODO.
+ private static final ImmutableBytesWritable FALSE =
+ new ImmutableBytesWritable(Bytes.toBytes(Boolean.FALSE.toString()));
+ private static final ImmutableBytesWritable TRUE =
+ new ImmutableBytesWritable(Bytes.toBytes(Boolean.TRUE.toString()));
+
+ public static final boolean DEFAULT_IN_MEMORY = false;
+
+ public static final boolean DEFAULT_READONLY = false;
+
+ public static final int DEFAULT_MEMCACHE_FLUSH_SIZE = 1024*1024*64;
+
+ public static final int DEFAULT_MAX_FILESIZE = 1024*1024*256;
+
+ private volatile Boolean meta = null;
+ private volatile Boolean root = null;
+
+ // Key is hash of the family name.
+ private final Map<byte [], HColumnDescriptor> families =
+ new TreeMap<byte [], HColumnDescriptor>(KeyValue.FAMILY_COMPARATOR);
+
+ // Key is indexId
+ private final Map<String, IndexSpecification> indexes =
+ new HashMap<String, IndexSpecification>();
+
+ /**
+ * Protected constructor used internally to create table descriptors for
+ * catalog tables: e.g. .META. and -ROOT-.
+ */
+ protected HTableDescriptor(final byte [] name, HColumnDescriptor[] families) {
+ this.name = name.clone();
+ this.nameAsString = Bytes.toString(this.name);
+ setMetaFlags(name);
+ for(HColumnDescriptor descriptor : families) {
+ this.families.put(descriptor.getName(), descriptor);
+ }
+ }
+
+ /**
+ * Protected constructor used internally to create table descriptors for
+ * catalog tables: e.g. .META. and -ROOT-.
+ */
+ protected HTableDescriptor(final byte [] name, HColumnDescriptor[] families,
+ Collection<IndexSpecification> indexes,
+ Map<ImmutableBytesWritable,ImmutableBytesWritable> values) {
+ this.name = name.clone();
+ this.nameAsString = Bytes.toString(this.name);
+ setMetaFlags(name);
+ for(HColumnDescriptor descriptor : families) {
+ this.families.put(descriptor.getName(), descriptor);
+ }
+ for(IndexSpecification index : indexes) {
+ this.indexes.put(index.getIndexId(), index);
+ }
+ for (Map.Entry<ImmutableBytesWritable, ImmutableBytesWritable> entry:
+ values.entrySet()) {
+ this.values.put(entry.getKey(), entry.getValue());
+ }
+ }
+
+ /**
+ * Constructs an empty object.
+ * For deserializing an HTableDescriptor instance only.
+ * @see #HTableDescriptor(byte[])
+ */
+ public HTableDescriptor() {
+ super();
+ }
+
+ /**
+ * Constructor.
+ * @param name Table name.
+ * @throws IllegalArgumentException if passed a table name
+ * that is made of other than 'word' characters, underscore, dash or period:
+ * i.e. <code>[a-zA-Z_0-9-.]</code>.
+ * @see <a href="HADOOP-1581">HADOOP-1581 HBASE: Un-openable tablename bug</a>
+ */
+ public HTableDescriptor(final String name) {
+ this(Bytes.toBytes(name));
+ }
+
+ /**
+ * Constructor.
+ * @param name Table name.
+ * @throws IllegalArgumentException if passed a table name
+ * that is made of other than 'word' characters, underscore, dash or period:
+ * i.e. <code>[a-zA-Z_0-9-.]</code>.
+ * @see <a href="HADOOP-1581">HADOOP-1581 HBASE: Un-openable tablename bug</a>
+ */
+ public HTableDescriptor(final byte [] name) {
+ super();
+ setMetaFlags(this.name);
+ this.name = this.isMetaRegion()? name: isLegalTableName(name);
+ this.nameAsString = Bytes.toString(this.name);
+ }
+
+ /**
+ * Constructor.
+ * <p>
+ * Makes a deep copy of the supplied descriptor.
+ * Can make a modifiable descriptor from an UnmodifyableHTableDescriptor.
+ * @param desc The descriptor.
+ */
+ public HTableDescriptor(final HTableDescriptor desc) {
+ super();
+ this.name = desc.name.clone();
+ this.nameAsString = Bytes.toString(this.name);
+ setMetaFlags(this.name);
+ for (HColumnDescriptor c: desc.families.values()) {
+ this.families.put(c.getName(), new HColumnDescriptor(c));
+ }
+ for (Map.Entry<ImmutableBytesWritable, ImmutableBytesWritable> e:
+ desc.values.entrySet()) {
+ this.values.put(e.getKey(), e.getValue());
+ }
+ this.indexes.putAll(desc.indexes);
+ }
+
+ /*
+ * Set meta flags on this table.
+ * Called by constructors.
+ * @param name
+ */
+ private void setMetaFlags(final byte [] name) {
+ setRootRegion(Bytes.equals(name, HConstants.ROOT_TABLE_NAME));
+ setMetaRegion(isRootRegion() ||
+ Bytes.equals(name, HConstants.META_TABLE_NAME));
+ }
+
+ /** @return true if this is the root region */
+ public boolean isRootRegion() {
+ if (this.root == null) {
+ this.root = isSomething(IS_ROOT_KEY, false)? Boolean.TRUE: Boolean.FALSE;
+ }
+ return this.root.booleanValue();
+ }
+
+ /** @param isRoot true if this is the root region */
+ protected void setRootRegion(boolean isRoot) {
+ // TODO: Make the value a boolean rather than String of boolean.
+ values.put(IS_ROOT_KEY, isRoot? TRUE: FALSE);
+ }
+
+ /** @return true if this is a meta region (part of the root or meta tables) */
+ public boolean isMetaRegion() {
+ if (this.meta == null) {
+ this.meta = calculateIsMetaRegion();
+ }
+ return this.meta.booleanValue();
+ }
+
+ private synchronized Boolean calculateIsMetaRegion() {
+ byte [] value = getValue(IS_META_KEY);
+ return (value != null)? Boolean.valueOf(Bytes.toString(value)): Boolean.FALSE;
+ }
+
+ private boolean isSomething(final ImmutableBytesWritable key,
+ final boolean valueIfNull) {
+ byte [] value = getValue(key);
+ if (value != null) {
+ // TODO: Make value be a boolean rather than String of boolean.
+ return Boolean.valueOf(Bytes.toString(value)).booleanValue();
+ }
+ return valueIfNull;
+ }
+
+ /**
+ * @param isMeta true if this is a meta region (part of the root or meta
+ * tables) */
+ protected void setMetaRegion(boolean isMeta) {
+ values.put(IS_META_KEY, isMeta? TRUE: FALSE);
+ }
+
+ /** @return true if table is the meta table */
+ public boolean isMetaTable() {
+ return isMetaRegion() && !isRootRegion();
+ }
+
+ /**
+ * Check passed buffer is legal user-space table name.
+ * @param b Table name.
+ * @return Returns passed <code>b</code> param
+ * @throws NullPointerException If passed <code>b</code> is null
+ * @throws IllegalArgumentException if passed a table name
+ * that is made of other than 'word' characters, underscore, dash or period:
+ * i.e. <code>[a-zA-Z_0-9-.]</code>.
+ */
+ public static byte [] isLegalTableName(final byte [] b) {
+ if (b == null || b.length <= 0) {
+ throw new IllegalArgumentException("Name is null or empty");
+ }
+ if (b[0] == '.' || b[0] == '-') {
+ throw new IllegalArgumentException("Illegal first character <" + b[0] +
+ ">. " + "User-space table names can only start with 'word " +
+ "characters': i.e. [a-zA-Z_0-9]: " + Bytes.toString(b));
+ }
+ for (int i = 0; i < b.length; i++) {
+ if (Character.isLetterOrDigit(b[i]) || b[i] == '_' || b[i] == '-' ||
+ b[i] == '.') {
+ continue;
+ }
+ throw new IllegalArgumentException("Illegal character <" + b[i] + ">. " +
+ "User-space table names can only contain 'word characters':" +
+ "i.e. [a-zA-Z_0-9-.]: " + Bytes.toString(b));
+ }
+ return b;
+ }
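+
+ /**
+ * Illustrative sketch only (hypothetical helper and names, not in the
+ * original HBase source): user-space names may contain word characters,
+ * '_', '-' and '.', but may not begin with '.' or '-'.
+ */
+ static boolean demoTableNameRules() {
+ byte [] accepted = isLegalTableName(Bytes.toBytes("web-0.20_test"));
+ boolean rejected = false;
+ try {
+ isLegalTableName(Bytes.toBytes(".META.")); // leading '.' is refused
+ } catch (IllegalArgumentException expected) {
+ rejected = true;
+ }
+ return accepted != null && rejected; // true
+ }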
+
+ /**
+ * @param key The key.
+ * @return The value.
+ */
+ public byte[] getValue(byte[] key) {
+ return getValue(new ImmutableBytesWritable(key));
+ }
+
+ private byte[] getValue(final ImmutableBytesWritable key) {
+ ImmutableBytesWritable ibw = values.get(key);
+ if (ibw == null)
+ return null;
+ return ibw.get();
+ }
+
+ /**
+ * @param key The key.
+ * @return The value as a string.
+ */
+ public String getValue(String key) {
+ byte[] value = getValue(Bytes.toBytes(key));
+ if (value == null)
+ return null;
+ return Bytes.toString(value);
+ }
+
+ /**
+ * @return All values.
+ */
+ public Map<ImmutableBytesWritable,ImmutableBytesWritable> getValues() {
+ return Collections.unmodifiableMap(values);
+ }
+
+ /**
+ * @param key The key.
+ * @param value The value.
+ */
+ public void setValue(byte[] key, byte[] value) {
+ setValue(new ImmutableBytesWritable(key), value);
+ }
+
+ /*
+ * @param key The key.
+ * @param value The value.
+ */
+ private void setValue(final ImmutableBytesWritable key,
+ final byte[] value) {
+ values.put(key, new ImmutableBytesWritable(value));
+ }
+
+ /*
+ * @param key The key.
+ * @param value The value.
+ */
+ private void setValue(final ImmutableBytesWritable key,
+ final ImmutableBytesWritable value) {
+ values.put(key, value);
+ }
+
+ /**
+ * @param key The key.
+ * @param value The value.
+ */
+ public void setValue(String key, String value) {
+ setValue(Bytes.toBytes(key), Bytes.toBytes(value));
+ }
+
+ /**
+ * @return true if all columns in the table should be kept in the
+ * HRegionServer cache only
+ */
+ public boolean isInMemory() {
+ String value = getValue(HConstants.IN_MEMORY);
+ if (value != null)
+ return Boolean.valueOf(value).booleanValue();
+ return DEFAULT_IN_MEMORY;
+ }
+
+ /**
+ * @param inMemory True if all of the columns in the table should be kept in
+ * the HRegionServer cache only.
+ */
+ public void setInMemory(boolean inMemory) {
+ setValue(HConstants.IN_MEMORY, Boolean.toString(inMemory));
+ }
+
+ /**
+ * @return true if all columns in the table should be read only
+ */
+ public boolean isReadOnly() {
+ return isSomething(READONLY_KEY, DEFAULT_READONLY);
+ }
+
+ /**
+ * @param readOnly True if all of the columns in the table should be read
+ * only.
+ */
+ public void setReadOnly(final boolean readOnly) {
+ setValue(READONLY_KEY, readOnly? TRUE: FALSE);
+ }
+
+ /** @return name of table */
+ @TOJSON
+ public byte [] getName() {
+ return name;
+ }
+
+ /** @return name of table */
+ public String getNameAsString() {
+ return this.nameAsString;
+ }
+
+ /** @return max hregion size for table */
+ public long getMaxFileSize() {
+ byte [] value = getValue(MAX_FILESIZE_KEY);
+ if (value != null)
+ return Long.valueOf(Bytes.toString(value)).longValue();
+ return HConstants.DEFAULT_MAX_FILE_SIZE;
+ }
+
+ /**
+ * @param maxFileSize The maximum file size that a store file can grow to
+ * before a split is triggered.
+ */
+ public void setMaxFileSize(long maxFileSize) {
+ setValue(MAX_FILESIZE_KEY, Bytes.toBytes(Long.toString(maxFileSize)));
+ }
+
+ /**
+ * @return memory cache flush size for each hregion
+ */
+ public int getMemcacheFlushSize() {
+ byte [] value = getValue(MEMCACHE_FLUSHSIZE_KEY);
+ if (value != null)
+ return Integer.valueOf(Bytes.toString(value)).intValue();
+ return DEFAULT_MEMCACHE_FLUSH_SIZE;
+ }
+
+ /**
+ * @param memcacheFlushSize memory cache flush size for each hregion
+ */
+ public void setMemcacheFlushSize(int memcacheFlushSize) {
+ setValue(MEMCACHE_FLUSHSIZE_KEY,
+ Bytes.toBytes(Integer.toString(memcacheFlushSize)));
+ }
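+
+ /**
+ * Illustrative sketch only (hypothetical helper and table name, not in the
+ * original HBase source): table-level settings live in the values map as
+ * string-encoded bytes, so the typed setters above and the generic
+ * getValue(String) accessor see the same data.
+ */
+ static String demoStringEncodedSettings() {
+ HTableDescriptor htd = new HTableDescriptor("example_table");
+ htd.setMaxFileSize(256L * 1024 * 1024);
+ htd.setMemcacheFlushSize(64 * 1024 * 1024);
+ // Both values round-trip as strings, e.g. "268435456" and "67108864".
+ return htd.getValue(MAX_FILESIZE) + "/" + htd.getValue(MEMCACHE_FLUSHSIZE);
+ }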
+
+ public Collection<IndexSpecification> getIndexes() {
+ return indexes.values();
+ }
+
+ public IndexSpecification getIndex(String indexId) {
+ return indexes.get(indexId);
+ }
+
+ public void addIndex(IndexSpecification index) {
+ indexes.put(index.getIndexId(), index);
+ }
+
+ /**
+ * Adds a column family.
+ * @param family HColumnDescriptor of the family to add.
+ */
+ public void addFamily(final HColumnDescriptor family) {
+ if (family.getName() == null || family.getName().length <= 0) {
+ throw new NullPointerException("Family name cannot be null or empty");
+ }
+ this.families.put(family.getName(), family);
+ }
+
+ /**
+ * Checks to see if this table contains the given column family
+ * @param c Family name or column name.
+ * @return true if the table contains the specified family name
+ */
+ public boolean hasFamily(final byte [] c) {
+ return families.containsKey(c);
+ }
+
+ /**
+ * @return Name of this table and then a map of all of the column family
+ * descriptors.
+ * @see #getNameAsString()
+ */
+ @Override
+ public String toString() {
+ StringBuffer s = new StringBuffer();
+ s.append('{');
+ s.append(HConstants.NAME);
+ s.append(" => '");
+ s.append(Bytes.toString(name));
+ s.append("'");
+ for (Map.Entry<ImmutableBytesWritable, ImmutableBytesWritable> e:
+ values.entrySet()) {
+ String key = Bytes.toString(e.getKey().get());
+ String value = Bytes.toString(e.getValue().get());
+ if (key == null) {
+ continue;
+ }
+ String upperCase = key.toUpperCase();
+ if (upperCase.equals(IS_ROOT) || upperCase.equals(IS_META)) {
+ // Skip. Don't bother printing out read-only values if false.
+ if (value.toLowerCase().equals(Boolean.FALSE.toString())) {
+ continue;
+ }
+ }
+ s.append(", ");
+ s.append(Bytes.toString(e.getKey().get()));
+ s.append(" => '");
+ s.append(Bytes.toString(e.getValue().get()));
+ s.append("'");
+ }
+ s.append(", ");
+ s.append(FAMILIES);
+ s.append(" => ");
+ s.append(families.values());
+ if (!indexes.isEmpty()) {
+ // Don't emit if empty. Has to do w/ transactional hbase.
+ s.append(", ");
+ s.append("INDEXES");
+ s.append(" => ");
+ s.append(indexes.values());
+ }
+ s.append('}');
+ return s.toString();
+ }
+
+ /**
+ * @see java.lang.Object#equals(java.lang.Object)
+ */
+ @Override
+ public boolean equals(Object obj) {
+ if (this == obj) {
+ return true;
+ }
+ if (obj == null) {
+ return false;
+ }
+ if (!(obj instanceof HTableDescriptor)) {
+ return false;
+ }
+ return compareTo((HTableDescriptor)obj) == 0;
+ }
+
+ /**
+ * @see java.lang.Object#hashCode()
+ */
+ @Override
+ public int hashCode() {
+ int result = Bytes.hashCode(this.name);
+ result ^= Byte.valueOf(TABLE_DESCRIPTOR_VERSION).hashCode();
+ if (this.families != null && this.families.size() > 0) {
+ for (HColumnDescriptor e: this.families.values()) {
+ result ^= e.hashCode();
+ }
+ }
+ result ^= values.hashCode();
+ return result;
+ }
+
+ // Writable
+
+ public void readFields(DataInput in) throws IOException {
+ int version = in.readInt();
+ if (version < 3)
+ throw new IOException("versions < 3 are not supported (and never existed!?)");
+ // version 3+
+ name = Bytes.readByteArray(in);
+ nameAsString = Bytes.toString(this.name);
+ setRootRegion(in.readBoolean());
+ setMetaRegion(in.readBoolean());
+ values.clear();
+ int numVals = in.readInt();
+ for (int i = 0; i < numVals; i++) {
+ ImmutableBytesWritable key = new ImmutableBytesWritable();
+ ImmutableBytesWritable value = new ImmutableBytesWritable();
+ key.readFields(in);
+ value.readFields(in);
+ values.put(key, value);
+ }
+ families.clear();
+ int numFamilies = in.readInt();
+ for (int i = 0; i < numFamilies; i++) {
+ HColumnDescriptor c = new HColumnDescriptor();
+ c.readFields(in);
+ families.put(c.getName(), c);
+ }
+ indexes.clear();
+ if (version < 4) {
+ return;
+ }
+ int numIndexes = in.readInt();
+ for (int i = 0; i < numIndexes; i++) {
+ IndexSpecification index = new IndexSpecification();
+ index.readFields(in);
+ addIndex(index);
+ }
+ }
+
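+ /**
+ * Writes this descriptor in the versioned Writable format that
+ * {@link #readFields(DataInput)} reads back.
+ * <p>Illustrative round-trip sketch using plain JDK streams; it assumes the
+ * no-argument HTableDescriptor constructor used for Writable deserialization
+ * and an existing descriptor {@code desc}:
+ * <pre>{@code
+ * ByteArrayOutputStream buf = new ByteArrayOutputStream();
+ * desc.write(new DataOutputStream(buf));
+ * HTableDescriptor copy = new HTableDescriptor();
+ * copy.readFields(new DataInputStream(
+ *   new ByteArrayInputStream(buf.toByteArray())));
+ * }</pre>
+ */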
+ public void write(DataOutput out) throws IOException {
+ out.writeInt(TABLE_DESCRIPTOR_VERSION);
+ Bytes.writeByteArray(out, name);
+ out.writeBoolean(isRootRegion());
+ out.writeBoolean(isMetaRegion());
+ out.writeInt(values.size());
+ for (Map.Entry<ImmutableBytesWritable, ImmutableBytesWritable> e:
+ values.entrySet()) {
+ e.getKey().write(out);
+ e.getValue().write(out);
+ }
+ out.writeInt(families.size());
+ for(Iterator<HColumnDescriptor> it = families.values().iterator();
+ it.hasNext(); ) {
+ HColumnDescriptor family = it.next();
+ family.write(out);
+ }
+ out.writeInt(indexes.size());
+ for(IndexSpecification index : indexes.values()) {
+ index.write(out);
+ }
+ }
+
+ // Comparable
+
+ public int compareTo(final HTableDescriptor other) {
+ int result = Bytes.compareTo(this.name, other.name);
+ if (result == 0) {
+ result = families.size() - other.families.size();
+ }
+ if (result == 0) {
+ for (Iterator<HColumnDescriptor> it = families.values().iterator(),
+ it2 = other.families.values().iterator(); it.hasNext(); ) {
+ result = it.next().compareTo(it2.next());
+ if (result != 0) {
+ break;
+ }
+ }
+ }
+ if (result == 0) {
+ // punt on comparison for ordering, just calculate difference
+ result = this.values.hashCode() - other.values.hashCode();
+ if (result < 0)
+ result = -1;
+ else if (result > 0)
+ result = 1;
+ }
+ return result;
+ }
+
+ /**
+ * @return Immutable collection of the table's column family descriptors.
+ */
+ public Collection<HColumnDescriptor> getFamilies() {
+ return Collections.unmodifiableCollection(this.families.values());
+ }
+
+ @TOJSON(fieldName = "columns")
+ public HColumnDescriptor[] getColumnFamilies() {
+ return getFamilies().toArray(new HColumnDescriptor[0]);
+ }
+
+ /**
+ * @param column Column family name.
+ * @return Column descriptor for the passed family name, or null if no such
+ * family exists.
+ */
+ public HColumnDescriptor getFamily(final byte [] column) {
+ return this.families.get(column);
+ }
+
+ /**
+ * @param column Column family name.
+ * @return Column descriptor of the removed family, or null if no such family
+ * existed.
+ */
+ public HColumnDescriptor removeFamily(final byte [] column) {
+ return this.families.remove(column);
+ }
+
+ /**
+ * @param rootdir qualified path of HBase root directory
+ * @param tableName name of table
+ * @return path for table
+ */
+ public static Path getTableDir(Path rootdir, final byte [] tableName) {
+ return new Path(rootdir, Bytes.toString(tableName));
+ }
+
+ /** Table descriptor for <code>-ROOT-</code> catalog table */
+ public static final HTableDescriptor ROOT_TABLEDESC = new HTableDescriptor(
+ HConstants.ROOT_TABLE_NAME,
+ new HColumnDescriptor[] { new HColumnDescriptor(HConstants.COLUMN_FAMILY,
+ 10, // Ten is an arbitrary number. Keep versions to help debugging.
+ Compression.Algorithm.NONE.getName(), false, true, 8 * 1024,
+ Integer.MAX_VALUE, HConstants.FOREVER, false) });
+
+ /** Table descriptor for <code>.META.</code> catalog table */
+ public static final HTableDescriptor META_TABLEDESC = new HTableDescriptor(
+ HConstants.META_TABLE_NAME, new HColumnDescriptor[] {
+ new HColumnDescriptor(HConstants.COLUMN_FAMILY,
+ 10, // Ten is an arbitrary number. Keep versions to help debugging.
+ Compression.Algorithm.NONE.getName(), false, true, 8 * 1024,
+ Integer.MAX_VALUE, HConstants.FOREVER, false),
+ new HColumnDescriptor(HConstants.COLUMN_FAMILY_HISTORIAN,
+ HConstants.ALL_VERSIONS, Compression.Algorithm.NONE.getName(),
+ false, false, 8 * 1024,
+ Integer.MAX_VALUE, HConstants.WEEK_IN_SECONDS, false)});
+
+ /* (non-Javadoc)
+ * @see org.apache.hadoop.hbase.rest.xml.IOutputXML#toXML()
+ */
+ public void restSerialize(IRestSerializer serializer) throws HBaseRestException {
+ serializer.serializeTableDescriptor(this);
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/KeyValue.java b/src/java/org/apache/hadoop/hbase/KeyValue.java
new file mode 100644
index 0000000..d7b4424
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/KeyValue.java
@@ -0,0 +1,1448 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.io.hfile.HFile;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.io.HeapSize;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.RawComparator;
+
+/**
+ * An HBase Key/Value. Instances of this class are immutable. They are not
+ * directly comparable; Comparators are provided instead, and they change with
+ * context, depending on whether a user table or a catalog table is being
+ * compared. It is important that you use the appropriate comparator, in
+ * particular when comparing rows. There are Comparators for whole KeyValue
+ * instances and for just the Key portion of a KeyValue, the latter used
+ * mostly in {@link HFile}.
+ *
+ * <p>KeyValue wraps a byte array and has offset and length for passed array
+ * at where to start interpreting the content as a KeyValue blob. The KeyValue
+ * blob format inside the byte array is:
+ * <code><keylength> <valuelength> <key> <value></code>
+ * Key is decomposed as:
+ * <code><rowlength> <row> <columnfamilylength> <columnfamily> <columnqualifier> <timestamp> <keytype></code>
+ * Rowlength maximum is Short.MAX_VALUE, column family length maximum is
+ * Byte.MAX_VALUE, and column qualifier + key length must be < Integer.MAX_VALUE.
+ * The column does not contain the family/qualifier delimiter.
+ *
+ * <p>TODO: Group Key-only comparators and operations into a Key class, just
+ * for neatness sake, if can figure what to call it.
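+ *
+ * <p>Illustrative usage sketch; the row, column and value literals below are
+ * arbitrary examples, not constants defined by this class:
+ * <pre>{@code
+ * KeyValue kv = new KeyValue(Bytes.toBytes("row1"),
+ *   Bytes.toBytes("info:regioninfo"), 1234L, Type.Put, Bytes.toBytes("v"));
+ * byte [] row = kv.getRow();        // copy of "row1"
+ * byte [] column = kv.getColumn();  // copy of "info:regioninfo"
+ * long ts = kv.getTimestamp();      // 1234L
+ * byte [] value = kv.getValue();    // copy of "v"
+ * }</pre>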
+ */
+public class KeyValue implements Writable, HeapSize {
+ static final Log LOG = LogFactory.getLog(KeyValue.class);
+
+ /**
+ * Colon character in UTF-8
+ */
+ public static final char COLUMN_FAMILY_DELIMITER = ':';
+
+ /**
+ * Comparator for plain key/values; i.e. non-catalog table key/values.
+ */
+ public static KVComparator COMPARATOR = new KVComparator();
+
+ /**
+ * Comparator for plain key; i.e. non-catalog table key. Works on Key portion
+ * of KeyValue only.
+ */
+ public static KeyComparator KEY_COMPARATOR = new KeyComparator();
+
+ /**
+ * A {@link KVComparator} for <code>.META.</code> catalog table
+ * {@link KeyValue}s.
+ */
+ public static KVComparator META_COMPARATOR = new MetaComparator();
+
+ /**
+ * A {@link KVComparator} for <code>.META.</code> catalog table
+ * {@link KeyValue} keys.
+ */
+ public static KeyComparator META_KEY_COMPARATOR = new MetaKeyComparator();
+
+ /**
+ * A {@link KVComparator} for <code>-ROOT-</code> catalog table
+ * {@link KeyValue}s.
+ */
+ public static KVComparator ROOT_COMPARATOR = new RootComparator();
+
+ /**
+ * A {@link KVComparator} for <code>-ROOT-</code> catalog table
+ * {@link KeyValue} keys.
+ */
+ public static KeyComparator ROOT_KEY_COMPARATOR = new RootKeyComparator();
+
+ /**
+ * Comparator that compares the family portion of columns only.
+ * Use this when making NavigableMaps of Stores or when you need to compare
+ * column family portion only of two column names.
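+ * <p>For example (illustrative literals): {@code FAMILY_COMPARATOR.compare(
+ * Bytes.toBytes("info:a"), Bytes.toBytes("info:b"))} returns 0, since only
+ * the {@code info} family portion is compared.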
+ */
+ public static final RawComparator<byte []> FAMILY_COMPARATOR =
+ new RawComparator<byte []> () {
+ public int compare(byte [] a, int ao, int al, byte [] b, int bo, int bl) {
+ int indexa = KeyValue.getDelimiter(a, ao, al, COLUMN_FAMILY_DELIMITER);
+ if (indexa < 0) {
+ indexa = al;
+ }
+ int indexb = KeyValue.getDelimiter(b, bo, bl, COLUMN_FAMILY_DELIMITER);
+ if (indexb < 0) {
+ indexb = bl;
+ }
+ return Bytes.compareTo(a, ao, indexa, b, bo, indexb);
+ }
+
+ public int compare(byte[] a, byte[] b) {
+ return compare(a, 0, a.length, b, 0, b.length);
+ }
+ };
+
+ // Size of the timestamp and type byte on end of a key -- a long + a byte.
+ private static final int TIMESTAMP_TYPE_SIZE =
+ Bytes.SIZEOF_LONG /* timestamp */ +
+ Bytes.SIZEOF_BYTE /*keytype*/;
+
+ // Size of the length shorts and bytes in key.
+ private static final int KEY_INFRASTRUCTURE_SIZE =
+ Bytes.SIZEOF_SHORT /*rowlength*/ +
+ Bytes.SIZEOF_BYTE /*columnfamilylength*/ +
+ TIMESTAMP_TYPE_SIZE;
+
+ // How far into the key the row starts at. First thing to read is the short
+ // that says how long the row is.
+ private static final int ROW_OFFSET =
+ Bytes.SIZEOF_INT /*keylength*/ +
+ Bytes.SIZEOF_INT /*valuelength*/;
+
+ // Size of the length ints in a KeyValue datastructure.
+ private static final int KEYVALUE_INFRASTRUCTURE_SIZE = ROW_OFFSET;
+
+ /**
+ * Key type.
+ * Has space for other key types to be added later. Cannot rely on
+ * enum ordinals. They change if an item is removed or moved, so we use our own codes.
+ */
+ public static enum Type {
+ Put((byte)4),
+ Delete((byte)8),
+ DeleteColumn((byte)12),
+ DeleteFamily((byte)14),
+ // Maximum is used when searching; you look from maximum on down.
+ Maximum((byte)255);
+
+ private final byte code;
+
+ Type(final byte c) {
+ this.code = c;
+ }
+
+ public byte getCode() {
+ return this.code;
+ }
+
+ /**
+ * Cannot rely on enum ordinals. They change if an item is removed or moved,
+ * so we use our own codes.
+ * @param b
+ * @return Type associated with passed code.
+ */
+ public static Type codeToType(final byte b) {
+ // This is messy repeating each type here below but no way around it; we
+ // can't use the enum ordinal.
+ if (b == Put.getCode()) {
+ return Put;
+ } else if (b == Delete.getCode()) {
+ return Delete;
+ } else if (b == DeleteColumn.getCode()) {
+ return DeleteColumn;
+ } else if (b == DeleteFamily.getCode()) {
+ return DeleteFamily;
+ } else if (b == Maximum.getCode()) {
+ return Maximum;
+ }
+ throw new RuntimeException("Unknown code " + b);
+ }
+ }
+
+ /**
+ * Lowest possible key.
+ * Makes a Key with highest possible Timestamp, empty row and column. No
+ * key can be equal or lower than this one in memcache or in store file.
+ */
+ public static final KeyValue LOWESTKEY =
+ new KeyValue(HConstants.EMPTY_BYTE_ARRAY, HConstants.LATEST_TIMESTAMP);
+
+ private byte [] bytes = null;
+ private int offset = 0;
+ private int length = 0;
+
+ /** Writable Constructor -- DO NOT USE */
+ public KeyValue() {}
+
+ /**
+ * Creates a KeyValue from the start of the specified byte array.
+ * Presumes <code>bytes</code> content is formatted as a KeyValue blob.
+ * @param bytes byte array
+ */
+ public KeyValue(final byte [] bytes) {
+ this(bytes, 0);
+ }
+
+ /**
+ * Creates a KeyValue from the specified byte array and offset.
+ * Presumes <code>bytes</code> content starting at <code>offset</code> is
+ * formatted as a KeyValue blob.
+ * @param bytes byte array
+ * @param offset offset to start of KeyValue
+ */
+ public KeyValue(final byte [] bytes, final int offset) {
+ this(bytes, offset, getLength(bytes, offset));
+ }
+
+ /**
+ * Creates a KeyValue from the specified byte array, starting at offset, and
+ * for length <code>length</code>.
+ * @param bytes byte array
+ * @param offset offset to start of the KeyValue
+ * @param length length of the KeyValue
+ */
+ public KeyValue(final byte [] bytes, final int offset, final int length) {
+ this.bytes = bytes;
+ this.offset = offset;
+ this.length = length;
+ }
+
+ /**
+ * Constructs KeyValue structure filled with null value.
+ * @param row - row key (arbitrary byte array)
+ * @param timestamp
+ */
+ public KeyValue(final String row, final long timestamp) {
+ this(Bytes.toBytes(row), timestamp);
+ }
+
+ /**
+ * Constructs KeyValue structure filled with null value.
+ * @param row - row key (arbitrary byte array)
+ * @param timestamp
+ */
+ public KeyValue(final byte [] row, final long timestamp) {
+ this(row, null, timestamp, Type.Put, null);
+ }
+
+ /**
+ * Constructs KeyValue structure filled with null value.
+ * @param row - row key (arbitrary byte array)
+ * @param column Column with delimiter between family and qualifier
+ */
+ public KeyValue(final String row, final String column) {
+ this(row, column, null);
+ }
+
+ /**
+ * Constructs KeyValue structure filled with null value.
+ * @param row - row key (arbitrary byte array)
+ * @param column Column with delimiter between family and qualifier
+ */
+ public KeyValue(final byte [] row, final byte [] column) {
+ this(row, column, null);
+ }
+
+ /**
+ * Constructs KeyValue structure filled with specified value.
+ * @param row - row key (arbitrary byte array)
+ * @param column Column with delimiter between family and qualifier
+ * @param value
+ */
+ public KeyValue(final String row, final String column, final byte [] value) {
+ this(Bytes.toBytes(row), Bytes.toBytes(column), value);
+ }
+
+ /**
+ * Constructs KeyValue structure filled with specified value.
+ * @param row - row key (arbitrary byte array)
+ * @param column Column with delimiter between family and qualifier
+ * @param value
+ */
+ public KeyValue(final byte [] row, final byte [] column, final byte [] value) {
+ this(row, column, HConstants.LATEST_TIMESTAMP, value);
+ }
+
+
+ /**
+ * Constructs KeyValue structure filled with null value.
+ * @param row - row key (arbitrary byte array)
+ * @param column Column with delimiter between family and qualifier
+ * @param ts
+ */
+ public KeyValue(final String row, final String column, final long ts) {
+ this(row, column, ts, null);
+ }
+
+ /**
+ * Constructs KeyValue structure filled with null value.
+ * @param row - row key (arbitrary byte array)
+ * @param column Column with delimiter between family and qualifier
+ * @param ts
+ */
+ public KeyValue(final byte [] row, final byte [] column, final long ts) {
+ this(row, column, ts, Type.Put);
+ }
+
+ /**
+ * Constructs KeyValue structure filled with specified value.
+ * @param row - row key (arbitrary byte array)
+ * @param column Column with delimiter between family and qualifier
+ * @param timestamp
+ * @param value
+ */
+ public KeyValue(final String row, final String column,
+ final long timestamp, final byte [] value) {
+ this(Bytes.toBytes(row),
+ column == null? HConstants.EMPTY_BYTE_ARRAY: Bytes.toBytes(column),
+ timestamp, value);
+ }
+
+ /**
+ * Constructs KeyValue structure filled with specified value.
+ * @param row - row key (arbitrary byte array)
+ * @param column Column with delimiter between family and qualifier
+ * @param timestamp
+ * @param value
+ */
+ public KeyValue(final byte [] row, final byte [] column,
+ final long timestamp, final byte [] value) {
+ this(row, column, timestamp, Type.Put, value);
+ }
+
+ /**
+ * Constructs KeyValue structure filled with specified value.
+ * @param row - row key (arbitrary byte array)
+ * @param column Column with delimiter between family and qualifier
+ * @param timestamp
+ * @param type
+ * @param value
+ */
+ public KeyValue(final String row, final String column,
+ final long timestamp, final Type type, final byte [] value) {
+ this(Bytes.toBytes(row), Bytes.toBytes(column), timestamp, type,
+ value);
+ }
+
+ /**
+ * Constructs KeyValue structure filled with null value.
+ * @param row - row key (arbitrary byte array)
+ * @param column Column with delimiter between family and qualifier
+ * @param timestamp
+ * @param type
+ */
+ public KeyValue(final byte [] row, final byte [] column,
+ final long timestamp, final Type type) {
+ this(row, 0, row.length, column, 0, column == null? 0: column.length,
+ timestamp, type, null, 0, -1);
+ }
+
+ /**
+ * Constructs KeyValue structure filled with specified value.
+ * @param row - row key (arbitrary byte array)
+ * @param column Column with delimiter between family and qualifier
+ * @param timestamp
+ * @param type
+ * @param value
+ */
+ public KeyValue(final byte [] row, final byte [] column,
+ final long timestamp, final Type type, final byte [] value) {
+ this(row, 0, row.length, column, 0, column == null? 0: column.length,
+ timestamp, type, value, 0, value == null? 0: value.length);
+ }
+
+ /**
+ * Constructs KeyValue structure filled with specified value.
+ * @param row - row key (arbitrary byte array)
+ * @param roffset
+ * @param rlength
+ * @param column Column with delimiter between family and qualifier
+ * @param coffset Where to start reading the column.
+ * @param clength How long column is (including the family/qualifier delimiter).
+ * @param timestamp
+ * @param type
+ * @param value
+ * @param voffset
+ * @param vlength
+ * @throws IllegalArgumentException
+ */
+ public KeyValue(final byte [] row, final int roffset, final int rlength,
+ final byte [] column, final int coffset, int clength,
+ final long timestamp, final Type type,
+ final byte [] value, final int voffset, int vlength) {
+ this.bytes = createByteArray(row, roffset, rlength, column, coffset,
+ clength, timestamp, type, value, voffset, vlength);
+ this.length = bytes.length;
+ this.offset = 0;
+ }
+
+ /**
+ * Write KeyValue format into a byte array.
+ * @param row - row key (arbitrary byte array)
+ * @param roffset
+ * @param rlength
+ * @param column
+ * @param coffset
+ * @param clength
+ * @param timestamp
+ * @param type
+ * @param value
+ * @param voffset
+ * @param vlength
+ * @return The newly created byte array in KeyValue blob format.
+ */
+ static byte [] createByteArray(final byte [] row, final int roffset,
+ final int rlength,
+ final byte [] column, final int coffset, int clength,
+ final long timestamp, final Type type,
+ final byte [] value, final int voffset, int vlength) {
+ if (rlength > Short.MAX_VALUE) {
+ throw new IllegalArgumentException("Row > " + Short.MAX_VALUE);
+ }
+ if (row == null) {
+ throw new IllegalArgumentException("Row is null");
+ }
+ // If column is non-null, figure where the delimiter is at.
+ int delimiteroffset = 0;
+ if (column != null && column.length > 0) {
+ delimiteroffset = getFamilyDelimiterIndex(column, coffset, clength);
+ if (delimiteroffset > Byte.MAX_VALUE) {
+ throw new IllegalArgumentException("Family > " + Byte.MAX_VALUE);
+ }
+ }
+ // Value length
+ vlength = value == null? 0: vlength;
+ // Column length - minus delimiter
+ clength = column == null || column.length == 0? 0: clength - 1;
+ long longkeylength = KEY_INFRASTRUCTURE_SIZE + rlength + clength;
+ if (longkeylength > Integer.MAX_VALUE) {
+ throw new IllegalArgumentException("keylength " + longkeylength + " > " +
+ Integer.MAX_VALUE);
+ }
+ int keylength = (int)longkeylength;
+ // Allocate right-sized byte array.
+ byte [] bytes = new byte[KEYVALUE_INFRASTRUCTURE_SIZE + keylength + vlength];
+ // Write key, value and key row length.
+ int pos = 0;
+ pos = Bytes.putInt(bytes, pos, keylength);
+ pos = Bytes.putInt(bytes, pos, vlength);
+ pos = Bytes.putShort(bytes, pos, (short)(rlength & 0x0000ffff));
+ pos = Bytes.putBytes(bytes, pos, row, roffset, rlength);
+ // Write out column family length.
+ pos = Bytes.putByte(bytes, pos, (byte)(delimiteroffset & 0x0000ff));
+ if (column != null && column.length != 0) {
+ // Write family.
+ pos = Bytes.putBytes(bytes, pos, column, coffset, delimiteroffset);
+ // Write qualifier.
+ delimiteroffset++;
+ pos = Bytes.putBytes(bytes, pos, column, coffset + delimiteroffset,
+ column.length - delimiteroffset);
+ }
+ pos = Bytes.putLong(bytes, pos, timestamp);
+ pos = Bytes.putByte(bytes, pos, type.getCode());
+ if (value != null && value.length > 0) {
+ pos = Bytes.putBytes(bytes, pos, value, voffset, vlength);
+ }
+ return bytes;
+ }
+
+ // Needed for doing 'contains' on a List. Only compares the key portion,
+ // not the value.
+ @Override
+ public boolean equals(Object other) {
+ if (!(other instanceof KeyValue)) {
+ return false;
+ }
+ KeyValue kv = (KeyValue)other;
+ // Comparing bytes should be fine doing equals test. Shouldn't have to
+ // worry about special .META. comparators doing straight equals.
+ return Bytes.BYTES_RAWCOMPARATOR.compare(getBuffer(),
+ getKeyOffset(), getKeyLength(),
+ kv.getBuffer(), kv.getKeyOffset(), kv.getKeyLength()) == 0;
+ }
+
+ /**
+ * @param timestamp
+ * @return Clone of this KeyValue's key portion with only the row and
+ * timestamp filled in.
+ */
+ public KeyValue cloneRow(final long timestamp) {
+ return new KeyValue(getBuffer(), getRowOffset(), getRowLength(),
+ null, 0, 0, timestamp, Type.codeToType(getType()), null, 0, 0);
+ }
+
+ /**
+ * @return Clone of this KeyValue's key portion with type set to
+ * {@link Type#Delete}.
+ */
+ public KeyValue cloneDelete() {
+ return createKey(Type.Delete);
+ }
+
+ /**
+ * @return Clone of this KeyValue's key portion with type set to
+ * {@link Type#Maximum}. Use this when doing getClosest lookups. Using
+ * Maximum, you'll be sure to trip over all of the other key types since
+ * Maximum sorts first.
+ */
+ public KeyValue cloneMaximum() {
+ return createKey(Type.Maximum);
+ }
+
+ /*
+ * Make a clone with the new type.
+ * Does not copy value.
+ * @param newtype New type to set on clone of this key.
+ * @return Clone of this key with type set to <code>newtype</code>
+ */
+ private KeyValue createKey(final Type newtype) {
+ int keylength = getKeyLength();
+ int l = keylength + ROW_OFFSET;
+ byte [] other = new byte[l];
+ System.arraycopy(getBuffer(), getOffset(), other, 0, l);
+ // Set value length to zero.
+ Bytes.putInt(other, Bytes.SIZEOF_INT, 0);
+ // Set last byte, the type, to new type
+ other[l - 1] = newtype.getCode();
+ return new KeyValue(other, 0, other.length);
+ }
+
+ public String toString() {
+ return keyToString(this.bytes, this.offset + ROW_OFFSET, getKeyLength()) +
+ "/vlen=" + getValueLength();
+ }
+
+ /**
+ * @param k Key portion of a KeyValue.
+ * @return Key as a String.
+ */
+ public static String keyToString(final byte [] k) {
+ return keyToString(k, 0, k.length);
+ }
+
+ /**
+ * @param b Key portion of a KeyValue.
+ * @param o Offset to start of key
+ * @param l Length of key.
+ * @return Key as a String.
+ */
+ public static String keyToString(final byte [] b, final int o, final int l) {
+ int rowlength = Bytes.toShort(b, o);
+ String row = Bytes.toString(b, o + Bytes.SIZEOF_SHORT, rowlength);
+ int columnoffset = o + Bytes.SIZEOF_SHORT + 1 + rowlength;
+ int familylength = b[columnoffset - 1];
+ int columnlength = l - ((columnoffset - o) + TIMESTAMP_TYPE_SIZE);
+ String family = familylength == 0? "":
+ Bytes.toString(b, columnoffset, familylength);
+ String qualifier = columnlength == 0? "":
+ Bytes.toString(b, columnoffset + familylength,
+ columnlength - familylength);
+ long timestamp = Bytes.toLong(b, o + (l - TIMESTAMP_TYPE_SIZE));
+ byte type = b[o + l - 1];
+ return row + "/" + family +
+ (family != null && family.length() > 0? COLUMN_FAMILY_DELIMITER: "") +
+ qualifier + "/" + timestamp + "/" + Type.codeToType(type);
+ }
+
+ /**
+ * @return The byte array backing this KeyValue.
+ */
+ public byte [] getBuffer() {
+ return this.bytes;
+ }
+
+ /**
+ * @return Offset into {@link #getBuffer()} at which this KeyValue starts.
+ */
+ public int getOffset() {
+ return this.offset;
+ }
+
+ /**
+ * @return Length of bytes this KeyValue occupies in {@link #getBuffer()}.
+ */
+ public int getLength() {
+ return length;
+ }
+
+ /*
+ * Determines the total length of the KeyValue stored in the specified
+ * byte array and offset. Includes all headers.
+ * @param bytes byte array
+ * @param offset offset to start of the KeyValue
+ * @return length of entire KeyValue, in bytes
+ */
+ private static int getLength(byte [] bytes, int offset) {
+ return (2 * Bytes.SIZEOF_INT) +
+ Bytes.toInt(bytes, offset) +
+ Bytes.toInt(bytes, offset + Bytes.SIZEOF_INT);
+ }
+
+ /**
+ * @return Copy of the key portion only. Used in compacting and testing.
+ */
+ public byte [] getKey() {
+ int keylength = getKeyLength();
+ byte [] key = new byte[keylength];
+ System.arraycopy(getBuffer(), getKeyOffset(), key, 0, keylength);
+ return key;
+ }
+
+ public String getKeyString() {
+ return Bytes.toString(getBuffer(), getKeyOffset(), getKeyLength());
+ }
+
+ /**
+ * @return Key offset in the backing buffer.
+ */
+ public int getKeyOffset() {
+ return this.offset + ROW_OFFSET;
+ }
+
+ /**
+ * @return Row length.
+ */
+ public short getRowLength() {
+ return Bytes.toShort(this.bytes, getKeyOffset());
+ }
+
+ /**
+ * @return Offset into backing buffer at which row starts.
+ */
+ public int getRowOffset() {
+ return getKeyOffset() + Bytes.SIZEOF_SHORT;
+ }
+
+ /**
+ * Do not use this unless you have to.
+ * Use {@link #getBuffer()} with appropriate offsets and lengths instead.
+ * @return Row in a new byte array.
+ */
+ public byte [] getRow() {
+ int o = getRowOffset();
+ short l = getRowLength();
+ byte [] result = new byte[l];
+ System.arraycopy(getBuffer(), o, result, 0, l);
+ return result;
+ }
+
+ /**
+ * @return Timestamp
+ */
+ public long getTimestamp() {
+ return getTimestamp(getKeyLength());
+ }
+
+ /**
+ * @param keylength Pass if you have it to save on an int creation.
+ * @return Timestamp
+ */
+ long getTimestamp(final int keylength) {
+ int tsOffset = getTimestampOffset(keylength);
+ return Bytes.toLong(this.bytes, tsOffset);
+ }
+
+ /**
+ * @param keylength Pass if you have it to save on an int creation.
+ * @return Offset into backing buffer at which timestamp starts.
+ */
+ int getTimestampOffset(final int keylength) {
+ return getKeyOffset() + keylength - TIMESTAMP_TYPE_SIZE;
+ }
+
+ /**
+ * @return True if a {@link Type#Delete}.
+ */
+ public boolean isDeleteType() {
+ return getType() == Type.Delete.getCode();
+ }
+
+ /**
+ * @return Type of this KeyValue.
+ */
+ byte getType() {
+ return getType(getKeyLength());
+ }
+
+ /**
+ * @param keylength Pass if you have it to save on an int creation.
+ * @return Type of this KeyValue.
+ */
+ byte getType(final int keylength) {
+ return this.bytes[this.offset + keylength - 1 + ROW_OFFSET];
+ }
+
+ /**
+ * @return Length of key portion.
+ */
+ public int getKeyLength() {
+ return Bytes.toInt(this.bytes, this.offset);
+ }
+
+ /**
+ * @return Value length
+ */
+ public int getValueLength() {
+ return Bytes.toInt(this.bytes, this.offset + Bytes.SIZEOF_INT);
+ }
+
+ /**
+ * @return Offset into backing buffer at which value starts.
+ */
+ public int getValueOffset() {
+ return getKeyOffset() + getKeyLength();
+ }
+
+ /**
+ * Do not use unless you have to. Use {@link #getBuffer()} with appropriate
+ * offset and lengths instead.
+ * @return Value in a new byte array.
+ */
+ public byte [] getValue() {
+ int o = getValueOffset();
+ int l = getValueLength();
+ byte [] result = new byte[l];
+ System.arraycopy(getBuffer(), o, result, 0, l);
+ return result;
+ }
+
+ /**
+ * @return Offset into backing buffer at which the column begins
+ */
+ public int getColumnOffset() {
+ return getColumnOffset(getRowLength());
+ }
+
+ /**
+ * @param rowlength - length of row.
+ * @return Offset into backing buffer at which the column begins
+ */
+ public int getColumnOffset(final int rowlength) {
+ return getRowOffset() + rowlength + 1;
+ }
+
+ /**
+ * @param columnoffset Pass if you have it to save on an int creation.
+ * @return Length of family portion of column.
+ */
+ int getFamilyLength(final int columnoffset) {
+ return this.bytes[columnoffset - 1];
+ }
+
+ /**
+ * @param columnoffset Pass if you have it to save on an int creation.
+ * @return Length of column.
+ */
+ public int getColumnLength(final int columnoffset) {
+ return getColumnLength(columnoffset, getKeyLength());
+ }
+
+ int getColumnLength(final int columnoffset, final int keylength) {
+ return (keylength + ROW_OFFSET) - (columnoffset - this.offset) -
+ TIMESTAMP_TYPE_SIZE;
+ }
+
+ /**
+ * @param family
+ * @return True if matching families.
+ */
+ public boolean matchingFamily(final byte [] family) {
+ int o = getColumnOffset();
+ // Family length byte is just before the column starts.
+ int l = this.bytes[o - 1];
+ return Bytes.compareTo(family, 0, family.length, this.bytes, o, l) == 0;
+ }
+
+ /**
+ * @param column Column minus its delimiter
+ * @param familylength Length of family in passed <code>column</code>
+ * @return True if column matches.
+ * @see #matchingColumn(byte[])
+ */
+ public boolean matchingColumnNoDelimiter(final byte [] column,
+ final int familylength) {
+ int o = getColumnOffset();
+ int l = getColumnLength(o);
+ int f = getFamilyLength(o);
+ return compareColumns(getBuffer(), o, l, f,
+ column, 0, column.length, familylength) == 0;
+ }
+
+ /**
+ * @param column Column with delimiter
+ * @return True if column matches.
+ */
+ public boolean matchingColumn(final byte [] column) {
+ int index = getFamilyDelimiterIndex(column, 0, column.length);
+ int o = getColumnOffset();
+ int l = getColumnLength(o);
+ int result = Bytes.compareTo(getBuffer(), o, index, column, 0, index);
+ if (result != 0) {
+ return false;
+ }
+ return Bytes.compareTo(getBuffer(), o + index, l - index,
+ column, index + 1, column.length - (index + 1)) == 0;
+ }
+
+ /**
+ * @param left
+ * @param loffset
+ * @param llength
+ * @param lfamilylength Offset of family delimiter in left column.
+ * @param right
+ * @param roffset
+ * @param rlength
+ * @param rfamilylength Offset of family delimiter in right column.
+ * @return Result of comparing the two columns: negative, zero, or positive.
+ */
+ static int compareColumns(final byte [] left, final int loffset,
+ final int llength, final int lfamilylength,
+ final byte [] right, final int roffset, final int rlength,
+ final int rfamilylength) {
+ // Compare family portion first.
+ int diff = Bytes.compareTo(left, loffset, lfamilylength,
+ right, roffset, rfamilylength);
+ if (diff != 0) {
+ return diff;
+ }
+ // Compare qualifier portion
+ return Bytes.compareTo(left, loffset + lfamilylength,
+ llength - lfamilylength,
+ right, roffset + rfamilylength, rlength - rfamilylength);
+ }
+
+ /**
+ * @return True if non-null row and column.
+ */
+ public boolean nonNullRowAndColumn() {
+ return getRowLength() > 0 && !isEmptyColumn();
+ }
+
+ /**
+ * @return Returns column String with delimiter added back. Expensive!
+ */
+ public String getColumnString() {
+ int o = getColumnOffset();
+ int l = getColumnLength(o);
+ int familylength = getFamilyLength(o);
+ return Bytes.toString(this.bytes, o, familylength) +
+ COLUMN_FAMILY_DELIMITER + Bytes.toString(this.bytes,
+ o + familylength, l - familylength);
+ }
+
+ /**
+ * Do not use this unless you have to.
+ * Use {@link #getBuffer()} with appropriate offsets and lengths instead.
+ * @return Returns column. Makes a copy. Inserts delimiter.
+ */
+ public byte [] getColumn() {
+ int o = getColumnOffset();
+ int l = getColumnLength(o);
+ int familylength = getFamilyLength(o);
+ byte [] result = new byte[l + 1];
+ System.arraycopy(getBuffer(), o, result, 0, familylength);
+ result[familylength] = COLUMN_FAMILY_DELIMITER;
+ System.arraycopy(getBuffer(), o + familylength, result,
+ familylength + 1, l - familylength);
+ return result;
+ }
+
+ /**
+ * @return True if column is empty.
+ */
+ public boolean isEmptyColumn() {
+ return getColumnLength(getColumnOffset()) == 0;
+ }
+
+ /**
+ * @param b
+ * @return Index of the family-qualifier colon delimiter character in passed
+ * buffer.
+ */
+ public static int getFamilyDelimiterIndex(final byte [] b, final int offset,
+ final int length) {
+ return getRequiredDelimiter(b, offset, length, COLUMN_FAMILY_DELIMITER);
+ }
+
+ private static int getRequiredDelimiter(final byte [] b,
+ final int offset, final int length, final int delimiter) {
+ int index = getDelimiter(b, offset, length, delimiter);
+ if (index < 0) {
+ throw new IllegalArgumentException("No " + (char)delimiter + " in <" +
+ Bytes.toString(b) + ">" + ", length=" + length + ", offset=" + offset);
+ }
+ return index;
+ }
+
+ static int getRequiredDelimiterInReverse(final byte [] b,
+ final int offset, final int length, final int delimiter) {
+ int index = getDelimiterInReverse(b, offset, length, delimiter);
+ if (index < 0) {
+ throw new IllegalArgumentException("No " + delimiter + " in <" +
+ Bytes.toString(b) + ">" + ", length=" + length + ", offset=" + offset);
+ }
+ return index;
+ }
+
+ /*
+ * @param b
+ * @param delimiter
+ * @return Index of the first occurrence of <code>delimiter</code>, searching
+ * forward from <code>offset</code>, or -1 if not found.
+ */
+ static int getDelimiter(final byte [] b, int offset, final int length,
+ final int delimiter) {
+ if (b == null) {
+ throw new NullPointerException();
+ }
+ int result = -1;
+ for (int i = offset; i < length + offset; i++) {
+ if (b[i] == delimiter) {
+ result = i;
+ break;
+ }
+ }
+ return result;
+ }
+
+ /*
+ * @param b
+ * @param delimiter
+ * @return Index of <code>delimiter</code>, searching backward from the end
+ * of the range, or -1 if not found.
+ */
+ static int getDelimiterInReverse(final byte [] b, final int offset,
+ final int length, final int delimiter) {
+ if (b == null) {
+ throw new NullPointerException();
+ }
+ int result = -1;
+ for (int i = (offset + length) - 1; i >= offset; i--) {
+ if (b[i] == delimiter) {
+ result = i;
+ break;
+ }
+ }
+ return result;
+ }
+
+ /**
+ * A {@link KVComparator} for <code>-ROOT-</code> catalog table
+ * {@link KeyValue}s.
+ */
+ public static class RootComparator extends MetaComparator {
+ private final KeyComparator rawcomparator = new RootKeyComparator();
+
+ public KeyComparator getRawComparator() {
+ return this.rawcomparator;
+ }
+
+ @Override
+ protected Object clone() throws CloneNotSupportedException {
+ return new RootComparator();
+ }
+ }
+
+ /**
+ * A {@link KVComparator} for <code>.META.</code> catalog table
+ * {@link KeyValue}s.
+ */
+ public static class MetaComparator extends KVComparator {
+ private final KeyComparator rawcomparator = new MetaKeyComparator();
+
+ public KeyComparator getRawComparator() {
+ return this.rawcomparator;
+ }
+
+ @Override
+ protected Object clone() throws CloneNotSupportedException {
+ return new MetaComparator();
+ }
+ }
+
+ /**
+ * Compare KeyValues. When we compare KeyValues, we only compare the Key
+ * portion. This means two KeyValues with same Key but different Values are
+ * considered the same as far as this Comparator is concerned.
+ * Hosts a {@link KeyComparator}.
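+ *
+ * <p>Illustrative sketch (arbitrary example literals): two KeyValues that
+ * differ only in their value compare as equal here:
+ * <pre>{@code
+ * KeyValue a = new KeyValue(Bytes.toBytes("r"), Bytes.toBytes("f:q"), 1L,
+ *   Type.Put, Bytes.toBytes("v1"));
+ * KeyValue b = new KeyValue(Bytes.toBytes("r"), Bytes.toBytes("f:q"), 1L,
+ *   Type.Put, Bytes.toBytes("v2"));
+ * assert KeyValue.COMPARATOR.compare(a, b) == 0;
+ * }</pre>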
+ */
+ public static class KVComparator implements java.util.Comparator<KeyValue> {
+ private final KeyComparator rawcomparator = new KeyComparator();
+
+ /**
+ * @return RawComparator that can compare the Key portion of a KeyValue.
+ * Used in hfile where indices are the Key portion of a KeyValue.
+ */
+ public KeyComparator getRawComparator() {
+ return this.rawcomparator;
+ }
+
+ public int compare(final KeyValue left, final KeyValue right) {
+ return getRawComparator().compare(left.getBuffer(),
+ left.getOffset() + ROW_OFFSET, left.getKeyLength(),
+ right.getBuffer(), right.getOffset() + ROW_OFFSET,
+ right.getKeyLength());
+ }
+
+ public int compareTimestamps(final KeyValue left, final KeyValue right) {
+ return compareTimestamps(left, left.getKeyLength(), right,
+ right.getKeyLength());
+ }
+
+ int compareTimestamps(final KeyValue left, final int lkeylength,
+ final KeyValue right, final int rkeylength) {
+ // Compare timestamps
+ long ltimestamp = left.getTimestamp(lkeylength);
+ long rtimestamp = right.getTimestamp(rkeylength);
+ return getRawComparator().compareTimestamps(ltimestamp, rtimestamp);
+ }
+
+ /**
+ * @param left
+ * @param right
+ * @return Result comparing rows.
+ */
+ public int compareRows(final KeyValue left, final KeyValue right) {
+ return compareRows(left, left.getRowLength(), right, right.getRowLength());
+ }
+
+ /**
+ * @param left
+ * @param lrowlength Length of left row.
+ * @param right
+ * @param rrowlength Length of right row.
+ * @return Result comparing rows.
+ */
+ public int compareRows(final KeyValue left, final short lrowlength,
+ final KeyValue right, final short rrowlength) {
+ return getRawComparator().compareRows(left.getBuffer(),
+ left.getRowOffset(), lrowlength,
+ right.getBuffer(), right.getRowOffset(), rrowlength);
+ }
+
+ /**
+ * @param left
+ * @param row - row key (arbitrary byte array)
+ * @return Result of comparing <code>left</code>'s row against the passed
+ * <code>row</code>.
+ */
+ public int compareRows(final KeyValue left, final byte [] row) {
+ return getRawComparator().compareRows(left.getBuffer(),
+ left.getRowOffset(), left.getRowLength(), row, 0, row.length);
+ }
+
+ public int compareRows(byte [] left, int loffset, int llength,
+ byte [] right, int roffset, int rlength) {
+ return getRawComparator().compareRows(left, loffset, llength,
+ right, roffset, rlength);
+ }
+
+ public int compareColumns(final KeyValue left, final byte [] right,
+ final int roffset, final int rlength, final int rfamilyoffset) {
+ int offset = left.getColumnOffset();
+ int length = left.getColumnLength(offset);
+ return getRawComparator().compareColumns(left.getBuffer(), offset, length,
+ left.getFamilyLength(offset),
+ right, roffset, rlength, rfamilyoffset);
+ }
+
+ int compareColumns(final KeyValue left, final short lrowlength,
+ final int lkeylength, final KeyValue right, final short rrowlength,
+ final int rkeylength) {
+ int loffset = left.getColumnOffset(lrowlength);
+ int roffset = right.getColumnOffset(rrowlength);
+ int llength = left.getColumnLength(loffset, lkeylength);
+ int rlength = right.getColumnLength(roffset, rkeylength);
+ int lfamilylength = left.getFamilyLength(loffset);
+ int rfamilylength = right.getFamilyLength(roffset);
+ return getRawComparator().compareColumns(left.getBuffer(), loffset,
+ llength, lfamilylength,
+ right.getBuffer(), roffset, rlength, rfamilylength);
+ }
+
+ /**
+ * Compares the row and column of two KeyValues.
+ * @param left
+ * @param right
+ * @return True if same row and column.
+ */
+ public boolean matchingRowColumn(final KeyValue left,
+ final KeyValue right) {
+ short lrowlength = left.getRowLength();
+ short rrowlength = right.getRowLength();
+ if (!matchingRows(left, lrowlength, right, rrowlength)) {
+ return false;
+ }
+ int lkeylength = left.getKeyLength();
+ int rkeylength = right.getKeyLength();
+ return compareColumns(left, lrowlength, lkeylength,
+ right, rrowlength, rkeylength) == 0;
+ }
+
+ /**
+ * @param left
+ * @param right
+ * @return True if rows match.
+ */
+ public boolean matchingRows(final KeyValue left, final byte [] right) {
+ return compareRows(left, right) == 0;
+ }
+
+ /**
+ * @param left
+ * @param right
+ * @return True if rows match.
+ */
+ public boolean matchingRows(final KeyValue left, final KeyValue right) {
+ short lrowlength = left.getRowLength();
+ short rrowlength = right.getRowLength();
+ return matchingRows(left, lrowlength, right, rrowlength);
+ }
+
+ /**
+ * @param left
+ * @param lrowlength
+ * @param right
+ * @param rrowlength
+ * @return True if rows match.
+ */
+ public boolean matchingRows(final KeyValue left, final short lrowlength,
+ final KeyValue right, final short rrowlength) {
+ return compareRows(left, lrowlength, right, rrowlength) == 0;
+ }
+
+ public boolean matchingRows(final byte [] left, final int loffset,
+ final int llength,
+ final byte [] right, final int roffset, final int rlength) {
+ return compareRows(left, loffset, llength, right, roffset, rlength) == 0;
+ }
+
+ /**
+ * Compares the row and timestamp of two keys
+ * Was called matchesWithoutColumn in HStoreKey.
+ * @param left
+ * @param right Key to compare against.
+ * @return True if same row and <code>left</code>'s timestamp is greater than
+ * or equal to the timestamp in <code>right</code>
+ */
+ public boolean matchingRowsGreaterTimestamp(final KeyValue left,
+ final KeyValue right) {
+ short lrowlength = left.getRowLength();
+ short rrowlength = right.getRowLength();
+ if (!matchingRows(left, lrowlength, right, rrowlength)) {
+ return false;
+ }
+ return left.getTimestamp() >= right.getTimestamp();
+ }
+
+ @Override
+ protected Object clone() throws CloneNotSupportedException {
+ return new KVComparator();
+ }
+
+ /**
+ * @return Comparator that ignores timestamps; useful for counting versions.
+ */
+ public KVComparator getComparatorIgnoringTimestamps() {
+ KVComparator c = null;
+ try {
+ c = (KVComparator)this.clone();
+ c.getRawComparator().ignoreTimestamp = true;
+ } catch (CloneNotSupportedException e) {
+ LOG.error("Not supported", e);
+ }
+ return c;
+ }
+
+ /**
+ * @return Comparator that ignores key type; useful for checking deletes
+ */
+ public KVComparator getComparatorIgnoringType() {
+ KVComparator c = null;
+ try {
+ c = (KVComparator)this.clone();
+ c.getRawComparator().ignoreType = true;
+ } catch (CloneNotSupportedException e) {
+ LOG.error("Not supported", e);
+ }
+ return c;
+ }
+ }
+
+ /**
+ * @param row - row key (arbitrary byte array)
+ * @return First possible KeyValue on passed <code>row</code>
+ */
+ public static KeyValue createFirstOnRow(final byte [] row) {
+ return createFirstOnRow(row, HConstants.LATEST_TIMESTAMP);
+ }
+
+ /**
+ * @param row - row key (arbitrary byte array)
+ * @param ts - timestamp
+ * @return First possible key on passed <code>row</code> and timestamp.
+ */
+ public static KeyValue createFirstOnRow(final byte [] row,
+ final long ts) {
+ return createFirstOnRow(row, null, ts);
+ }
+
+ /**
+ * @param row - row key (arbitrary byte array)
+ * @param ts - timestamp
+ * @return First possible key on passed <code>row</code>, column and timestamp.
+ */
+ public static KeyValue createFirstOnRow(final byte [] row, final byte [] c,
+ final long ts) {
+ return new KeyValue(row, c, ts, Type.Maximum);
+ }
+
+ /**
+ * @param b
+ * @param o
+ * @param l
+ * @return A KeyValue made of a byte array that holds the key-only part.
+ * Needed to convert hfile index members to KeyValues.
+ */
+ public static KeyValue createKeyValueFromKey(final byte [] b, final int o,
+ final int l) {
+ byte [] newb = new byte[b.length + ROW_OFFSET];
+ System.arraycopy(b, o, newb, ROW_OFFSET, l);
+ Bytes.putInt(newb, 0, b.length);
+ Bytes.putInt(newb, Bytes.SIZEOF_INT, 0);
+ return new KeyValue(newb);
+ }
+
+ /**
+ * Compare key portion of a {@link KeyValue} for keys in <code>-ROOT-</code>
+ * table.
+ */
+ public static class RootKeyComparator extends MetaKeyComparator {
+ public int compareRows(byte [] left, int loffset, int llength,
+ byte [] right, int roffset, int rlength) {
+ // Rows look like this: .META.,ROW_FROM_META,RID
+ // LOG.info("ROOT " + Bytes.toString(left, loffset, llength) +
+ // "---" + Bytes.toString(right, roffset, rlength));
+ final int metalength = 7; // '.META.' length
+ int lmetaOffsetPlusDelimiter = loffset + metalength;
+ int leftFarDelimiter = getDelimiterInReverse(left, lmetaOffsetPlusDelimiter,
+ llength - metalength, HRegionInfo.DELIMITER);
+ int rmetaOffsetPlusDelimiter = roffset + metalength;
+ int rightFarDelimiter = getDelimiterInReverse(right,
+ rmetaOffsetPlusDelimiter, rlength - metalength,
+ HRegionInfo.DELIMITER);
+ if (leftFarDelimiter < 0 && rightFarDelimiter >= 0) {
+ // Nothing between .META. and regionid. It's the first key.
+ return -1;
+ } else if (rightFarDelimiter < 0 && leftFarDelimiter >= 0) {
+ return 1;
+ } else if (leftFarDelimiter < 0 && rightFarDelimiter < 0) {
+ return 0;
+ }
+ int result = super.compareRows(left, lmetaOffsetPlusDelimiter,
+ leftFarDelimiter - lmetaOffsetPlusDelimiter,
+ right, rmetaOffsetPlusDelimiter,
+ rightFarDelimiter - rmetaOffsetPlusDelimiter);
+ if (result != 0) {
+ return result;
+ }
+ // Compare last part of row, the rowid.
+ leftFarDelimiter++;
+ rightFarDelimiter++;
+ result = compareRowid(left, leftFarDelimiter,
+ llength - (leftFarDelimiter - loffset),
+ right, rightFarDelimiter, rlength - (rightFarDelimiter - roffset));
+ return result;
+ }
+ }
+
+ /**
+ * Compare key portion of a {@link KeyValue} for keys in <code>.META.</code>
+ * table.
+ */
+ public static class MetaKeyComparator extends KeyComparator {
+ public int compareRows(byte [] left, int loffset, int llength,
+ byte [] right, int roffset, int rlength) {
+ // LOG.info("META " + Bytes.toString(left, loffset, llength) +
+ // "---" + Bytes.toString(right, roffset, rlength));
+ int leftDelimiter = getDelimiter(left, loffset, llength,
+ HRegionInfo.DELIMITER);
+ int rightDelimiter = getDelimiter(right, roffset, rlength,
+ HRegionInfo.DELIMITER);
+ if (leftDelimiter < 0 && rightDelimiter >= 0) {
+ // Nothing between the table name and the region id. It's the first key.
+ return -1;
+ } else if (rightDelimiter < 0 && leftDelimiter >= 0) {
+ return 1;
+ } else if (leftDelimiter < 0 && rightDelimiter < 0) {
+ return 0;
+ }
+ // Compare up to the delimiter
+ int result = Bytes.compareTo(left, loffset, leftDelimiter - loffset,
+ right, roffset, rightDelimiter - roffset);
+ if (result != 0) {
+ return result;
+ }
+ // Compare middle bit of the row.
+ // Move past delimiter
+ leftDelimiter++;
+ rightDelimiter++;
+ int leftFarDelimiter = getRequiredDelimiterInReverse(left, leftDelimiter,
+ llength - (leftDelimiter - loffset), HRegionInfo.DELIMITER);
+ int rightFarDelimiter = getRequiredDelimiterInReverse(right,
+ rightDelimiter, rlength - (rightDelimiter - roffset),
+ HRegionInfo.DELIMITER);
+ // Now compare the middle section of the row.
+ result = super.compareRows(left, leftDelimiter,
+ leftFarDelimiter - leftDelimiter, right, rightDelimiter,
+ rightFarDelimiter - rightDelimiter);
+ if (result != 0) {
+ return result;
+ }
+ // Compare last part of row, the rowid.
+ leftFarDelimiter++;
+ rightFarDelimiter++;
+ result = compareRowid(left, leftFarDelimiter,
+ llength - (leftFarDelimiter - loffset),
+ right, rightFarDelimiter, rlength - (rightFarDelimiter - roffset));
+ return result;
+ }
+
+ protected int compareRowid(byte[] left, int loffset, int llength,
+ byte[] right, int roffset, int rlength) {
+ return Bytes.compareTo(left, loffset, llength, right, roffset, rlength);
+ }
+ }
+
+ /**
+ * Compare key portion of a {@link KeyValue}.
+ */
+ public static class KeyComparator implements RawComparator<byte []> {
+ volatile boolean ignoreTimestamp = false;
+ volatile boolean ignoreType = false;
+
+ public int compare(byte[] left, int loffset, int llength, byte[] right,
+ int roffset, int rlength) {
+ // Compare row
+ short lrowlength = Bytes.toShort(left, loffset);
+ short rrowlength = Bytes.toShort(right, roffset);
+ int compare = compareRows(left, loffset + Bytes.SIZEOF_SHORT,
+ lrowlength,
+ right, roffset + Bytes.SIZEOF_SHORT, rrowlength);
+ if (compare != 0) {
+ return compare;
+ }
+
+ // Compare the column (family + qualifier). Start past the row and the
+ // family length byte.
+ int lcolumnoffset = Bytes.SIZEOF_SHORT + lrowlength + 1 + loffset;
+ int rcolumnoffset = Bytes.SIZEOF_SHORT + rrowlength + 1 + roffset;
+ int lcolumnlength = llength - TIMESTAMP_TYPE_SIZE -
+ (lcolumnoffset - loffset);
+ int rcolumnlength = rlength - TIMESTAMP_TYPE_SIZE -
+ (rcolumnoffset - roffset);
+ compare = Bytes.compareTo(left, lcolumnoffset, lcolumnlength, right,
+ rcolumnoffset, rcolumnlength);
+ if (compare != 0) {
+ return compare;
+ }
+
+ if (!this.ignoreTimestamp) {
+ // Get timestamps.
+ long ltimestamp = Bytes.toLong(left,
+ loffset + (llength - TIMESTAMP_TYPE_SIZE));
+ long rtimestamp = Bytes.toLong(right,
+ roffset + (rlength - TIMESTAMP_TYPE_SIZE));
+ compare = compareTimestamps(ltimestamp, rtimestamp);
+ if (compare != 0) {
+ return compare;
+ }
+ }
+
+ if (!this.ignoreType) {
+ // Compare types. Let the delete types sort ahead of puts; i.e. types
+ // of higher numbers sort before those of lesser numbers
+ byte ltype = left[loffset + (llength - 1)];
+ byte rtype = right[roffset + (rlength - 1)];
+ return (0xff & rtype) - (0xff & ltype);
+ }
+ return 0;
+ }
+
+ public int compare(byte[] left, byte[] right) {
+ return compare(left, 0, left.length, right, 0, right.length);
+ }
+
+ protected int compareRows(byte [] left, int loffset, int llength,
+ byte [] right, int roffset, int rlength) {
+ return Bytes.compareTo(left, loffset, llength, right, roffset, rlength);
+ }
+
+ protected int compareColumns(
+ byte [] left, int loffset, int llength, final int lfamilylength,
+ byte [] right, int roffset, int rlength, final int rfamilylength) {
+ return KeyValue.compareColumns(left, loffset, llength, lfamilylength,
+ right, roffset, rlength, rfamilylength);
+ }
+
+ int compareTimestamps(final long ltimestamp, final long rtimestamp) {
+ // Reversing the usual numeric order so that newer timestamps sort ahead of
+ // older ones looks wrong, but it is intentional. This way, newer timestamps
+ // are found first when we iterate over a memcache, and newer versions are
+ // the first we trip over when reading from a store file.
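+ // For example, compareTimestamps(1L, 2L) returns 1, so the KeyValue with
+ // the newer timestamp (2L) sorts ahead of the one with the older (1L).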
+ if (ltimestamp < rtimestamp) {
+ return 1;
+ } else if (ltimestamp > rtimestamp) {
+ return -1;
+ }
+ return 0;
+ }
+ }
+
+ // HeapSize
+ public long heapSize() {
+ return this.length;
+ }
+
+ // Writable
+ public void readFields(final DataInput in) throws IOException {
+ this.length = in.readInt();
+ this.offset = 0;
+ this.bytes = new byte[this.length];
+ in.readFully(this.bytes, 0, this.length);
+ }
+
+ public void write(final DataOutput out) throws IOException {
+ out.writeInt(this.length);
+ out.write(this.bytes, this.offset, this.length);
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/LeaseException.java b/src/java/org/apache/hadoop/hbase/LeaseException.java
new file mode 100644
index 0000000..c48cc7f
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/LeaseException.java
@@ -0,0 +1,40 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+/**
+ * Reports a problem with a lease
+ */
+public class LeaseException extends DoNotRetryIOException {
+
+ private static final long serialVersionUID = 8179703995292418650L;
+
+ /** default constructor */
+ public LeaseException() {
+ super();
+ }
+
+ /**
+ * @param message
+ */
+ public LeaseException(String message) {
+ super(message);
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/LeaseListener.java b/src/java/org/apache/hadoop/hbase/LeaseListener.java
new file mode 100644
index 0000000..90a32ef
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/LeaseListener.java
@@ -0,0 +1,34 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+
+/**
+ * LeaseListener is an interface meant to be implemented by users of the Leases
+ * class.
+ *
+ * It receives events from the Leases class about the status of its accompanying
+ * lease. Users of the Leases class can use a LeaseListener subclass to, for
+ * example, clean up resources after a lease has expired.
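+ *
+ * <p>Illustrative sketch; what gets cleaned up is entirely up to the caller:
+ * <pre>{@code
+ * LeaseListener listener = new LeaseListener() {
+ *   public void leaseExpired() {
+ *     // release whatever resource this lease was guarding
+ *   }
+ * };
+ * }</pre>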
+ */
+public interface LeaseListener {
+ /** When a lease expires, this method is called. */
+ public void leaseExpired();
+}
diff --git a/src/java/org/apache/hadoop/hbase/Leases.java b/src/java/org/apache/hadoop/hbase/Leases.java
new file mode 100644
index 0000000..ad00864
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/Leases.java
@@ -0,0 +1,270 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import java.util.ConcurrentModificationException;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.Delayed;
+import java.util.concurrent.DelayQueue;
+import java.util.concurrent.TimeUnit;
+
+import java.io.IOException;
+
+/**
+ * Leases
+ *
+ * There are several server classes in HBase that need to track external
+ * clients that occasionally send heartbeats.
+ *
+ * <p>These external clients hold resources in the server class.
+ * Those resources need to be released if the external client fails to send a
+ * heartbeat after some interval of time passes.
+ *
+ * <p>The Leases class is a general reusable class for this kind of pattern.
+ * An instance of the Leases class will create a thread to do its dirty work.
+ * You should close() the instance if you want to clean up the thread properly.
+ *
+ * <p>
+ * NOTE: This class extends Thread rather than Chore because the sleep time
+ * can be interrupted when there is something to do, rather than the Chore
+ * sleep time which is invariant.
+ */
+public class Leases extends Thread {
+ private static final Log LOG = LogFactory.getLog(Leases.class.getName());
+ private final int leasePeriod;
+ private final int leaseCheckFrequency;
+ private volatile DelayQueue<Lease> leaseQueue = new DelayQueue<Lease>();
+ protected final Map<String, Lease> leases = new HashMap<String, Lease>();
+ private volatile boolean stopRequested = false;
+
+ /**
+ * Creates a lease monitor
+ *
+ * @param leasePeriod - length of time (milliseconds) that the lease is valid
+ * @param leaseCheckFrequency - how often the lease should be checked
+ * (milliseconds)
+ */
+ public Leases(final int leasePeriod, final int leaseCheckFrequency) {
+ this.leasePeriod = leasePeriod;
+ this.leaseCheckFrequency = leaseCheckFrequency;
+ }
+
+ /**
+ * @see java.lang.Thread#run()
+ */
+ @Override
+ public void run() {
+ while (!stopRequested || leaseQueue.size() > 0) {
+ Lease lease = null;
+ try {
+ lease = leaseQueue.poll(leaseCheckFrequency, TimeUnit.MILLISECONDS);
+ } catch (InterruptedException e) {
+ continue;
+ } catch (ConcurrentModificationException e) {
+ continue;
+ } catch (Throwable e) {
+ LOG.fatal("Unexpected exception killed leases thread", e);
+ break;
+ }
+ if (lease == null) {
+ continue;
+ }
+ // A lease expired. Run the expired code before removing from queue
+ // since its presence in queue is used to see if lease exists still.
+ if (lease.getListener() == null) {
+ LOG.error("lease listener is null for lease " + lease.getLeaseName());
+ } else {
+ lease.getListener().leaseExpired();
+ }
+ synchronized (leaseQueue) {
+ leases.remove(lease.getLeaseName());
+ }
+ }
+ close();
+ }
+
+ /**
+ * Shuts down this lease instance when all outstanding leases expire.
+ * Like {@link #close()} but rather than violently end all leases, waits
+ * first on extant leases to finish. Use this method if the lease holders
+ * could lose data, leak locks, etc. Presumes the client has shut down
+ * allocation of new leases.
+ */
+ public void closeAfterLeasesExpire() {
+ this.stopRequested = true;
+ }
+
+ /**
+ * Shut down this Leases instance. All pending leases will be destroyed,
+ * without any cancellation calls.
+ */
+ public void close() {
+ LOG.info(Thread.currentThread().getName() + " closing leases");
+ this.stopRequested = true;
+ synchronized (leaseQueue) {
+ leaseQueue.clear();
+ leases.clear();
+ leaseQueue.notifyAll();
+ }
+ LOG.info(Thread.currentThread().getName() + " closed leases");
+ }
+
+ /**
+ * Obtain a lease
+ *
+ * @param leaseName name of the lease
+ * @param listener listener that will process lease expirations
+ * @throws LeaseStillHeldException
+ */
+ public void createLease(String leaseName, final LeaseListener listener)
+ throws LeaseStillHeldException {
+ if (stopRequested) {
+ return;
+ }
+ Lease lease = new Lease(leaseName, listener,
+ System.currentTimeMillis() + leasePeriod);
+ synchronized (leaseQueue) {
+ if (leases.containsKey(leaseName)) {
+ throw new LeaseStillHeldException(leaseName);
+ }
+ leases.put(leaseName, lease);
+ leaseQueue.add(lease);
+ }
+ }
+
+ /**
+ * Thrown if we are asked to create a lease but a lease on the passed name
+ * already exists.
+ */
+ @SuppressWarnings("serial")
+ public static class LeaseStillHeldException extends IOException {
+ private final String leaseName;
+
+ /**
+ * @param name
+ */
+ public LeaseStillHeldException(final String name) {
+ this.leaseName = name;
+ }
+
+ /** @return name of lease */
+ public String getName() {
+ return this.leaseName;
+ }
+ }
+
+ /**
+ * Renew a lease
+ *
+ * @param leaseName name of lease
+ * @throws LeaseException
+ */
+ public void renewLease(final String leaseName) throws LeaseException {
+ synchronized (leaseQueue) {
+ Lease lease = leases.get(leaseName);
+ if (lease == null) {
+ throw new LeaseException("lease '" + leaseName +
+ "' does not exist");
+ }
+ leaseQueue.remove(lease);
+ lease.setExpirationTime(System.currentTimeMillis() + leasePeriod);
+ leaseQueue.add(lease);
+ }
+ }
+
+ /**
+ * Client explicitly cancels a lease.
+ *
+ * @param leaseName name of lease
+ * @throws LeaseException
+ */
+ public void cancelLease(final String leaseName) throws LeaseException {
+ synchronized (leaseQueue) {
+ Lease lease = leases.remove(leaseName);
+ if (lease == null) {
+ throw new LeaseException("lease '" + leaseName + "' does not exist");
+ }
+ leaseQueue.remove(lease);
+ }
+ }
+
+ /** This class tracks a single Lease. */
+ private static class Lease implements Delayed {
+ private final String leaseName;
+ private final LeaseListener listener;
+ private long expirationTime;
+
+ Lease(final String leaseName, LeaseListener listener, long expirationTime) {
+ this.leaseName = leaseName;
+ this.listener = listener;
+ this.expirationTime = expirationTime;
+ }
+
+ /** @return the lease name */
+ public String getLeaseName() {
+ return leaseName;
+ }
+
+ /** @return listener */
+ public LeaseListener getListener() {
+ return this.listener;
+ }
+
+ @Override
+ public boolean equals(Object obj) {
+ if (this == obj) {
+ return true;
+ }
+ if (obj == null) {
+ return false;
+ }
+ if (getClass() != obj.getClass()) {
+ return false;
+ }
+ return this.hashCode() == ((Lease) obj).hashCode();
+ }
+
+ @Override
+ public int hashCode() {
+ return this.leaseName.hashCode();
+ }
+
+ public long getDelay(TimeUnit unit) {
+ return unit.convert(this.expirationTime - System.currentTimeMillis(),
+ TimeUnit.MILLISECONDS);
+ }
+
+ public int compareTo(Delayed o) {
+ long delta = this.getDelay(TimeUnit.MILLISECONDS) -
+ o.getDelay(TimeUnit.MILLISECONDS);
+
+ return this.equals(o) ? 0 : (delta > 0 ? 1 : -1);
+ }
+
+ /** @param expirationTime the expirationTime to set */
+ public void setExpirationTime(long expirationTime) {
+ this.expirationTime = expirationTime;
+ }
+ }
+}
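+// An illustrative usage sketch of the Leases pattern documented above; the
+// lease period, check frequency, and resource name below are arbitrary example
+// values, and LeasesUsageSketch is not a class from this patch.
+class LeasesUsageSketch {
+  public static void main(String[] args) throws Exception {
+    // 60 second leases, checked every 15 seconds.
+    Leases leases = new Leases(60 * 1000, 15 * 1000);
+    leases.start();
+    // Register a lease for a client-held resource, e.g. an open scanner.
+    leases.createLease("scanner-42", new LeaseListener() {
+      public void leaseExpired() {
+        // The client stopped heartbeating; release whatever it was holding.
+        System.out.println("Lease scanner-42 expired");
+      }
+    });
+    // Each client heartbeat pushes the expiration out by another lease period.
+    leases.renewLease("scanner-42");
+    // A clean finish cancels the lease without firing the listener.
+    leases.cancelLease("scanner-42");
+    // Tear down the monitor thread.
+    leases.close();
+  }
+}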
diff --git a/src/java/org/apache/hadoop/hbase/LocalHBaseCluster.java b/src/java/org/apache/hadoop/hbase/LocalHBaseCluster.java
new file mode 100644
index 0000000..3639021
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/LocalHBaseCluster.java
@@ -0,0 +1,377 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+import java.io.PrintWriter;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.util.ReflectionUtils;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Threads;
+
+/**
+ * This class creates a single process HBase cluster. One thread is created for
+ * a master and one per region server.
+ *
+ * Call {@link #startup()} to start the cluster running and {@link #shutdown()}
+ * to close it all down. {@link #join} the cluster is you want to wait on
+ * shutdown completion.
+ *
+ * <p>Runs master on port 60000 by default. Because we can't just kill the
+ * process -- not till HADOOP-1700 gets fixed and even then.... -- we need to
+ * be able to find the master with a remote client to run shutdown. To use a
+ * port other than 60000, set the hbase.master to a value of 'local:PORT':
+ * that is 'local', not 'localhost', and the port number the master should use
+ * instead of 60000.
+ *
+ * <p>To make 'local' mode more responsive, make values such as
+ * <code>hbase.regionserver.msginterval</code>,
+ * <code>hbase.master.meta.thread.rescanfrequency</code>, and
+ * <code>hbase.server.thread.wakefrequency</code> a second or less.
+ */
+public class LocalHBaseCluster implements HConstants {
+ static final Log LOG = LogFactory.getLog(LocalHBaseCluster.class);
+ private final HMaster master;
+ private final List<RegionServerThread> regionThreads;
+ private final static int DEFAULT_NO = 1;
+ /** local mode */
+ public static final String LOCAL = "local";
+ /** 'local:' */
+ public static final String LOCAL_COLON = LOCAL + ":";
+ private final HBaseConfiguration conf;
+ private final Class<? extends HRegionServer> regionServerClass;
+
+ /**
+ * Constructor.
+ * @param conf
+ * @throws IOException
+ */
+ public LocalHBaseCluster(final HBaseConfiguration conf)
+ throws IOException {
+ this(conf, DEFAULT_NO);
+ }
+
+ /**
+ * Constructor.
+ * @param conf Configuration to use. Post construction has the master's
+ * address.
+ * @param noRegionServers Count of regionservers to start.
+ * @throws IOException
+ */
+ @SuppressWarnings("unchecked")
+ public LocalHBaseCluster(final HBaseConfiguration conf,
+ final int noRegionServers)
+ throws IOException {
+ this.conf = conf;
+ doLocal(conf);
+ // Create the master
+ this.master = new HMaster(conf);
+ // Start the HRegionServers. Always have region servers come up on
+ // port '0' so there won't be clashes over default port as unit tests
+ // start/stop ports at different times during the life of the test.
+ conf.set(REGIONSERVER_ADDRESS, DEFAULT_HOST + ":0");
+ this.regionThreads = new ArrayList<RegionServerThread>();
+ regionServerClass = (Class<? extends HRegionServer>) conf.getClass(HConstants.REGION_SERVER_IMPL, HRegionServer.class);
+ for (int i = 0; i < noRegionServers; i++) {
+ addRegionServer();
+ }
+ }
+
+ /**
+ * Creates a region server.
+ * Call 'start' on the returned thread to make it run.
+ *
+ * @throws IOException
+ * @return Region server added.
+ */
+ public RegionServerThread addRegionServer() throws IOException {
+ synchronized (regionThreads) {
+ HRegionServer server;
+ try {
+ server = regionServerClass.getConstructor(HBaseConfiguration.class).
+ newInstance(conf);
+ } catch (Exception e) {
+ IOException ioe = new IOException();
+ ioe.initCause(e);
+ throw ioe;
+ }
+ RegionServerThread t = new RegionServerThread(server,
+ this.regionThreads.size());
+ this.regionThreads.add(t);
+ return t;
+ }
+ }
+
+ /**
+ * @param serverNumber
+ * @return region server
+ */
+ public HRegionServer getRegionServer(int serverNumber) {
+ synchronized (regionThreads) {
+ return regionThreads.get(serverNumber).getRegionServer();
+ }
+ }
+
+ /** runs region servers */
+ public static class RegionServerThread extends Thread {
+ private final HRegionServer regionServer;
+
+ RegionServerThread(final HRegionServer r, final int index) {
+ super(r, "RegionServer:" + index);
+ this.regionServer = r;
+ }
+
+ /** @return the region server */
+ public HRegionServer getRegionServer() {
+ return this.regionServer;
+ }
+
+ /**
+ * Block until the region server has come online, indicating it is ready
+ * to be used.
+ */
+ public void waitForServerOnline() {
+ while (!regionServer.isOnline()) {
+ try {
+ Thread.sleep(1000);
+ } catch (InterruptedException e) {
+ // continue waiting
+ }
+ }
+ }
+ }
+
+ /**
+ * @return the HMaster thread
+ */
+ public HMaster getMaster() {
+ return this.master;
+ }
+
+ /**
+ * @return Read-only list of region server threads.
+ */
+ public List<RegionServerThread> getRegionServers() {
+ return Collections.unmodifiableList(this.regionThreads);
+ }
+
+ /**
+ * Wait for the specified region server to stop.
+ * Removes this thread from the list of running threads.
+ * @param serverNumber
+ * @return Name of region server that just went down.
+ */
+ public String waitOnRegionServer(int serverNumber) {
+ RegionServerThread regionServerThread;
+ synchronized (regionThreads) {
+ regionServerThread = this.regionThreads.remove(serverNumber);
+ }
+ while (regionServerThread.isAlive()) {
+ try {
+ LOG.info("Waiting on " +
+ regionServerThread.getRegionServer().getServerInfo().toString());
+ regionServerThread.join();
+ } catch (InterruptedException e) {
+ e.printStackTrace();
+ }
+ }
+ return regionServerThread.getName();
+ }
+
+ /**
+ * Wait for Mini HBase Cluster to shut down.
+ * Presumes you've already called {@link #shutdown()}.
+ */
+ public void join() {
+ if (this.regionThreads != null) {
+ synchronized(this.regionThreads) {
+ for(Thread t: this.regionThreads) {
+ if (t.isAlive()) {
+ try {
+ t.join();
+ } catch (InterruptedException e) {
+ // continue
+ }
+ }
+ }
+ }
+ }
+ if (this.master != null && this.master.isAlive()) {
+ try {
+ this.master.join();
+ } catch(InterruptedException e) {
+ // continue
+ }
+ }
+ }
+
+ /**
+ * Start the cluster.
+ * @return Address to use contacting master.
+ */
+ public String startup() {
+ this.master.start();
+ synchronized (regionThreads) {
+ for (RegionServerThread t: this.regionThreads) {
+ t.start();
+ }
+ }
+ return this.master.getMasterAddress().toString();
+ }
+
+ /**
+ * Shut down the mini HBase cluster
+ */
+ public void shutdown() {
+ LOG.debug("Shutting down HBase Cluster");
+ // Be careful how the hdfs shutdown thread runs in a context where more
+ // than one regionserver is in the mix.
+ Thread shutdownThread = null;
+ synchronized (this.regionThreads) {
+ for (RegionServerThread t: this.regionThreads) {
+ Thread tt = t.getRegionServer().setHDFSShutdownThreadOnExit(null);
+ if (shutdownThread == null && tt != null) {
+ shutdownThread = tt;
+ }
+ }
+ }
+ if(this.master != null) {
+ this.master.shutdown();
+ }
+ // regionThreads can never be null because it is initialized when
+ // the class is constructed.
+ synchronized(this.regionThreads) {
+ for(Thread t: this.regionThreads) {
+ if (t.isAlive()) {
+ try {
+ t.join();
+ } catch (InterruptedException e) {
+ // continue
+ }
+ }
+ }
+ }
+ if (this.master != null) {
+ while (this.master.isAlive()) {
+ try {
+ // The join below has been replaced to debug occasional hangs at the
+ // end of tests.
+ // this.master.join();
+ threadDumpingJoin(this.master);
+ } catch(InterruptedException e) {
+ // continue
+ }
+ }
+ }
+ Threads.shutdown(shutdownThread);
+ LOG.info("Shutdown " +
+ ((this.master != null)? this.master.getName(): "0 masters") +
+ " " + this.regionThreads.size() + " region server(s)");
+ }
+
+ /**
+ * @param t
+ * @throws InterruptedException
+ */
+ public void threadDumpingJoin(final Thread t) throws InterruptedException {
+ if (t == null) {
+ return;
+ }
+ long startTime = System.currentTimeMillis();
+ while (t.isAlive()) {
+ Thread.sleep(1000);
+ if (System.currentTimeMillis() - startTime > 60000) {
+ startTime = System.currentTimeMillis();
+ ReflectionUtils.printThreadInfo(new PrintWriter(System.out),
+ "Automatic Stack Trace every 60 seconds waiting on " +
+ t.getName());
+ }
+ }
+ }
+
+ /**
+ * Changes <code>hbase.master</code> from 'local' to 'localhost:PORT' in
+ * passed Configuration instance.
+ * @param c
+ * @return The passed <code>c</code> configuration modified if hbase.master
+ * value was 'local'; otherwise, unaltered.
+ */
+ private static HBaseConfiguration doLocal(final HBaseConfiguration c) {
+ if (!isLocal(c)) {
+ return c;
+ }
+
+ // Need to rewrite address in Configuration if not done already.
+ String address = c.get(MASTER_ADDRESS);
+ if (address != null) {
+ String port = address.startsWith(LOCAL_COLON)?
+ address.substring(LOCAL_COLON.length()):
+ Integer.toString(DEFAULT_MASTER_PORT);
+ c.set(MASTER_ADDRESS, "localhost:" + port);
+ }
+
+ // Need to rewrite host in Configuration if not done already.
+ String host = c.get(MASTER_HOST_NAME);
+ if (host != null && host.equals(LOCAL)) {
+ c.set(MASTER_HOST_NAME, "localhost");
+ }
+
+ return c;
+ }
+
+ /**
+ * @param c Configuration to check.
+ * @return True if there is a 'local' address in the hbase.master value.
+ */
+ public static boolean isLocal(final Configuration c) {
+ String address = c.get(MASTER_ADDRESS);
+ boolean addressIsLocal = address == null || address.equals(LOCAL) ||
+ address.startsWith(LOCAL_COLON);
+ String host = c.get(MASTER_HOST_NAME);
+ boolean hostIsLocal = host == null || host.equals(LOCAL);
+ return addressIsLocal && hostIsLocal;
+ }
+
+ /**
+ * Test things basically work.
+ * @param args
+ * @throws IOException
+ */
+ public static void main(String[] args) throws IOException {
+ HBaseConfiguration conf = new HBaseConfiguration();
+ LocalHBaseCluster cluster = new LocalHBaseCluster(conf);
+ cluster.startup();
+ HBaseAdmin admin = new HBaseAdmin(conf);
+ HTableDescriptor htd =
+ new HTableDescriptor(Bytes.toBytes(cluster.getClass().getName()));
+ admin.createTable(htd);
+ cluster.shutdown();
+ }
+}
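+// An illustrative sketch of the 'local:PORT' convention documented in the class
+// comment above; the port 60100 and the two-region-server count are arbitrary
+// example values, and LocalHBaseClusterPortSketch is not a class from this patch.
+class LocalHBaseClusterPortSketch {
+  public static void main(String[] args) throws IOException {
+    HBaseConfiguration conf = new HBaseConfiguration();
+    // Run the master on a non-default port: note 'local', not 'localhost'.
+    conf.set("hbase.master", "local:60100");
+    LocalHBaseCluster cluster = new LocalHBaseCluster(conf, 2);
+    String masterAddress = cluster.startup();
+    System.out.println("Master is at " + masterAddress);
+    cluster.shutdown();
+    cluster.join();
+  }
+}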
diff --git a/src/java/org/apache/hadoop/hbase/MasterNotRunningException.java b/src/java/org/apache/hadoop/hbase/MasterNotRunningException.java
new file mode 100644
index 0000000..6cf564c
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/MasterNotRunningException.java
@@ -0,0 +1,49 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+
+/**
+ * Thrown if the master is not running
+ */
+public class MasterNotRunningException extends IOException {
+ private static final long serialVersionUID = 1L << 23 - 1L;
+ /** default constructor */
+ public MasterNotRunningException() {
+ super();
+ }
+
+ /**
+ * Constructor
+ * @param s message
+ */
+ public MasterNotRunningException(String s) {
+ super(s);
+ }
+
+ /**
+ * Constructor taking another exception.
+ * @param e Exception to grab data from.
+ */
+ public MasterNotRunningException(Exception e) {
+ super(e);
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/NotServingRegionException.java b/src/java/org/apache/hadoop/hbase/NotServingRegionException.java
new file mode 100644
index 0000000..5c93ebe
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/NotServingRegionException.java
@@ -0,0 +1,53 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Thrown by a region server if it is sent a request for a region it is not
+ * serving.
+ */
+public class NotServingRegionException extends IOException {
+ private static final long serialVersionUID = 1L << 17 - 1L;
+
+ /** default constructor */
+ public NotServingRegionException() {
+ super();
+ }
+
+ /**
+ * Constructor
+ * @param s message
+ */
+ public NotServingRegionException(String s) {
+ super(s);
+ }
+
+ /**
+ * Constructor
+ * @param s message
+ */
+ public NotServingRegionException(final byte [] s) {
+ super(Bytes.toString(s));
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/RegionException.java b/src/java/org/apache/hadoop/hbase/RegionException.java
new file mode 100644
index 0000000..63063a5
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/RegionException.java
@@ -0,0 +1,43 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+/**
+ * Thrown when something happens related to region handling.
+ * Subclasses have to be more specific.
+ */
+public class RegionException extends IOException {
+ private static final long serialVersionUID = 1473510258071111371L;
+
+ /** default constructor */
+ public RegionException() {
+ super();
+ }
+
+ /**
+ * Constructor
+ * @param s message
+ */
+ public RegionException(String s) {
+ super(s);
+ }
+
+}
diff --git a/src/java/org/apache/hadoop/hbase/RegionHistorian.java b/src/java/org/apache/hadoop/hbase/RegionHistorian.java
new file mode 100644
index 0000000..df08ce7
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/RegionHistorian.java
@@ -0,0 +1,331 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+import java.text.SimpleDateFormat;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.GregorianCalendar;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * The Region Historian task is to keep track of every modification a region
+ * has to go through. Public methods are used to update the information in the
+ * <code>.META.</code> table and to retrieve it. This is a Singleton. By
+ * default, the Historian is offline; it will not log. It is enabled in the
+ * regionserver and master down in their guts after there's some certainty the
+ * .META. has been deployed.
+ */
+public class RegionHistorian implements HConstants {
+ private static final Log LOG = LogFactory.getLog(RegionHistorian.class);
+
+ private HTable metaTable;
+
+ /** Singleton reference */
+ private static RegionHistorian historian;
+
+ /** Date formatter for the timestamp in RegionHistoryInformation */
+ static SimpleDateFormat dateFormat = new SimpleDateFormat(
+ "EEE, d MMM yyyy HH:mm:ss");
+
+ private static enum HistorianColumnKey {
+ REGION_CREATION ( Bytes.toBytes(COLUMN_FAMILY_HISTORIAN_STR+"creation")),
+ REGION_OPEN ( Bytes.toBytes(COLUMN_FAMILY_HISTORIAN_STR+"open")),
+ REGION_SPLIT ( Bytes.toBytes(COLUMN_FAMILY_HISTORIAN_STR+"split")),
+ REGION_COMPACTION ( Bytes.toBytes(COLUMN_FAMILY_HISTORIAN_STR+"compaction")),
+ REGION_FLUSH ( Bytes.toBytes(COLUMN_FAMILY_HISTORIAN_STR+"flush")),
+ REGION_ASSIGNMENT ( Bytes.toBytes(COLUMN_FAMILY_HISTORIAN_STR+"assignment"));
+
+ byte[] key;
+
+ HistorianColumnKey(byte[] key) {
+ this.key = key;
+ }
+ }
+
+ public static final String SPLIT_PREFIX = "Region split from: ";
+
+ /**
+ * Default constructor. Inaccessible; the reference to the .META. table is
+ * set later via {@link #online(HBaseConfiguration)}. Use {@link #getInstance()}
+ * to obtain the Singleton instance of this class.
+ */
+ private RegionHistorian() {
+ super();
+ }
+
+ /**
+ * Get the RegionHistorian Singleton instance.
+ * @return The region historian
+ */
+ public synchronized static RegionHistorian getInstance() {
+ if (historian == null) {
+ historian = new RegionHistorian();
+ }
+ return historian;
+ }
+
+ /**
+ * Returns, for a given region name, a list of all values in the historian
+ * column of the .META. table, ordered by timestamp.
+ * @param regionName
+ * Region name as a string
+ * @return List of RegionHistoryInformation or null if we're offline.
+ */
+ public List<RegionHistoryInformation> getRegionHistory(String regionName) {
+ if (!isOnline()) {
+ return null;
+ }
+ List<RegionHistoryInformation> informations =
+ new ArrayList<RegionHistoryInformation>();
+ try {
+ /*
+ * TODO: the HistorianColumnKey values are used because there is currently
+ * no other way to retrieve all versions and the column key information.
+ * To be changed when HTable.getRow handles versions.
+ */
+ for (HistorianColumnKey keyEnu : HistorianColumnKey.values()) {
+ byte[] columnKey = keyEnu.key;
+ Cell[] cells = this.metaTable.get(Bytes.toBytes(regionName),
+ columnKey, ALL_VERSIONS);
+ if (cells != null) {
+ for (Cell cell : cells) {
+ informations.add(historian.new RegionHistoryInformation(cell
+ .getTimestamp(), Bytes.toString(columnKey).split(":")[1], Bytes
+ .toString(cell.getValue())));
+ }
+ }
+ }
+ } catch (IOException ioe) {
+ LOG.warn("Unable to retrieve region history", ioe);
+ }
+ Collections.sort(informations);
+ return informations;
+ }
+
+ /**
+ * Method to add an assignment event to the row in the .META. table
+ * @param info
+ * @param serverName
+ */
+ public void addRegionAssignment(HRegionInfo info, String serverName) {
+ add(HistorianColumnKey.REGION_ASSIGNMENT.key, "Region assigned to server "
+ + serverName, info);
+ }
+
+ /**
+ * Method to add a creation event to the row in the .META. table
+ * @param info
+ */
+ public void addRegionCreation(HRegionInfo info) {
+ add(HistorianColumnKey.REGION_CREATION.key, "Region creation", info);
+ }
+
+ /**
+ * Method to add an opening event to the row in the .META. table
+ * @param info
+ * @param address
+ */
+ public void addRegionOpen(HRegionInfo info, HServerAddress address) {
+ add(HistorianColumnKey.REGION_OPEN.key, "Region opened on server : "
+ + address.getHostname(), info);
+ }
+
+ /**
+ * Method to add a split event to the rows in the .META. table with
+ * information from oldInfo.
+ * @param oldInfo
+ * @param newInfo1
+ * @param newInfo2
+ */
+ public void addRegionSplit(HRegionInfo oldInfo, HRegionInfo newInfo1,
+ HRegionInfo newInfo2) {
+ HRegionInfo[] infos = new HRegionInfo[] { newInfo1, newInfo2 };
+ for (HRegionInfo info : infos) {
+ add(HistorianColumnKey.REGION_SPLIT.key, SPLIT_PREFIX +
+ oldInfo.getRegionNameAsString(), info);
+ }
+ }
+
+ /**
+ * Method to add a compaction event to the row in the .META. table
+ * @param info
+ * @param timeTaken
+ */
+ public void addRegionCompaction(final HRegionInfo info,
+ final String timeTaken) {
+ // While historian can not log flushes because it could deadlock the
+ // regionserver -- see the note in addRegionFlush -- there should be no
+ // such danger compacting; compactions are not allowed when
+ // Flusher#flushSomeRegions is run.
+ if (LOG.isDebugEnabled()) {
+ add(HistorianColumnKey.REGION_COMPACTION.key,
+ "Region compaction completed in " + timeTaken, info);
+ }
+ }
+
+ /**
+ * Method to add a flush event to the row in the .META. table
+ * @param info
+ * @param timeTaken
+ */
+ public void addRegionFlush(HRegionInfo info, String timeTaken) {
+ // Disabled. Noop. If this regionserver is hosting the .META. AND is
+ // holding the reclaimMemcacheMemory global lock --
+ // see Flusher#flushSomeRegions -- we deadlock. For now, just disable
+ // logging of flushes.
+ }
+
+ /**
+ * Method to add an event with LATEST_TIMESTAMP.
+ * @param column
+ * @param text
+ * @param info
+ */
+ private void add(byte[] column,
+ String text, HRegionInfo info) {
+ add(column, text, info, LATEST_TIMESTAMP);
+ }
+
+ /**
+ * Method to add an event with provided information.
+ * @param column
+ * @param text
+ * @param info
+ * @param timestamp
+ */
+ private void add(byte[] column,
+ String text, HRegionInfo info, long timestamp) {
+ if (!isOnline()) {
+ // It's a noop
+ return;
+ }
+ if (!info.isMetaRegion()) {
+ BatchUpdate batch = new BatchUpdate(info.getRegionName());
+ batch.setTimestamp(timestamp);
+ batch.put(column, Bytes.toBytes(text));
+ try {
+ this.metaTable.commit(batch);
+ } catch (IOException ioe) {
+ LOG.warn("Unable to '" + text + "'", ioe);
+ }
+ }
+ }
+
+ /**
+ * Inner class that only contains information about an event.
+ *
+ */
+ public class RegionHistoryInformation implements
+ Comparable<RegionHistoryInformation> {
+
+ private GregorianCalendar cal = new GregorianCalendar();
+
+ private long timestamp;
+
+ private String event;
+
+ private String description;
+
+ /**
+ * @param timestamp
+ * @param event
+ * @param description
+ */
+ public RegionHistoryInformation(long timestamp, String event,
+ String description) {
+ this.timestamp = timestamp;
+ this.event = event;
+ this.description = description;
+ }
+
+ public int compareTo(RegionHistoryInformation otherInfo) {
+ return -1 * Long.valueOf(timestamp).compareTo(otherInfo.getTimestamp());
+ }
+
+ /** @return the event */
+ public String getEvent() {
+ return event;
+ }
+
+ /** @return the description */
+ public String getDescription() {
+ return description;
+ }
+
+ /** @return the timestamp */
+ public long getTimestamp() {
+ return timestamp;
+ }
+
+ /**
+ * @return The value of the timestamp processed with the date formatter.
+ */
+ public String getTimestampAsString() {
+ cal.setTimeInMillis(timestamp);
+ return dateFormat.format(cal.getTime());
+ }
+ }
+
+ /**
+ * @return True if the historian is online. When offline, will not add
+ * updates to the .META. table.
+ */
+ public boolean isOnline() {
+ return this.metaTable != null;
+ }
+
+ /**
+ * Onlines the historian. Invoke after the cluster has spun up.
+ * @param c Configuration to use.
+ */
+ public void online(final HBaseConfiguration c) {
+ try {
+ this.metaTable = new HTable(c, META_TABLE_NAME);
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Onlined");
+ }
+ } catch (IOException ioe) {
+ LOG.error("Unable to create RegionHistorian", ioe);
+ }
+ }
+
+ /**
+ * Offlines the historian.
+ * @see #online(HBaseConfiguration)
+ */
+ public void offline() {
+ this.metaTable = null;
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Offlined");
+ }
+ }
+}
\ No newline at end of file
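+// An illustrative sketch of the online/add/query/offline lifecycle described
+// above; RegionHistorianUsageSketch and recordOpen are not names from this
+// patch, and the region info, server address, and configuration are supplied
+// by the caller.
+class RegionHistorianUsageSketch {
+  static void recordOpen(HBaseConfiguration conf, HRegionInfo info,
+      HServerAddress address) {
+    RegionHistorian historian = RegionHistorian.getInstance();
+    // A no-op until onlined, typically once .META. is known to be deployed.
+    historian.online(conf);
+    historian.addRegionOpen(info, address);
+    // Read the history back; null means the historian is (still) offline.
+    List<RegionHistorian.RegionHistoryInformation> history =
+        historian.getRegionHistory(info.getRegionNameAsString());
+    if (history != null) {
+      for (RegionHistorian.RegionHistoryInformation h : history) {
+        System.out.println(h.getTimestampAsString() + " " + h.getEvent());
+      }
+    }
+    historian.offline();
+  }
+}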
diff --git a/src/java/org/apache/hadoop/hbase/RemoteExceptionHandler.java b/src/java/org/apache/hadoop/hbase/RemoteExceptionHandler.java
new file mode 100644
index 0000000..6fc8e57
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/RemoteExceptionHandler.java
@@ -0,0 +1,118 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+
+import org.apache.hadoop.ipc.RemoteException;
+
+/**
+ * An immutable class which contains static methods for handling
+ * org.apache.hadoop.ipc.RemoteException exceptions.
+ */
+public class RemoteExceptionHandler {
+ /* Not instantiable */
+ private RemoteExceptionHandler() {super();}
+
+ /**
+ * Examine passed Throwable. See if it is carrying a RemoteException. If so,
+ * run {@link #decodeRemoteException(RemoteException)} on it. Otherwise,
+ * pass back <code>t</code> unaltered.
+ * @param t Throwable to examine.
+ * @return Decoded RemoteException carried by <code>t</code> or
+ * <code>t</code> unaltered.
+ */
+ public static Throwable checkThrowable(final Throwable t) {
+ Throwable result = t;
+ if (t instanceof RemoteException) {
+ try {
+ result =
+ RemoteExceptionHandler.decodeRemoteException((RemoteException)t);
+ } catch (Throwable tt) {
+ result = tt;
+ }
+ }
+ return result;
+ }
+
+ /**
+ * Examine passed IOException. See if it is carrying a RemoteException. If so,
+ * run {@link #decodeRemoteException(RemoteException)} on it. Otherwise,
+ * pass back <code>e</code> unaltered.
+ * @param e Exception to examine.
+ * @return Decoded RemoteException carried by <code>e</code> or
+ * <code>e</code> unaltered.
+ */
+ public static IOException checkIOException(final IOException e) {
+ Throwable t = checkThrowable(e);
+ return t instanceof IOException? (IOException)t: new IOException(t);
+ }
+
+ /**
+ * Converts org.apache.hadoop.ipc.RemoteException into original exception,
+ * if possible. If the original exception is an Error or a RuntimeException,
+ * throws the original exception.
+ *
+ * @param re original exception
+ * @return decoded RemoteException if it is an instance of or a subclass of
+ * IOException, or the original RemoteException if it cannot be decoded.
+ *
+ * @throws IOException indicating a server error occurred if the decoded
+ * exception is not an IOException. The decoded exception is set as
+ * the cause.
+ */
+ public static IOException decodeRemoteException(final RemoteException re)
+ throws IOException {
+ IOException i = re;
+
+ try {
+ Class<?> c = Class.forName(re.getClassName());
+
+ Class<?>[] parameterTypes = { String.class };
+ Constructor<?> ctor = c.getConstructor(parameterTypes);
+
+ Object[] arguments = { re.getMessage() };
+ Throwable t = (Throwable) ctor.newInstance(arguments);
+
+ if (t instanceof IOException) {
+ i = (IOException) t;
+
+ } else {
+ i = new IOException("server error");
+ i.initCause(t);
+ throw i;
+ }
+
+ } catch (ClassNotFoundException x) {
+ // continue
+ } catch (NoSuchMethodException x) {
+ // continue
+ } catch (IllegalAccessException x) {
+ // continue
+ } catch (InvocationTargetException x) {
+ // continue
+ } catch (InstantiationException x) {
+ // continue
+ }
+ return i;
+ }
+}
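+// An illustrative sketch of the intended use of checkIOException: unwrapping a
+// RemoteException thrown by an RPC proxy before rethrowing it. RemoteService
+// and doSomething() are placeholders, not APIs from this patch.
+class RemoteExceptionHandlerUsageSketch {
+  interface RemoteService {
+    void doSomething() throws IOException;
+  }
+
+  static void call(RemoteService proxy) throws IOException {
+    try {
+      proxy.doSomething();
+    } catch (IOException e) {
+      // If e wraps a server-side exception, rethrow the decoded original
+      // (e.g. a NotServingRegionException) rather than the opaque wrapper.
+      throw RemoteExceptionHandler.checkIOException(e);
+    }
+  }
+}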
diff --git a/src/java/org/apache/hadoop/hbase/TableExistsException.java b/src/java/org/apache/hadoop/hbase/TableExistsException.java
new file mode 100644
index 0000000..bbcc295
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/TableExistsException.java
@@ -0,0 +1,38 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+
+/**
+ * Thrown when a table exists but should not
+ */
+public class TableExistsException extends IOException {
+ private static final long serialVersionUID = 1L << 7 - 1L;
+ /** default constructor */
+ public TableExistsException() {
+ super();
+ }
+
+ /**
+ * Constructor
+ *
+ * @param s message
+ */
+ public TableExistsException(String s) {
+ super(s);
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/TableNotDisabledException.java b/src/java/org/apache/hadoop/hbase/TableNotDisabledException.java
new file mode 100644
index 0000000..4287800
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/TableNotDisabledException.java
@@ -0,0 +1,50 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Thrown if a table should be offline but is not
+ */
+public class TableNotDisabledException extends IOException {
+ private static final long serialVersionUID = 1L << 19 - 1L;
+ /** default constructor */
+ public TableNotDisabledException() {
+ super();
+ }
+
+ /**
+ * Constructor
+ * @param s message
+ */
+ public TableNotDisabledException(String s) {
+ super(s);
+ }
+
+ /**
+ * @param tableName Name of table that is not disabled
+ */
+ public TableNotDisabledException(byte[] tableName) {
+ this(Bytes.toString(tableName));
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/TableNotFoundException.java b/src/java/org/apache/hadoop/hbase/TableNotFoundException.java
new file mode 100644
index 0000000..dc6da43
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/TableNotFoundException.java
@@ -0,0 +1,35 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+/** Thrown when a table can not be located */
+public class TableNotFoundException extends RegionException {
+ private static final long serialVersionUID = 993179627856392526L;
+
+ /** default constructor */
+ public TableNotFoundException() {
+ super();
+ }
+
+ /** @param s message */
+ public TableNotFoundException(String s) {
+ super(s);
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/UnknownRowLockException.java b/src/java/org/apache/hadoop/hbase/UnknownRowLockException.java
new file mode 100644
index 0000000..8cb3985
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/UnknownRowLockException.java
@@ -0,0 +1,41 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+
+/**
+ * Thrown if a region server is passed an unknown row lock id
+ */
+public class UnknownRowLockException extends DoNotRetryIOException {
+ private static final long serialVersionUID = 993179627856392526L;
+
+ /** constructor */
+ public UnknownRowLockException() {
+ super();
+ }
+
+ /**
+ * Constructor
+ * @param s message
+ */
+ public UnknownRowLockException(String s) {
+ super(s);
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/UnknownScannerException.java b/src/java/org/apache/hadoop/hbase/UnknownScannerException.java
new file mode 100644
index 0000000..1ab41ef
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/UnknownScannerException.java
@@ -0,0 +1,41 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+
+/**
+ * Thrown if a region server is passed an unknown scanner id
+ */
+public class UnknownScannerException extends DoNotRetryIOException {
+ private static final long serialVersionUID = 993179627856392526L;
+
+ /** constructor */
+ public UnknownScannerException() {
+ super();
+ }
+
+ /**
+ * Constructor
+ * @param s message
+ */
+ public UnknownScannerException(String s) {
+ super(s);
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/ValueOverMaxLengthException.java b/src/java/org/apache/hadoop/hbase/ValueOverMaxLengthException.java
new file mode 100644
index 0000000..2bc136d
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/ValueOverMaxLengthException.java
@@ -0,0 +1,43 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+/**
+ * Thrown when a value is longer than the specified maximum length
+ */
+public class ValueOverMaxLengthException extends DoNotRetryIOException {
+
+ private static final long serialVersionUID = -5525656352372008316L;
+
+ /**
+ * default constructor
+ */
+ public ValueOverMaxLengthException() {
+ super();
+ }
+
+ /**
+ * @param message
+ */
+ public ValueOverMaxLengthException(String message) {
+ super(message);
+ }
+
+}
diff --git a/src/java/org/apache/hadoop/hbase/VersionAnnotation.java b/src/java/org/apache/hadoop/hbase/VersionAnnotation.java
new file mode 100644
index 0000000..bf29adf
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/VersionAnnotation.java
@@ -0,0 +1,57 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.lang.annotation.*;
+
+/**
+ * A package attribute that captures the version of hbase that was compiled.
+ * Copied down from Hadoop. All is the same except the name of the interface.
+ */
+@Retention(RetentionPolicy.RUNTIME)
+@Target(ElementType.PACKAGE)
+public @interface VersionAnnotation {
+
+ /**
+ * Get the HBase version
+ * @return the version string "0.6.3-dev"
+ */
+ String version();
+
+ /**
+ * Get the username that compiled HBase.
+ */
+ String user();
+
+ /**
+ * Get the date when HBase was compiled.
+ * @return the date in unix 'date' format
+ */
+ String date();
+
+ /**
+ * Get the url for the subversion repository.
+ */
+ String url();
+
+ /**
+ * Get the subversion revision.
+ * @return the revision number as a string (eg. "451451")
+ */
+ String revision();
+}
diff --git a/src/java/org/apache/hadoop/hbase/WritableComparator.java b/src/java/org/apache/hadoop/hbase/WritableComparator.java
new file mode 100644
index 0000000..b765d68
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/WritableComparator.java
@@ -0,0 +1,28 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.util.Comparator;
+
+import org.apache.hadoop.io.Writable;
+
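+/**
+ * A {@link Comparator} that is also {@link Writable}, so that comparator
+ * implementations can be serialized. No methods of its own.
+ */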
+public interface WritableComparator<T> extends Writable, Comparator<T> {
+// No methods, just bring the two interfaces together
+}
diff --git a/src/java/org/apache/hadoop/hbase/client/HBaseAdmin.java b/src/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
new file mode 100644
index 0000000..11795c1
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
@@ -0,0 +1,812 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.io.IOException;
+import java.util.Map;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.MasterNotRunningException;
+import org.apache.hadoop.hbase.RegionException;
+import org.apache.hadoop.hbase.RemoteExceptionHandler;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.ipc.HMasterInterface;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.MetaUtils;
+import org.apache.hadoop.hbase.util.Writables;
+import org.apache.hadoop.io.BooleanWritable;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.ipc.RemoteException;
+
+/**
+ * Provides administrative functions for HBase
+ */
+public class HBaseAdmin {
+ private final Log LOG = LogFactory.getLog(this.getClass().getName());
+ private final HConnection connection;
+ private volatile HBaseConfiguration conf;
+ private final long pause;
+ private final int numRetries;
+ private volatile HMasterInterface master;
+
+ /**
+ * Constructor
+ *
+ * @param conf Configuration object
+ * @throws MasterNotRunningException
+ */
+ public HBaseAdmin(HBaseConfiguration conf) throws MasterNotRunningException {
+ this.connection = HConnectionManager.getConnection(conf);
+ this.conf = conf;
+ this.pause = conf.getLong("hbase.client.pause", 30 * 1000);
+ this.numRetries = conf.getInt("hbase.client.retries.number", 5);
+ this.master = connection.getMaster();
+ }
+
+ /**
+ * @return proxy connection to master server for this instance
+ * @throws MasterNotRunningException
+ */
+ public HMasterInterface getMaster() throws MasterNotRunningException{
+ return this.connection.getMaster();
+ }
+
+ /** @return - true if the master server is running */
+ public boolean isMasterRunning() {
+ return this.connection.isMasterRunning();
+ }
+
+ /**
+ * @param tableName Table to check.
+ * @return True if table exists already.
+ * @throws MasterNotRunningException
+ */
+ public boolean tableExists(final String tableName)
+ throws MasterNotRunningException {
+ return tableExists(Bytes.toBytes(tableName));
+ }
+
+ /**
+ * @param tableName Table to check.
+ * @return True if table exists already.
+ * @throws MasterNotRunningException
+ */
+ public boolean tableExists(final byte [] tableName)
+ throws MasterNotRunningException {
+ if (this.master == null) {
+ throw new MasterNotRunningException("master has been shut down");
+ }
+ return connection.tableExists(tableName);
+ }
+
+ /**
+ * List all the userspace tables. In other words, scan the META table.
+ *
+ * If we wanted this to be really fast, we could implement a special
+ * catalog table that just contains table names and their descriptors.
+ * Right now, it only exists as part of the META table's region info.
+ *
+ * @return - returns an array of HTableDescriptors
+ * @throws IOException
+ */
+ public HTableDescriptor[] listTables() throws IOException {
+ return this.connection.listTables();
+ }
+
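+ /**
+ * @param tableName name of table
+ * @return table descriptor for the named table
+ * @throws IOException
+ */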
+ public HTableDescriptor getTableDescriptor(final String tableName)
+ throws IOException {
+ return getTableDescriptor(Bytes.toBytes(tableName));
+ }
+
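+ /**
+ * @param tableName name of table
+ * @return table descriptor for the named table
+ * @throws IOException
+ */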
+ public HTableDescriptor getTableDescriptor(final byte [] tableName)
+ throws IOException {
+ return this.connection.getHTableDescriptor(tableName);
+ }
+
+ private long getPauseTime(int tries) {
+ int triesCount = tries;
+ if (triesCount >= HConstants.RETRY_BACKOFF.length)
+ triesCount = HConstants.RETRY_BACKOFF.length - 1;
+ return this.pause * HConstants.RETRY_BACKOFF[triesCount];
+ }
+
+ /**
+ * Creates a new table.
+ * Synchronous operation.
+ *
+ * @param desc table descriptor for table
+ *
+ * @throws IllegalArgumentException if the table name is reserved
+ * @throws MasterNotRunningException if master is not running
+ * @throws TableExistsException if table already exists (If concurrent
+ * threads, the table may have been created between test-for-existence
+ * and attempt-at-creation).
+ * @throws IOException
+ */
+ public void createTable(HTableDescriptor desc)
+ throws IOException {
+ HTableDescriptor.isLegalTableName(desc.getName());
+ createTableAsync(desc);
+ for (int tries = 0; tries < numRetries; tries++) {
+ try {
+ // Wait for new table to come on-line
+ connection.locateRegion(desc.getName(), HConstants.EMPTY_START_ROW);
+ break;
+
+ } catch (RegionException e) {
+ if (tries == numRetries - 1) {
+ // Ran out of tries
+ throw e;
+ }
+ }
+ try {
+ Thread.sleep(getPauseTime(tries));
+ } catch (InterruptedException e) {
+ // continue
+ }
+ }
+ }
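+ /*
+ * An illustrative usage sketch for the synchronous create path above; the
+ * table name is arbitrary, and a descriptor with no column families is used
+ * only to keep the sketch minimal (as in LocalHBaseCluster.main):
+ *
+ * HBaseConfiguration conf = new HBaseConfiguration();
+ * HBaseAdmin admin = new HBaseAdmin(conf);
+ * HTableDescriptor desc = new HTableDescriptor(Bytes.toBytes("example_table"));
+ * admin.createTable(desc); // returns once the first region can be located
+ * boolean exists = admin.tableExists("example_table"); // true
+ */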
+
+ /**
+ * Creates a new table but does not block and wait for it to come online.
+ * Asynchronous operation.
+ *
+ * @param desc table descriptor for table
+ *
+ * @throws IllegalArgumentException Bad table name.
+ * @throws MasterNotRunningException if master is not running
+ * @throws TableExistsException if table already exists (with concurrent
+ * callers, the table may have been created between the test for existence
+ * and the attempt at creation).
+ * @throws IOException
+ */
+ public void createTableAsync(HTableDescriptor desc)
+ throws IOException {
+ if (this.master == null) {
+ throw new MasterNotRunningException("master has been shut down");
+ }
+ HTableDescriptor.isLegalTableName(desc.getName());
+ try {
+ this.master.createTable(desc);
+ } catch (RemoteException e) {
+ throw RemoteExceptionHandler.decodeRemoteException(e);
+ }
+ }
+
+ /**
+ * Deletes a table.
+ * Synchronous operation.
+ *
+ * @param tableName name of table to delete
+ * @throws IOException
+ */
+ public void deleteTable(final String tableName) throws IOException {
+ deleteTable(Bytes.toBytes(tableName));
+ }
+
+ /**
+ * Deletes a table.
+ * Synchronous operation.
+ *
+ * @param tableName name of table to delete
+ * @throws IOException
+ */
+ public void deleteTable(final byte [] tableName) throws IOException {
+ if (this.master == null) {
+ throw new MasterNotRunningException("master has been shut down");
+ }
+ HTableDescriptor.isLegalTableName(tableName);
+ HRegionLocation firstMetaServer = getFirstMetaServerForTable(tableName);
+ try {
+ this.master.deleteTable(tableName);
+ } catch (RemoteException e) {
+ throw RemoteExceptionHandler.decodeRemoteException(e);
+ }
+
+ // Wait until first region is deleted
+ HRegionInterface server =
+ connection.getHRegionConnection(firstMetaServer.getServerAddress());
+ HRegionInfo info = new HRegionInfo();
+ for (int tries = 0; tries < numRetries; tries++) {
+ long scannerId = -1L;
+ try {
+ scannerId =
+ server.openScanner(firstMetaServer.getRegionInfo().getRegionName(),
+ HConstants.COL_REGIONINFO_ARRAY, tableName,
+ HConstants.LATEST_TIMESTAMP, null);
+ RowResult values = server.next(scannerId);
+ if (values == null || values.size() == 0) {
+ break;
+ }
+ boolean found = false;
+ for (Map.Entry<byte [], Cell> e: values.entrySet()) {
+ if (Bytes.equals(e.getKey(), HConstants.COL_REGIONINFO)) {
+ info = (HRegionInfo) Writables.getWritable(
+ e.getValue().getValue(), info);
+
+ if (Bytes.equals(info.getTableDesc().getName(), tableName)) {
+ found = true;
+ }
+ }
+ }
+ if (!found) {
+ break;
+ }
+
+ } catch (IOException ex) {
+ if(tries == numRetries - 1) { // no more tries left
+ if (ex instanceof RemoteException) {
+ ex = RemoteExceptionHandler.decodeRemoteException((RemoteException) ex);
+ }
+ throw ex;
+ }
+
+ } finally {
+ if (scannerId != -1L) {
+ try {
+ server.close(scannerId);
+ } catch (Exception ex) {
+ LOG.warn(ex);
+ }
+ }
+ }
+
+ try {
+ Thread.sleep(getPauseTime(tries));
+ } catch (InterruptedException e) {
+ // continue
+ }
+ }
+ // Delete cached information to prevent clients from using old locations
+ HConnectionManager.deleteConnectionInfo(conf, false);
+ LOG.info("Deleted " + Bytes.toString(tableName));
+ }
+
+ /**
+ * Brings a table on-line (enables it).
+ * Synchronous operation.
+ *
+ * @param tableName name of the table
+ * @throws IOException
+ */
+ public void enableTable(final String tableName) throws IOException {
+ enableTable(Bytes.toBytes(tableName));
+ }
+
+ /**
+ * Brings a table on-line (enables it).
+ * Synchronous operation.
+ *
+ * @param tableName name of the table
+ * @throws IOException
+ */
+ public void enableTable(final byte [] tableName) throws IOException {
+ if (this.master == null) {
+ throw new MasterNotRunningException("master has been shut down");
+ }
+ try {
+ this.master.enableTable(tableName);
+ } catch (RemoteException e) {
+ throw RemoteExceptionHandler.decodeRemoteException(e);
+ }
+
+ // Wait until all regions are enabled
+
+ for (int tries = 0;
+ (tries < numRetries) && (!isTableEnabled(tableName));
+ tries++) {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Sleep. Waiting for all regions to be enabled from " +
+ Bytes.toString(tableName));
+ }
+ try {
+ Thread.sleep(getPauseTime(tries));
+ } catch (InterruptedException e) {
+ // continue
+ }
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Wake. Waiting for all regions to be enabled from " +
+ Bytes.toString(tableName));
+ }
+ }
+ if (!isTableEnabled(tableName))
+ throw new IOException("unable to enable table " +
+ Bytes.toString(tableName));
+ LOG.info("Enabled table " + Bytes.toString(tableName));
+ }
+
+ /**
+ * Disables a table (takes it off-line) If it is being served, the master
+ * will tell the servers to stop serving it.
+ * Synchronous operation.
+ *
+ * @param tableName name of table
+ * @throws IOException
+ */
+ public void disableTable(final String tableName) throws IOException {
+ disableTable(Bytes.toBytes(tableName));
+ }
+
+ /**
+ * Disables a table (takes it off-line) If it is being served, the master
+ * will tell the servers to stop serving it.
+ * Synchronous operation.
+ *
+ * @param tableName name of table
+ * @throws IOException
+ */
+ public void disableTable(final byte [] tableName) throws IOException {
+ if (this.master == null) {
+ throw new MasterNotRunningException("master has been shut down");
+ }
+ try {
+ this.master.disableTable(tableName);
+ } catch (RemoteException e) {
+ throw RemoteExceptionHandler.decodeRemoteException(e);
+ }
+
+ // Wait until all regions are disabled
+ for (int tries = 0;
+ (tries < numRetries) && (!isTableDisabled(tableName));
+ tries++) {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Sleep. Waiting for all regions to be disabled from " +
+ Bytes.toString(tableName));
+ }
+ try {
+ Thread.sleep(getPauseTime(tries));
+ } catch (InterruptedException e) {
+ // continue
+ }
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Wake. Waiting for all regions to be disabled from " +
+ Bytes.toString(tableName));
+ }
+ }
+ if (!isTableDisabled(tableName)) {
+ throw new RegionException("Retries exhausted, it took too long to wait"+
+ " for the table " + Bytes.toString(tableName) + " to be disabled.");
+ }
+ LOG.info("Disabled " + Bytes.toString(tableName));
+ }
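+
+ // Typical lifecycle sketch (table names are hypothetical; a table is
+ // normally disabled before it is deleted or altered):
+ //
+ //   admin.disableTable("mytable");  // take all of its regions offline
+ //   admin.deleteTable("mytable");   // then remove the table
+ //   admin.enableTable("other");     // bring another table back online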
+
+ /**
+ * @param tableName name of table to check
+ * @return true if table is on-line
+ * @throws IOException
+ */
+ public boolean isTableEnabled(String tableName) throws IOException {
+ return isTableEnabled(Bytes.toBytes(tableName));
+ }
+ /**
+ * @param tableName name of table to check
+ * @return true if table is on-line
+ * @throws IOException
+ */
+ public boolean isTableEnabled(byte[] tableName) throws IOException {
+ return connection.isTableEnabled(tableName);
+ }
+
+ /**
+ * @param tableName name of table to check
+ * @return true if table is off-line
+ * @throws IOException
+ */
+ public boolean isTableDisabled(byte[] tableName) throws IOException {
+ return connection.isTableDisabled(tableName);
+ }
+
+ /**
+ * Add a column to an existing table.
+ * Asynchronous operation.
+ *
+ * @param tableName name of the table to add column to
+ * @param column column descriptor of column to be added
+ * @throws IOException
+ */
+ public void addColumn(final String tableName, HColumnDescriptor column)
+ throws IOException {
+ addColumn(Bytes.toBytes(tableName), column);
+ }
+
+ /**
+ * Add a column to an existing table.
+ * Asynchronous operation.
+ *
+ * @param tableName name of the table to add column to
+ * @param column column descriptor of column to be added
+ * @throws IOException
+ */
+ public void addColumn(final byte [] tableName, HColumnDescriptor column)
+ throws IOException {
+ if (this.master == null) {
+ throw new MasterNotRunningException("master has been shut down");
+ }
+ HTableDescriptor.isLegalTableName(tableName);
+ try {
+ this.master.addColumn(tableName, column);
+ } catch (RemoteException e) {
+ throw RemoteExceptionHandler.decodeRemoteException(e);
+ }
+ }
+
+ /**
+ * Delete a column from a table.
+ * Asynchronous operation.
+ *
+ * @param tableName name of table
+ * @param columnName name of column to be deleted
+ * @throws IOException
+ */
+ public void deleteColumn(final String tableName, final String columnName)
+ throws IOException {
+ deleteColumn(Bytes.toBytes(tableName), Bytes.toBytes(columnName));
+ }
+
+ /**
+ * Delete a column from a table.
+ * Asynchronous operation.
+ *
+ * @param tableName name of table
+ * @param columnName name of column to be deleted
+ * @throws IOException
+ */
+ public void deleteColumn(final byte [] tableName, final byte [] columnName)
+ throws IOException {
+ if (this.master == null) {
+ throw new MasterNotRunningException("master has been shut down");
+ }
+ HTableDescriptor.isLegalTableName(tableName);
+ try {
+ this.master.deleteColumn(tableName, columnName);
+ } catch (RemoteException e) {
+ throw RemoteExceptionHandler.decodeRemoteException(e);
+ }
+ }
+
+ /**
+ * Modify an existing column family on a table.
+ * Asynchronous operation.
+ *
+ * @param tableName name of table
+ * @param columnName name of column to be modified
+ * @param descriptor new column descriptor to use
+ * @throws IOException
+ */
+ public void modifyColumn(final String tableName, final String columnName,
+ HColumnDescriptor descriptor)
+ throws IOException {
+ modifyColumn(Bytes.toBytes(tableName), Bytes.toBytes(columnName),
+ descriptor);
+ }
+
+ /**
+ * Modify an existing column family on a table.
+ * Asynchronous operation.
+ *
+ * @param tableName name of table
+ * @param columnName name of column to be modified
+ * @param descriptor new column descriptor to use
+ * @throws IOException
+ */
+ public void modifyColumn(final byte [] tableName, final byte [] columnName,
+ HColumnDescriptor descriptor)
+ throws IOException {
+ if (this.master == null) {
+ throw new MasterNotRunningException("master has been shut down");
+ }
+ HTableDescriptor.isLegalTableName(tableName);
+ try {
+ this.master.modifyColumn(tableName, columnName, descriptor);
+ } catch (RemoteException e) {
+ throw RemoteExceptionHandler.decodeRemoteException(e);
+ }
+ }
+
+ /**
+ * Close a region. For expert-admins.
+ * Asynchronous operation.
+ *
+ * @param regionname
+ * @param args Optional server name. Otherwise, we'll send close to the
+ * server registered in .META.
+ * @throws IOException
+ */
+ public void closeRegion(final String regionname, final Object... args)
+ throws IOException {
+ closeRegion(Bytes.toBytes(regionname), args);
+ }
+
+ /**
+ * Close a region. For expert-admins.
+ * Asynchronous operation.
+ *
+ * @param regionname
+ * @param args Optional server name. Otherwise, we'll send close to the
+ * server registered in .META.
+ * @throws IOException
+ */
+ public void closeRegion(final byte [] regionname, final Object... args)
+ throws IOException {
+ // Be careful. Must match the handler over in HMaster at MODIFY_CLOSE_REGION
+ int len = (args == null)? 0: args.length;
+ int xtraArgsCount = 1;
+ Object [] newargs = new Object[len + xtraArgsCount];
+ newargs[0] = regionname;
+ if(args != null) {
+ for (int i = 0; i < len; i++) {
+ newargs[i + xtraArgsCount] = args[i];
+ }
+ }
+ modifyTable(HConstants.META_TABLE_NAME, HConstants.MODIFY_CLOSE_REGION,
+ newargs);
+ }
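+
+ // Illustrative call; the region name and server name below are
+ // hypothetical. The optional second argument names the server that should
+ // receive the close, otherwise the server registered in .META. is used:
+ //
+ //   admin.closeRegion("mytable,,1234567890", "host123:60020");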
+
+ /**
+ * Flush a table or an individual region.
+ * Asynchronous operation.
+ *
+ * @param tableNameOrRegionName
+ * @throws IOException
+ */
+ public void flush(final String tableNameOrRegionName) throws IOException {
+ flush(Bytes.toBytes(tableNameOrRegionName));
+ }
+
+ /**
+ * Flush a table or an individual region.
+ * Asynchronous operation.
+ *
+ * @param tableNameOrRegionName
+ * @throws IOException
+ */
+ public void flush(final byte [] tableNameOrRegionName) throws IOException {
+ modifyTable(tableNameOrRegionName, HConstants.MODIFY_TABLE_FLUSH);
+ }
+
+ /**
+ * Compact a table or an individual region.
+ * Asynchronous operation.
+ *
+ * @param tableNameOrRegionName
+ * @throws IOException
+ */
+ public void compact(final String tableNameOrRegionName) throws IOException {
+ compact(Bytes.toBytes(tableNameOrRegionName));
+ }
+
+ /**
+ * Compact a table or an individual region.
+ * Asynchronous operation.
+ *
+ * @param tableNameOrRegionName
+ * @throws IOException
+ */
+ public void compact(final byte [] tableNameOrRegionName) throws IOException {
+ modifyTable(tableNameOrRegionName, HConstants.MODIFY_TABLE_COMPACT);
+ }
+
+ /**
+ * Major compact a table or an individual region.
+ * Asynchronous operation.
+ *
+ * @param tableNameOrRegionName
+ * @throws IOException
+ */
+ public void majorCompact(final String tableNameOrRegionName)
+ throws IOException {
+ majorCompact(Bytes.toBytes(tableNameOrRegionName));
+ }
+
+ /**
+ * Major compact a table or an individual region.
+ * Asynchronous operation.
+ *
+ * @param tableNameOrRegionName
+ * @throws IOException
+ */
+ public void majorCompact(final byte [] tableNameOrRegionName)
+ throws IOException {
+ modifyTable(tableNameOrRegionName, HConstants.MODIFY_TABLE_MAJOR_COMPACT);
+ }
+
+ /**
+ * Split a table or an individual region.
+ * Asynchronous operation.
+ *
+ * @param tableNameOrRegionName
+ * @throws IOException
+ */
+ public void split(final String tableNameOrRegionName) throws IOException {
+ split(Bytes.toBytes(tableNameOrRegionName));
+ }
+
+ /**
+ * Split a table or an individual region.
+ * Asynchronous operation.
+ *
+ * @param tableNameOrRegionName
+ * @throws IOException
+ */
+ public void split(final byte [] tableNameOrRegionName) throws IOException {
+ modifyTable(tableNameOrRegionName, HConstants.MODIFY_TABLE_SPLIT);
+ }
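+
+ // Each of flush/compact/majorCompact/split accepts either a table name or
+ // a single region name (the names below are hypothetical):
+ //
+ //   admin.flush("mytable");              // flush every region of the table
+ //   admin.compact("mytable");            // request a compaction
+ //   admin.majorCompact("mytable");       // request a major compaction
+ //   admin.split("mytable,,1234567890");  // split one specific region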
+
+ /*
+ * Call modifyTable using passed tableName or region name String. If no
+ * such table, presume we have been passed a region name.
+ * @param tableNameOrRegionName
+ * @param op
+ * @throws IOException
+ */
+ private void modifyTable(final byte [] tableNameOrRegionName, final int op)
+ throws IOException {
+ if (tableNameOrRegionName == null) {
+ throw new IllegalArgumentException("Pass a table name or region name");
+ }
+ byte [] tableName = tableExists(tableNameOrRegionName)?
+ tableNameOrRegionName: null;
+ byte [] regionName = tableName == null? tableNameOrRegionName: null;
+ Object [] args = regionName == null? null: new byte [][] {regionName};
+ modifyTable(tableName == null? null: tableName, op, args);
+ }
+
+ /**
+ * Modify an existing table, more IRB friendly version.
+ * Asynchronous operation.
+ *
+ * @param tableName name of table.
+ * @param htd modified description of the table
+ * @throws IOException
+ */
+ public void modifyTable(final byte [] tableName, HTableDescriptor htd)
+ throws IOException {
+ modifyTable(tableName, HConstants.MODIFY_TABLE_SET_HTD, htd);
+ }
+
+ /**
+ * Modify an existing table.
+ * Asynchronous operation.
+ *
+ * @param tableName name of table. May be null if we are operating on a
+ * region.
+ * @param op table modification operation
+ * @param args operation specific arguments
+ * @throws IOException
+ */
+ public void modifyTable(final byte [] tableName, int op, Object... args)
+ throws IOException {
+ if (this.master == null) {
+ throw new MasterNotRunningException("master has been shut down");
+ }
+ // Let it pass if it's a catalog table. Used by admins.
+ if (tableName != null && !MetaUtils.isMetaTableName(tableName)) {
+ // This will throw exception
+ HTableDescriptor.isLegalTableName(tableName);
+ }
+ Writable[] arr = null;
+ try {
+ switch (op) {
+ case HConstants.MODIFY_TABLE_SET_HTD:
+ if (args == null || args.length < 1 ||
+ !(args[0] instanceof HTableDescriptor)) {
+ throw new IllegalArgumentException("SET_HTD requires a HTableDescriptor");
+ }
+ arr = new Writable[1];
+ arr[0] = (HTableDescriptor)args[0];
+ this.master.modifyTable(tableName, op, arr);
+ break;
+
+ case HConstants.MODIFY_TABLE_COMPACT:
+ case HConstants.MODIFY_TABLE_SPLIT:
+ case HConstants.MODIFY_TABLE_MAJOR_COMPACT:
+ case HConstants.MODIFY_TABLE_FLUSH:
+ if (args != null && args.length > 0) {
+ arr = new Writable[1];
+ if (args[0] instanceof byte[]) {
+ arr[0] = new ImmutableBytesWritable((byte[])args[0]);
+ } else if (args[0] instanceof ImmutableBytesWritable) {
+ arr[0] = (ImmutableBytesWritable)args[0];
+ } else if (args[0] instanceof String) {
+ arr[0] = new ImmutableBytesWritable(Bytes.toBytes((String)args[0]));
+ } else {
+ throw new IllegalArgumentException("Requires byte[], String, or" +
+ "ImmutableBytesWritable");
+ }
+ }
+ this.master.modifyTable(tableName, op, arr);
+ break;
+
+ case HConstants.MODIFY_CLOSE_REGION:
+ if (args == null || args.length < 1) {
+ throw new IllegalArgumentException("Requires at least a region name");
+ }
+ arr = new Writable[args.length];
+ for (int i = 0; i < args.length; i++) {
+ if (args[i] instanceof byte[]) {
+ arr[i] = new ImmutableBytesWritable((byte[])args[i]);
+ } else if (args[i] instanceof ImmutableBytesWritable) {
+ arr[i] = (ImmutableBytesWritable)args[i];
+ } else if (args[i] instanceof String) {
+ arr[i] = new ImmutableBytesWritable(Bytes.toBytes((String)args[i]));
+ } else if (args[i] instanceof Boolean) {
+ arr[i] = new BooleanWritable(((Boolean)args[i]).booleanValue());
+ } else {
+ throw new IllegalArgumentException("Requires byte [] or " +
+ "ImmutableBytesWritable, not " + args[i]);
+ }
+ }
+ this.master.modifyTable(tableName, op, arr);
+ break;
+
+ default:
+ throw new IOException("unknown modifyTable op " + op);
+ }
+ } catch (RemoteException e) {
+ throw RemoteExceptionHandler.decodeRemoteException(e);
+ }
+ }
+
+ /**
+ * Shuts down the HBase instance
+ * @throws IOException
+ */
+ public synchronized void shutdown() throws IOException {
+ if (this.master == null) {
+ throw new MasterNotRunningException("master has been shut down");
+ }
+ try {
+ this.master.shutdown();
+ } catch (RemoteException e) {
+ throw RemoteExceptionHandler.decodeRemoteException(e);
+ } finally {
+ this.master = null;
+ }
+ }
+
+ private HRegionLocation getFirstMetaServerForTable(final byte [] tableName)
+ throws IOException {
+ return connection.locateRegion(HConstants.META_TABLE_NAME,
+ HRegionInfo.createRegionName(tableName, null, HConstants.NINES));
+ }
+
+ /**
+ * Check to see if HBase is running. Throw an exception if not.
+ *
+ * @param conf
+ * @throws MasterNotRunningException
+ */
+ public static void checkHBaseAvailable(HBaseConfiguration conf)
+ throws MasterNotRunningException {
+ HBaseConfiguration copyOfConf = new HBaseConfiguration(conf);
+ copyOfConf.setInt("hbase.client.retries.number", 1);
+ new HBaseAdmin(copyOfConf);
+ }
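+
+ // A quick availability probe, assuming only that the passed configuration
+ // points at the cluster:
+ //
+ //   try {
+ //     HBaseAdmin.checkHBaseAvailable(conf);
+ //   } catch (MasterNotRunningException e) {
+ //     // cluster unreachable or master not running
+ //   }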
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/client/HConnection.java b/src/java/org/apache/hadoop/hbase/client/HConnection.java
new file mode 100644
index 0000000..421afe0
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/HConnection.java
@@ -0,0 +1,184 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.io.IOException;
+import java.util.ArrayList;
+
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.MasterNotRunningException;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.ipc.HMasterInterface;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper;
+
+/**
+ * Cluster connection.
+ * {@link HConnectionManager} manages instances of this class.
+ */
+public interface HConnection {
+ /**
+ * Retrieve ZooKeeperWrapper used by the connection.
+ * @return ZooKeeperWrapper handle being used by the connection.
+ * @throws IOException
+ */
+ public ZooKeeperWrapper getZooKeeperWrapper() throws IOException;
+
+ /**
+ * @return proxy connection to master server for this instance
+ * @throws MasterNotRunningException
+ */
+ public HMasterInterface getMaster() throws MasterNotRunningException;
+
+ /** @return - true if the master server is running */
+ public boolean isMasterRunning();
+
+ /**
+ * Checks if <code>tableName</code> exists.
+ * @param tableName Table to check.
+ * @return True if table exists already.
+ * @throws MasterNotRunningException
+ */
+ public boolean tableExists(final byte [] tableName)
+ throws MasterNotRunningException;
+
+ /**
+ * A table can report both isTableEnabled == false and isTableDisabled ==
+ * false. This happens while a table with many regions is being
+ * transitioned and not all of its regions have changed state yet.
+ * @param tableName
+ * @return true if the table is enabled, false otherwise
+ * @throws IOException
+ */
+ public boolean isTableEnabled(byte[] tableName) throws IOException;
+
+ /**
+ * @param tableName
+ * @return true if the table is disabled, false otherwise
+ * @throws IOException
+ */
+ public boolean isTableDisabled(byte[] tableName) throws IOException;
+
+ /**
+ * List all the userspace tables. In other words, scan the META table.
+ *
+ * If we wanted this to be really fast, we could implement a special
+ * catalog table that just contains table names and their descriptors.
+ * Right now, it only exists as part of the META table's region info.
+ *
+ * @return - returns an array of HTableDescriptors
+ * @throws IOException
+ */
+ public HTableDescriptor[] listTables() throws IOException;
+
+ /**
+ * @param tableName
+ * @return table metadata
+ * @throws IOException
+ */
+ public HTableDescriptor getHTableDescriptor(byte[] tableName)
+ throws IOException;
+
+ /**
+ * Find the location of the region of <i>tableName</i> that <i>row</i>
+ * lives in.
+ * @param tableName name of the table <i>row</i> is in
+ * @param row row key you're trying to find the region of
+ * @return HRegionLocation that describes where to find the region in
+ * question
+ * @throws IOException
+ */
+ public HRegionLocation locateRegion(final byte [] tableName,
+ final byte [] row)
+ throws IOException;
+
+ /**
+ * Find the location of the region of <i>tableName</i> that <i>row</i>
+ * lives in, ignoring any value that might be in the cache.
+ * @param tableName name of the table <i>row</i> is in
+ * @param row row key you're trying to find the region of
+ * @return HRegionLocation that describes where to find the region in
+ * question
+ * @throws IOException
+ */
+ public HRegionLocation relocateRegion(final byte [] tableName,
+ final byte [] row)
+ throws IOException;
+
+ /**
+ * Establishes a connection to the region server at the specified address.
+ * @param regionServer - the server to connect to
+ * @return proxy for HRegionServer
+ * @throws IOException
+ */
+ public HRegionInterface getHRegionConnection(HServerAddress regionServer)
+ throws IOException;
+
+ /**
+ * Find region location hosting passed row
+ * @param tableName
+ * @param row Row to find.
+ * @param reload If true, do not use the cache and look the region up
+ * again; otherwise use the cached location if one is available.
+ * @return Location of row.
+ * @throws IOException
+ */
+ HRegionLocation getRegionLocation(byte [] tableName, byte [] row,
+ boolean reload)
+ throws IOException;
+
+ /**
+ * Pass in a ServerCallable with your particular bit of logic defined and
+ * this method will manage the process of doing retries with timed waits
+ * and refinds of missing regions.
+ *
+ * @param <T> the type of the return value
+ * @param callable
+ * @return an object of type T
+ * @throws IOException
+ * @throws RuntimeException
+ */
+ public <T> T getRegionServerWithRetries(ServerCallable<T> callable)
+ throws IOException, RuntimeException;
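+
+ // A sketch of the callable pattern; the identifiers are illustrative and
+ // it is assumed, as in the batch-update path of HConnectionManager, that
+ // ServerCallable exposes the located server and region to call():
+ //
+ //   HRegionInfo info = connection.getRegionServerWithRetries(
+ //       new ServerCallable<HRegionInfo>(connection, tableName, row) {
+ //         public HRegionInfo call() throws IOException {
+ //           return server.getRegionInfo(
+ //               location.getRegionInfo().getRegionName());
+ //         }
+ //       });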
+
+ /**
+ * Pass in a ServerCallable with your particular bit of logic defined and
+ * this method will pass it to the defined region server.
+ * @param <T> the type of the return value
+ * @param callable
+ * @return an object of type T
+ * @throws IOException
+ * @throws RuntimeException
+ */
+ public <T> T getRegionServerForWithoutRetries(ServerCallable<T> callable)
+ throws IOException, RuntimeException;
+
+ /**
+ * Process a batch of rows. Currently it only works for updates until
+ * HBASE-880 is available. Does the retries.
+ * @param list A batch of rows to process
+ * @param tableName The name of the table
+ * @throws IOException
+ */
+ public void processBatchOfRows(ArrayList<BatchUpdate> list, byte[] tableName)
+ throws IOException;
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/client/HConnectionManager.java b/src/java/org/apache/hadoop/hbase/client/HConnectionManager.java
new file mode 100644
index 0000000..8978755
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/HConnectionManager.java
@@ -0,0 +1,1062 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.io.IOException;
+import java.lang.reflect.UndeclaredThrowableException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeSet;
+import java.util.WeakHashMap;
+import java.util.concurrent.ConcurrentHashMap;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.DoNotRetryIOException;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HStoreKey;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.MasterNotRunningException;
+import org.apache.hadoop.hbase.RemoteExceptionHandler;
+import org.apache.hadoop.hbase.TableNotFoundException;
+import org.apache.hadoop.hbase.client.MetaScanner.MetaScannerVisitor;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.ipc.HBaseRPC;
+import org.apache.hadoop.hbase.ipc.HBaseRPCProtocolVersion;
+import org.apache.hadoop.hbase.ipc.HMasterInterface;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.MetaUtils;
+import org.apache.hadoop.hbase.util.SoftValueSortedMap;
+import org.apache.hadoop.hbase.util.Writables;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper;
+import org.apache.hadoop.ipc.RemoteException;
+import org.apache.zookeeper.WatchedEvent;
+import org.apache.zookeeper.Watcher;
+import org.apache.zookeeper.Watcher.Event.KeeperState;
+
+/**
+ * A non-instantiable class that manages connections to multiple tables in
+ * multiple HBase instances.
+ *
+ * Used by {@link HTable} and {@link HBaseAdmin}
+ */
+public class HConnectionManager implements HConstants {
+
+ /*
+ * Not instantiable.
+ */
+ protected HConnectionManager() {
+ super();
+ }
+
+ // A Map of master HBaseConfiguration -> connection information for that
+ // instance. Note that although the Map is synchronized, the objects it
+ // contains are mutable and hence require synchronized access to them
+ private static
+ final Map<HBaseConfiguration, TableServers> HBASE_INSTANCES =
+ new WeakHashMap<HBaseConfiguration, TableServers>();
+
+ /**
+ * Get the connection object for the instance specified by the configuration
+ * If no current connection exists, create a new connection for that instance
+ * @param conf
+ * @return HConnection object for the instance specified by the configuration
+ */
+ public static HConnection getConnection(HBaseConfiguration conf) {
+ TableServers connection;
+ synchronized (HBASE_INSTANCES) {
+ connection = HBASE_INSTANCES.get(conf);
+ if (connection == null) {
+ connection = new TableServers(conf);
+ HBASE_INSTANCES.put(conf, connection);
+ }
+ }
+ return connection;
+ }
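+
+ // Connections are cached per HBaseConfiguration instance, so repeated
+ // calls with the same configuration object return the same HConnection:
+ //
+ //   HBaseConfiguration conf = new HBaseConfiguration();
+ //   HConnection conn = HConnectionManager.getConnection(conf);
+ //   // ... later, drop the cached connection (optionally stopping proxies):
+ //   HConnectionManager.deleteConnectionInfo(conf, false);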
+
+ /**
+ * Delete connection information for the instance specified by configuration
+ * @param conf
+ * @param stopProxy
+ */
+ public static void deleteConnectionInfo(HBaseConfiguration conf,
+ boolean stopProxy) {
+ synchronized (HBASE_INSTANCES) {
+ TableServers t = HBASE_INSTANCES.remove(conf);
+ if (t != null) {
+ t.close(stopProxy);
+ }
+ }
+ }
+
+ /* Encapsulates finding the servers for an HBase instance */
+ private static class TableServers implements ServerConnection, HConstants, Watcher {
+ private static final Log LOG = LogFactory.getLog(TableServers.class);
+ private final Class<? extends HRegionInterface> serverInterfaceClass;
+ private final long pause;
+ private final int numRetries;
+ private final int maxRPCAttempts;
+ private final long rpcTimeout;
+
+ private final Object masterLock = new Object();
+ private volatile boolean closed;
+ private volatile HMasterInterface master;
+ private volatile boolean masterChecked;
+
+ private final Object rootRegionLock = new Object();
+ private final Object metaRegionLock = new Object();
+ private final Object userRegionLock = new Object();
+
+ private volatile HBaseConfiguration conf;
+
+ // Known region HServerAddress.toString() -> HRegionInterface
+ private final Map<String, HRegionInterface> servers =
+ new ConcurrentHashMap<String, HRegionInterface>();
+
+ // Used by master and region servers during safe mode only
+ private volatile HRegionLocation rootRegionLocation;
+
+ private final Map<Integer, SoftValueSortedMap<byte [], HRegionLocation>>
+ cachedRegionLocations =
+ new HashMap<Integer, SoftValueSortedMap<byte [], HRegionLocation>>();
+
+ private ZooKeeperWrapper zooKeeperWrapper;
+
+ /**
+ * constructor
+ * @param conf Configuration object
+ */
+ @SuppressWarnings("unchecked")
+ public TableServers(HBaseConfiguration conf) {
+ this.conf = conf;
+
+ String serverClassName =
+ conf.get(REGION_SERVER_CLASS, DEFAULT_REGION_SERVER_CLASS);
+
+ this.closed = false;
+
+ try {
+ this.serverInterfaceClass =
+ (Class<? extends HRegionInterface>) Class.forName(serverClassName);
+
+ } catch (ClassNotFoundException e) {
+ throw new UnsupportedOperationException(
+ "Unable to find region server interface " + serverClassName, e);
+ }
+
+ this.pause = conf.getLong("hbase.client.pause", 2 * 1000);
+ this.numRetries = conf.getInt("hbase.client.retries.number", 10);
+ this.maxRPCAttempts = conf.getInt("hbase.client.rpc.maxattempts", 1);
+ this.rpcTimeout = conf.getLong("hbase.regionserver.lease.period", 60000);
+
+ this.master = null;
+ this.masterChecked = false;
+ }
+
+ private long getPauseTime(int tries) {
+ int ntries = tries;
+ if (ntries >= HConstants.RETRY_BACKOFF.length)
+ ntries = HConstants.RETRY_BACKOFF.length - 1;
+ return this.pause * HConstants.RETRY_BACKOFF[ntries];
+ }
+
+ /**
+ * Called by ZooKeeper when an event occurs on our connection. We use this to
+ * detect our session expiring. When our session expires, we have lost our
+ * connection to ZooKeeper. Our handle is dead, and we need to recreate it.
+ *
+ * See http://hadoop.apache.org/zookeeper/docs/current/zookeeperProgrammers.html#ch_zkSessions
+ * for more information.
+ *
+ * @param event WatchedEvent witnessed by ZooKeeper.
+ */
+ public void process(WatchedEvent event) {
+ KeeperState state = event.getState();
+ LOG.debug("Got ZooKeeper event, state: " + state + ", type: " +
+ event.getType() + ", path: " + event.getPath());
+ if (state == KeeperState.Expired) {
+ resetZooKeeper();
+ }
+ }
+
+ private synchronized void resetZooKeeper() {
+ zooKeeperWrapper = null;
+ }
+
+ // Used by master and region servers during safe mode only
+ public void unsetRootRegionLocation() {
+ this.rootRegionLocation = null;
+ }
+
+ // Used by master and region servers during safe mode only
+ public void setRootRegionLocation(HRegionLocation rootRegion) {
+ if (rootRegion == null) {
+ throw new IllegalArgumentException(
+ "Cannot set root region location to null.");
+ }
+ this.rootRegionLocation = rootRegion;
+ }
+
+ public HMasterInterface getMaster() throws MasterNotRunningException {
+ ZooKeeperWrapper zk = null;
+ try {
+ zk = getZooKeeperWrapper();
+ } catch (IOException e) {
+ throw new MasterNotRunningException(e);
+ }
+
+ HServerAddress masterLocation = null;
+ synchronized (this.masterLock) {
+ for (int tries = 0;
+ !this.closed &&
+ !this.masterChecked && this.master == null &&
+ tries < numRetries;
+ tries++) {
+
+ try {
+ masterLocation = zk.readMasterAddressOrThrow();
+
+ HMasterInterface tryMaster = (HMasterInterface)HBaseRPC.getProxy(
+ HMasterInterface.class, HBaseRPCProtocolVersion.versionID,
+ masterLocation.getInetSocketAddress(), this.conf);
+
+ if (tryMaster.isMasterRunning()) {
+ this.master = tryMaster;
+ break;
+ }
+
+ } catch (IOException e) {
+ if (tries == numRetries - 1) {
+ // This was our last chance - don't bother sleeping
+ break;
+ }
+ LOG.info("getMaster attempt " + tries + " of " + this.numRetries +
+ " failed; retrying after sleep of " +
+ getPauseTime(tries), e);
+ }
+
+ // Cannot connect to master or it is not running. Sleep & retry
+ try {
+ Thread.sleep(getPauseTime(tries));
+ } catch (InterruptedException e) {
+ // continue
+ }
+ }
+ this.masterChecked = true;
+ }
+ if (this.master == null) {
+ if (masterLocation == null) {
+ throw new MasterNotRunningException();
+ }
+ throw new MasterNotRunningException(masterLocation.toString());
+ }
+ return this.master;
+ }
+
+ public boolean isMasterRunning() {
+ if (this.master == null) {
+ try {
+ getMaster();
+
+ } catch (MasterNotRunningException e) {
+ return false;
+ }
+ }
+ return true;
+ }
+
+ public boolean tableExists(final byte [] tableName)
+ throws MasterNotRunningException {
+ getMaster();
+ if (tableName == null) {
+ throw new IllegalArgumentException("Table name cannot be null");
+ }
+ if (isMetaTableName(tableName)) {
+ return true;
+ }
+ boolean exists = false;
+ try {
+ HTableDescriptor[] tables = listTables();
+ for (int i = 0; i < tables.length; i++) {
+ if (Bytes.equals(tables[i].getName(), tableName)) {
+ exists = true;
+ }
+ }
+ } catch (IOException e) {
+ LOG.warn("Testing for table existence threw exception", e);
+ }
+ return exists;
+ }
+
+ /*
+ * @param n
+ * @return True if the passed tablename <code>n</code> is equal to the name
+ * of a catalog table.
+ */
+ private static boolean isMetaTableName(final byte [] n) {
+ return MetaUtils.isMetaTableName(n);
+ }
+
+ public HRegionLocation getRegionLocation(final byte [] name,
+ final byte [] row, boolean reload)
+ throws IOException {
+ getMaster();
+ return reload? relocateRegion(name, row): locateRegion(name, row);
+ }
+
+ public HTableDescriptor[] listTables() throws IOException {
+ getMaster();
+ final TreeSet<HTableDescriptor> uniqueTables =
+ new TreeSet<HTableDescriptor>();
+
+ MetaScannerVisitor visitor = new MetaScannerVisitor() {
+
+ public boolean processRow(RowResult rowResult) throws IOException {
+ HRegionInfo info = Writables.getHRegionInfo(
+ rowResult.get(COL_REGIONINFO));
+
+ // Only examine the rows where the startKey is zero length
+ if (info.getStartKey().length == 0) {
+ uniqueTables.add(info.getTableDesc());
+ }
+ return true;
+ }
+
+ };
+ MetaScanner.metaScan(conf, visitor);
+
+ return uniqueTables.toArray(new HTableDescriptor[uniqueTables.size()]);
+ }
+
+ public boolean isTableEnabled(byte[] tableName) throws IOException {
+ return testTableOnlineState(tableName, true);
+ }
+
+ public boolean isTableDisabled(byte[] tableName) throws IOException {
+ return testTableOnlineState(tableName, false);
+ }
+
+ /*
+ * If online == true
+ * Returns true if all regions are online
+ * Returns false in any other case
+ * If online == false
+ * Returns true if all regions are offline
+ * Returns false in any other case
+ */
+ private boolean testTableOnlineState(byte[] tableName,
+ boolean online) throws IOException {
+ if (!tableExists(tableName)) {
+ throw new TableNotFoundException(Bytes.toString(tableName));
+ }
+ if (Bytes.equals(tableName, HConstants.ROOT_TABLE_NAME)) {
+ // The root region is always enabled
+ return true;
+ }
+
+ int rowsScanned = 0;
+ int rowsOffline = 0;
+ byte[] startKey =
+ HRegionInfo.createRegionName(tableName, null, HConstants.ZEROES);
+ byte[] endKey = null;
+ HRegionInfo currentRegion = null;
+ ScannerCallable s = new ScannerCallable(this,
+ (Bytes.equals(tableName, HConstants.META_TABLE_NAME) ?
+ HConstants.ROOT_TABLE_NAME : HConstants.META_TABLE_NAME),
+ HConstants.COL_REGIONINFO_ARRAY, startKey,
+ HConstants.LATEST_TIMESTAMP, null
+ );
+ try {
+ // Open scanner
+ getRegionServerWithRetries(s);
+ do {
+ HRegionInfo oldRegion = currentRegion;
+ if (oldRegion != null) {
+ startKey = oldRegion.getEndKey();
+ }
+ currentRegion = s.getHRegionInfo();
+ RowResult r = null;
+ RowResult[] rrs = null;
+ while ((rrs = getRegionServerWithRetries(s)) != null) {
+ r = rrs[0];
+ Cell c = r.get(HConstants.COL_REGIONINFO);
+ if (c != null) {
+ byte[] value = c.getValue();
+ if (value != null) {
+ HRegionInfo info = Writables.getHRegionInfoOrNull(value);
+ if (info != null) {
+ if (Bytes.equals(info.getTableDesc().getName(), tableName)) {
+ rowsScanned += 1;
+ rowsOffline += info.isOffline() ? 1 : 0;
+ }
+ }
+ }
+ }
+ }
+ endKey = currentRegion.getEndKey();
+ } while (!(endKey == null || HStoreKey.equalsTwoRowKeys(endKey,
+ HConstants.EMPTY_BYTE_ARRAY)));
+ }
+ finally {
+ s.setClose();
+ }
+ boolean onlineOffline =
+ online ? rowsOffline == 0 : rowsOffline == rowsScanned;
+ return rowsScanned > 0 && onlineOffline;
+
+ }
+
+ private static class HTableDescriptorFinder
+ implements MetaScanner.MetaScannerVisitor {
+ byte[] tableName;
+ HTableDescriptor result;
+ protected HTableDescriptorFinder(byte[] tableName) {
+ this.tableName = tableName;
+ }
+ public boolean processRow(RowResult rowResult) throws IOException {
+ HRegionInfo info = Writables.getHRegionInfo(
+ rowResult.get(HConstants.COL_REGIONINFO));
+ HTableDescriptor desc = info.getTableDesc();
+ if (Bytes.compareTo(desc.getName(), tableName) == 0) {
+ result = desc;
+ return false;
+ }
+ return true;
+ }
+ HTableDescriptor getResult() {
+ return result;
+ }
+ }
+
+ public HTableDescriptor getHTableDescriptor(final byte[] tableName)
+ throws IOException {
+ if (Bytes.equals(tableName, HConstants.ROOT_TABLE_NAME)) {
+ return new UnmodifyableHTableDescriptor(HTableDescriptor.ROOT_TABLEDESC);
+ }
+ if (Bytes.equals(tableName, HConstants.META_TABLE_NAME)) {
+ return new UnmodifyableHTableDescriptor(HTableDescriptor.META_TABLEDESC);
+ }
+ HTableDescriptorFinder finder = new HTableDescriptorFinder(tableName);
+ MetaScanner.metaScan(conf, finder);
+ HTableDescriptor result = finder.getResult();
+ if (result == null) {
+ throw new TableNotFoundException(Bytes.toString(tableName));
+ }
+ return result;
+ }
+
+ public HRegionLocation locateRegion(final byte [] tableName,
+ final byte [] row)
+ throws IOException{
+ getMaster();
+ return locateRegion(tableName, row, true);
+ }
+
+ public HRegionLocation relocateRegion(final byte [] tableName,
+ final byte [] row)
+ throws IOException{
+ getMaster();
+ return locateRegion(tableName, row, false);
+ }
+
+ private HRegionLocation locateRegion(final byte [] tableName,
+ final byte [] row, boolean useCache)
+ throws IOException{
+ if (tableName == null || tableName.length == 0) {
+ throw new IllegalArgumentException(
+ "table name cannot be null or zero length");
+ }
+
+ if (Bytes.equals(tableName, ROOT_TABLE_NAME)) {
+ synchronized (rootRegionLock) {
+ // This block guards against two threads trying to find the root
+ // region at the same time. One will go do the find while the
+ // second waits. The second thread will not do find.
+
+ if (!useCache || rootRegionLocation == null) {
+ return locateRootRegion();
+ }
+ return rootRegionLocation;
+ }
+ } else if (Bytes.equals(tableName, META_TABLE_NAME)) {
+ synchronized (metaRegionLock) {
+ // This block guards against two threads trying to load the meta
+ // region at the same time. The first will load the meta region and
+ // the second will use the value that the first one found.
+ return locateRegionInMeta(ROOT_TABLE_NAME, tableName, row, useCache);
+ }
+ } else {
+ synchronized(userRegionLock){
+ return locateRegionInMeta(META_TABLE_NAME, tableName, row, useCache);
+ }
+ }
+ }
+
+ /*
+ * Search one of the meta tables (-ROOT- or .META.) for the HRegionLocation
+ * info that contains the table and row we're seeking.
+ */
+ private HRegionLocation locateRegionInMeta(final byte [] parentTable,
+ final byte [] tableName, final byte [] row, boolean useCache)
+ throws IOException{
+ HRegionLocation location = null;
+ // If supposed to be using the cache, then check it for a possible hit.
+ // Otherwise, delete any existing cached location so it won't interfere.
+ if (useCache) {
+ location = getCachedLocation(tableName, row);
+ if (location != null) {
+ return location;
+ }
+ } else {
+ deleteCachedLocation(tableName, row);
+ }
+
+ // build the key of the meta region we should be looking for.
+ // the extra 9's on the end are necessary to allow "exact" matches
+ // without knowing the precise region names.
+ byte [] metaKey = HRegionInfo.createRegionName(tableName, row,
+ HConstants.NINES);
+ for (int tries = 0; true; tries++) {
+ if (tries >= numRetries) {
+ throw new NoServerForRegionException("Unable to find region for "
+ + Bytes.toString(row) + " after " + numRetries + " tries.");
+ }
+
+ try {
+ // locate the root or meta region
+ HRegionLocation metaLocation = locateRegion(parentTable, metaKey);
+ HRegionInterface server =
+ getHRegionConnection(metaLocation.getServerAddress());
+
+ // Query the root or meta region for the location of the meta region
+ RowResult regionInfoRow = server.getClosestRowBefore(
+ metaLocation.getRegionInfo().getRegionName(), metaKey,
+ HConstants.COLUMN_FAMILY);
+ if (regionInfoRow == null) {
+ throw new TableNotFoundException(Bytes.toString(tableName));
+ }
+
+ Cell value = regionInfoRow.get(COL_REGIONINFO);
+ if (value == null || value.getValue().length == 0) {
+ throw new IOException("HRegionInfo was null or empty in " +
+ Bytes.toString(parentTable));
+ }
+ // convert the row result into the HRegionLocation we need!
+ HRegionInfo regionInfo = (HRegionInfo) Writables.getWritable(
+ value.getValue(), new HRegionInfo());
+ // possible we got a region of a different table...
+ if (!Bytes.equals(regionInfo.getTableDesc().getName(), tableName)) {
+ throw new TableNotFoundException(
+ "Table '" + Bytes.toString(tableName) + "' was not found.");
+ }
+ if (regionInfo.isOffline()) {
+ throw new RegionOfflineException("region offline: " +
+ regionInfo.getRegionNameAsString());
+ }
+
+ String serverAddress =
+ Writables.cellToString(regionInfoRow.get(COL_SERVER));
+ if (serverAddress.equals("")) {
+ throw new NoServerForRegionException("No server address listed " +
+ "in " + Bytes.toString(parentTable) + " for region " +
+ regionInfo.getRegionNameAsString());
+ }
+
+ // instantiate the location
+ location = new HRegionLocation(regionInfo,
+ new HServerAddress(serverAddress));
+ cacheLocation(tableName, location);
+ return location;
+ } catch (TableNotFoundException e) {
+ // if we got this error, probably means the table just plain doesn't
+ // exist. rethrow the error immediately. this should always be coming
+ // from the HTable constructor.
+ throw e;
+ } catch (IOException e) {
+ if (e instanceof RemoteException) {
+ e = RemoteExceptionHandler.decodeRemoteException(
+ (RemoteException) e);
+ }
+ if (tries < numRetries - 1) {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("locateRegionInMeta attempt " + tries + " of " +
+ this.numRetries + " failed; retrying after sleep of " +
+ getPauseTime(tries), e);
+ }
+ relocateRegion(parentTable, metaKey);
+ } else {
+ throw e;
+ }
+ }
+
+ try{
+ Thread.sleep(getPauseTime(tries));
+ } catch (InterruptedException e){
+ // continue
+ }
+ }
+ }
+
+ /*
+ * Search the cache for a location that fits our table and row key.
+ * Return null if no suitable region is located. TODO: synchronization note
+ *
+ * <p>TODO: This method during writing consumes 15% of CPU doing lookup
+ * into the Soft Reference SortedMap. Improve.
+ *
+ * @param tableName
+ * @param row
+ * @return Null or region location found in cache.
+ */
+ private HRegionLocation getCachedLocation(final byte [] tableName,
+ final byte [] row) {
+ SoftValueSortedMap<byte [], HRegionLocation> tableLocations =
+ getTableLocations(tableName);
+
+ // start to examine the cache. we can only do cache actions
+ // if there's something in the cache for this table.
+ if (tableLocations.isEmpty()) {
+ return null;
+ }
+
+ HRegionLocation rl = tableLocations.get(row);
+ if (rl != null) {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Cache hit for row <" +
+ Bytes.toString(row) +
+ "> in tableName " + Bytes.toString(tableName) +
+ ": location server " + rl.getServerAddress() +
+ ", location region name " +
+ rl.getRegionInfo().getRegionNameAsString());
+ }
+ return rl;
+ }
+
+ // Cut the cache so that we only get the part that could contain
+ // regions that match our key
+ SoftValueSortedMap<byte[], HRegionLocation> matchingRegions =
+ tableLocations.headMap(row);
+
+ // if that portion of the map is empty, then we're done. otherwise,
+ // we need to examine the cached location to verify that it is
+ // a match by end key as well.
+ if (!matchingRegions.isEmpty()) {
+ HRegionLocation possibleRegion =
+ matchingRegions.get(matchingRegions.lastKey());
+
+ // there is a possibility that the reference was garbage collected
+ // in the instant since we checked isEmpty().
+ if (possibleRegion != null) {
+ byte[] endKey = possibleRegion.getRegionInfo().getEndKey();
+
+ // make sure that the end key is greater than the row we're looking
+ // for, otherwise the row actually belongs in the next region, not
+ // this one. the exception case is when the endkey is EMPTY_START_ROW,
+ // signifying that the region we're checking is actually the last
+ // region in the table.
+ if (HStoreKey.equalsTwoRowKeys(endKey, HConstants.EMPTY_END_ROW) ||
+ HStoreKey.getComparator(tableName).compareRows(endKey, row) > 0) {
+ return possibleRegion;
+ }
+ }
+ }
+
+ // Passed all the way through, so we got nothing - complete cache miss
+ return null;
+ }
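+
+ // Worked example with illustrative keys: given cached regions starting at
+ // "" and "m", a lookup for row "q" misses the exact-match check, then
+ // headMap("q") yields the region starting at "m"; that region is returned
+ // only if its end key is empty (last region) or sorts after "q".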
+
+ /*
+ * Delete a cached location, if it satisfies the table name and row
+ * requirements.
+ */
+ private void deleteCachedLocation(final byte [] tableName,
+ final byte [] row) {
+ SoftValueSortedMap<byte [], HRegionLocation> tableLocations =
+ getTableLocations(tableName);
+
+ // start to examine the cache. we can only do cache actions
+ // if there's something in the cache for this table.
+ if (!tableLocations.isEmpty()) {
+ // cut the cache so that we only get the part that could contain
+ // regions that match our key
+ SoftValueSortedMap<byte [], HRegionLocation> matchingRegions =
+ tableLocations.headMap(row);
+
+ // if that portion of the map is empty, then we're done. otherwise,
+ // we need to examine the cached location to verify that it is
+ // a match by end key as well.
+ if (!matchingRegions.isEmpty()) {
+ HRegionLocation possibleRegion =
+ matchingRegions.get(matchingRegions.lastKey());
+ byte [] endKey = possibleRegion.getRegionInfo().getEndKey();
+
+ // by nature of the map, we know that the start key has to be less
+ // than the row we are looking for, otherwise it wouldn't be in the
+ // headMap.
+ if (HStoreKey.getComparator(tableName).compareRows(endKey, row) <= 0) {
+ // delete any matching entry
+ HRegionLocation rl =
+ tableLocations.remove(matchingRegions.lastKey());
+ if (rl != null && LOG.isDebugEnabled()) {
+ LOG.debug("Removed " + rl.getRegionInfo().getRegionNameAsString() +
+ " for tableName=" + Bytes.toString(tableName) + " from cache " +
+ "because of " + Bytes.toString(row));
+ }
+ }
+ }
+ }
+ }
+
+ /*
+ * @param tableName
+ * @return Map of cached locations for passed <code>tableName</code>
+ */
+ private SoftValueSortedMap<byte [], HRegionLocation> getTableLocations(
+ final byte [] tableName) {
+ // find the map of cached locations for this table
+ Integer key = Bytes.mapKey(tableName);
+ SoftValueSortedMap<byte [], HRegionLocation> result = null;
+ synchronized (this.cachedRegionLocations) {
+ result = this.cachedRegionLocations.get(key);
+ // if tableLocations for this table isn't built yet, make one
+ if (result == null) {
+ result = new SoftValueSortedMap<byte [], HRegionLocation>(
+ Bytes.BYTES_COMPARATOR);
+ this.cachedRegionLocations.put(key, result);
+ }
+ }
+ return result;
+ }
+
+ /*
+ * Put a newly discovered HRegionLocation into the cache.
+ */
+ private void cacheLocation(final byte [] tableName,
+ final HRegionLocation location) {
+ byte [] startKey = location.getRegionInfo().getStartKey();
+ SoftValueSortedMap<byte [], HRegionLocation> tableLocations =
+ getTableLocations(tableName);
+ tableLocations.put(startKey, location);
+ }
+
+ public HRegionInterface getHRegionConnection(HServerAddress regionServer)
+ throws IOException {
+ getMaster();
+ HRegionInterface server;
+ synchronized (this.servers) {
+ // See if we already have a connection
+ server = this.servers.get(regionServer.toString());
+ if (server == null) { // Get a connection
+ try {
+ server = (HRegionInterface)HBaseRPC.waitForProxy(
+ serverInterfaceClass, HBaseRPCProtocolVersion.versionID,
+ regionServer.getInetSocketAddress(), this.conf,
+ this.maxRPCAttempts, this.rpcTimeout);
+ } catch (RemoteException e) {
+ throw RemoteExceptionHandler.decodeRemoteException(e);
+ }
+ this.servers.put(regionServer.toString(), server);
+ }
+ }
+ return server;
+ }
+
+ public synchronized ZooKeeperWrapper getZooKeeperWrapper() throws IOException {
+ if (zooKeeperWrapper == null) {
+ zooKeeperWrapper = new ZooKeeperWrapper(conf, this);
+ }
+ return zooKeeperWrapper;
+ }
+
+ /*
+ * Repeatedly try to find the root region by asking the master for where it is
+ * @return HRegionLocation for root region if found
+ * @throws NoServerForRegionException - if the root region can not be
+ * located after retrying
+ * @throws IOException
+ */
+ private HRegionLocation locateRootRegion()
+ throws IOException {
+ getMaster();
+
+ // We lazily instantiate the ZooKeeper object because we don't want to
+ // make the constructor have to throw IOException or handle it itself.
+ ZooKeeperWrapper zk = getZooKeeperWrapper();
+
+ HServerAddress rootRegionAddress = null;
+ for (int tries = 0; tries < numRetries; tries++) {
+ int localTimeouts = 0;
+ // ask the master which server has the root region
+ while (rootRegionAddress == null && localTimeouts < numRetries) {
+ // Don't read root region until we're out of safe mode so we know
+ // that the meta regions have been assigned.
+ boolean outOfSafeMode = zk.checkOutOfSafeMode();
+ if (outOfSafeMode) {
+ rootRegionAddress = zk.readRootRegionLocation();
+ }
+ if (rootRegionAddress == null) {
+ try {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Sleeping " + getPauseTime(tries) +
+ "ms, waiting for root region.");
+ }
+ Thread.sleep(getPauseTime(tries));
+ } catch (InterruptedException iex) {
+ // continue
+ }
+ localTimeouts++;
+ }
+ }
+
+ if (rootRegionAddress == null) {
+ throw new NoServerForRegionException(
+ "Timed out trying to locate root region");
+ }
+
+ // get a connection to the region server
+ HRegionInterface server = getHRegionConnection(rootRegionAddress);
+ try {
+ // if this works, then we're good, and we have an acceptable address,
+ // so we can stop doing retries and return the result.
+ server.getRegionInfo(HRegionInfo.ROOT_REGIONINFO.getRegionName());
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Found ROOT at " + rootRegionAddress);
+ }
+ break;
+ } catch (IOException e) {
+ if (tries == numRetries - 1) {
+ // Don't bother sleeping. We've run out of retries.
+ if (e instanceof RemoteException) {
+ e = RemoteExceptionHandler.decodeRemoteException(
+ (RemoteException) e);
+ }
+ throw e;
+ }
+
+ // Sleep and retry finding root region.
+ try {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Root region location changed. Sleeping.");
+ }
+ Thread.sleep(getPauseTime(tries));
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Wake. Retry finding root region.");
+ }
+ } catch (InterruptedException iex) {
+ // continue
+ }
+ }
+
+ rootRegionAddress = null;
+ }
+
+ // if the address is null by this point, then the retries have failed,
+ // and we're sort of sunk
+ if (rootRegionAddress == null) {
+ throw new NoServerForRegionException(
+ "unable to locate root region server");
+ }
+
+ // return the region location
+ return new HRegionLocation(
+ HRegionInfo.ROOT_REGIONINFO, rootRegionAddress);
+ }
+
+ public <T> T getRegionServerWithRetries(ServerCallable<T> callable)
+ throws IOException, RuntimeException {
+ getMaster();
+ List<Throwable> exceptions = new ArrayList<Throwable>();
+ for(int tries = 0; tries < numRetries; tries++) {
+ try {
+ callable.instantiateServer(tries != 0);
+ return callable.call();
+ } catch (Throwable t) {
+ if (t instanceof UndeclaredThrowableException) {
+ t = t.getCause();
+ }
+ if (t instanceof RemoteException) {
+ t = RemoteExceptionHandler.decodeRemoteException((RemoteException)t);
+ }
+ if (t instanceof DoNotRetryIOException) {
+ throw (DoNotRetryIOException)t;
+ }
+ exceptions.add(t);
+ if (tries == numRetries - 1) {
+ throw new RetriesExhaustedException(callable.getServerName(),
+ callable.getRegionName(), callable.getRow(), tries, exceptions);
+ }
+ }
+ try {
+ Thread.sleep(getPauseTime(tries));
+ } catch (InterruptedException e) {
+ // continue
+ }
+ }
+ return null;
+ }
+
+ public <T> T getRegionServerForWithoutRetries(ServerCallable<T> callable)
+ throws IOException, RuntimeException {
+ getMaster();
+ try {
+ callable.instantiateServer(false);
+ return callable.call();
+ } catch (Throwable t) {
+ if (t instanceof UndeclaredThrowableException) {
+ t = t.getCause();
+ }
+ if (t instanceof RemoteException) {
+ t = RemoteExceptionHandler.decodeRemoteException((RemoteException) t);
+ }
+ if (t instanceof DoNotRetryIOException) {
+ throw (DoNotRetryIOException) t;
+ }
+ }
+ return null;
+ }
+
+ private HRegionLocation
+ getRegionLocationForRowWithRetries(byte[] tableName, byte[] rowKey,
+ boolean reload)
+ throws IOException {
+ boolean reloadFlag = reload;
+ getMaster();
+ List<Throwable> exceptions = new ArrayList<Throwable>();
+ HRegionLocation location = null;
+ int tries = 0;
+ while (tries < numRetries) {
+ try {
+ location = getRegionLocation(tableName, rowKey, reloadFlag);
+ } catch (Throwable t) {
+ exceptions.add(t);
+ }
+ if (location != null) {
+ break;
+ }
+ reloadFlag = true;
+ tries++;
+ try {
+ Thread.sleep(getPauseTime(tries));
+ } catch (InterruptedException e) {
+ // continue
+ }
+ }
+ if (location == null) {
+ throw new RetriesExhaustedException("Some server",
+ HConstants.EMPTY_BYTE_ARRAY, rowKey, tries, exceptions);
+ }
+ return location;
+ }
+
+ public void processBatchOfRows(ArrayList<BatchUpdate> list, byte[] tableName)
+ throws IOException {
+ if (list.isEmpty()) {
+ return;
+ }
+ boolean retryOnlyOne = false;
+ int tries = 0;
+ Collections.sort(list);
+ List<BatchUpdate> tempUpdates = new ArrayList<BatchUpdate>();
+ HRegionLocation location =
+ getRegionLocationForRowWithRetries(tableName, list.get(0).getRow(),
+ false);
+ byte [] currentRegion = location.getRegionInfo().getRegionName();
+ byte [] region = currentRegion;
+ boolean isLastRow = false;
+ for (int i = 0; i < list.size() && tries < numRetries; i++) {
+ BatchUpdate batchUpdate = list.get(i);
+ tempUpdates.add(batchUpdate);
+ isLastRow = (i + 1) == list.size();
+ if (!isLastRow) {
+ location = getRegionLocationForRowWithRetries(tableName,
+ list.get(i+1).getRow(), false);
+ region = location.getRegionInfo().getRegionName();
+ }
+ if (!Bytes.equals(currentRegion, region) || isLastRow || retryOnlyOne) {
+ final BatchUpdate[] updates = tempUpdates.toArray(new BatchUpdate[0]);
+ int index = getRegionServerWithRetries(new ServerCallable<Integer>(
+ this, tableName, batchUpdate.getRow()) {
+ public Integer call() throws IOException {
+ int i = server.batchUpdates(location.getRegionInfo()
+ .getRegionName(), updates);
+ return i;
+ }
+ });
+ if (index != -1) {
+ if (tries == numRetries - 1) {
+ throw new RetriesExhaustedException("Some server",
+ currentRegion, batchUpdate.getRow(),
+ tries, new ArrayList<Throwable>());
+ }
+ long sleepTime = getPauseTime(tries);
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Reloading region " + Bytes.toString(currentRegion) +
+ " location because regionserver didn't accept updates; " +
+ "tries=" + tries +
+ " of max=" + this.numRetries + ", waiting=" + sleepTime + "ms");
+ }
+ try {
+ Thread.sleep(sleepTime);
+ tries++;
+ } catch (InterruptedException e) {
+ // continue
+ }
+ i = i - updates.length + index;
+ retryOnlyOne = true;
+ location = getRegionLocationForRowWithRetries(tableName,
+ list.get(i + 1).getRow(), true);
+ region = location.getRegionInfo().getRegionName();
+ }
+ else {
+ retryOnlyOne = false;
+ }
+ currentRegion = region;
+ tempUpdates.clear();
+ }
+ }
+ }
+
+ void close(boolean stopProxy) {
+ if (master != null) {
+ if (stopProxy) {
+ HBaseRPC.stopProxy(master);
+ }
+ master = null;
+ masterChecked = false;
+ }
+ if (stopProxy) {
+ synchronized (servers) {
+ for (HRegionInterface i: servers.values()) {
+ HBaseRPC.stopProxy(i);
+ }
+ }
+ }
+ }
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/client/HTable.java b/src/java/org/apache/hadoop/hbase/client/HTable.java
new file mode 100644
index 0000000..b71dbc2
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/HTable.java
@@ -0,0 +1,1770 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HStoreKey;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.UnknownScannerException;
+import org.apache.hadoop.hbase.client.MetaScanner.MetaScannerVisitor;
+import org.apache.hadoop.hbase.filter.RowFilterInterface;
+import org.apache.hadoop.hbase.filter.StopRowFilter;
+import org.apache.hadoop.hbase.filter.WhileMatchRowFilter;
+import org.apache.hadoop.hbase.io.BatchOperation;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.io.HbaseMapWritable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.hadoop.hbase.util.Writables;
+
+/**
+ * Used to communicate with a single HBase table
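+ * <p>
+ * Minimal usage sketch (assumes a table named "mytable" with column family
+ * "info:" already exists; BatchUpdate.put(String, byte[]) from the io
+ * package is assumed for setting cell values):
+ * <pre>
+ *   HTable table = new HTable("mytable");
+ *   BatchUpdate bu = new BatchUpdate("row1");
+ *   bu.put("info:name", Bytes.toBytes("some value"));
+ *   table.commit(bu);                 // sent immediately, autoFlush is on
+ *   Cell c = table.get("row1", "info:name");
+ * </pre>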
+ */
+public class HTable {
+ private final HConnection connection;
+ private final byte [] tableName;
+ protected final int scannerTimeout;
+ private volatile HBaseConfiguration configuration;
+ private ArrayList<BatchUpdate> writeBuffer;
+ private long writeBufferSize;
+ private boolean autoFlush;
+ private long currentWriteBufferSize;
+ protected int scannerCaching;
+
+ /**
+ * Creates an object to access an HBase table
+ *
+ * @param tableName name of the table
+ * @throws IOException
+ */
+ public HTable(final String tableName)
+ throws IOException {
+ this(new HBaseConfiguration(), Bytes.toBytes(tableName));
+ }
+
+ /**
+ * Creates an object to access an HBase table
+ *
+ * @param tableName name of the table
+ * @throws IOException
+ */
+ public HTable(final byte [] tableName)
+ throws IOException {
+ this(new HBaseConfiguration(), tableName);
+ }
+
+ /**
+ * Creates an object to access an HBase table
+ *
+ * @param conf configuration object
+ * @param tableName name of the table
+ * @throws IOException
+ */
+ public HTable(HBaseConfiguration conf, final String tableName)
+ throws IOException {
+ this(conf, Bytes.toBytes(tableName));
+ }
+
+ /**
+ * Creates an object to access an HBase table
+ *
+ * @param conf configuration object
+ * @param tableName name of the table
+ * @throws IOException
+ */
+ public HTable(HBaseConfiguration conf, final byte [] tableName)
+ throws IOException {
+ this.connection = HConnectionManager.getConnection(conf);
+ this.tableName = tableName;
+ this.scannerTimeout =
+ conf.getInt("hbase.regionserver.lease.period", 60 * 1000);
+ this.configuration = conf;
+ this.connection.locateRegion(tableName, HConstants.EMPTY_START_ROW);
+ this.writeBuffer = new ArrayList<BatchUpdate>();
+ this.writeBufferSize =
+ this.configuration.getLong("hbase.client.write.buffer", 2097152);
+ this.autoFlush = true;
+ this.currentWriteBufferSize = 0;
+ this.scannerCaching = conf.getInt("hbase.client.scanner.caching", 1);
+ }
+
+ /**
+ * @param tableName name of table to check
+ * @return true if table is on-line
+ * @throws IOException
+ */
+ public static boolean isTableEnabled(String tableName) throws IOException {
+ return isTableEnabled(Bytes.toBytes(tableName));
+ }
+
+ /**
+ * @param tableName name of table to check
+ * @return true if table is on-line
+ * @throws IOException
+ */
+ public static boolean isTableEnabled(byte[] tableName) throws IOException {
+ return isTableEnabled(new HBaseConfiguration(), tableName);
+ }
+
+ /**
+ * @param conf HBaseConfiguration object
+ * @param tableName name of table to check
+ * @return true if table is on-line
+ * @throws IOException
+ */
+ public static boolean isTableEnabled(HBaseConfiguration conf, String tableName)
+ throws IOException {
+ return isTableEnabled(conf, Bytes.toBytes(tableName));
+ }
+
+ /**
+ * @param conf HBaseConfiguration object
+ * @param tableName name of table to check
+ * @return true if table is on-line
+ * @throws IOException
+ */
+ public static boolean isTableEnabled(HBaseConfiguration conf, byte[] tableName)
+ throws IOException {
+ return HConnectionManager.getConnection(conf).isTableEnabled(tableName);
+ }
+
+ /**
+ * Find region location hosting passed row using cached info
+ * @param row Row to find.
+ * @return Location of row.
+ * @throws IOException
+ */
+ public HRegionLocation getRegionLocation(final String row)
+ throws IOException {
+ return connection.getRegionLocation(tableName, Bytes.toBytes(row), false);
+ }
+
+ /**
+ * Find region location hosting passed row using cached info
+ * @param row Row to find.
+ * @return Location of row.
+ * @throws IOException
+ */
+ public HRegionLocation getRegionLocation(final byte [] row)
+ throws IOException {
+ return connection.getRegionLocation(tableName, row, false);
+ }
+
+ /** @return the table name */
+ public byte [] getTableName() {
+ return this.tableName;
+ }
+
+ /**
+ * Used by unit tests and tools to do low-level manipulations. Not for
+ * general use.
+ * @return An HConnection instance.
+ */
+ public HConnection getConnection() {
+ return this.connection;
+ }
+
+ /**
+ * Get the number of rows for caching that will be passed to scanners
+ * @return the number of rows for caching
+ */
+ public int getScannerCaching() {
+ return scannerCaching;
+ }
+
+ /**
+ * Set the number of rows for caching that will be passed to scanners
+ * @param scannerCaching the number of rows for caching
+ */
+ public void setScannerCaching(int scannerCaching) {
+ this.scannerCaching = scannerCaching;
+ }
+
+ /**
+ * @return table metadata
+ * @throws IOException
+ */
+ public HTableDescriptor getTableDescriptor() throws IOException {
+ return new UnmodifyableHTableDescriptor(
+ this.connection.getHTableDescriptor(this.tableName));
+ }
+
+ /**
+ * Gets the starting row key for every region in the currently open table
+ *
+ * @return Array of region starting row keys
+ * @throws IOException
+ */
+ public byte [][] getStartKeys() throws IOException {
+ return getStartEndKeys().getFirst();
+ }
+
+ /**
+ * Gets the ending row key for every region in the currently open table
+ *
+ * @return Array of region ending row keys
+ * @throws IOException
+ */
+ public byte[][] getEndKeys() throws IOException {
+ return getStartEndKeys().getSecond();
+ }
+
+ /**
+ * Gets the starting and ending row keys for every region in the currently open table
+ *
+ * @return Pair of arrays of region starting and ending row keys
+ * @throws IOException
+ */
+ @SuppressWarnings("unchecked")
+ public Pair<byte[][],byte[][]> getStartEndKeys() throws IOException {
+ final List<byte[]> startKeyList = new ArrayList<byte[]>();
+ final List<byte[]> endKeyList = new ArrayList<byte[]>();
+ MetaScannerVisitor visitor = new MetaScannerVisitor() {
+ public boolean processRow(RowResult rowResult) throws IOException {
+ HRegionInfo info = Writables.getHRegionInfo(
+ rowResult.get(HConstants.COL_REGIONINFO));
+ if (Bytes.equals(info.getTableDesc().getName(), getTableName())) {
+ if (!(info.isOffline() || info.isSplit())) {
+ startKeyList.add(info.getStartKey());
+ endKeyList.add(info.getEndKey());
+ }
+ }
+ return true;
+ }
+ };
+ MetaScanner.metaScan(configuration, visitor, this.tableName);
+ return new Pair(startKeyList.toArray(new byte[startKeyList.size()][]),
+ endKeyList.toArray(new byte[endKeyList.size()][]));
+ }
+
+ /**
+ * Get all the regions and their address for this table
+ *
+ * @return A map of HRegionInfo with its server address
+ * @throws IOException
+ */
+ public Map<HRegionInfo, HServerAddress> getRegionsInfo() throws IOException {
+ final Map<HRegionInfo, HServerAddress> regionMap =
+ new TreeMap<HRegionInfo, HServerAddress>();
+
+ MetaScannerVisitor visitor = new MetaScannerVisitor() {
+ public boolean processRow(RowResult rowResult) throws IOException {
+ HRegionInfo info = Writables.getHRegionInfo(
+ rowResult.get(HConstants.COL_REGIONINFO));
+
+ if (!(Bytes.equals(info.getTableDesc().getName(), getTableName()))) {
+ return false;
+ }
+
+ HServerAddress server = new HServerAddress();
+ Cell c = rowResult.get(HConstants.COL_SERVER);
+ if (c != null && c.getValue() != null && c.getValue().length > 0) {
+ String address = Bytes.toString(c.getValue());
+ server = new HServerAddress(address);
+ }
+
+ if (!(info.isOffline() || info.isSplit())) {
+ regionMap.put(new UnmodifyableHRegionInfo(info), server);
+ }
+ return true;
+ }
+
+ };
+ MetaScanner.metaScan(configuration, visitor, tableName);
+ return regionMap;
+ }
+
+ /**
+ * Get a single value for the specified row and column
+ *
+ * @param row row key
+ * @param column column name
+ * @return value for specified row/column
+ * @throws IOException
+ */
+ public Cell get(final String row, final String column)
+ throws IOException {
+ return get(Bytes.toBytes(row), Bytes.toBytes(column));
+ }
+
+ /**
+ * Get the specified number of versions of the specified row and column
+ *
+ * @param row row key
+ * @param column column name
+ * @param numVersions number of versions to retrieve
+ * @return Array of Cells for the specified row/column
+ * @throws IOException
+ */
+ public Cell [] get(final String row, final String column, int numVersions)
+ throws IOException {
+ return get(Bytes.toBytes(row), Bytes.toBytes(column), numVersions);
+ }
+
+ /**
+ * Get a single value for the specified row and column
+ *
+ * @param row row key
+ * @param column column name
+ * @return value for specified row/column
+ * @throws IOException
+ */
+ public Cell get(final byte [] row, final byte [] column)
+ throws IOException {
+ return connection.getRegionServerWithRetries(
+ new ServerCallable<Cell>(connection, tableName, row) {
+ public Cell call() throws IOException {
+ Cell[] result = server.get(location.getRegionInfo().getRegionName(),
+ row, column, -1, -1);
+ return (result == null)? null : result[0];
+ }
+ }
+ );
+ }
+
+ /**
+ * Get the specified number of versions of the specified row and column
+ * @param row row key
+ * @param column column name
+ * @param numVersions number of versions to retrieve
+ * @return Array of Cells.
+ * @throws IOException
+ */
+ public Cell [] get(final byte [] row, final byte [] column,
+ final int numVersions)
+ throws IOException {
+ return connection.getRegionServerWithRetries(
+ new ServerCallable<Cell[]>(connection, tableName, row) {
+ public Cell[] call() throws IOException {
+ return server.get(location.getRegionInfo().getRegionName(), row,
+ column, -1, numVersions);
+ }
+ }
+ );
+ }
+
+ /**
+ * Get the specified number of versions of the specified row and column with
+ * the specified timestamp.
+ *
+ * @param row - row key
+ * @param column - column name
+ * @param timestamp - timestamp
+ * @param numVersions - number of versions to retrieve
+ * @return - array of values that match the above criteria
+ * @throws IOException
+ */
+ public Cell[] get(final String row, final String column,
+ final long timestamp, final int numVersions)
+ throws IOException {
+ return get(Bytes.toBytes(row), Bytes.toBytes(column), timestamp, numVersions);
+ }
+
+ /**
+ * Get the specified number of versions of the specified row and column with
+ * the specified timestamp.
+ *
+ * @param row - row key
+ * @param column - column name
+ * @param timestamp - timestamp
+ * @param numVersions - number of versions to retrieve
+ * @return - array of values that match the above criteria
+ * @throws IOException
+ */
+ public Cell[] get(final byte [] row, final byte [] column,
+ final long timestamp, final int numVersions)
+ throws IOException {
+ Cell[] values = null;
+ values = connection.getRegionServerWithRetries(
+ new ServerCallable<Cell[]>(connection, tableName, row) {
+ public Cell[] call() throws IOException {
+ return server.get(location.getRegionInfo().getRegionName(), row,
+ column, timestamp, numVersions);
+ }
+ }
+ );
+
+ if (values != null) {
+ ArrayList<Cell> cellValues = new ArrayList<Cell>();
+ for (int i = 0 ; i < values.length; i++) {
+ cellValues.add(values[i]);
+ }
+ return cellValues.toArray(new Cell[values.length]);
+ }
+ return null;
+ }
+
+ /**
+ * Get all the data for the specified row at the latest timestamp
+ *
+ * @param row row key
+ * @return RowResult is <code>null</code> if row does not exist.
+ * @throws IOException
+ */
+ public RowResult getRow(final String row) throws IOException {
+ return getRow(Bytes.toBytes(row));
+ }
+
+ /**
+ * Get all the data for the specified row at the latest timestamp
+ *
+ * @param row row key
+ * @return RowResult is <code>null</code> if row does not exist.
+ * @throws IOException
+ */
+ public RowResult getRow(final byte [] row) throws IOException {
+ return getRow(row, HConstants.LATEST_TIMESTAMP);
+ }
+
+ /**
+ * Get more than one version of all columns for the specified row
+ *
+ * @param row row key
+ * @param numVersions number of versions to return
+ * @return RowResult is <code>null</code> if row does not exist.
+ * @throws IOException
+ */
+ public RowResult getRow(final String row, final int numVersions)
+ throws IOException {
+ return getRow(Bytes.toBytes(row), null,
+ HConstants.LATEST_TIMESTAMP, numVersions, null);
+ }
+
+ /**
+ * Get more than one version of all columns for the specified row
+ *
+ * @param row row key
+ * @param numVersions number of versions to return
+ * @return RowResult is <code>null</code> if row does not exist.
+ * @throws IOException
+ */
+ public RowResult getRow(final byte[] row, final int numVersions)
+ throws IOException {
+ return getRow(row, null, HConstants.LATEST_TIMESTAMP, numVersions, null);
+ }
+
+ /**
+ * Get all the data for the specified row at a specified timestamp
+ *
+ * @param row row key
+ * @param ts timestamp
+ * @return RowResult is <code>null</code> if row does not exist.
+ * @throws IOException
+ */
+ public RowResult getRow(final String row, final long ts)
+ throws IOException {
+ return getRow(Bytes.toBytes(row), ts);
+ }
+
+ /**
+ * Get all the data for the specified row at a specified timestamp
+ *
+ * @param row row key
+ * @param ts timestamp
+ * @return RowResult is <code>null</code> if row does not exist.
+ * @throws IOException
+ */
+ public RowResult getRow(final byte [] row, final long ts)
+ throws IOException {
+ return getRow(row,null,ts);
+ }
+
+ public RowResult getRow(final String row, final long ts,
+ final int numVersions) throws IOException {
+ return getRow(Bytes.toBytes(row), null, ts, numVersions, null);
+ }
+
+ /**
+ * Get more than one version of all columns for the specified row
+ * at a specified timestamp
+ *
+ * @param row row key
+ * @param timestamp timestamp
+ * @param numVersions number of versions to return
+ * @return RowResult is <code>null</code> if row does not exist.
+ * @throws IOException
+ */
+ public RowResult getRow(final byte[] row, final long timestamp,
+ final int numVersions) throws IOException {
+ return getRow(row, null, timestamp, numVersions, null);
+ }
+
+ /**
+ * Get selected columns for the specified row at the latest timestamp
+ *
+ * @param row row key
+ * @param columns Array of column names and families you want to retrieve.
+ * @return RowResult is <code>null</code> if row does not exist.
+ * @throws IOException
+ */
+ public RowResult getRow(final String row, final String [] columns)
+ throws IOException {
+ return getRow(Bytes.toBytes(row), Bytes.toByteArrays(columns));
+ }
+
+ /**
+ * Get selected columns for the specified row at the latest timestamp
+ *
+ * @param row row key
+ * @param columns Array of column names and families you want to retrieve.
+ * @return RowResult is <code>null</code> if row does not exist.
+ * @throws IOException
+ */
+ public RowResult getRow(final byte [] row, final byte [][] columns)
+ throws IOException {
+ return getRow(row, columns, HConstants.LATEST_TIMESTAMP);
+ }
+
+ /**
+ * Get more than one version of selected columns for the specified row
+ *
+ * @param row row key
+ * @param columns Array of column names and families you want to retrieve.
+ * @param numVersions number of versions to return
+ * @return RowResult is <code>null</code> if row does not exist.
+ * @throws IOException
+ */
+ public RowResult getRow(final String row, final String[] columns,
+ final int numVersions) throws IOException {
+ return getRow(Bytes.toBytes(row), Bytes.toByteArrays(columns),
+ HConstants.LATEST_TIMESTAMP, numVersions, null);
+ }
+
+ /**
+ * Get more than one version of selected columns for the specified row
+ *
+ * @param row row key
+ * @param columns Array of column names and families you want to retrieve.
+ * @param numVersions number of versions to return
+ * @return RowResult is <code>null</code> if row does not exist.
+ * @throws IOException
+ */
+ public RowResult getRow(final byte[] row, final byte[][] columns,
+ final int numVersions) throws IOException {
+ return getRow(row, columns, HConstants.LATEST_TIMESTAMP, numVersions, null);
+ }
+
+ /**
+ * Get selected columns for the specified row at a specified timestamp
+ *
+ * @param row row key
+ * @param columns Array of column names and families you want to retrieve.
+ * @param ts timestamp
+ * @return RowResult is <code>null</code> if row does not exist.
+ * @throws IOException
+ */
+ public RowResult getRow(final String row, final String [] columns,
+ final long ts)
+ throws IOException {
+ return getRow(Bytes.toBytes(row), Bytes.toByteArrays(columns), ts);
+ }
+
+ /**
+ * Get selected columns for the specified row at a specified timestamp
+ *
+ * @param row row key
+ * @param columns Array of column names and families you want to retrieve.
+ * @param ts timestamp
+ * @return RowResult is <code>null</code> if row does not exist.
+ * @throws IOException
+ */
+ public RowResult getRow(final byte [] row, final byte [][] columns,
+ final long ts)
+ throws IOException {
+ return getRow(row,columns,ts,1,null);
+ }
+
+ public RowResult getRow(final String row, final String[] columns,
+ final long timestamp, final int numVersions, final RowLock rowLock)
+ throws IOException {
+ return getRow(Bytes.toBytes(row), Bytes.toByteArrays(columns), timestamp,
+ numVersions, rowLock);
+ }
+
+
+ /**
+ * Get selected columns for the specified row at a specified timestamp
+ * using existing row lock.
+ *
+ * @param row row key
+ * @param columns Array of column names and families you want to retrieve.
+ * @param ts timestamp
+ * @param numVersions
+ * @param rl row lock
+ * @return RowResult is <code>null</code> if row does not exist.
+ * @throws IOException
+ */
+ public RowResult getRow(final byte [] row, final byte [][] columns,
+ final long ts, final int numVersions, final RowLock rl)
+ throws IOException {
+ return connection.getRegionServerWithRetries(
+ new ServerCallable<RowResult>(connection, tableName, row) {
+ public RowResult call() throws IOException {
+ long lockId = -1L;
+ if(rl != null) {
+ lockId = rl.getLockId();
+ }
+ return server.getRow(location.getRegionInfo().getRegionName(), row,
+ columns, ts, numVersions, lockId);
+ }
+ }
+ );
+ }
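+ // Usage sketch (illustrative; column name is hypothetical): fetch selected
+ // columns of a row at the latest timestamp, e.g.
+ //   RowResult r = table.getRow("row1", new String[] {"info:name"});
+ //   Cell c = (r == null) ? null : r.get(Bytes.toBytes("info:name"));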
+
+ public RowResult getClosestRowBefore(final byte[] row, final byte[] columnFamily)
+ throws IOException {
+ return connection.getRegionServerWithRetries(
+ new ServerCallable<RowResult>(connection,tableName,row) {
+ public RowResult call() throws IOException {
+ return server.getClosestRowBefore(
+ location.getRegionInfo().getRegionName(), row, columnFamily
+ );
+ }
+ }
+ );
+ }
+
+ /**
+ * Get a scanner on the current table starting at first row.
+ * Return the specified columns.
+ *
+ * @param columns columns to scan. If column name is a column family, all
+ * columns of the specified column family are returned. It's also possible
+ * to pass a regex in the column qualifier. A column qualifier is judged to
+ * be a regex if it contains at least one of the following characters:
+ * <code>\+|^&*$[]]}{)(</code>.
+ * @return scanner
+ * @throws IOException
+ */
+ public Scanner getScanner(final String [] columns)
+ throws IOException {
+ return getScanner(Bytes.toByteArrays(columns), HConstants.EMPTY_START_ROW);
+ }
+
+ /**
+ * Get a scanner on the current table starting at the specified row.
+ * Return the specified columns.
+ *
+ * @param columns columns to scan. If column name is a column family, all
+ * columns of the specified column family are returned. It's also possible
+ * to pass a regex in the column qualifier. A column qualifier is judged to
+ * be a regex if it contains at least one of the following characters:
+ * <code>\+|^&*$[]]}{)(</code>.
+ * @param startRow starting row in table to scan
+ * @return scanner
+ * @throws IOException
+ */
+ public Scanner getScanner(final String [] columns, final String startRow)
+ throws IOException {
+ return getScanner(Bytes.toByteArrays(columns), Bytes.toBytes(startRow));
+ }
+
+ /**
+ * Get a scanner on the current table starting at first row.
+ * Return the specified columns.
+ *
+ * @param columns columns to scan. If column name is a column family, all
+ * columns of the specified column family are returned. It's also possible
+ * to pass a regex in the column qualifier. A column qualifier is judged to
+ * be a regex if it contains at least one of the following characters:
+ * <code>\+|^&*$[]]}{)(</code>.
+ * @return scanner
+ * @throws IOException
+ */
+ public Scanner getScanner(final byte[][] columns)
+ throws IOException {
+ return getScanner(columns, HConstants.EMPTY_START_ROW,
+ HConstants.LATEST_TIMESTAMP, null);
+ }
+
+ /**
+ * Get a scanner on the current table starting at the specified row.
+ * Return the specified columns.
+ *
+ * @param columns columns to scan. If column name is a column family, all
+ * columns of the specified column family are returned. It's also possible
+ * to pass a regex in the column qualifier. A column qualifier is judged to
+ * be a regex if it contains at least one of the following characters:
+ * <code>\+|^&*$[]]}{)(</code>.
+ * @param startRow starting row in table to scan
+ * @return scanner
+ * @throws IOException
+ */
+ public Scanner getScanner(final byte[][] columns, final byte [] startRow)
+ throws IOException {
+ return getScanner(columns, startRow, HConstants.LATEST_TIMESTAMP, null);
+ }
+
+ /**
+ * Get a scanner on the current table starting at the specified row.
+ * Return the specified columns.
+ *
+ * @param columns columns to scan. If column name is a column family, all
+ * columns of the specified column family are returned. It's also possible
+ * to pass a regex in the column qualifier. A column qualifier is judged to
+ * be a regex if it contains at least one of the following characters:
+ * <code>\+|^&*$[]]}{)(</code>.
+ * @param startRow starting row in table to scan
+ * @param timestamp only return results whose timestamp <= this value
+ * @return scanner
+ * @throws IOException
+ */
+ public Scanner getScanner(final byte[][] columns, final byte [] startRow,
+ long timestamp)
+ throws IOException {
+ return getScanner(columns, startRow, timestamp, null);
+ }
+
+ /**
+ * Get a scanner on the current table starting at the specified row.
+ * Return the specified columns.
+ *
+ * @param columns columns to scan. If column name is a column family, all
+ * columns of the specified column family are returned. It's also possible
+ * to pass a regex in the column qualifier. A column qualifier is judged to
+ * be a regex if it contains at least one of the following characters:
+ * <code>\+|^&*$[]]}{)(</code>.
+ * @param startRow starting row in table to scan
+ * @param filter a row filter using row-key regexp and/or column data filter.
+ * @return scanner
+ * @throws IOException
+ */
+ public Scanner getScanner(final byte[][] columns, final byte [] startRow,
+ RowFilterInterface filter)
+ throws IOException {
+ return getScanner(columns, startRow, HConstants.LATEST_TIMESTAMP, filter);
+ }
+
+ /**
+ * Get a scanner on the current table starting at the specified row and
+ * ending just before <code>stopRow</code>.
+ * Return the specified columns.
+ *
+ * @param columns columns to scan. If column name is a column family, all
+ * columns of the specified column family are returned. It's also possible
+ * to pass a regex in the column qualifier. A column qualifier is judged to
+ * be a regex if it contains at least one of the following characters:
+ * <code>\+|^&*$[]]}{)(</code>.
+ * @param startRow starting row in table to scan
+ * @param stopRow Row to stop scanning on. Once we hit this row we stop
+ * returning values; i.e. we return the row before this one but not the
+ * <code>stopRow</code> itself.
+ * @return scanner
+ * @throws IOException
+ */
+ public Scanner getScanner(final byte [][] columns,
+ final byte [] startRow, final byte [] stopRow)
+ throws IOException {
+ return getScanner(columns, startRow, stopRow,
+ HConstants.LATEST_TIMESTAMP);
+ }
+
+ /**
+ * Get a scanner on the current table starting at the specified row and
+ * ending just before <code>stopRow</code>.
+ * Return the specified columns.
+ *
+ * @param columns columns to scan. If column name is a column family, all
+ * columns of the specified column family are returned. It's also possible
+ * to pass a regex in the column qualifier. A column qualifier is judged to
+ * be a regex if it contains at least one of the following characters:
+ * <code>\+|^&*$[]]}{)(</code>.
+ * @param startRow starting row in table to scan
+ * @param stopRow Row to stop scanning on. Once we hit this row we stop
+ * returning values; i.e. we return the row before this one but not the
+ * <code>stopRow</code> itself.
+ * @param timestamp only return results whose timestamp <= this value
+ * @return scanner
+ * @throws IOException
+ */
+ public Scanner getScanner(final String [] columns,
+ final String startRow, final String stopRow, final long timestamp)
+ throws IOException {
+ return getScanner(Bytes.toByteArrays(columns), Bytes.toBytes(startRow),
+ Bytes.toBytes(stopRow), timestamp);
+ }
+
+ /**
+ * Get a scanner on the current table starting at the specified row and
+ * ending just before <code>stopRow</code>.
+ * Return the specified columns.
+ *
+ * @param columns columns to scan. If column name is a column family, all
+ * columns of the specified column family are returned. It's also possible
+ * to pass a regex in the column qualifier. A column qualifier is judged to
+ * be a regex if it contains at least one of the following characters:
+ * <code>\+|^&*$[]]}{)(</code>.
+ * @param startRow starting row in table to scan
+ * @param stopRow Row to stop scanning on. Once we hit this row we stop
+ * returning values; i.e. we return the row before this one but not the
+ * <code>stopRow</code> itself.
+ * @param timestamp only return results whose timestamp <= this value
+ * @return scanner
+ * @throws IOException
+ */
+ public Scanner getScanner(final byte [][] columns,
+ final byte [] startRow, final byte [] stopRow, final long timestamp)
+ throws IOException {
+ return getScanner(columns, startRow, timestamp,
+ new WhileMatchRowFilter(new StopRowFilter(stopRow)));
+ }
+
+ /**
+ * Get a scanner on the current table starting at the specified row.
+ * Return the specified columns.
+ *
+ * @param columns columns to scan. If column name is a column family, all
+ * columns of the specified column family are returned. It's also possible
+ * to pass a regex in the column qualifier. A column qualifier is judged to
+ * be a regex if it contains at least one of the following characters:
+ * <code>\+|^&*$[]]}{)(</code>.
+ * @param startRow starting row in table to scan
+ * @param timestamp only return results whose timestamp <= this value
+ * @param filter a row filter using row-key regexp and/or column data filter.
+ * @return scanner
+ * @throws IOException
+ */
+ public Scanner getScanner(String[] columns,
+ String startRow, long timestamp, RowFilterInterface filter)
+ throws IOException {
+ return getScanner(Bytes.toByteArrays(columns), Bytes.toBytes(startRow),
+ timestamp, filter);
+ }
+
+ /**
+ * Get a scanner on the current table starting at the specified row.
+ * Return the specified columns.
+ *
+ * @param columns columns to scan. If column name is a column family, all
+ * columns of the specified column family are returned. It's also possible
+ * to pass a regex in the column qualifier. A column qualifier is judged to
+ * be a regex if it contains at least one of the following characters:
+ * <code>\+|^&*$[]]}{)(</code>.
+ * @param startRow starting row in table to scan
+ * @param timestamp only return results whose timestamp <= this value
+ * @param filter a row filter using row-key regexp and/or column data filter.
+ * @return scanner
+ * @throws IOException
+ */
+ public Scanner getScanner(final byte [][] columns,
+ final byte [] startRow, long timestamp, RowFilterInterface filter)
+ throws IOException {
+ ClientScanner s = new ClientScanner(columns, startRow,
+ timestamp, filter);
+ s.initialize();
+ return s;
+ }
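+ // Usage sketch (illustrative): scan every row of a column family, assuming
+ // a family named "info:" and that Scanner is Iterable<RowResult> (the
+ // iterator is provided by ClientScanner further down).
+ //
+ //   Scanner s = table.getScanner(new String[] {"info:"});
+ //   try {
+ //     for (RowResult r : s) {
+ //       System.out.println(Bytes.toString(r.getRow()));
+ //     }
+ //   } finally {
+ //     s.close();
+ //   }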
+
+ /**
+ * Completely delete the row's cells.
+ *
+ * @param row Key of the row you want to completely delete.
+ * @throws IOException
+ */
+ public void deleteAll(final byte [] row) throws IOException {
+ deleteAll(row, null);
+ }
+
+ /**
+ * Completely delete the row's cells.
+ *
+ * @param row Key of the row you want to completely delete.
+ * @throws IOException
+ */
+ public void deleteAll(final String row) throws IOException {
+ deleteAll(row, null);
+ }
+
+ /**
+ * Completely delete the row's cells.
+ *
+ * @param row Key of the row you want to completely delete.
+ * @param column column to be deleted
+ * @throws IOException
+ */
+ public void deleteAll(final byte [] row, final byte [] column)
+ throws IOException {
+ deleteAll(row, column, HConstants.LATEST_TIMESTAMP);
+ }
+
+ /**
+ * Completely delete the row's cells.
+ *
+ * @param row Key of the row you want to completely delete.
+ * @param ts Delete all cells of the same timestamp or older.
+ * @throws IOException
+ */
+ public void deleteAll(final byte [] row, final long ts)
+ throws IOException {
+ deleteAll(row, null, ts);
+ }
+
+ /**
+ * Completely delete the row's cells.
+ *
+ * @param row Key of the row you want to completely delete.
+ * @param ts Delete all cells of the same timestamp or older.
+ * @throws IOException
+ */
+ public void deleteAll(final String row, final long ts)
+ throws IOException {
+ deleteAll(row, null, ts);
+ }
+
+ /**
+ * Delete all cells that match the passed row and column.
+ * @param row Row to update
+ * @param column name of column whose value is to be deleted
+ * @throws IOException
+ */
+ public void deleteAll(final String row, final String column)
+ throws IOException {
+ deleteAll(row, column, HConstants.LATEST_TIMESTAMP);
+ }
+
+ /**
+ * Delete all cells that match the passed row and column and whose
+ * timestamp is equal-to or older than the passed timestamp.
+ * @param row Row to update
+ * @param column name of column whose value is to be deleted
+ * @param ts Delete all cells of the same timestamp or older.
+ * @throws IOException
+ */
+ public void deleteAll(final String row, final String column, final long ts)
+ throws IOException {
+ deleteAll(Bytes.toBytes(row),
+ column != null? Bytes.toBytes(column): null, ts);
+ }
+
+ /**
+ * Delete all cells that match the passed row and column and whose
+ * timestamp is equal-to or older than the passed timestamp.
+ * @param row Row to update
+ * @param column name of column whose value is to be deleted
+ * @param ts Delete all cells of the same timestamp or older.
+ * @throws IOException
+ */
+ public void deleteAll(final byte [] row, final byte [] column, final long ts)
+ throws IOException {
+ deleteAll(row,column,ts,null);
+ }
+
+ /**
+ * Delete all cells that match the passed row and column and whose
+ * timestamp is equal-to or older than the passed timestamp, using an
+ * existing row lock.
+ * @param row Row to update
+ * @param column name of column whose value is to be deleted
+ * @param ts Delete all cells of the same timestamp or older.
+ * @param rl Existing row lock
+ * @throws IOException
+ */
+ public void deleteAll(final byte [] row, final byte [] column, final long ts,
+ final RowLock rl)
+ throws IOException {
+ connection.getRegionServerWithRetries(
+ new ServerCallable<Boolean>(connection, tableName, row) {
+ public Boolean call() throws IOException {
+ long lockId = -1L;
+ if(rl != null) {
+ lockId = rl.getLockId();
+ }
+ if (column != null) {
+ this.server.deleteAll(location.getRegionInfo().getRegionName(),
+ row, column, ts, lockId);
+ } else {
+ this.server.deleteAll(location.getRegionInfo().getRegionName(),
+ row, ts, lockId);
+ }
+ return null;
+ }
+ }
+ );
+ }
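+ // Usage sketch (illustrative; names are hypothetical): delete one cell's
+ // versions, or the whole row, e.g.
+ //   table.deleteAll("row1", "info:name");   // one column, all timestamps
+ //   table.deleteAll("row1");                // entire row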
+
+ /**
+ * Delete all cells that match the passed row and column.
+ * @param row Row to update
+ * @param colRegex column regex expression
+ * @throws IOException
+ */
+ public void deleteAllByRegex(final String row, final String colRegex)
+ throws IOException {
+ deleteAll(row, colRegex, HConstants.LATEST_TIMESTAMP);
+ }
+
+ /**
+ * Delete all cells that match the passed row and column and whose
+ * timestamp is equal-to or older than the passed timestamp.
+ * @param row Row to update
+ * @param colRegex Column Regex expression
+ * @param ts Delete all cells of the same timestamp or older.
+ * @throws IOException
+ */
+ public void deleteAllByRegex(final String row, final String colRegex,
+ final long ts) throws IOException {
+ deleteAllByRegex(Bytes.toBytes(row), colRegex, ts);
+ }
+
+ /**
+ * Delete all cells that match the passed row and column and whose
+ * timestamp is equal-to or older than the passed timestamp.
+ * @param row Row to update
+ * @param colRegex Column Regex expression
+ * @param ts Delete all cells of the same timestamp or older.
+ * @throws IOException
+ */
+ public void deleteAllByRegex(final byte [] row, final String colRegex,
+ final long ts) throws IOException {
+ deleteAllByRegex(row, colRegex, ts, null);
+ }
+
+ /**
+ * Delete all cells that match the passed row and column and whose
+ * timestamp is equal-to or older than the passed timestamp, using an
+ * existing row lock.
+ * @param row Row to update
+ * @param colRegex Column regex expression
+ * @param ts Delete all cells of the same timestamp or older.
+ * @param rl Existing row lock
+ * @throws IOException
+ */
+ public void deleteAllByRegex(final byte [] row, final String colRegex,
+ final long ts, final RowLock rl)
+ throws IOException {
+ connection.getRegionServerWithRetries(
+ new ServerCallable<Boolean>(connection, tableName, row) {
+ public Boolean call() throws IOException {
+ long lockId = -1L;
+ if(rl != null) {
+ lockId = rl.getLockId();
+ }
+ this.server.deleteAllByRegex(location.getRegionInfo().getRegionName(),
+ row, colRegex, ts, lockId);
+ return null;
+ }
+ }
+ );
+ }
+
+ /**
+ * Delete all cells for a row with matching column family at all timestamps.
+ *
+ * @param row The row to operate on
+ * @param family The column family to match
+ * @throws IOException
+ */
+ public void deleteFamily(final String row, final String family)
+ throws IOException {
+ deleteFamily(row, family, HConstants.LATEST_TIMESTAMP);
+ }
+
+ /**
+ * Delete all cells for a row with matching column family at all timestamps.
+ *
+ * @param row The row to operate on
+ * @param family The column family to match
+ * @throws IOException
+ */
+ public void deleteFamily(final byte[] row, final byte[] family)
+ throws IOException {
+ deleteFamily(row, family, HConstants.LATEST_TIMESTAMP);
+ }
+
+ /**
+ * Delete all cells for a row with matching column family with timestamps
+ * less than or equal to <i>timestamp</i>.
+ *
+ * @param row The row to operate on
+ * @param family The column family to match
+ * @param timestamp Timestamp to match
+ * @throws IOException
+ */
+ public void deleteFamily(final String row, final String family,
+ final long timestamp)
+ throws IOException{
+ deleteFamily(Bytes.toBytes(row), Bytes.toBytes(family), timestamp);
+ }
+
+ /**
+ * Delete all cells for a row with matching column family with timestamps
+ * less than or equal to <i>timestamp</i>.
+ *
+ * @param row The row to operate on
+ * @param family The column family to match
+ * @param timestamp Timestamp to match
+ * @throws IOException
+ */
+ public void deleteFamily(final byte [] row, final byte [] family,
+ final long timestamp)
+ throws IOException {
+ deleteFamily(row,family,timestamp,null);
+ }
+
+ /**
+ * Delete all cells for a row with matching column family with timestamps
+ * less than or equal to <i>timestamp</i>, using existing row lock.
+ *
+ * @param row The row to operate on
+ * @param family The column family to match
+ * @param timestamp Timestamp to match
+ * @param rl Existing row lock
+ * @throws IOException
+ */
+ public void deleteFamily(final byte [] row, final byte [] family,
+ final long timestamp, final RowLock rl)
+ throws IOException {
+ connection.getRegionServerWithRetries(
+ new ServerCallable<Boolean>(connection, tableName, row) {
+ public Boolean call() throws IOException {
+ long lockId = -1L;
+ if(rl != null) {
+ lockId = rl.getLockId();
+ }
+ server.deleteFamily(location.getRegionInfo().getRegionName(), row,
+ family, timestamp, lockId);
+ return null;
+ }
+ }
+ );
+ }
+
+ /**
+ * Delete all cells for a row with matching column family regex
+ * at all timestamps.
+ *
+ * @param row The row to operate on
+ * @param familyRegex Column family regex
+ * @throws IOException
+ */
+ public void deleteFamilyByRegex(final String row, final String familyRegex)
+ throws IOException {
+ deleteFamilyByRegex(row, familyRegex, HConstants.LATEST_TIMESTAMP);
+ }
+
+ /**
+ * Delete all cells for a row with matching column family regex
+ * at all timestamps.
+ *
+ * @param row The row to operate on
+ * @param familyRegex Column family regex
+ * @throws IOException
+ */
+ public void deleteFamilyByRegex(final byte[] row, final String familyRegex)
+ throws IOException {
+ deleteFamilyByRegex(row, familyRegex, HConstants.LATEST_TIMESTAMP);
+ }
+
+ /**
+ * Delete all cells for a row with matching column family regex
+ * with timestamps less than or equal to <i>timestamp</i>.
+ *
+ * @param row The row to operate on
+ * @param familyRegex Column family regex
+ * @param timestamp Timestamp to match
+ * @throws IOException
+ */
+ public void deleteFamilyByRegex(final String row, final String familyRegex,
+ final long timestamp)
+ throws IOException{
+ deleteFamilyByRegex(Bytes.toBytes(row), familyRegex, timestamp);
+ }
+
+ /**
+ * Delete all cells for a row with matching column family regex
+ * with timestamps less than or equal to <i>timestamp</i>.
+ *
+ * @param row The row to operate on
+ * @param familyRegex Column family regex
+ * @param timestamp Timestamp to match
+ * @throws IOException
+ */
+ public void deleteFamilyByRegex(final byte [] row, final String familyRegex,
+ final long timestamp)
+ throws IOException {
+ deleteFamilyByRegex(row,familyRegex,timestamp,null);
+ }
+
+ /**
+ * Delete all cells for a row with matching column family regex with
+ * timestamps less than or equal to <i>timestamp</i>, using existing
+ * row lock.
+ *
+ * @param row The row to operate on
+ * @param familyRegex Column Family Regex
+ * @param timestamp Timestamp to match
+ * @param r1 Existing row lock
+ * @throws IOException
+ */
+ public void deleteFamilyByRegex(final byte[] row, final String familyRegex,
+ final long timestamp, final RowLock r1) throws IOException {
+ connection.getRegionServerWithRetries(
+ new ServerCallable<Boolean>(connection, tableName, row) {
+ public Boolean call() throws IOException {
+ long lockId = -1L;
+ if(r1 != null) {
+ lockId = r1.getLockId();
+ }
+ server.deleteFamilyByRegex(location.getRegionInfo().getRegionName(),
+ row, familyRegex, timestamp, lockId);
+ return null;
+ }
+ }
+ );
+ }
+
+ /**
+ * Test for the existence of a row in the table.
+ *
+ * @param row The row
+ * @return true if the row exists, false otherwise
+ * @throws IOException
+ */
+ public boolean exists(final byte [] row) throws IOException {
+ return exists(row, null, HConstants.LATEST_TIMESTAMP, null);
+ }
+
+ /**
+ * Test for the existence of a row and column in the table.
+ *
+ * @param row The row
+ * @param column The column
+ * @return true if the row exists, false otherwise
+ * @throws IOException
+ */
+ public boolean exists(final byte [] row, final byte[] column)
+ throws IOException {
+ return exists(row, column, HConstants.LATEST_TIMESTAMP, null);
+ }
+
+ /**
+ * Test for the existence of a coordinate in the table.
+ *
+ * @param row The row
+ * @param column The column
+ * @param timestamp The timestamp
+ * @return true if the specified coordinate exists
+ * @throws IOException
+ */
+ public boolean exists(final byte [] row, final byte [] column,
+ long timestamp) throws IOException {
+ return exists(row, column, timestamp, null);
+ }
+
+ /**
+ * Test for the existence of a coordinate in the table.
+ *
+ * @param row The row
+ * @param column The column
+ * @param timestamp The timestamp
+ * @param rl Existing row lock
+ * @return true if the specified coordinate exists
+ * @throws IOException
+ */
+ public boolean exists(final byte [] row, final byte [] column,
+ final long timestamp, final RowLock rl) throws IOException {
+ return connection.getRegionServerWithRetries(
+ new ServerCallable<Boolean>(connection, tableName, row) {
+ public Boolean call() throws IOException {
+ long lockId = -1L;
+ if (rl != null) {
+ lockId = rl.getLockId();
+ }
+ return Boolean.valueOf(server.
+ exists(location.getRegionInfo().getRegionName(), row,
+ column, timestamp, lockId));
+ }
+ }
+ ).booleanValue();
+ }
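+ // Usage sketch (illustrative): probe for a cell before reading it, e.g.
+ //   if (table.exists(Bytes.toBytes("row1"), Bytes.toBytes("info:name"))) {
+ //     Cell c = table.get("row1", "info:name");
+ //   }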
+
+ /**
+ * Commit a BatchUpdate to the table.
+ * If autoFlush is false, the update is buffered
+ * @param batchUpdate the BatchUpdate to commit
+ * @throws IOException
+ */
+ public synchronized void commit(final BatchUpdate batchUpdate)
+ throws IOException {
+ commit(batchUpdate,null);
+ }
+
+ /**
+ * Commit a BatchUpdate to the table using existing row lock.
+ * If autoFlush is false, the update is buffered
+ * @param batchUpdate the BatchUpdate to commit
+ * @param rl Existing row lock
+ * @throws IOException
+ */
+ public synchronized void commit(final BatchUpdate batchUpdate,
+ final RowLock rl)
+ throws IOException {
+ checkRowAndColumns(batchUpdate);
+ if(rl != null) {
+ batchUpdate.setRowLock(rl.getLockId());
+ }
+ writeBuffer.add(batchUpdate);
+ currentWriteBufferSize += batchUpdate.heapSize();
+ if (autoFlush || currentWriteBufferSize > writeBufferSize) {
+ flushCommits();
+ }
+ }
+
+ /**
+ * Commit a List of BatchUpdate to the table.
+ * If autoFlush is false, the updates are buffered
+ * @param batchUpdates the list of BatchUpdates to commit
+ * @throws IOException
+ */
+ public synchronized void commit(final List<BatchUpdate> batchUpdates)
+ throws IOException {
+ for (BatchUpdate bu : batchUpdates) {
+ checkRowAndColumns(bu);
+ writeBuffer.add(bu);
+ currentWriteBufferSize += bu.heapSize();
+ }
+ if (autoFlush || currentWriteBufferSize > writeBufferSize) {
+ flushCommits();
+ }
+ }
+
+ /**
+ * Atomically checks if a row's values match the expected values. If they
+ * do, the batchUpdate is applied to the row.
+ * @param batchUpdate BatchUpdate to apply if the check succeeds
+ * @param expectedValues values to check against
+ * @param rl row lock
+ * @return true if the update was applied, false otherwise
+ * @throws IOException
+ */
+ public synchronized boolean checkAndSave(final BatchUpdate batchUpdate,
+ final HbaseMapWritable<byte[],byte[]> expectedValues, final RowLock rl)
+ throws IOException {
+ checkRowAndColumns(batchUpdate);
+ if(rl != null) {
+ batchUpdate.setRowLock(rl.getLockId());
+ }
+ return connection.getRegionServerWithRetries(
+ new ServerCallable<Boolean>(connection, tableName, batchUpdate.getRow()) {
+ public Boolean call() throws IOException {
+ return server.checkAndSave(location.getRegionInfo().getRegionName(),
+ batchUpdate, expectedValues)?
+ Boolean.TRUE: Boolean.FALSE;
+ }
+ }
+ ).booleanValue();
+ }
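+ // Usage sketch (illustrative; column name and values are hypothetical):
+ // apply an update only if the current value of "info:col" matches.
+ //
+ //   HbaseMapWritable<byte[],byte[]> expected =
+ //     new HbaseMapWritable<byte[],byte[]>();
+ //   expected.put(Bytes.toBytes("info:col"), Bytes.toBytes("old"));
+ //   BatchUpdate bu = new BatchUpdate("row1");
+ //   bu.put("info:col", Bytes.toBytes("new"));
+ //   boolean applied = table.checkAndSave(bu, expected, null);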
+
+ /**
+ * Commit the buffered BatchUpdates to the table.
+ * Called automatically from the commit methods when autoFlush is true.
+ * @throws IOException
+ */
+ public void flushCommits() throws IOException {
+ try {
+ connection.processBatchOfRows(writeBuffer, tableName);
+ } finally {
+ currentWriteBufferSize = 0;
+ writeBuffer.clear();
+ }
+ }
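+ // Usage sketch (illustrative): with autoFlush off, commits accumulate in
+ // the write buffer and are sent once the buffer exceeds writeBufferSize or
+ // when flushCommits()/close() is called, e.g.
+ //
+ //   table.setAutoFlush(false);
+ //   table.setWriteBufferSize(4 * 1024 * 1024);   // illustrative value
+ //   for (BatchUpdate bu : updates) {
+ //     table.commit(bu);
+ //   }
+ //   table.flushCommits();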
+
+ /**
+ * Release held resources
+ *
+ * @throws IOException
+ */
+ public void close() throws IOException{
+ flushCommits();
+ }
+
+ /**
+ * Utility method that checks row existence, row key length, and column
+ * well-formedness.
+ *
+ * @param bu
+ * @throws IllegalArgumentException
+ * @throws IOException
+ */
+ private void checkRowAndColumns(BatchUpdate bu)
+ throws IllegalArgumentException, IOException {
+ if (bu.getRow() == null || bu.getRow().length > HConstants.MAX_ROW_LENGTH) {
+ throw new IllegalArgumentException("Row key is invalid");
+ }
+ for (BatchOperation bo : bu) {
+ HStoreKey.getFamily(bo.getColumn());
+ }
+ }
+
+ /**
+ * Obtain a row lock
+ * @param row The row to lock
+ * @return rowLock RowLock containing row and lock id
+ * @throws IOException
+ */
+ public RowLock lockRow(final byte [] row)
+ throws IOException {
+ return connection.getRegionServerWithRetries(
+ new ServerCallable<RowLock>(connection, tableName, row) {
+ public RowLock call() throws IOException {
+ long lockId =
+ server.lockRow(location.getRegionInfo().getRegionName(), row);
+ RowLock rowLock = new RowLock(row,lockId);
+ return rowLock;
+ }
+ }
+ );
+ }
+
+ /**
+ * Release a row lock
+ * @param rl The row lock to release
+ * @throws IOException
+ */
+ public void unlockRow(final RowLock rl)
+ throws IOException {
+ connection.getRegionServerWithRetries(
+ new ServerCallable<Boolean>(connection, tableName, rl.getRow()) {
+ public Boolean call() throws IOException {
+ server.unlockRow(location.getRegionInfo().getRegionName(),
+ rl.getLockId());
+ return null;
+ }
+ }
+ );
+ }
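+ // Usage sketch (illustrative; column name is hypothetical): take a row
+ // lock, apply an update under it, then release the lock.
+ //
+ //   RowLock lock = table.lockRow(Bytes.toBytes("row1"));
+ //   try {
+ //     BatchUpdate bu = new BatchUpdate("row1");
+ //     bu.put("info:name", Bytes.toBytes("value"));
+ //     table.commit(bu, lock);
+ //   } finally {
+ //     table.unlockRow(lock);
+ //   }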
+
+ /**
+ * Get the value of autoFlush. If true, updates will not be buffered
+ * @return value of autoFlush
+ */
+ public boolean isAutoFlush() {
+ return autoFlush;
+ }
+
+ /**
+ * Set whether this instance of HTable will autoFlush
+ * @param autoFlush
+ */
+ public void setAutoFlush(boolean autoFlush) {
+ this.autoFlush = autoFlush;
+ }
+
+ /**
+ * Get the maximum size in bytes of the write buffer for this HTable
+ * @return the size of the write buffer in bytes
+ */
+ public long getWriteBufferSize() {
+ return writeBufferSize;
+ }
+
+ /**
+ * Set the maximum size in bytes of the write buffer for this HTable
+ * @param writeBufferSize maximum write buffer size in bytes
+ */
+ public void setWriteBufferSize(long writeBufferSize) {
+ this.writeBufferSize = writeBufferSize;
+ }
+
+ /**
+ * Get the write buffer
+ * @return the current write buffer
+ */
+ public ArrayList<BatchUpdate> getWriteBuffer() {
+ return writeBuffer;
+ }
+
+ /**
+ * Atomically increments the long value stored at the given row/column.
+ * @param row row key
+ * @param column column name
+ * @param amount amount to increment by
+ * @return the new value of the column after the increment
+ * @throws IOException
+ */
+ public long incrementColumnValue(final byte [] row, final byte [] column,
+ final long amount) throws IOException {
+ return connection.getRegionServerWithRetries(
+ new ServerCallable<Long>(connection, tableName, row) {
+ public Long call() throws IOException {
+ return server.incrementColumnValue(
+ location.getRegionInfo().getRegionName(), row, column, amount);
+ }
+ }
+ );
+ }
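+ // Usage sketch (illustrative; column name is hypothetical): atomically add
+ // to a counter cell, e.g.
+ //   long next = table.incrementColumnValue(Bytes.toBytes("row1"),
+ //     Bytes.toBytes("info:hits"), 1);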
+
+ /**
+ * Implements the scanner interface for the HBase client.
+ * If there are multiple regions in a table, this scanner will iterate
+ * through them all.
+ */
+ protected class ClientScanner implements Scanner {
+ private final Log CLIENT_LOG = LogFactory.getLog(this.getClass());
+ private byte[][] columns;
+ private byte [] startRow;
+ protected long scanTime;
+ private boolean closed = false;
+ private HRegionInfo currentRegion = null;
+ private ScannerCallable callable = null;
+ protected RowFilterInterface filter;
+ private final LinkedList<RowResult> cache = new LinkedList<RowResult>();
+ @SuppressWarnings("hiding")
+ private final int scannerCaching = HTable.this.scannerCaching;
+ private long lastNext;
+
+ protected ClientScanner(final byte[][] columns, final byte [] startRow,
+ final long timestamp, final RowFilterInterface filter) {
+ if (CLIENT_LOG.isDebugEnabled()) {
+ CLIENT_LOG.debug("Creating scanner over "
+ + Bytes.toString(getTableName())
+ + " starting at key '" + Bytes.toString(startRow) + "'");
+ }
+ // save off the simple parameters
+ this.columns = columns;
+ this.startRow = startRow;
+ this.scanTime = timestamp;
+
+ // save the filter, and make sure that the filter applies to the data
+ // we're expecting to pull back
+ this.filter = filter;
+ if (filter != null) {
+ filter.validate(columns);
+ }
+ this.lastNext = System.currentTimeMillis();
+ }
+
+ //TODO: change visibility to protected
+
+ public void initialize() throws IOException {
+ nextScanner(this.scannerCaching);
+ }
+
+ protected byte[][] getColumns() {
+ return columns;
+ }
+
+ protected long getTimestamp() {
+ return scanTime;
+ }
+
+ protected RowFilterInterface getFilter() {
+ return filter;
+ }
+
+ /*
+ * Gets a scanner for the next region.
+ * Returns false if there are no more scanners.
+ */
+ private boolean nextScanner(int nbRows) throws IOException {
+ // Close the previous scanner if it's open
+ if (this.callable != null) {
+ this.callable.setClose();
+ getConnection().getRegionServerWithRetries(callable);
+ this.callable = null;
+ }
+
+ // if we're at the end of the table, then close and return false
+ // to stop iterating
+ if (currentRegion != null) {
+ if (CLIENT_LOG.isDebugEnabled()) {
+ CLIENT_LOG.debug("Advancing forward from region " + currentRegion);
+ }
+
+ byte [] endKey = currentRegion.getEndKey();
+ if (endKey == null ||
+ Bytes.equals(endKey, HConstants.EMPTY_BYTE_ARRAY) ||
+ filterSaysStop(endKey)) {
+ close();
+ return false;
+ }
+ }
+
+ HRegionInfo oldRegion = this.currentRegion;
+ byte [] localStartKey = oldRegion == null? startRow: oldRegion.getEndKey();
+
+ if (CLIENT_LOG.isDebugEnabled()) {
+ CLIENT_LOG.debug("Advancing internal scanner to startKey at '" +
+ Bytes.toString(localStartKey) + "'");
+ }
+
+ try {
+ callable = getScannerCallable(localStartKey, nbRows);
+ // open a scanner on the region server starting at the
+ // beginning of the region
+ getConnection().getRegionServerWithRetries(callable);
+ currentRegion = callable.getHRegionInfo();
+ } catch (IOException e) {
+ close();
+ throw e;
+ }
+ return true;
+ }
+
+ protected ScannerCallable getScannerCallable(byte [] localStartKey,
+ int nbRows) {
+ ScannerCallable s = new ScannerCallable(getConnection(),
+ getTableName(), columns,
+ localStartKey, scanTime, filter);
+ s.setCaching(nbRows);
+ return s;
+ }
+
+ /**
+ * @param endKey
+ * @return true if the passed region end key is judged to be beyond the
+ * filter's stopping point.
+ */
+ private boolean filterSaysStop(final byte [] endKey) {
+ if (this.filter == null) {
+ return false;
+ }
+ // Let the filter see current row.
+ this.filter.filterRowKey(endKey, 0, endKey.length);
+ return this.filter.filterAllRemaining();
+ }
+
+ public RowResult next() throws IOException {
+ // If the scanner is closed but there are still rows left in the cache,
+ // empty the cache before returning null
+ if (cache.size() == 0 && this.closed) {
+ return null;
+ }
+ if (cache.size() == 0) {
+ RowResult[] values = null;
+ int countdown = this.scannerCaching;
+ // We need to reset it if it's a new callable that was created
+ // with a countdown in nextScanner
+ callable.setCaching(this.scannerCaching);
+ do {
+ try {
+ values = getConnection().getRegionServerWithRetries(callable);
+ } catch (IOException e) {
+ if (e instanceof UnknownScannerException &&
+ lastNext + scannerTimeout < System.currentTimeMillis()) {
+ ScannerTimeoutException ex = new ScannerTimeoutException();
+ ex.initCause(e);
+ throw ex;
+ }
+ throw e;
+ }
+ lastNext = System.currentTimeMillis();
+ if (values != null && values.length > 0) {
+ for (RowResult rs : values) {
+ cache.add(rs);
+ countdown--;
+ }
+ }
+ } while (countdown > 0 && nextScanner(countdown));
+ }
+
+ if (cache.size() > 0) {
+ return cache.poll();
+ }
+ return null;
+ }
+
+ /**
+ * @param nbRows number of rows to return
+ * @return Between zero and <code>nbRows</code> RowResults
+ * @throws IOException
+ */
+ public RowResult[] next(int nbRows) throws IOException {
+ // Collect values to be returned here
+ ArrayList<RowResult> resultSets = new ArrayList<RowResult>(nbRows);
+ for(int i = 0; i < nbRows; i++) {
+ RowResult next = next();
+ if (next != null) {
+ resultSets.add(next);
+ } else {
+ break;
+ }
+ }
+ return resultSets.toArray(new RowResult[resultSets.size()]);
+ }
+
+ public void close() {
+ if (callable != null) {
+ callable.setClose();
+ try {
+ getConnection().getRegionServerWithRetries(callable);
+ } catch (IOException e) {
+ // We used to catch this error, interpret, and rethrow. However, we
+ // have since decided that it's not nice for a scanner's close to
+ // throw exceptions. Chances are it was just an UnknownScanner
+ // exception due to lease time out.
+ }
+ callable = null;
+ }
+ closed = true;
+ }
+
+ public Iterator<RowResult> iterator() {
+ return new Iterator<RowResult>() {
+ // The next RowResult, possibly pre-read
+ RowResult next = null;
+
+ // return true if there is another item pending, false if there isn't.
+ // this method is where the actual advancing takes place, but you need
+ // to call next() to consume it. hasNext() will only advance if there
+ // isn't a pending next().
+ public boolean hasNext() {
+ if (next == null) {
+ try {
+ next = ClientScanner.this.next();
+ return next != null;
+ } catch (IOException e) {
+ throw new RuntimeException(e);
+ }
+ }
+ return true;
+ }
+
+ // get the pending next item and advance the iterator. returns null if
+ // there is no next item.
+ public RowResult next() {
+ // since hasNext() does the real advancing, we call this to determine
+ // if there is a next before proceeding.
+ if (!hasNext()) {
+ return null;
+ }
+
+ // if we get to here, then hasNext() has given us an item to return.
+ // we want to return the item and then null out the next pointer, so
+ // we use a temporary variable.
+ RowResult temp = next;
+ next = null;
+ return temp;
+ }
+
+ public void remove() {
+ throw new UnsupportedOperationException();
+ }
+ };
+ }
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/client/HTablePool.java b/src/java/org/apache/hadoop/hbase/client/HTablePool.java
new file mode 100755
index 0000000..301982e
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/HTablePool.java
@@ -0,0 +1,127 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import java.io.IOException;
+import java.util.ArrayDeque;
+/* using a stack instead of a FIFO might have some small positive performance
+ impact wrt. cache */
+import java.util.Deque;
+import java.util.Map;
+import java.util.TreeMap;
+
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * A simple pool of HTable instances.
+ * <p>
+ * The default pool size is 10.
+ */
+public class HTablePool {
+ private static final Map<byte[], HTablePool> poolMap =
+ new TreeMap<byte[], HTablePool>(Bytes.BYTES_COMPARATOR);
+
+ private final byte[] tableName;
+ private final Deque<HTable> pool;
+ private final int maxSize;
+
+ /**
+ * Get a shared table pool.
+ * @param tableName the table name
+ * @return the table pool
+ */
+ public static HTablePool getPool(byte[] tableName) {
+ return getPool(tableName, 10);
+ }
+
+ /**
+ * Get a shared table pool.
+ * <p>
+ * NOTE: <i>maxSize</i> is advisory. If the pool does not yet exist, a new
+ * shared pool will be allocated with <i>maxSize</i> as the size limit.
+ * However, if the shared pool already exists, and was created with a
+ * different (or default) value for <i>maxSize</i>, it will not be changed.
+ * @param tableName the table name
+ * @param maxSize the maximum size of the pool
+ * @return the table pool
+ */
+ public static HTablePool getPool(byte[] tableName, int maxSize) {
+ synchronized (poolMap) {
+ HTablePool pool = poolMap.get(tableName);
+ if (pool == null) {
+ pool = new HTablePool(tableName, maxSize);
+ poolMap.put(tableName, pool);
+ }
+ return pool;
+ }
+ }
+
+ /**
+ * Constructor
+ * @param tableName the table name
+ */
+ public HTablePool(byte[] tableName) {
+ this.tableName = tableName;
+ this.maxSize = 10;
+ this.pool = new ArrayDeque<HTable>(this.maxSize);
+ }
+
+ /**
+ * Constructor
+ * @param tableName the table name
+ * @param maxSize maximum pool size
+ */
+ public HTablePool(byte[] tableName, int maxSize) {
+ this.tableName = tableName;
+ this.maxSize = maxSize;
+ this.pool = new ArrayDeque<HTable>(this.maxSize);
+ }
+
+ /**
+ * Get an HTable instance, from the pool if one is available, otherwise newly created.
+ * @return HTable a HTable instance
+ * @throws IOException
+ */
+ public HTable get() throws IOException {
+ synchronized (pool) {
+ // peek then pop inside a synchronized block avoids the overhead of a
+ // NoSuchElementException
+ HTable table = pool.peek();
+ if (table != null) {
+ return pool.pop();
+ }
+ }
+ return new HTable(tableName);
+ }
+
+ /**
+ * Return a HTable instance to the pool.
+ * @param table a HTable instance
+ */
+ public void put(HTable table) {
+ synchronized (pool) {
+ if (pool.size() < maxSize) {
+ pool.push(table);
+ }
+ }
+ }
+
+}
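+// Usage sketch (illustrative only): checking an HTable out of a shared pool
+// and returning it when done. The table name and pool size are hypothetical.
+//
+//   HTablePool pool = HTablePool.getPool(Bytes.toBytes("mytable"), 25);
+//   HTable table = pool.get();      // reuses a pooled instance if one is free
+//   try {
+//     // ... reads/writes against table ...
+//   } finally {
+//     pool.put(table);              // returned; discarded if the pool is full
+//   }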
diff --git a/src/java/org/apache/hadoop/hbase/client/MetaScanner.java b/src/java/org/apache/hadoop/hbase/client/MetaScanner.java
new file mode 100644
index 0000000..689ca01
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/MetaScanner.java
@@ -0,0 +1,90 @@
+package org.apache.hadoop.hbase.client;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Scanner class that contains the <code>.META.</code> table scanning logic
+ * and uses a Retryable scanner. Provided visitors will be called
+ * for each row.
+ */
+class MetaScanner implements HConstants {
+
+ /**
+ * Scans the meta table and calls a visitor on each RowResult, using an empty
+ * start row value as the table name (so scanning starts at the first meta region).
+ *
+ * @param configuration
+ * @param visitor A custom visitor
+ * @throws IOException
+ */
+ public static void metaScan(HBaseConfiguration configuration,
+ MetaScannerVisitor visitor)
+ throws IOException {
+ metaScan(configuration, visitor, EMPTY_START_ROW);
+ }
+
+ /**
+ * Scans the meta table and calls a visitor on each RowResult. Uses a table
+ * name to locate meta regions.
+ *
+ * @param configuration
+ * @param visitor
+ * @param tableName
+ * @throws IOException
+ */
+ public static void metaScan(HBaseConfiguration configuration,
+ MetaScannerVisitor visitor, byte[] tableName)
+ throws IOException {
+ HConnection connection = HConnectionManager.getConnection(configuration);
+ byte [] startRow = tableName == null || tableName.length == 0 ?
+ HConstants.EMPTY_START_ROW :
+ HRegionInfo.createRegionName(tableName, null, ZEROES);
+
+ // Scan over each meta region
+ ScannerCallable callable = null;
+ do {
+ callable = new ScannerCallable(connection, META_TABLE_NAME,
+ COLUMN_FAMILY_ARRAY, startRow, LATEST_TIMESTAMP, null);
+ // Open scanner
+ connection.getRegionServerWithRetries(callable);
+ try {
+ RowResult r = null;
+ do {
+ RowResult [] rrs = connection.getRegionServerWithRetries(callable);
+ if (rrs == null || rrs.length == 0 || rrs[0].size() == 0) {
+ break;
+ }
+ r = rrs[0];
+ } while(visitor.processRow(r));
+ // Advance the startRow to the end key of the current region
+ startRow = callable.getHRegionInfo().getEndKey();
+ } finally {
+ // Close scanner
+ callable.setClose();
+ connection.getRegionServerWithRetries(callable);
+ }
+ } while (Bytes.compareTo(startRow, LAST_ROW) != 0);
+ }
+
+ /**
+ * Visitor class called to process each row of the .META. table
+ */
+ interface MetaScannerVisitor {
+ /**
+ * Visitor method that accepts a RowResult from the meta region being scanned.
+ * Implementations can return false to stop scanning the current region once
+ * further rows are no longer needed.
+ *
+ * @param rowResult a row from the .META. table
+ * @return true to continue scanning the region, false to stop
+ * @throws IOException
+ */
+ public boolean processRow(RowResult rowResult) throws IOException;
+ }
+}
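+// Usage sketch (illustrative only): MetaScanner is package-private, so this is
+// how code inside org.apache.hadoop.hbase.client might walk .META. rows.
+//
+//   MetaScanner.metaScan(conf, new MetaScanner.MetaScannerVisitor() {
+//     public boolean processRow(RowResult rowResult) throws IOException {
+//       System.out.println(Bytes.toString(rowResult.getRow()));
+//       return true;   // keep scanning; return false to stop this region
+//     }
+//   });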
diff --git a/src/java/org/apache/hadoop/hbase/client/NoServerForRegionException.java b/src/java/org/apache/hadoop/hbase/client/NoServerForRegionException.java
new file mode 100644
index 0000000..592061b
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/NoServerForRegionException.java
@@ -0,0 +1,42 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.hbase.RegionException;
+
+/**
+ * Thrown when no region server can be found for a region
+ */
+public class NoServerForRegionException extends RegionException {
+ private static final long serialVersionUID = 1L << 11 - 1L;
+
+ /** default constructor */
+ public NoServerForRegionException() {
+ super();
+ }
+
+ /**
+ * Constructor
+ * @param s message
+ */
+ public NoServerForRegionException(String s) {
+ super(s);
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/client/RegionOfflineException.java b/src/java/org/apache/hadoop/hbase/client/RegionOfflineException.java
new file mode 100644
index 0000000..ccbd592
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/RegionOfflineException.java
@@ -0,0 +1,36 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.hbase.RegionException;
+
+/** Thrown when a region is offline */
+public class RegionOfflineException extends RegionException {
+ private static final long serialVersionUID = 466008402L;
+ /** default constructor */
+ public RegionOfflineException() {
+ super();
+ }
+
+ /** @param s message */
+ public RegionOfflineException(String s) {
+ super(s);
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/client/RetriesExhaustedException.java b/src/java/org/apache/hadoop/hbase/client/RetriesExhaustedException.java
new file mode 100644
index 0000000..bdc768c
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/RetriesExhaustedException.java
@@ -0,0 +1,62 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Exception thrown by HTable methods when an attempt to do something (like
+ * commit changes) fails after the configured number of retries.
+ */
+public class RetriesExhaustedException extends IOException {
+ private static final long serialVersionUID = 1876775844L;
+ /**
+ * Create a new RetriesExhaustedException from the list of prior failures.
+ * @param serverName name of HRegionServer
+ * @param regionName name of region
+ * @param row The row we were pursuing when we ran out of retries
+ * @param numTries The number of tries we made
+ * @param exceptions List of exceptions that failed before giving up
+ */
+ public RetriesExhaustedException(String serverName, final byte [] regionName,
+ final byte [] row,
+ int numTries, List<Throwable> exceptions) {
+ super(getMessage(serverName, regionName, row, numTries, exceptions));
+ }
+
+
+ private static String getMessage(String serverName, final byte [] regionName,
+ final byte [] row,
+ int numTries, List<Throwable> exceptions) {
+ StringBuilder buffer = new StringBuilder("Trying to contact region server ");
+ buffer.append(serverName);
+ buffer.append(" for region ");
+ buffer.append(regionName == null? "": Bytes.toString(regionName));
+ buffer.append(", row '");
+ buffer.append(row == null? "": Bytes.toString(row));
+ buffer.append("', but failed after ");
+ buffer.append(numTries + 1);
+ buffer.append(" attempts.\nExceptions:\n");
+ for (Throwable t : exceptions) {
+ buffer.append(t.toString());
+ buffer.append("\n");
+ }
+ return buffer.toString();
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/client/RowLock.java b/src/java/org/apache/hadoop/hbase/client/RowLock.java
new file mode 100644
index 0000000..3c8c461
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/RowLock.java
@@ -0,0 +1,62 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+/**
+ * Holds row name and lock id.
+ */
+public class RowLock {
+ private byte [] row = null;
+ private long lockId = -1L;
+
+ /**
+ * Creates a RowLock from a row and lock id
+ * @param row
+ * @param lockId
+ */
+ public RowLock(final byte [] row, final long lockId) {
+ this.row = row;
+ this.lockId = lockId;
+ }
+
+ /**
+ * Creates a RowLock with only a lock id
+ * @param lockId
+ */
+ public RowLock(final long lockId) {
+ this.lockId = lockId;
+ }
+
+ /**
+ * Get the row for this RowLock
+ * @return the row
+ */
+ public byte [] getRow() {
+ return row;
+ }
+
+ /**
+ * Get the lock id from this RowLock
+ * @return the lock id
+ */
+ public long getLockId() {
+ return lockId;
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/client/Scanner.java b/src/java/org/apache/hadoop/hbase/client/Scanner.java
new file mode 100644
index 0000000..5f50f42
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/Scanner.java
@@ -0,0 +1,54 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.io.Closeable;
+import java.io.IOException;
+import org.apache.hadoop.hbase.io.RowResult;
+
+/**
+ * Interface for client-side scanning.
+ * Go to {@link HTable} to obtain instances.
+ */
+public interface Scanner extends Closeable, Iterable<RowResult> {
+ /**
+ * Grab the next row's worth of values. The scanner will return a RowResult
+ * that contains both the row's key and a map of byte[] column names to Cell
+ * value objects. The data returned contains only the most recent value for
+ * each column, and only values that are not newer than the target timestamp
+ * passed when the scanner was created.
+ * @return RowResult object if there is another row, null if the scanner is
+ * exhausted.
+ * @throws IOException
+ */
+ public RowResult next() throws IOException;
+
+ /**
+ * @param nbRows number of rows to return
+ * @return Between zero and <code>nbRows</code> RowResults
+ * @throws IOException
+ */
+ public RowResult [] next(int nbRows) throws IOException;
+
+ /**
+ * Closes the scanner and releases any resources it has allocated
+ */
+ public void close();
+}
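+// Usage sketch (illustrative only): fetching rows in batches through the
+// Scanner interface. The HTable.getScanner overload and column name are
+// assumptions; this also assumes next(int) returns an empty (never null)
+// array when exhausted, as the HTable client scanner does.
+//
+//   Scanner scanner = table.getScanner(new byte[][] { Bytes.toBytes("info:") });
+//   try {
+//     RowResult [] batch;
+//     while ((batch = scanner.next(25)).length > 0) {
+//       for (RowResult row : batch) {
+//         // ... process row ...
+//       }
+//     }
+//   } finally {
+//     scanner.close();
+//   }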
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/client/ScannerCallable.java b/src/java/org/apache/hadoop/hbase/client/ScannerCallable.java
new file mode 100644
index 0000000..c43598d
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/ScannerCallable.java
@@ -0,0 +1,138 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.filter.RowFilterInterface;
+import org.apache.hadoop.hbase.io.RowResult;
+
+
+/**
+ * Encapsulates scanner operations (open, next, close) so they can be retried
+ * by the connection's retry logic. Used by {@link Scanner}s made by {@link HTable}.
+ */
+public class ScannerCallable extends ServerCallable<RowResult[]> {
+ private long scannerId = -1L;
+ private boolean instantiated = false;
+ private boolean closed = false;
+ private final byte [][] columns;
+ private final long timestamp;
+ private final RowFilterInterface filter;
+ private int caching = 1;
+
+ /**
+ * @param connection
+ * @param tableName
+ * @param columns
+ * @param startRow
+ * @param timestamp
+ * @param filter
+ */
+ public ScannerCallable (HConnection connection, byte [] tableName, byte [][] columns,
+ byte [] startRow, long timestamp, RowFilterInterface filter) {
+ super(connection, tableName, startRow);
+ this.columns = columns;
+ this.timestamp = timestamp;
+ this.filter = filter;
+ }
+
+ /**
+ * @param reload
+ * @throws IOException
+ */
+ @Override
+ public void instantiateServer(boolean reload) throws IOException {
+ if (!instantiated || reload) {
+ super.instantiateServer(reload);
+ instantiated = true;
+ }
+ }
+
+ /**
+ * @see java.util.concurrent.Callable#call()
+ */
+ public RowResult[] call() throws IOException {
+ if (scannerId != -1L && closed) {
+ server.close(scannerId);
+ scannerId = -1L;
+ } else if (scannerId == -1L && !closed) {
+ // open the scanner
+ scannerId = openScanner();
+ } else {
+ RowResult [] rrs = server.next(scannerId, caching);
+ return rrs.length == 0 ? null : rrs;
+ }
+ return null;
+ }
+
+ protected long openScanner() throws IOException {
+ return server.openScanner(
+ this.location.getRegionInfo().getRegionName(), columns, row,
+ timestamp, filter);
+ }
+
+ protected byte [][] getColumns() {
+ return columns;
+ }
+
+ protected long getTimestamp() {
+ return timestamp;
+ }
+
+ protected RowFilterInterface getFilter() {
+ return filter;
+ }
+
+ /**
+ * Call this when the next invocation of call should close the scanner
+ */
+ public void setClose() {
+ closed = true;
+ }
+
+ /**
+ * @return the HRegionInfo for the current region
+ */
+ public HRegionInfo getHRegionInfo() {
+ if (!instantiated) {
+ return null;
+ }
+ return location.getRegionInfo();
+ }
+
+ /**
+ * Get the number of rows that will be fetched on next
+ * @return the number of rows for caching
+ */
+ public int getCaching() {
+ return caching;
+ }
+
+ /**
+ * Set the number of rows that will be fetched on next
+ * @param caching the number of rows for caching
+ */
+ public void setCaching(int caching) {
+ this.caching = caching;
+ }
+}
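+// Lifecycle sketch (illustrative only), matching how HTable's client scanner
+// and MetaScanner drive this callable through HConnection.getRegionServerWithRetries:
+//
+//   ScannerCallable s = new ScannerCallable(connection, tableName, columns,
+//       startRow, timestamp, filter);
+//   connection.getRegionServerWithRetries(s);   // first call opens the scanner
+//   s.setCaching(30);                           // rows fetched per next() call
+//   RowResult [] rows = connection.getRegionServerWithRetries(s);  // fetch
+//   s.setClose();
+//   connection.getRegionServerWithRetries(s);   // final call closes the scanner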
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/client/ScannerTimeoutException.java b/src/java/org/apache/hadoop/hbase/client/ScannerTimeoutException.java
new file mode 100644
index 0000000..7b31935
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/ScannerTimeoutException.java
@@ -0,0 +1,41 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.hbase.DoNotRetryIOException;
+
+/**
+ * Thrown when a scanner has timed out.
+ */
+public class ScannerTimeoutException extends DoNotRetryIOException {
+
+ private static final long serialVersionUID = 8788838690290688313L;
+
+ /** default constructor */
+ ScannerTimeoutException() {
+ super();
+ }
+
+ /** @param s */
+ ScannerTimeoutException(String s) {
+ super(s);
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/client/ServerCallable.java b/src/java/org/apache/hadoop/hbase/client/ServerCallable.java
new file mode 100644
index 0000000..a26a96a
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/ServerCallable.java
@@ -0,0 +1,81 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import java.io.IOException;
+import java.util.concurrent.Callable;
+
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+
+/**
+ * Abstract class that implements Callable, used by retryable actions.
+ * @param <T> the class that the ServerCallable handles
+ */
+public abstract class ServerCallable<T> implements Callable<T> {
+ protected final HConnection connection;
+ protected final byte [] tableName;
+ protected final byte [] row;
+ protected HRegionLocation location;
+ protected HRegionInterface server;
+
+ /**
+ * @param connection
+ * @param tableName
+ * @param row
+ */
+ public ServerCallable(HConnection connection, byte [] tableName, byte [] row) {
+ this.connection = connection;
+ this.tableName = tableName;
+ this.row = row;
+ }
+
+ /**
+ * Connect to the region server hosting the region that contains this callable's row.
+ * @param reload set this to true if the connection should re-locate the region
+ * @throws IOException
+ */
+ public void instantiateServer(boolean reload) throws IOException {
+ this.location = connection.getRegionLocation(tableName, row, reload);
+ this.server = connection.getHRegionConnection(location.getServerAddress());
+ }
+
+ /** @return the server name */
+ public String getServerName() {
+ if (location == null) {
+ return null;
+ }
+ return location.getServerAddress().toString();
+ }
+
+ /** @return the region name */
+ public byte[] getRegionName() {
+ if (location == null) {
+ return null;
+ }
+ return location.getRegionInfo().getRegionName();
+ }
+
+ /** @return the row */
+ public byte [] getRow() {
+ return row;
+ }
+}
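+// Usage sketch (illustrative only): a minimal anonymous subclass. This assumes
+// the retry helper (HConnection.getRegionServerWithRetries) invokes
+// instantiateServer() before call(), so 'location' and 'server' are set.
+//
+//   ServerCallable<String> where =
+//       new ServerCallable<String>(connection, tableName, row) {
+//     public String call() {
+//       return location.getServerAddress().toString();
+//     }
+//   };
+//   String hostingServer = connection.getRegionServerWithRetries(where);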
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/client/ServerConnection.java b/src/java/org/apache/hadoop/hbase/client/ServerConnection.java
new file mode 100644
index 0000000..0ea29fe
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/ServerConnection.java
@@ -0,0 +1,42 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.hbase.HRegionLocation;
+
+/**
+ * Used by master and region server, so that they do not need to wait for the
+ * cluster to be up to get a connection.
+ */
+public interface ServerConnection extends HConnection {
+
+ /**
+ * Set root region location in connection
+ * @param rootRegion
+ */
+ public void setRootRegionLocation(HRegionLocation rootRegion);
+
+ /**
+ * Unset the root region location in the connection. Called by
+ * ServerManager.processRegionClose.
+ */
+ public void unsetRootRegionLocation();
+}
diff --git a/src/java/org/apache/hadoop/hbase/client/ServerConnectionManager.java b/src/java/org/apache/hadoop/hbase/client/ServerConnectionManager.java
new file mode 100644
index 0000000..34bcd8b
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/ServerConnectionManager.java
@@ -0,0 +1,44 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.hbase.HBaseConfiguration;
+
+/**
+ * Used by server processes to obtain a ServerConnection, which exposes the
+ * setRootRegionLocation and unsetRootRegionLocation methods
+ */
+public class ServerConnectionManager extends HConnectionManager {
+ /*
+ * Not instantiable
+ */
+ private ServerConnectionManager() {}
+
+ /**
+ * Get the connection object for the instance specified by the configuration.
+ * If no current connection exists, create a new connection for that instance.
+ * @param conf
+ * @return ServerConnection object for the instance specified by the configuration
+ */
+ public static ServerConnection getConnection(HBaseConfiguration conf) {
+ return (ServerConnection) HConnectionManager.getConnection(conf);
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/client/UnmodifyableHColumnDescriptor.java b/src/java/org/apache/hadoop/hbase/client/UnmodifyableHColumnDescriptor.java
new file mode 100644
index 0000000..eee609c
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/UnmodifyableHColumnDescriptor.java
@@ -0,0 +1,89 @@
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.io.hfile.Compression;
+
+/**
+ * Immutable HColumnDescriptor
+ */
+public class UnmodifyableHColumnDescriptor extends HColumnDescriptor {
+
+ /**
+ * @param desc
+ */
+ public UnmodifyableHColumnDescriptor (final HColumnDescriptor desc) {
+ super(desc);
+ }
+
+ /**
+ * @see org.apache.hadoop.hbase.HColumnDescriptor#setValue(byte[], byte[])
+ */
+ @Override
+ public void setValue(byte[] key, byte[] value) {
+ throw new UnsupportedOperationException("HColumnDescriptor is read-only");
+ }
+
+ /**
+ * @see org.apache.hadoop.hbase.HColumnDescriptor#setValue(java.lang.String, java.lang.String)
+ */
+ @Override
+ public void setValue(String key, String value) {
+ throw new UnsupportedOperationException("HColumnDescriptor is read-only");
+ }
+
+ /**
+ * @see org.apache.hadoop.hbase.HColumnDescriptor#setMaxVersions(int)
+ */
+ @Override
+ public void setMaxVersions(int maxVersions) {
+ throw new UnsupportedOperationException("HColumnDescriptor is read-only");
+ }
+
+ /**
+ * @see org.apache.hadoop.hbase.HColumnDescriptor#setInMemory(boolean)
+ */
+ @Override
+ public void setInMemory(boolean inMemory) {
+ throw new UnsupportedOperationException("HColumnDescriptor is read-only");
+ }
+
+ /**
+ * @see org.apache.hadoop.hbase.HColumnDescriptor#setBlockCacheEnabled(boolean)
+ */
+ @Override
+ public void setBlockCacheEnabled(boolean blockCacheEnabled) {
+ throw new UnsupportedOperationException("HColumnDescriptor is read-only");
+ }
+
+ /**
+ * @see org.apache.hadoop.hbase.HColumnDescriptor#setMaxValueLength(int)
+ */
+ @Override
+ public void setMaxValueLength(int maxLength) {
+ throw new UnsupportedOperationException("HColumnDescriptor is read-only");
+ }
+
+ /**
+ * @see org.apache.hadoop.hbase.HColumnDescriptor#setTimeToLive(int)
+ */
+ @Override
+ public void setTimeToLive(int timeToLive) {
+ throw new UnsupportedOperationException("HColumnDescriptor is read-only");
+ }
+
+ /**
+ * @see org.apache.hadoop.hbase.HColumnDescriptor#setCompressionType(org.apache.hadoop.hbase.io.hfile.Compression.Algorithm)
+ */
+ @Override
+ public void setCompressionType(Compression.Algorithm type) {
+ throw new UnsupportedOperationException("HColumnDescriptor is read-only");
+ }
+
+ /**
+ * @see org.apache.hadoop.hbase.HColumnDescriptor#setMapFileIndexInterval(int)
+ */
+ @Override
+ public void setMapFileIndexInterval(int interval) {
+ throw new UnsupportedOperationException("HTableDescriptor is read-only");
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/client/UnmodifyableHRegionInfo.java b/src/java/org/apache/hadoop/hbase/client/UnmodifyableHRegionInfo.java
new file mode 100644
index 0000000..2519a07
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/UnmodifyableHRegionInfo.java
@@ -0,0 +1,51 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.hbase.HRegionInfo;
+
+class UnmodifyableHRegionInfo extends HRegionInfo {
+ /*
+ * Creates an unmodifyable copy of an HRegionInfo
+ *
+ * @param info
+ */
+ UnmodifyableHRegionInfo(HRegionInfo info) {
+ super(info);
+ this.tableDesc = new UnmodifyableHTableDescriptor(info.getTableDesc());
+ }
+
+ /**
+ * @param split set split status
+ */
+ @Override
+ public void setSplit(boolean split) {
+ throw new UnsupportedOperationException("HRegionInfo is read-only");
+ }
+
+ /**
+ * @param offLine set online - offline status
+ */
+ @Override
+ public void setOffline(boolean offLine) {
+ throw new UnsupportedOperationException("HRegionInfo is read-only");
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/client/UnmodifyableHTableDescriptor.java b/src/java/org/apache/hadoop/hbase/client/UnmodifyableHTableDescriptor.java
new file mode 100644
index 0000000..8d3e002
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/UnmodifyableHTableDescriptor.java
@@ -0,0 +1,132 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.client.tableindexed.IndexSpecification;
+
+/**
+ * Read-only table descriptor.
+ */
+public class UnmodifyableHTableDescriptor extends HTableDescriptor {
+ /** Default constructor */
+ public UnmodifyableHTableDescriptor() {
+ super();
+ }
+
+ /*
+ * Create an unmodifyable copy of an HTableDescriptor
+ * @param desc
+ */
+ UnmodifyableHTableDescriptor(final HTableDescriptor desc) {
+ super(desc.getName(), getUnmodifyableFamilies(desc), desc.getIndexes(), desc.getValues());
+ }
+
+ /*
+ * @param desc
+ * @return Families as unmodifiable array.
+ */
+ private static HColumnDescriptor[] getUnmodifyableFamilies(
+ final HTableDescriptor desc) {
+ HColumnDescriptor [] f = new HColumnDescriptor[desc.getFamilies().size()];
+ int i = 0;
+ for (HColumnDescriptor c: desc.getFamilies()) {
+ f[i++] = c;
+ }
+ return f;
+ }
+
+ /**
+ * Does NOT add a column family. This object is immutable.
+ * @param family HColumnDescriptor of family to add
+ */
+ @Override
+ public void addFamily(final HColumnDescriptor family) {
+ throw new UnsupportedOperationException("HTableDescriptor is read-only");
+ }
+
+ /**
+ * Does NOT remove a column family. This object is immutable.
+ * @param column name of the family to remove
+ * @return nothing; always throws UnsupportedOperationException
+ */
+ @Override
+ public HColumnDescriptor removeFamily(final byte [] column) {
+ throw new UnsupportedOperationException("HTableDescriptor is read-only");
+ }
+
+ /**
+ * @see org.apache.hadoop.hbase.HTableDescriptor#setInMemory(boolean)
+ */
+ @Override
+ public void setInMemory(boolean inMemory) {
+ throw new UnsupportedOperationException("HTableDescriptor is read-only");
+ }
+
+ /**
+ * @see org.apache.hadoop.hbase.HTableDescriptor#setReadOnly(boolean)
+ */
+ @Override
+ public void setReadOnly(boolean readOnly) {
+ throw new UnsupportedOperationException("HTableDescriptor is read-only");
+ }
+
+ /**
+ * @see org.apache.hadoop.hbase.HTableDescriptor#setValue(byte[], byte[])
+ */
+ @Override
+ public void setValue(byte[] key, byte[] value) {
+ throw new UnsupportedOperationException("HTableDescriptor is read-only");
+ }
+
+ /**
+ * @see org.apache.hadoop.hbase.HTableDescriptor#setValue(java.lang.String, java.lang.String)
+ */
+ @Override
+ public void setValue(String key, String value) {
+ throw new UnsupportedOperationException("HTableDescriptor is read-only");
+ }
+
+ /**
+ * @see org.apache.hadoop.hbase.HTableDescriptor#setMaxFileSize(long)
+ */
+ @Override
+ public void setMaxFileSize(long maxFileSize) {
+ throw new UnsupportedOperationException("HTableDescriptor is read-only");
+ }
+
+ /**
+ * @see org.apache.hadoop.hbase.HTableDescriptor#setMemcacheFlushSize(int)
+ */
+ @Override
+ public void setMemcacheFlushSize(int memcacheFlushSize) {
+ throw new UnsupportedOperationException("HTableDescriptor is read-only");
+ }
+
+ /**
+ * @see org.apache.hadoop.hbase.HTableDescriptor#addIndex(org.apache.hadoop.hbase.client.tableindexed.IndexSpecification)
+ */
+ @Override
+ public void addIndex(IndexSpecification index) {
+ throw new UnsupportedOperationException("HTableDescriptor is read-only");
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/client/tableindexed/IndexKeyGenerator.java b/src/java/org/apache/hadoop/hbase/client/tableindexed/IndexKeyGenerator.java
new file mode 100644
index 0000000..dae811e
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/tableindexed/IndexKeyGenerator.java
@@ -0,0 +1,29 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client.tableindexed;
+
+import java.util.Map;
+
+import org.apache.hadoop.io.Writable;
+
+public interface IndexKeyGenerator extends Writable {
+
+ byte [] createIndexKey(byte [] rowKey, Map<byte [], byte []> columns);
+}
diff --git a/src/java/org/apache/hadoop/hbase/client/tableindexed/IndexNotFoundException.java b/src/java/org/apache/hadoop/hbase/client/tableindexed/IndexNotFoundException.java
new file mode 100644
index 0000000..3e6169c
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/tableindexed/IndexNotFoundException.java
@@ -0,0 +1,47 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client.tableindexed;
+
+import java.io.IOException;
+
+/**
+ * Thrown when asking for an index that does not exist.
+ */
+public class IndexNotFoundException extends IOException {
+
+ private static final long serialVersionUID = 6533971528557000965L;
+
+ public IndexNotFoundException() {
+ super();
+ }
+
+ public IndexNotFoundException(String arg0) {
+ super(arg0);
+ }
+
+ public IndexNotFoundException(Throwable arg0) {
+ super(arg0.getMessage());
+ }
+
+ public IndexNotFoundException(String arg0, Throwable arg1) {
+ super(arg0+arg1.getMessage());
+ }
+
+}
diff --git a/src/java/org/apache/hadoop/hbase/client/tableindexed/IndexSpecification.java b/src/java/org/apache/hadoop/hbase/client/tableindexed/IndexSpecification.java
new file mode 100644
index 0000000..54f8c62
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/tableindexed/IndexSpecification.java
@@ -0,0 +1,190 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client.tableindexed;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.ObjectWritable;
+import org.apache.hadoop.io.Writable;
+
+/** Holds the specification for a single secondary index. */
+public class IndexSpecification implements Writable {
+
+ // Columns that are indexed (part of the indexRowKey)
+ private byte[][] indexedColumns;
+
+ // Constructs the index row key from the base row key and the indexed column values
+ private IndexKeyGenerator keyGenerator;
+
+ // Additional columns mapped into the indexed row. These will be available for
+ // filters when scanning the index.
+ private byte[][] additionalColumns;
+
+ private byte[][] allColumns;
+
+ // Id of this index, unique within a table.
+ private String indexId;
+
+ /** Construct an "simple" index spec for a single column.
+ * @param indexId
+ * @param indexedColumn
+ */
+ public IndexSpecification(String indexId, byte[] indexedColumn) {
+ this(indexId, new byte[][] { indexedColumn }, null,
+ new SimpleIndexKeyGenerator(indexedColumn));
+ }
+
+ /**
+ * Construct an index spec by specifying everything.
+ *
+ * @param indexId
+ * @param indexedColumns
+ * @param additionalColumns
+ * @param keyGenerator
+ */
+ public IndexSpecification(String indexId, byte[][] indexedColumns,
+ byte[][] additionalColumns, IndexKeyGenerator keyGenerator) {
+ this.indexId = indexId;
+ this.indexedColumns = indexedColumns;
+ this.additionalColumns = additionalColumns;
+ this.keyGenerator = keyGenerator;
+ this.makeAllColumns();
+ }
+
+ public IndexSpecification() {
+ // For writable
+ }
+
+ private void makeAllColumns() {
+ this.allColumns = new byte[indexedColumns.length
+ + (additionalColumns == null ? 0 : additionalColumns.length)][];
+ System.arraycopy(indexedColumns, 0, allColumns, 0, indexedColumns.length);
+ if (additionalColumns != null) {
+ System.arraycopy(additionalColumns, 0, allColumns, indexedColumns.length,
+ additionalColumns.length);
+ }
+ }
+
+ /**
+ * Get the indexedColumns.
+ *
+ * @return Return the indexedColumns.
+ */
+ public byte[][] getIndexedColumns() {
+ return indexedColumns;
+ }
+
+ /**
+ * Get the keyGenerator.
+ *
+ * @return Return the keyGenerator.
+ */
+ public IndexKeyGenerator getKeyGenerator() {
+ return keyGenerator;
+ }
+
+ /**
+ * Get the additionalColumns.
+ *
+ * @return Return the additionalColumns.
+ */
+ public byte[][] getAdditionalColumns() {
+ return additionalColumns;
+ }
+
+ /**
+ * Get the indexId.
+ *
+ * @return Return the indexId.
+ */
+ public String getIndexId() {
+ return indexId;
+ }
+
+ public byte[][] getAllColumns() {
+ return allColumns;
+ }
+
+ public boolean containsColumn(byte[] column) {
+ for (byte[] col : allColumns) {
+ if (Bytes.equals(column, col)) {
+ return true;
+ }
+ }
+ return false;
+ }
+
+ public byte[] getIndexedTableName(byte[] baseTableName) {
+ return Bytes.add(baseTableName, Bytes.toBytes("-" + indexId));
+ }
+
+ /** {@inheritDoc} */
+ public void readFields(DataInput in) throws IOException {
+ indexId = in.readUTF();
+ int numIndexedCols = in.readInt();
+ indexedColumns = new byte[numIndexedCols][];
+ for (int i = 0; i < numIndexedCols; i++) {
+ indexedColumns[i] = Bytes.readByteArray(in);
+ }
+ int numAdditionalCols = in.readInt();
+ additionalColumns = new byte[numAdditionalCols][];
+ for (int i = 0; i < numAdditionalCols; i++) {
+ additionalColumns[i] = Bytes.readByteArray(in);
+ }
+ makeAllColumns();
+ HBaseConfiguration conf = new HBaseConfiguration();
+ keyGenerator = (IndexKeyGenerator) ObjectWritable.readObject(in, conf);
+ }
+
+ /** {@inheritDoc} */
+ public void write(DataOutput out) throws IOException {
+ out.writeUTF(indexId);
+ out.writeInt(indexedColumns.length);
+ for (byte[] col : indexedColumns) {
+ Bytes.writeByteArray(out, col);
+ }
+ if (additionalColumns != null) {
+ out.writeInt(additionalColumns.length);
+ for (byte[] col : additionalColumns) {
+ Bytes.writeByteArray(out, col);
+ }
+ } else {
+ out.writeInt(0);
+ }
+ HBaseConfiguration conf = new HBaseConfiguration();
+ ObjectWritable
+ .writeObject(out, keyGenerator, IndexKeyGenerator.class, conf);
+ }
+
+ /** {@inheritDoc} */
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder();
+ sb.append("ID => ");
+ sb.append(indexId);
+ return sb.toString();
+ }
+
+
+}
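+// Construction sketch (illustrative only): a single-column index specification
+// using the SimpleIndexKeyGenerator chosen by the convenience constructor above.
+// The column name is hypothetical; columns use the "family:qualifier" form.
+//
+//   IndexSpecification byCity =
+//       new IndexSpecification("city", Bytes.toBytes("info:city"));
+//   // added to an HTableDescriptor via addIndex(byCity) before table creation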
diff --git a/src/java/org/apache/hadoop/hbase/client/tableindexed/IndexedTable.java b/src/java/org/apache/hadoop/hbase/client/tableindexed/IndexedTable.java
new file mode 100644
index 0000000..1cfa0ff
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/tableindexed/IndexedTable.java
@@ -0,0 +1,224 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client.tableindexed;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.Map.Entry;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HStoreKey;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Scanner;
+import org.apache.hadoop.hbase.client.transactional.TransactionalTable;
+import org.apache.hadoop.hbase.filter.RowFilterInterface;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.HbaseMapWritable;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/** HTable extended with indexed support. */
+public class IndexedTable extends TransactionalTable {
+
+ // FIXME, these belong elsewhere
+ static final byte[] INDEX_COL_FAMILY_NAME = Bytes.toBytes("__INDEX__");
+ static final byte[] INDEX_COL_FAMILY = Bytes.add(
+ INDEX_COL_FAMILY_NAME, new byte[] { HStoreKey.COLUMN_FAMILY_DELIMITER });
+ public static final byte[] INDEX_BASE_ROW_COLUMN = Bytes.add(
+ INDEX_COL_FAMILY, Bytes.toBytes("ROW"));
+
+ static final Log LOG = LogFactory.getLog(IndexedTable.class);
+
+ private Map<String, HTable> indexIdToTable = new HashMap<String, HTable>();
+
+ public IndexedTable(final HBaseConfiguration conf, final byte[] tableName)
+ throws IOException {
+ super(conf, tableName);
+
+ for (IndexSpecification spec : super.getTableDescriptor().getIndexes()) {
+ indexIdToTable.put(spec.getIndexId(), new HTable(conf, spec
+ .getIndexedTableName(tableName)));
+ }
+ }
+
+ /**
+ * Open up an indexed scanner. Results will come back in the indexed order,
+ * but will contain RowResults from the original table.
+ *
+ * @param indexId the id of the index to use
+ * @param indexStartRow (created from the IndexKeyGenerator)
+ * @param indexColumns in the index table
+ * @param indexFilter filter to run on the index table. This can only use
+ * columns that have been added to the index.
+ * @param baseColumns from the original table
+ * @return scanner
+ * @throws IOException
+ * @throws IndexNotFoundException
+ */
+ public Scanner getIndexedScanner(String indexId, final byte[] indexStartRow,
+ byte[][] indexColumns, final RowFilterInterface indexFilter,
+ final byte[][] baseColumns) throws IOException, IndexNotFoundException {
+ IndexSpecification indexSpec = super.getTableDescriptor().getIndex(indexId);
+ if (indexSpec == null) {
+ throw new IndexNotFoundException("Index " + indexId
+ + " not defined in table "
+ + super.getTableDescriptor().getNameAsString());
+ }
+ verifyIndexColumns(indexColumns, indexSpec);
+ // TODO, verify/remove index columns from baseColumns
+
+ HTable indexTable = indexIdToTable.get(indexId);
+
+ byte[][] allIndexColumns;
+ if (indexColumns != null) {
+ allIndexColumns = new byte[indexColumns.length + 1][];
+ System
+ .arraycopy(indexColumns, 0, allIndexColumns, 0, indexColumns.length);
+ allIndexColumns[indexColumns.length] = INDEX_BASE_ROW_COLUMN;
+ } else {
+ byte[][] allColumns = indexSpec.getAllColumns();
+ allIndexColumns = new byte[allColumns.length + 1][];
+ System.arraycopy(allColumns, 0, allIndexColumns, 0, allColumns.length);
+ allIndexColumns[allColumns.length] = INDEX_BASE_ROW_COLUMN;
+ }
+
+ Scanner indexScanner = indexTable.getScanner(allIndexColumns,
+ indexStartRow, indexFilter);
+
+ return new ScannerWrapper(indexScanner, baseColumns);
+ }
+
+ private void verifyIndexColumns(byte[][] requestedColumns,
+ IndexSpecification indexSpec) {
+ if (requestedColumns == null) {
+ return;
+ }
+ for (byte[] requestedColumn : requestedColumns) {
+ boolean found = false;
+ for (byte[] indexColumn : indexSpec.getAllColumns()) {
+ if (Bytes.equals(requestedColumn, indexColumn)) {
+ found = true;
+ break;
+ }
+ }
+ if (!found) {
+ throw new RuntimeException("Column [" + Bytes.toString(requestedColumn)
+ + "] not in index " + indexSpec.getIndexId());
+ }
+ }
+ }
+
+ private class ScannerWrapper implements Scanner {
+
+ private Scanner indexScanner;
+ private byte[][] columns;
+
+ public ScannerWrapper(Scanner indexScanner, byte[][] columns) {
+ this.indexScanner = indexScanner;
+ this.columns = columns;
+ }
+
+ /** {@inheritDoc} */
+ public RowResult next() throws IOException {
+ RowResult[] result = next(1);
+ if (result == null || result.length < 1)
+ return null;
+ return result[0];
+ }
+
+ /** {@inheritDoc} */
+ public RowResult[] next(int nbRows) throws IOException {
+ RowResult[] indexResult = indexScanner.next(nbRows);
+ if (indexResult == null) {
+ return null;
+ }
+ RowResult[] result = new RowResult[indexResult.length];
+ for (int i = 0; i < indexResult.length; i++) {
+ RowResult row = indexResult[i];
+ byte[] baseRow = row.get(INDEX_BASE_ROW_COLUMN).getValue();
+ LOG.debug("next index row [" + Bytes.toString(row.getRow())
+ + "] -> base row [" + Bytes.toString(baseRow) + "]");
+ HbaseMapWritable<byte[], Cell> colValues =
+ new HbaseMapWritable<byte[], Cell>();
+ if (columns != null && columns.length > 0) {
+ LOG.debug("Going to base table for remaining columns");
+ RowResult baseResult = IndexedTable.this.getRow(baseRow, columns);
+
+ if (baseResult != null) {
+ colValues.putAll(baseResult);
+ }
+ }
+ for (Entry<byte[], Cell> entry : row.entrySet()) {
+ byte[] col = entry.getKey();
+ if (HStoreKey.matchingFamily(INDEX_COL_FAMILY_NAME, col)) {
+ continue;
+ }
+ colValues.put(col, entry.getValue());
+ }
+ result[i] = new RowResult(baseRow, colValues);
+ }
+ return result;
+ }
+
+ /** {@inheritDoc} */
+ public void close() {
+ indexScanner.close();
+ }
+
+ /** {@inheritDoc} */
+ public Iterator<RowResult> iterator() {
+ // FIXME, copied from HTable.ClientScanner. Extract this to common base
+ // class?
+ return new Iterator<RowResult>() {
+ RowResult next = null;
+
+ public boolean hasNext() {
+ if (next == null) {
+ try {
+ next = ScannerWrapper.this.next();
+ return next != null;
+ } catch (IOException e) {
+ throw new RuntimeException(e);
+ }
+ }
+ return true;
+ }
+
+ public RowResult next() {
+ if (!hasNext()) {
+ return null;
+ }
+ RowResult temp = next;
+ next = null;
+ return temp;
+ }
+
+ public void remove() {
+ throw new UnsupportedOperationException();
+ }
+ };
+ }
+
+ }
+}
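+// Usage sketch (illustrative only): scanning a table through one of its
+// secondary indexes. Table, index and column names are hypothetical.
+//
+//   IndexedTable table = new IndexedTable(conf, Bytes.toBytes("people"));
+//   Scanner s = table.getIndexedScanner("city", HConstants.EMPTY_START_ROW,
+//       null,     // all index columns
+//       null,     // no filter on the index table
+//       new byte[][] { Bytes.toBytes("info:name") });   // columns from base table
+//   for (RowResult row : s) {
+//     // rows arrive in index order but are keyed by the base-table row
+//   }
+//   s.close();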
diff --git a/src/java/org/apache/hadoop/hbase/client/tableindexed/IndexedTableAdmin.java b/src/java/org/apache/hadoop/hbase/client/tableindexed/IndexedTableAdmin.java
new file mode 100644
index 0000000..9c753c8
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/tableindexed/IndexedTableAdmin.java
@@ -0,0 +1,97 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client.tableindexed;
+
+import java.io.IOException;
+import java.util.Set;
+import java.util.TreeSet;
+
+import org.apache.hadoop.hbase.ColumnNameParseException;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HStoreKey;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.MasterNotRunningException;
+import org.apache.hadoop.hbase.TableExistsException;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Extension of HBaseAdmin that creates indexed tables.
+ *
+ */
+public class IndexedTableAdmin extends HBaseAdmin {
+
+ /**
+ * Constructor
+ *
+ * @param conf Configuration object
+ * @throws MasterNotRunningException
+ */
+ public IndexedTableAdmin(HBaseConfiguration conf)
+ throws MasterNotRunningException {
+ super(conf);
+ }
+
+ /**
+ * Creates a new table
+ *
+ * @param desc table descriptor for table
+ *
+ * @throws IllegalArgumentException if the table name is reserved
+ * @throws MasterNotRunningException if master is not running
+ * @throws TableExistsException if the table already exists (with concurrent
+ * callers, the table may have been created between the test for existence
+ * and the attempt at creation).
+ * @throws IOException
+ */
+ @Override
+ public void createTable(HTableDescriptor desc) throws IOException {
+ super.createTable(desc);
+ this.createIndexTables(desc);
+ }
+
+ private void createIndexTables(HTableDescriptor tableDesc) throws IOException {
+ byte[] baseTableName = tableDesc.getName();
+ for (IndexSpecification indexSpec : tableDesc.getIndexes()) {
+ HTableDescriptor indexTableDesc = createIndexTableDesc(baseTableName,
+ indexSpec);
+ super.createTable(indexTableDesc);
+ }
+ }
+
+ private HTableDescriptor createIndexTableDesc(byte[] baseTableName,
+ IndexSpecification indexSpec) throws ColumnNameParseException {
+ HTableDescriptor indexTableDesc = new HTableDescriptor(indexSpec
+ .getIndexedTableName(baseTableName));
+ Set<byte[]> families = new TreeSet<byte[]>(Bytes.BYTES_COMPARATOR);
+ families.add(IndexedTable.INDEX_COL_FAMILY);
+ for (byte[] column : indexSpec.getAllColumns()) {
+ families.add(Bytes.add(HStoreKey.getFamily(column),
+ new byte[] { HStoreKey.COLUMN_FAMILY_DELIMITER }));
+ }
+
+ for (byte[] colFamily : families) {
+ indexTableDesc.addFamily(new HColumnDescriptor(colFamily));
+ }
+
+ return indexTableDesc;
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/client/tableindexed/ReverseByteArrayComparator.java b/src/java/org/apache/hadoop/hbase/client/tableindexed/ReverseByteArrayComparator.java
new file mode 100644
index 0000000..8af7b81
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/tableindexed/ReverseByteArrayComparator.java
@@ -0,0 +1,46 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client.tableindexed;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.WritableComparator;
+import org.apache.hadoop.hbase.util.Bytes;
+
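+/**
+ * A comparator over byte arrays that orders them in descending (reverse
+ * lexicographic) order by delegating to {@link Bytes#compareTo(byte[], byte[])}
+ * with its arguments swapped; useful, for example, when an index should be
+ * scanned from the highest value down.
+ */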
+public class ReverseByteArrayComparator implements WritableComparator<byte[]> {
+
+ /** {@inheritDoc} */
+ public int compare(byte[] o1, byte[] o2) {
+ return Bytes.compareTo(o2, o1);
+ }
+
+
+ /** {@inheritDoc} */
+ public void readFields(DataInput arg0) throws IOException {
+ // Nothing
+ }
+
+ /** {@inheritDoc} */
+ public void write(DataOutput arg0) throws IOException {
+ // Nothing
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/client/tableindexed/SimpleIndexKeyGenerator.java b/src/java/org/apache/hadoop/hbase/client/tableindexed/SimpleIndexKeyGenerator.java
new file mode 100644
index 0000000..4969417
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/tableindexed/SimpleIndexKeyGenerator.java
@@ -0,0 +1,59 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client.tableindexed;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Map;
+
+import org.apache.hadoop.hbase.util.Bytes;
+
+/** Creates index keys for a single column. The index key is the indexed column's value followed by the original row key.
+ *
+ */
+public class SimpleIndexKeyGenerator implements IndexKeyGenerator {
+
+ private byte [] column;
+
+ public SimpleIndexKeyGenerator(byte [] column) {
+ this.column = column;
+ }
+
+ public SimpleIndexKeyGenerator() {
+ // For Writable
+ }
+
+ /** {@inheritDoc} */
+ public byte[] createIndexKey(byte[] rowKey, Map<byte[], byte[]> columns) {
+ return Bytes.add(columns.get(column), rowKey);
+ }
+
+ /** {@inheritDoc} */
+ public void readFields(DataInput in) throws IOException {
+ column = Bytes.readByteArray(in);
+ }
+
+ /** {@inheritDoc} */
+ public void write(DataOutput out) throws IOException {
+ Bytes.writeByteArray(out, column);
+ }
+
+}
diff --git a/src/java/org/apache/hadoop/hbase/client/tableindexed/package.html b/src/java/org/apache/hadoop/hbase/client/tableindexed/package.html
new file mode 100644
index 0000000..36214f7
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/tableindexed/package.html
@@ -0,0 +1,46 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
+<html>
+
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<head />
+<body bgcolor="white">
+
+This package provides support for secondary indexing by maintaining a separate index table for each index.
+
+The IndexSpecification class provides the metadata for the index. This includes:
+<ul>
+<li> the columns that contribute to the index key,</li>
+<li> additional columns to put in the index table (and thus made available to filters on the index table), and</li>
+<li> an IndexKeyGenerator which constructs the index-row-key from the indexed column(s) and the original row.</li>
+</ul>
+
+IndexSpecifications can be added to a table's metadata (HTableDescriptor) before the table is constructed.
+Afterwards, updates and deletes to the original table will trigger updates to the index, and
+the indexes can be scanned using the API on IndexedTable.
+
+For a simple example, look at the unit test in org.apache.hadoop.hbase.client.tableindexed.
+
+<p> To enable indexing, modify hbase-site.xml to turn on the
+IndexedRegionServer. This is done by setting
+<i>hbase.regionserver.class</i> to
+<i>org.apache.hadoop.hbase.ipc.IndexedRegionInterface</i> and
+<i>hbase.regionserver.impl</i> to
+<i>org.apache.hadoop.hbase.regionserver.tableindexed.IndexedRegionServer</i>.
+
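+<p>
+A rough usage sketch follows. It is illustrative only: the IndexSpecification
+constructor and the HTableDescriptor method for attaching an index (shown here
+as <i>addIndex</i>) are assumed names, not confirmed API; what is taken from
+this package is that IndexedTableAdmin.createTable creates the base table plus
+one index table per attached IndexSpecification.
+
+<pre>
+// Hedged sketch -- the IndexSpecification constructor and addIndex() are assumed names.
+HTableDescriptor desc = new HTableDescriptor("people");
+desc.addFamily(new HColumnDescriptor("info:"));
+// Hypothetical: declare an index named "byName" keyed on the info:name column.
+desc.addIndex(new IndexSpecification("byName", Bytes.toBytes("info:name")));
+
+IndexedTableAdmin admin = new IndexedTableAdmin(new HBaseConfiguration());
+admin.createTable(desc); // creates the base table plus one table per index
+</pre>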
+</body>
+</html>
diff --git a/src/java/org/apache/hadoop/hbase/client/transactional/CommitUnsuccessfulException.java b/src/java/org/apache/hadoop/hbase/client/transactional/CommitUnsuccessfulException.java
new file mode 100644
index 0000000..7657363
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/transactional/CommitUnsuccessfulException.java
@@ -0,0 +1,56 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client.transactional;
+
+/** Thrown when a transaction cannot be committed.
+ *
+ */
+public class CommitUnsuccessfulException extends Exception {
+
+ private static final long serialVersionUID = 7062921444531109202L;
+
+ /** Default Constructor */
+ public CommitUnsuccessfulException() {
+ super();
+ }
+
+ /**
+ * @param arg0 message
+ * @param arg1 cause
+ */
+ public CommitUnsuccessfulException(String arg0, Throwable arg1) {
+ super(arg0, arg1);
+ }
+
+ /**
+ * @param arg0 message
+ */
+ public CommitUnsuccessfulException(String arg0) {
+ super(arg0);
+ }
+
+ /**
+ * @param arg0 cause
+ */
+ public CommitUnsuccessfulException(Throwable arg0) {
+ super(arg0);
+ }
+
+}
diff --git a/src/java/org/apache/hadoop/hbase/client/transactional/LocalTransactionLogger.java b/src/java/org/apache/hadoop/hbase/client/transactional/LocalTransactionLogger.java
new file mode 100644
index 0000000..1738315
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/transactional/LocalTransactionLogger.java
@@ -0,0 +1,71 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client.transactional;
+
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Random;
+
+/**
+ * A local, in-memory implementation of the transaction logger. It does not provide a global
+ * view of transaction status, so it cannot be relied on for recovery in case of failure.
+ *
+ */
+public class LocalTransactionLogger implements TransactionLogger {
+
+ private static LocalTransactionLogger instance;
+
+ /**
+ * Creates singleton if it does not exist
+ *
+ * @return reference to singleton
+ */
+ public synchronized static LocalTransactionLogger getInstance() {
+ if (instance == null) {
+ instance = new LocalTransactionLogger();
+ }
+ return instance;
+ }
+
+ private Random random = new Random();
+ private Map<Long, TransactionStatus> transactionIdToStatusMap = Collections
+ .synchronizedMap(new HashMap<Long, TransactionStatus>());
+
+ private LocalTransactionLogger() {
+ // Enforce singleton
+ }
+
+ /** @return a random long id, to minimize the possibility of collision */
+ public long createNewTransactionLog() {
+ long id = random.nextLong();
+ transactionIdToStatusMap.put(id, TransactionStatus.PENDING);
+ return id;
+ }
+
+ public TransactionStatus getStatusForTransaction(final long transactionId) {
+ return transactionIdToStatusMap.get(transactionId);
+ }
+
+ public void setStatusForTransaction(final long transactionId,
+ final TransactionStatus status) {
+ transactionIdToStatusMap.put(transactionId, status);
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/client/transactional/TransactionLogger.java b/src/java/org/apache/hadoop/hbase/client/transactional/TransactionLogger.java
new file mode 100644
index 0000000..5ea321a
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/transactional/TransactionLogger.java
@@ -0,0 +1,59 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client.transactional;
+
+/**
+ * Simple interface used to provide a log about transaction status. Written to
+ * by the client, and read by regionservers in case of failure.
+ *
+ */
+public interface TransactionLogger {
+
+ /** Transaction status values */
+ enum TransactionStatus {
+ /** Transaction is pending */
+ PENDING,
+ /** Transaction was committed */
+ COMMITTED,
+ /** Transaction was aborted */
+ ABORTED
+ }
+
+ /**
+ * Create a new transaction log. Return the transaction's globally unique id.
+ * The log's initial status should be PENDING.
+ *
+ * @return transaction id
+ */
+ long createNewTransactionLog();
+
+ /**
+ * @param transactionId
+ * @return transaction status
+ */
+ TransactionStatus getStatusForTransaction(long transactionId);
+
+ /**
+ * @param transactionId
+ * @param status
+ */
+ void setStatusForTransaction(long transactionId, TransactionStatus status);
+
+}
diff --git a/src/java/org/apache/hadoop/hbase/client/transactional/TransactionManager.java b/src/java/org/apache/hadoop/hbase/client/transactional/TransactionManager.java
new file mode 100644
index 0000000..766e506
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/transactional/TransactionManager.java
@@ -0,0 +1,152 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client.transactional;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.client.HConnection;
+import org.apache.hadoop.hbase.client.HConnectionManager;
+import org.apache.hadoop.hbase.ipc.TransactionalRegionInterface;
+import org.apache.hadoop.ipc.RemoteException;
+
+/**
+ * Transaction Manager. Responsible for committing transactions.
+ *
+ */
+public class TransactionManager {
+ static final Log LOG = LogFactory.getLog(TransactionManager.class);
+
+ private final HConnection connection;
+ private final TransactionLogger transactionLogger;
+
+ /**
+ * @param conf
+ */
+ public TransactionManager(final HBaseConfiguration conf) {
+ this(LocalTransactionLogger.getInstance(), conf);
+ }
+
+ /**
+ * @param transactionLogger
+ * @param conf
+ */
+ public TransactionManager(final TransactionLogger transactionLogger,
+ final HBaseConfiguration conf) {
+ this.transactionLogger = transactionLogger;
+ connection = HConnectionManager.getConnection(conf);
+ }
+
+ /**
+ * Called to start a transaction.
+ *
+ * @return new transaction state
+ */
+ public TransactionState beginTransaction() {
+ long transactionId = transactionLogger.createNewTransactionLog();
+ LOG.debug("Beginning transaction " + transactionId);
+ return new TransactionState(transactionId);
+ }
+
+ /**
+ * Try and commit a transaction.
+ *
+ * @param transactionState
+ * @throws IOException
+ * @throws CommitUnsuccessfulException
+ */
+ public void tryCommit(final TransactionState transactionState)
+ throws CommitUnsuccessfulException, IOException {
+ LOG.debug("Attempting to commit transaction: " + transactionState.toString());
+
+ try {
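+ // Phase 1: ask each participating region whether it can commit (collect votes).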
+ for (HRegionLocation location : transactionState
+ .getParticipatingRegions()) {
+ TransactionalRegionInterface transactionalRegionServer = (TransactionalRegionInterface) connection
+ .getHRegionConnection(location.getServerAddress());
+ boolean canCommit = transactionalRegionServer.commitRequest(location
+ .getRegionInfo().getRegionName(), transactionState
+ .getTransactionId());
+ if (LOG.isTraceEnabled()) {
+ LOG.trace("Region ["
+ + location.getRegionInfo().getRegionNameAsString() + "] votes "
+ + (canCommit ? "to commit" : "to abort") + " transaction "
+ + transactionState.getTransactionId());
+ }
+
+ if (!canCommit) {
+ LOG.debug("Aborting [" + transactionState.getTransactionId() + "]");
+ abort(transactionState, location);
+ throw new CommitUnsuccessfulException();
+ }
+ }
+
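+ // Phase 2: all regions voted to commit; record the decision and tell each region to commit.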
+ LOG.debug("Committing [" + transactionState.getTransactionId() + "]");
+
+ transactionLogger.setStatusForTransaction(transactionState
+ .getTransactionId(), TransactionLogger.TransactionStatus.COMMITTED);
+
+ for (HRegionLocation location : transactionState
+ .getParticipatingRegions()) {
+ TransactionalRegionInterface transactionalRegionServer = (TransactionalRegionInterface) connection
+ .getHRegionConnection(location.getServerAddress());
+ transactionalRegionServer.commit(location.getRegionInfo()
+ .getRegionName(), transactionState.getTransactionId());
+ }
+ } catch (RemoteException e) {
+ LOG.debug("Commit of transaction [" + transactionState.getTransactionId()
+ + "] was unsuccessful", e);
+ // FIXME, think about the what ifs
+ throw new CommitUnsuccessfulException(e);
+ }
+ // Tran log can be deleted now ...
+ }
+
+ /**
+ * Abort a transaction.
+ *
+ * @param transactionState
+ * @throws IOException
+ */
+ public void abort(final TransactionState transactionState) throws IOException {
+ abort(transactionState, null);
+ }
+
+ private void abort(final TransactionState transactionState,
+ final HRegionLocation locationToIgnore) throws IOException {
+ transactionLogger.setStatusForTransaction(transactionState
+ .getTransactionId(), TransactionLogger.TransactionStatus.ABORTED);
+
+ for (HRegionLocation location : transactionState.getParticipatingRegions()) {
+ if (locationToIgnore != null && location.equals(locationToIgnore)) {
+ continue;
+ }
+
+ TransactionalRegionInterface transactionalRegionServer = (TransactionalRegionInterface) connection
+ .getHRegionConnection(location.getServerAddress());
+
+ transactionalRegionServer.abort(location.getRegionInfo().getRegionName(),
+ transactionState.getTransactionId());
+ }
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/client/transactional/TransactionScannerCallable.java b/src/java/org/apache/hadoop/hbase/client/transactional/TransactionScannerCallable.java
new file mode 100644
index 0000000..081068f
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/transactional/TransactionScannerCallable.java
@@ -0,0 +1,51 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client.transactional;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.client.HConnection;
+import org.apache.hadoop.hbase.client.ScannerCallable;
+import org.apache.hadoop.hbase.filter.RowFilterInterface;
+import org.apache.hadoop.hbase.ipc.TransactionalRegionInterface;
+
+class TransactionScannerCallable extends ScannerCallable {
+
+ private TransactionState transactionState;
+
+ TransactionScannerCallable(final TransactionState transactionState,
+ final HConnection connection, final byte[] tableName,
+ final byte[][] columns, final byte[] startRow, final long timestamp,
+ final RowFilterInterface filter) {
+ super(connection, tableName, columns, startRow, timestamp, filter);
+ this.transactionState = transactionState;
+ }
+
+ @Override
+ protected long openScanner() throws IOException {
+ if (transactionState.addRegion(location)) {
+ ((TransactionalRegionInterface) server).beginTransaction(transactionState
+ .getTransactionId(), location.getRegionInfo().getRegionName());
+ }
+ return ((TransactionalRegionInterface) server).openScanner(transactionState
+ .getTransactionId(), this.location.getRegionInfo().getRegionName(),
+ getColumns(), row, getTimestamp(), getFilter());
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/client/transactional/TransactionState.java b/src/java/org/apache/hadoop/hbase/client/transactional/TransactionState.java
new file mode 100644
index 0000000..8c2f980
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/transactional/TransactionState.java
@@ -0,0 +1,78 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client.transactional;
+
+import java.util.HashSet;
+import java.util.Set;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HRegionLocation;
+
+/**
+ * Holds client-side transaction information. Clients use it as an opaque
+ * object passed around to transaction operations.
+ *
+ */
+public class TransactionState {
+ static final Log LOG = LogFactory.getLog(TransactionState.class);
+
+ private final long transactionId;
+
+ private Set<HRegionLocation> participatingRegions = new HashSet<HRegionLocation>();
+
+ TransactionState(final long transactionId) {
+ this.transactionId = transactionId;
+ }
+
+ boolean addRegion(final HRegionLocation hregion) {
+ boolean added = participatingRegions.add(hregion);
+
+ if (added) {
+ LOG.debug("Adding new hregion ["
+ + hregion.getRegionInfo().getRegionNameAsString()
+ + "] to transaction [" + transactionId + "]");
+ }
+
+ return added;
+ }
+
+ Set<HRegionLocation> getParticipatingRegions() {
+ return participatingRegions;
+ }
+
+ /**
+ * Get the transactionId.
+ *
+ * @return Return the transactionId.
+ */
+ public long getTransactionId() {
+ return transactionId;
+ }
+
+ /**
+ * @see java.lang.Object#toString()
+ */
+ @Override
+ public String toString() {
+ return "id: " + transactionId + ", participants: "
+ + participatingRegions.size();
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/client/transactional/TransactionalTable.java b/src/java/org/apache/hadoop/hbase/client/transactional/TransactionalTable.java
new file mode 100644
index 0000000..fb5aae0
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/transactional/TransactionalTable.java
@@ -0,0 +1,428 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client.transactional;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.client.HConnection;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Scanner;
+import org.apache.hadoop.hbase.client.ScannerCallable;
+import org.apache.hadoop.hbase.client.ServerCallable;
+import org.apache.hadoop.hbase.filter.RowFilterInterface;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.ipc.TransactionalRegionInterface;
+
+/**
+ * Table with transactional support.
+ *
+ */
+public class TransactionalTable extends HTable {
+
+ /**
+ * @param conf
+ * @param tableName
+ * @throws IOException
+ */
+ public TransactionalTable(final HBaseConfiguration conf,
+ final String tableName) throws IOException {
+ super(conf, tableName);
+ }
+
+ /**
+ * @param conf
+ * @param tableName
+ * @throws IOException
+ */
+ public TransactionalTable(final HBaseConfiguration conf,
+ final byte[] tableName) throws IOException {
+ super(conf, tableName);
+ }
+
+ private static abstract class TransactionalServerCallable<T> extends
+ ServerCallable<T> {
+ protected TransactionState transactionState;
+
+ protected TransactionalRegionInterface getTransactionServer() {
+ return (TransactionalRegionInterface) server;
+ }
+
+ protected void recordServer() throws IOException {
+ if (transactionState.addRegion(location)) {
+ getTransactionServer().beginTransaction(
+ transactionState.getTransactionId(),
+ location.getRegionInfo().getRegionName());
+ }
+ }
+
+ /**
+ * @param connection
+ * @param tableName
+ * @param row
+ * @param transactionState
+ */
+ public TransactionalServerCallable(final HConnection connection,
+ final byte[] tableName, final byte[] row,
+ final TransactionState transactionState) {
+ super(connection, tableName, row);
+ this.transactionState = transactionState;
+ }
+
+ }
+
+ /**
+ * Get a single value for the specified row and column
+ *
+ * @param transactionState
+ * @param row row key
+ * @param column column name
+ * @return value for specified row/column
+ * @throws IOException
+ */
+ public Cell get(final TransactionState transactionState, final byte[] row,
+ final byte[] column) throws IOException {
+ return super.getConnection().getRegionServerWithRetries(
+ new TransactionalServerCallable<Cell>(super.getConnection(), super
+ .getTableName(), row, transactionState) {
+ public Cell call() throws IOException {
+ recordServer();
+ return getTransactionServer().get(
+ transactionState.getTransactionId(),
+ location.getRegionInfo().getRegionName(), row, column);
+ }
+ });
+ }
+
+ /**
+ * Get the specified number of versions of the specified row and column
+ *
+ * @param transactionState
+ * @param row - row key
+ * @param column - column name
+ * @param numVersions - number of versions to retrieve
+ * @return - array of cell values
+ * @throws IOException
+ */
+ public Cell[] get(final TransactionState transactionState, final byte[] row,
+ final byte[] column, final int numVersions) throws IOException {
+ Cell[] values = null;
+ values = super.getConnection().getRegionServerWithRetries(
+ new TransactionalServerCallable<Cell[]>(super.getConnection(), super
+ .getTableName(), row, transactionState) {
+ public Cell[] call() throws IOException {
+ recordServer();
+ return getTransactionServer().get(
+ transactionState.getTransactionId(),
+ location.getRegionInfo().getRegionName(), row, column,
+ numVersions);
+ }
+ });
+
+ return values;
+ }
+
+ /**
+ * Get the specified number of versions of the specified row and column with
+ * the specified timestamp.
+ *
+ * @param transactionState
+ * @param row - row key
+ * @param column - column name
+ * @param timestamp - timestamp
+ * @param numVersions - number of versions to retrieve
+ * @return - array of values that match the above criteria
+ * @throws IOException
+ */
+ public Cell[] get(final TransactionState transactionState, final byte[] row,
+ final byte[] column, final long timestamp, final int numVersions)
+ throws IOException {
+ Cell[] values = null;
+ values = super.getConnection().getRegionServerWithRetries(
+ new TransactionalServerCallable<Cell[]>(super.getConnection(), super
+ .getTableName(), row, transactionState) {
+ public Cell[] call() throws IOException {
+ recordServer();
+ return getTransactionServer().get(
+ transactionState.getTransactionId(),
+ location.getRegionInfo().getRegionName(), row, column,
+ timestamp, numVersions);
+ }
+ });
+
+ return values;
+ }
+
+ /**
+ * Get all the data for the specified row at the latest timestamp
+ *
+ * @param transactionState
+ * @param row row key
+ * @return RowResult is empty if row does not exist.
+ * @throws IOException
+ */
+ public RowResult getRow(final TransactionState transactionState,
+ final byte[] row) throws IOException {
+ return getRow(transactionState, row, HConstants.LATEST_TIMESTAMP);
+ }
+
+ /**
+ * Get all the data for the specified row at a specified timestamp
+ *
+ * @param transactionState
+ * @param row row key
+ * @param ts timestamp
+ * @return RowResult is empty if row does not exist.
+ * @throws IOException
+ */
+ public RowResult getRow(final TransactionState transactionState,
+ final byte[] row, final long ts) throws IOException {
+ return super.getConnection().getRegionServerWithRetries(
+ new TransactionalServerCallable<RowResult>(super.getConnection(), super
+ .getTableName(), row, transactionState) {
+ public RowResult call() throws IOException {
+ recordServer();
+ return getTransactionServer().getRow(
+ transactionState.getTransactionId(),
+ location.getRegionInfo().getRegionName(), row, ts);
+ }
+ });
+ }
+
+ /**
+ * Get selected columns for the specified row at the latest timestamp
+ *
+ * @param transactionState
+ * @param row row key
+ * @param columns Array of column names you want to retrieve.
+ * @return RowResult is empty if row does not exist.
+ * @throws IOException
+ */
+ public RowResult getRow(final TransactionState transactionState,
+ final byte[] row, final byte[][] columns) throws IOException {
+ return getRow(transactionState, row, columns, HConstants.LATEST_TIMESTAMP);
+ }
+
+ /**
+ * Get selected columns for the specified row at a specified timestamp
+ *
+ * @param transactionState
+ * @param row row key
+ * @param columns Array of column names you want to retrieve.
+ * @param ts timestamp
+ * @return RowResult is empty if row does not exist.
+ * @throws IOException
+ */
+ public RowResult getRow(final TransactionState transactionState,
+ final byte[] row, final byte[][] columns, final long ts)
+ throws IOException {
+ return super.getConnection().getRegionServerWithRetries(
+ new TransactionalServerCallable<RowResult>(super.getConnection(), super
+ .getTableName(), row, transactionState) {
+ public RowResult call() throws IOException {
+ recordServer();
+ return getTransactionServer().getRow(
+ transactionState.getTransactionId(),
+ location.getRegionInfo().getRegionName(), row, columns, ts);
+ }
+ });
+ }
+
+ /**
+ * Delete all cells that match the passed row and whose timestamp is equal-to
+ * or older than the passed timestamp.
+ *
+ * @param transactionState
+ * @param row Row to update
+ * @param ts Delete all cells of the same timestamp or older.
+ * @throws IOException
+ */
+ public void deleteAll(final TransactionState transactionState,
+ final byte[] row, final long ts) throws IOException {
+ super.getConnection().getRegionServerWithRetries(
+ new TransactionalServerCallable<Boolean>(super.getConnection(), super
+ .getTableName(), row, transactionState) {
+ public Boolean call() throws IOException {
+ recordServer();
+ getTransactionServer().deleteAll(
+ transactionState.getTransactionId(),
+ location.getRegionInfo().getRegionName(), row, ts);
+ return null;
+ }
+ });
+ }
+
+ /**
+ * Get a scanner on the current table starting at first row. Return the
+ * specified columns.
+ *
+ * @param transactionState
+ * @param columns columns to scan. If column name is a column family, all
+ * columns of the specified column family are returned. It's also possible to
+ * pass a regex in the column qualifier. A column qualifier is judged to be a
+ * regex if it contains at least one of the following characters:
+ * <code>\+|^&*$[]]}{)(</code>.
+ * @return scanner
+ * @throws IOException
+ */
+ public Scanner getScanner(final TransactionState transactionState,
+ final byte[][] columns) throws IOException {
+ return getScanner(transactionState, columns, HConstants.EMPTY_START_ROW,
+ HConstants.LATEST_TIMESTAMP, null);
+ }
+
+ /**
+ * Get a scanner on the current table starting at the specified row. Return
+ * the specified columns.
+ *
+ * @param transactionState
+ * @param columns columns to scan. If column name is a column family, all
+ * columns of the specified column family are returned. It's also possible to
+ * pass a regex in the column qualifier. A column qualifier is judged to be a
+ * regex if it contains at least one of the following characters:
+ * <code>\+|^&*$[]]}{)(</code>.
+ * @param startRow starting row in table to scan
+ * @return scanner
+ * @throws IOException
+ */
+ public Scanner getScanner(final TransactionState transactionState,
+ final byte[][] columns, final byte[] startRow) throws IOException {
+ return getScanner(transactionState, columns, startRow,
+ HConstants.LATEST_TIMESTAMP, null);
+ }
+
+ /**
+ * Get a scanner on the current table starting at the specified row. Return
+ * the specified columns.
+ *
+ * @param transactionState
+ * @param columns columns to scan. If column name is a column family, all
+ * columns of the specified column family are returned. It's also possible to
+ * pass a regex in the column qualifier. A column qualifier is judged to be a
+ * regex if it contains at least one of the following characters:
+ * <code>\+|^&*$[]]}{)(</code>.
+ * @param startRow starting row in table to scan
+ * @param timestamp only return results whose timestamp <= this value
+ * @return scanner
+ * @throws IOException
+ */
+ public Scanner getScanner(final TransactionState transactionState,
+ final byte[][] columns, final byte[] startRow, final long timestamp)
+ throws IOException {
+ return getScanner(transactionState, columns, startRow, timestamp, null);
+ }
+
+ /**
+ * Get a scanner on the current table starting at the specified row. Return
+ * the specified columns.
+ *
+ * @param transactionState
+ * @param columns columns to scan. If column name is a column family, all
+ * columns of the specified column family are returned. It's also possible to
+ * pass a regex in the column qualifier. A column qualifier is judged to be a
+ * regex if it contains at least one of the following characters:
+ * <code>\+|^&*$[]]}{)(</code>.
+ * @param startRow starting row in table to scan
+ * @param filter a row filter using row-key regexp and/or column data filter.
+ * @return scanner
+ * @throws IOException
+ */
+ public Scanner getScanner(final TransactionState transactionState,
+ final byte[][] columns, final byte[] startRow,
+ final RowFilterInterface filter) throws IOException {
+ return getScanner(transactionState, columns, startRow,
+ HConstants.LATEST_TIMESTAMP, filter);
+ }
+
+ /**
+ * Get a scanner on the current table starting at the specified row. Return
+ * the specified columns.
+ *
+ * @param transactionState
+ * @param columns columns to scan. If column name is a column family, all
+ * columns of the specified column family are returned. It's also possible to
+ * pass a regex in the column qualifier. A column qualifier is judged to be a
+ * regex if it contains at least one of the following characters:
+ * <code>\+|^&*$[]]}{)(</code>.
+ * @param startRow starting row in table to scan
+ * @param timestamp only return results whose timestamp <= this value
+ * @param filter a row filter using row-key regexp and/or column data filter.
+ * @return scanner
+ * @throws IOException
+ */
+ public Scanner getScanner(final TransactionState transactionState,
+ final byte[][] columns, final byte[] startRow, final long timestamp,
+ final RowFilterInterface filter) throws IOException {
+ ClientScanner scanner = new TransactionalClientScanner(transactionState, columns, startRow,
+ timestamp, filter);
+ scanner.initialize();
+ return scanner;
+ }
+
+ /**
+ * Commit a BatchUpdate to the table.
+ *
+ * @param transactionState
+ * @param batchUpdate
+ * @throws IOException
+ */
+ public synchronized void commit(final TransactionState transactionState,
+ final BatchUpdate batchUpdate) throws IOException {
+ super.getConnection().getRegionServerWithRetries(
+ new TransactionalServerCallable<Boolean>(super.getConnection(), super
+ .getTableName(), batchUpdate.getRow(), transactionState) {
+ public Boolean call() throws IOException {
+ recordServer();
+ getTransactionServer().batchUpdate(
+ transactionState.getTransactionId(),
+ location.getRegionInfo().getRegionName(), batchUpdate);
+ return null;
+ }
+ });
+ }
+
+ protected class TransactionalClientScanner extends HTable.ClientScanner {
+
+ private TransactionState transactionState;
+
+ protected TransactionalClientScanner(
+ final TransactionState transactionState, final byte[][] columns,
+ final byte[] startRow, final long timestamp,
+ final RowFilterInterface filter) {
+ super(columns, startRow, timestamp, filter);
+ this.transactionState = transactionState;
+ }
+
+ @Override
+ protected ScannerCallable getScannerCallable(
+ final byte[] localStartKey, int caching) {
+ TransactionScannerCallable t =
+ new TransactionScannerCallable(transactionState, getConnection(),
+ getTableName(), getColumns(), localStartKey, getTimestamp(),
+ getFilter());
+ t.setCaching(caching);
+ return t;
+ }
+ }
+
+}
diff --git a/src/java/org/apache/hadoop/hbase/client/transactional/UnknownTransactionException.java b/src/java/org/apache/hadoop/hbase/client/transactional/UnknownTransactionException.java
new file mode 100644
index 0000000..66f2bc5
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/transactional/UnknownTransactionException.java
@@ -0,0 +1,43 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client.transactional;
+
+import org.apache.hadoop.hbase.DoNotRetryIOException;
+
+/**
+ * Thrown if a region server is passed an unknown transaction id
+ */
+public class UnknownTransactionException extends DoNotRetryIOException {
+
+ private static final long serialVersionUID = 698575374929591099L;
+
+ /** constructor */
+ public UnknownTransactionException() {
+ super();
+ }
+
+ /**
+ * Constructor
+ * @param s message
+ */
+ public UnknownTransactionException(String s) {
+ super(s);
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/client/transactional/package.html b/src/java/org/apache/hadoop/hbase/client/transactional/package.html
new file mode 100644
index 0000000..357425c
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/client/transactional/package.html
@@ -0,0 +1,61 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
+<html>
+
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<head />
+<body bgcolor="white">
+
+This package provides support for atomic transactions. Transactions can
+span multiple regions. Transaction writes are applied when committing a
+transaction. At commit time, the transaction is examined to see if it
+can be applied while still maintaining atomicity. This is done by
+looking for conflicts with the transactions that committed while the
+current transaction was running. This technique is known as optimistic
+concurrency control (OCC) because it relies on the assumption that
+transactions will mostly not have conflicts with each other.
+
+<p>
+For more details on OCC, see the paper <i> On Optimistic Methods for Concurrency Control </i>
+by Kung and Robinson available
+<a href=http://www.seas.upenn.edu/~zives/cis650/papers/opt-cc.pdf> here </a>.
+
+<p> To enable transactions, modify hbase-site.xml to turn on the
+TransactionalRegionServer. This is done by setting
+<i>hbase.regionserver.class</i> to
+<i>org.apache.hadoop.hbase.ipc.TransactionalRegionInterface</i> and
+<i>hbase.regionserver.impl</i> to
+<i>org.apache.hadoop.hbase.regionserver.transactional.TransactionalRegionServer</i>.
+
+<p>
+The read set claimed by a transactional scanner is determined from the start and
+ end keys which the scanner is opened with.
+
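+<p>
+A minimal client-side sketch based on the classes in this package (the table
+name, row key, and column names are made up for illustration):
+
+<pre>
+HBaseConfiguration conf = new HBaseConfiguration();
+TransactionManager manager = new TransactionManager(conf);
+TransactionalTable table = new TransactionalTable(conf, "accounts");
+
+TransactionState tx = manager.beginTransaction();
+try {
+  // Read within the transaction.
+  Cell balance = table.get(tx, Bytes.toBytes("row1"), Bytes.toBytes("info:balance"));
+
+  // Buffer a write within the transaction.
+  BatchUpdate update = new BatchUpdate(Bytes.toBytes("row1"));
+  update.put(Bytes.toBytes("info:balance"), Bytes.toBytes("100"));
+  table.commit(tx, update);
+
+  // Two-phase commit across all participating regions.
+  manager.tryCommit(tx);
+} catch (CommitUnsuccessfulException e) {
+  // A conflicting transaction won; the commit was not applied. Retry or give up.
+}
+</pre>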
+
+
+<h3> Known Issues </h3>
+
+Recovery in the face of region server failure
+is not fully implemented. Thus, you cannot rely on the transactional
+properties in the face of node failure.
+
+
+
+
+</body>
+</html>
diff --git a/src/java/org/apache/hadoop/hbase/filter/ColumnValueFilter.java b/src/java/org/apache/hadoop/hbase/filter/ColumnValueFilter.java
new file mode 100644
index 0000000..51a4cbd
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/filter/ColumnValueFilter.java
@@ -0,0 +1,294 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.List;
+import java.util.SortedMap;
+
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.io.ObjectWritable;
+
+/**
+ * This filter is used to filter based on the value of a given column. It takes
+ * an operator (equal, greater, not equal, etc) and either a byte [] value or a
+ * byte [] comparator. If we have a byte [] value then we just do a
+ * lexicographic compare. If this is not sufficient (e.g. you want to deserialize
+ * a long and then compare it to a fixed long value), then you can pass in your
+ * own comparator instead.
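+ *
+ * <p>A usage sketch (column and value names, and the exact HTable scanner
+ * overload, are illustrative assumptions):
+ * <pre>
+ * RowFilterInterface filter = new ColumnValueFilter(
+ *     Bytes.toBytes("info:status"), ColumnValueFilter.CompareOp.EQUAL,
+ *     Bytes.toBytes("active"));
+ * Scanner scanner = table.getScanner(
+ *     new byte[][] { Bytes.toBytes("info:") }, HConstants.EMPTY_START_ROW, filter);
+ * </pre>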
+ */
+public class ColumnValueFilter implements RowFilterInterface {
+ /** Comparison operators. */
+ public enum CompareOp {
+ /** less than */
+ LESS,
+ /** less than or equal to */
+ LESS_OR_EQUAL,
+ /** equals */
+ EQUAL,
+ /** not equal */
+ NOT_EQUAL,
+ /** greater than or equal to */
+ GREATER_OR_EQUAL,
+ /** greater than */
+ GREATER;
+ }
+
+ private byte[] columnName;
+ private CompareOp compareOp;
+ private byte[] value;
+ private WritableByteArrayComparable comparator;
+ private boolean filterIfColumnMissing;
+
+ ColumnValueFilter() {
+ // for Writable
+ }
+
+ /**
+ * Constructor.
+ *
+ * @param columnName name of column
+ * @param compareOp operator
+ * @param value value to compare column values against
+ */
+ public ColumnValueFilter(final byte[] columnName, final CompareOp compareOp,
+ final byte[] value) {
+ this(columnName, compareOp, value, true);
+ }
+
+ /**
+ * Constructor.
+ *
+ * @param columnName name of column
+ * @param compareOp operator
+ * @param value value to compare column values against
+ * @param filterIfColumnMissing if true then we will filter rows that don't have the column.
+ */
+ public ColumnValueFilter(final byte[] columnName, final CompareOp compareOp,
+ final byte[] value, boolean filterIfColumnMissing) {
+ this.columnName = columnName;
+ this.compareOp = compareOp;
+ this.value = value;
+ this.filterIfColumnMissing = filterIfColumnMissing;
+ }
+
+ /**
+ * Constructor.
+ *
+ * @param columnName name of column
+ * @param compareOp operator
+ * @param comparator Comparator to use.
+ */
+ public ColumnValueFilter(final byte[] columnName, final CompareOp compareOp,
+ final WritableByteArrayComparable comparator) {
+ this(columnName, compareOp, comparator, true);
+ }
+
+ /**
+ * Constructor.
+ *
+ * @param columnName name of column
+ * @param compareOp operator
+ * @param comparator Comparator to use.
+ * @param filterIfColumnMissing if true then we will filter rows that don't have the column.
+ */
+ public ColumnValueFilter(final byte[] columnName, final CompareOp compareOp,
+ final WritableByteArrayComparable comparator, boolean filterIfColumnMissing) {
+ this.columnName = columnName;
+ this.compareOp = compareOp;
+ this.comparator = comparator;
+ this.filterIfColumnMissing = filterIfColumnMissing;
+ }
+
+ public boolean filterRowKey(final byte[] rowKey) {
+ return filterRowKey(rowKey, 0, rowKey.length);
+ }
+
+ public boolean filterRowKey(byte[] rowKey, int offset, int length) {
+ return false;
+ }
+
+
+ public boolean filterColumn(final byte[] rowKey,
+ final byte[] colKey, final byte[] data) {
+ if (!filterIfColumnMissing) {
+ return false; // Must filter on the whole row
+ }
+ if (!Arrays.equals(colKey, columnName)) {
+ return false;
+ }
+ return filterColumnValue(data, 0, data.length);
+ }
+
+
+ public boolean filterColumn(byte[] rowKey, int roffset, int rlength,
+ byte[] cn, int coffset, int clength, byte[] columnValue,
+ int voffset, int vlength) {
+ if (!filterIfColumnMissing) {
+ return false; // Must filter on the whole row
+ }
+ if (Bytes.compareTo(cn, coffset, clength,
+ this.columnName, 0, this.columnName.length) != 0) {
+ return false;
+ }
+ return filterColumnValue(columnValue, voffset, vlength);
+ }
+
+ private boolean filterColumnValue(final byte [] data, final int offset,
+ final int length) {
+ int compareResult;
+ if (comparator != null) {
+ compareResult = comparator.compareTo(data);
+ } else {
+ compareResult = compare(value, data);
+ }
+
+ switch (compareOp) {
+ case LESS:
+ return compareResult <= 0;
+ case LESS_OR_EQUAL:
+ return compareResult < 0;
+ case EQUAL:
+ return compareResult != 0;
+ case NOT_EQUAL:
+ return compareResult == 0;
+ case GREATER_OR_EQUAL:
+ return compareResult > 0;
+ case GREATER:
+ return compareResult >= 0;
+ default:
+ throw new RuntimeException("Unknown Compare op " + compareOp.name());
+ }
+ }
+
+ public boolean filterAllRemaining() {
+ return false;
+ }
+
+ public boolean filterRow(final SortedMap<byte[], Cell> columns) {
+ if (columns == null)
+ return false;
+ if (filterIfColumnMissing) {
+ return !columns.containsKey(columnName);
+ }
+ // Otherwise we must do the filter here
+ Cell colCell = columns.get(columnName);
+ if (colCell == null) {
+ return false;
+ }
+ byte [] v = colCell.getValue();
+ return this.filterColumnValue(v, 0, v.length);
+ }
+
+ public boolean filterRow(List<KeyValue> results) {
+ if (results == null) return false;
+ KeyValue found = null;
+ if (filterIfColumnMissing) {
+ boolean doesntHaveIt = true;
+ for (KeyValue kv: results) {
+ if (kv.matchingColumn(columnName)) {
+ doesntHaveIt = false;
+ found = kv;
+ break;
+ }
+ }
+ if (doesntHaveIt) return doesntHaveIt;
+ }
+ if (found == null) {
+ for (KeyValue kv: results) {
+ if (kv.matchingColumn(columnName)) {
+ found = kv;
+ break;
+ }
+ }
+ }
+ if (found == null) {
+ return false;
+ }
+ return this.filterColumnValue(found.getValue(), found.getValueOffset(),
+ found.getValueLength());
+ }
+
+ private int compare(final byte[] b1, final byte[] b2) {
+ int len = Math.min(b1.length, b2.length);
+
+ for (int i = 0; i < len; i++) {
+ if (b1[i] != b2[i]) {
+ return b1[i] - b2[i];
+ }
+ }
+ return b1.length - b2.length;
+ }
+
+ public boolean processAlways() {
+ return false;
+ }
+
+ public void reset() {
+ // Nothing.
+ }
+
+ public void rowProcessed(final boolean filtered,
+ final byte[] key) {
+ // Nothing
+ }
+
+
+ public void rowProcessed(boolean filtered, byte[] key, int offset, int length) {
+ // Nothing
+ }
+
+ public void validate(final byte[][] columns) {
+ // Nothing
+ }
+
+ public void readFields(final DataInput in) throws IOException {
+ int valueLen = in.readInt();
+ if (valueLen > 0) {
+ value = new byte[valueLen];
+ in.readFully(value);
+ }
+ columnName = Bytes.readByteArray(in);
+ compareOp = CompareOp.valueOf(in.readUTF());
+ comparator = (WritableByteArrayComparable) ObjectWritable.readObject(in,
+ new HBaseConfiguration());
+ filterIfColumnMissing = in.readBoolean();
+ }
+
+ public void write(final DataOutput out) throws IOException {
+ if (value == null) {
+ out.writeInt(0);
+ } else {
+ out.writeInt(value.length);
+ out.write(value);
+ }
+ Bytes.writeByteArray(out, columnName);
+ out.writeUTF(compareOp.name());
+ ObjectWritable.writeObject(out, comparator,
+ WritableByteArrayComparable.class, new HBaseConfiguration());
+ out.writeBoolean(filterIfColumnMissing);
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/filter/InclusiveStopRowFilter.java b/src/java/org/apache/hadoop/hbase/filter/InclusiveStopRowFilter.java
new file mode 100644
index 0000000..1cb572e
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/filter/InclusiveStopRowFilter.java
@@ -0,0 +1,57 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Subclass of StopRowFilter that filters out rows greater than the stop row,
+ * so the scan includes the stop row itself but nothing beyond it.
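+ *
+ * <p>For example (the stop row key is illustrative), rows are returned up to
+ * and including "row-0099":
+ * <pre>
+ * RowFilterInterface filter = new InclusiveStopRowFilter(Bytes.toBytes("row-0099"));
+ * </pre>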
+ */
+public class InclusiveStopRowFilter extends StopRowFilter{
+ /**
+ * Default constructor, filters nothing. Required though for RPC
+ * deserialization.
+ */
+ public InclusiveStopRowFilter() {super();}
+
+ /**
+ * Constructor that takes a stopRowKey on which to filter
+ *
+ * @param stopRowKey rowKey to filter on.
+ */
+ public InclusiveStopRowFilter(final byte [] stopRowKey) {
+ super(stopRowKey);
+ }
+
+ /**
+ * @see org.apache.hadoop.hbase.filter.StopRowFilter#filterRowKey(byte[])
+ */
+ @Override
+ public boolean filterRowKey(final byte [] rowKey) {
+ if (rowKey == null) {
+ if (getStopRowKey() == null) {
+ return true;
+ }
+ return false;
+ }
+ return Bytes.compareTo(getStopRowKey(), rowKey) < 0;
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/filter/InvalidRowFilterException.java b/src/java/org/apache/hadoop/hbase/filter/InvalidRowFilterException.java
new file mode 100644
index 0000000..0ad057a
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/filter/InvalidRowFilterException.java
@@ -0,0 +1,41 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+/**
+ * Used to indicate an invalid RowFilter.
+ */
+public class InvalidRowFilterException extends RuntimeException {
+ private static final long serialVersionUID = 2667894046345657865L;
+
+
+ /** constructor */
+ public InvalidRowFilterException() {
+ super();
+ }
+
+ /**
+ * constructor
+ * @param s message
+ */
+ public InvalidRowFilterException(String s) {
+ super(s);
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/filter/PageRowFilter.java b/src/java/org/apache/hadoop/hbase/filter/PageRowFilter.java
new file mode 100644
index 0000000..a8e73d7
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/filter/PageRowFilter.java
@@ -0,0 +1,130 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.List;
+import java.util.SortedMap;
+
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.io.Cell;
+
+/**
+ * Implementation of RowFilterInterface that limits results to a specific page
+ * size. It terminates scanning once the number of filter-passed results is >=
+ * the given page size.
+ *
+ * <p>
+ * Note that this filter cannot guarantee that the number of results returned
+ * to a client is at most the page size. This is because the filter is applied
+ * separately on different region servers. It does, however, optimize the scan of
+ * individual HRegions by making sure that the page size is never exceeded
+ * locally.
+ * </p>
+ */
+public class PageRowFilter implements RowFilterInterface {
+
+ private long pageSize = Long.MAX_VALUE;
+ private int rowsAccepted = 0;
+
+ /**
+ * Default constructor, filters nothing. Required though for RPC
+ * deserialization.
+ */
+ public PageRowFilter() {
+ super();
+ }
+
+ /**
+ * Constructor that takes a maximum page size.
+ *
+ * @param pageSize Maximum result size.
+ */
+ public PageRowFilter(final long pageSize) {
+ this.pageSize = pageSize;
+ }
+
+ public void validate(final byte [][] columns) {
+ // Doesn't filter columns
+ }
+
+ public void reset() {
+ rowsAccepted = 0;
+ }
+
+ public void rowProcessed(boolean filtered,
+ byte [] rowKey) {
+ rowProcessed(filtered, rowKey, 0, rowKey.length);
+ }
+
+ public void rowProcessed(boolean filtered, byte[] key, int offset, int length) {
+ if (!filtered) {
+ this.rowsAccepted++;
+ }
+ }
+
+ public boolean processAlways() {
+ return false;
+ }
+
+ public boolean filterAllRemaining() {
+ return this.rowsAccepted > this.pageSize;
+ }
+
+ public boolean filterRowKey(final byte [] r) {
+ return filterRowKey(r, 0, r.length);
+ }
+
+
+ public boolean filterRowKey(byte[] rowKey, int offset, int length) {
+ return filterAllRemaining();
+ }
+
+ public boolean filterColumn(final byte [] rowKey,
+ final byte [] colKey,
+ final byte[] data) {
+ return filterColumn(rowKey, 0, rowKey.length, colKey, 0, colKey.length,
+ data, 0, data.length);
+ }
+
+ public boolean filterColumn(byte[] rowKey, int roffset, int rlength,
+ byte[] colunmName, int coffset, int clength, byte[] columnValue,
+ int voffset, int vlength) {
+ return filterAllRemaining();
+ }
+
+ public boolean filterRow(final SortedMap<byte [], Cell> columns) {
+ return filterAllRemaining();
+ }
+
+ public boolean filterRow(List<KeyValue> results) {
+ return filterAllRemaining();
+ }
+
+ public void readFields(final DataInput in) throws IOException {
+ this.pageSize = in.readLong();
+ }
+
+ public void write(final DataOutput out) throws IOException {
+ out.writeLong(pageSize);
+ }
+}
\ No newline at end of file
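
A sketch of the scanner-side protocol this filter relies on (illustrative only, not part of the patch): the region scanner is expected to report each accepted row via rowProcessed() and to consult filterAllRemaining() before fetching more rows.

import org.apache.hadoop.hbase.filter.PageRowFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class PageRowFilterExample {
  public static void main(String[] args) {
    PageRowFilter filter = new PageRowFilter(2);
    for (int i = 0; i < 5; i++) {
      byte [] row = Bytes.toBytes("row-" + i);
      if (filter.filterAllRemaining() || filter.filterRowKey(row)) {
        // The page size has been exceeded locally; stop scanning.
        System.out.println("scan stops before " + Bytes.toString(row));
        break;
      }
      System.out.println("accepted " + Bytes.toString(row));
      // Report the accepted row back, as the region scanner is expected to do.
      filter.rowProcessed(false, row);
    }
  }
}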
diff --git a/src/java/org/apache/hadoop/hbase/filter/PrefixRowFilter.java b/src/java/org/apache/hadoop/hbase/filter/PrefixRowFilter.java
new file mode 100644
index 0000000..a4e3ece
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/filter/PrefixRowFilter.java
@@ -0,0 +1,118 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.List;
+import java.util.SortedMap;
+
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * RowFilterInterface that filters everything that does not match a prefix
+ */
+public class PrefixRowFilter implements RowFilterInterface {
+ protected byte[] prefix;
+
+ /**
+ * Constructor that takes a row prefix to filter on
+ * @param prefix
+ */
+ public PrefixRowFilter(byte[] prefix) {
+ this.prefix = prefix;
+ }
+
+ /**
+ * Default Constructor, filters nothing. Required for RPC
+ * deserialization
+ */
+ public PrefixRowFilter() { }
+
+ public void reset() {
+ // Nothing to reset
+ }
+
+ public void rowProcessed(boolean filtered, byte [] key) {
+ rowProcessed(filtered, key, 0, key.length);
+ }
+
+ public void rowProcessed(boolean filtered, byte[] key, int offset, int length) {
+ // does not care
+ }
+
+ public boolean processAlways() {
+ return false;
+ }
+
+ public boolean filterAllRemaining() {
+ return false;
+ }
+
+ public boolean filterRowKey(final byte [] rowKey) {
+ return filterRowKey(rowKey, 0, rowKey.length);
+ }
+
+
+ public boolean filterRowKey(byte[] rowKey, int offset, int length) {
+ if (rowKey == null)
+ return true;
+ if (length < prefix.length)
+ return true;
+ for (int i = 0; i < prefix.length; i++)
+ if (prefix[i] != rowKey[i + offset])
+ return true;
+ return false;
+ }
+
+ public boolean filterColumn(final byte [] rowKey, final byte [] colunmName,
+ final byte[] columnValue) {
+ return false;
+ }
+
+ public boolean filterColumn(byte[] rowKey, int roffset, int rlength,
+ byte[] colunmName, int coffset, int clength, byte[] columnValue,
+ int voffset, int vlength) {
+ return false;
+ }
+
+ public boolean filterRow(final SortedMap<byte [], Cell> columns) {
+ return false;
+ }
+
+ public boolean filterRow(List<KeyValue> results) {
+ return false;
+ }
+
+ public void validate(final byte [][] columns) {
+ // Doesn't filter on columns
+ }
+
+ public void readFields(final DataInput in) throws IOException {
+ prefix = Bytes.readByteArray(in);
+ }
+
+ public void write(final DataOutput out) throws IOException {
+ Bytes.writeByteArray(out, prefix);
+ }
+}
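
A minimal usage sketch (illustrative only, not part of the patch; the prefix and row keys are made up):

import org.apache.hadoop.hbase.filter.PrefixRowFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class PrefixRowFilterExample {
  public static void main(String[] args) {
    PrefixRowFilter filter = new PrefixRowFilter(Bytes.toBytes("user-"));
    // Keys starting with the prefix pass; everything else is filtered out.
    System.out.println(filter.filterRowKey(Bytes.toBytes("user-42")));  // false (kept)
    System.out.println(filter.filterRowKey(Bytes.toBytes("order-42"))); // true (filtered)
  }
}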
diff --git a/src/java/org/apache/hadoop/hbase/filter/RegExpRowFilter.java b/src/java/org/apache/hadoop/hbase/filter/RegExpRowFilter.java
new file mode 100644
index 0000000..c3fd9f0
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/filter/RegExpRowFilter.java
@@ -0,0 +1,345 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+import java.util.TreeSet;
+import java.util.Map.Entry;
+import java.util.regex.Pattern;
+
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Implementation of RowFilterInterface that can filter by rowkey regular
+ * expression and/or individual column values (equals comparison only). Multiple
+ * column filters imply an implicit conjunction of filter criteria.
+ *
+ * Note that column value filtering in this class has been replaced by
+ * {@link ColumnValueFilter}.
+ * @deprecated This class does not work well in the new KeyValue world.
+ * It needs to be refactored or removed; it is marked deprecated until it gets
+ * cleaned up. It is also inefficient as written.
+ */
+public class RegExpRowFilter implements RowFilterInterface {
+
+ private Pattern rowKeyPattern = null;
+ private String rowKeyRegExp = null;
+ private Map<byte [], byte[]> equalsMap =
+ new TreeMap<byte [], byte[]>(Bytes.BYTES_COMPARATOR);
+ private Set<byte []> nullColumns =
+ new TreeSet<byte []>(Bytes.BYTES_COMPARATOR);
+
+ /**
+ * Default constructor, filters nothing. Required though for RPC
+ * deserialization.
+ */
+ public RegExpRowFilter() {
+ super();
+ }
+
+ /**
+ * Constructor that takes a row key regular expression to filter on.
+ *
+ * @param rowKeyRegExp
+ */
+ public RegExpRowFilter(final String rowKeyRegExp) {
+ this.rowKeyRegExp = rowKeyRegExp;
+ }
+
+ /**
+ * @deprecated Column filtering has been replaced by {@link ColumnValueFilter}
+ * Constructor that takes a row key regular expression to filter on.
+ *
+ * @param rowKeyRegExp
+ * @param columnFilter
+ */
+ @Deprecated
+ public RegExpRowFilter(final String rowKeyRegExp,
+ final Map<byte [], Cell> columnFilter) {
+ this.rowKeyRegExp = rowKeyRegExp;
+ this.setColumnFilters(columnFilter);
+ }
+
+ public void rowProcessed(boolean filtered, byte [] rowKey) {
+ rowProcessed(filtered, rowKey, 0, rowKey.length);
+ }
+
+
+ public void rowProcessed(boolean filtered, byte[] key, int offset, int length) {
+ //doesn't care
+ }
+
+ public boolean processAlways() {
+ return false;
+ }
+
+ /**
+ * @deprecated Column filtering has been replaced by {@link ColumnValueFilter}
+ * Specify a value that must be matched for the given column.
+ *
+ * @param colKey
+ * the column to match on
+ * @param value
+ * the value that must equal the stored value.
+ */
+ @Deprecated
+ public void setColumnFilter(final byte [] colKey, final byte[] value) {
+ if (value == null) {
+ nullColumns.add(colKey);
+ } else {
+ equalsMap.put(colKey, value);
+ }
+ }
+
+ /**
+ * @deprecated Column filtering has been replaced by {@link ColumnValueFilter}
+ * Set column filters for a number of columns.
+ *
+ * @param columnFilter
+ * Map of columns with value criteria.
+ */
+ @Deprecated
+ public void setColumnFilters(final Map<byte [], Cell> columnFilter) {
+ if (null == columnFilter) {
+ nullColumns.clear();
+ equalsMap.clear();
+ } else {
+ for (Entry<byte [], Cell> entry : columnFilter.entrySet()) {
+ setColumnFilter(entry.getKey(), entry.getValue().getValue());
+ }
+ }
+ }
+
+ public void reset() {
+ // Nothing to reset
+ }
+
+ public boolean filterAllRemaining() {
+ return false;
+ }
+
+ public boolean filterRowKey(final byte [] rowKey) {
+ return filterRowKey(rowKey, 0, rowKey.length);
+ }
+
+ public boolean filterRowKey(byte[] rowKey, int offset, int length) {
+ return (filtersByRowKey() && rowKey != null)?
+ !getRowKeyPattern().matcher(Bytes.toString(rowKey, offset, length)).matches():
+ false;
+ }
+
+ public boolean filterColumn(final byte [] rowKey, final byte [] colKey,
+ final byte[] data) {
+ if (filterRowKey(rowKey)) {
+ return true;
+ }
+ if (filtersByColumnValue()) {
+ byte[] filterValue = equalsMap.get(colKey);
+ if (null != filterValue) {
+ return !Arrays.equals(filterValue, data);
+ }
+ }
+ if (nullColumns.contains(colKey)) {
+ if (data != null /* DELETE IS IN KEY NOW && !HLogEdit.isDeleted(data)*/) {
+ return true;
+ }
+ }
+ return false;
+ }
+
+
+ public boolean filterColumn(byte[] rowKey, int roffset, int rlength,
+ byte [] colunmName, int coffset, int clength, byte[] columnValue,
+ int voffset, int vlength) {
+ if (filterRowKey(rowKey, roffset, rlength)) {
+ return true;
+ }
+ byte [] colkey = null;
+ if (filtersByColumnValue()) {
+ colkey = getColKey(colunmName, coffset, clength);
+ byte [] filterValue = equalsMap.get(colkey);
+ if (null != filterValue) {
+ return Bytes.compareTo(filterValue, 0, filterValue.length, columnValue,
+ voffset, vlength) != 0;
+ }
+ }
+ if (colkey == null) {
+ colkey = getColKey(colunmName, coffset, clength);
+ }
+ if (nullColumns.contains(colkey)) {
+ if (columnValue != null /* TODO: FIX!!! && !HLogEdit.isDeleted(data)*/) {
+ return true;
+ }
+ }
+ return false;
+ }
+
+ private byte [] getColKey(final byte [] c, final int offset, final int length) {
+ byte [] colkey = null;
+ if (offset == 0) {
+ colkey = c;
+ } else {
+ colkey = new byte [length];
+ System.arraycopy(c, offset, colkey, 0, length);
+ }
+ return colkey;
+ }
+
+ public boolean filterRow(final SortedMap<byte [], Cell> columns) {
+ for (Entry<byte [], Cell> col : columns.entrySet()) {
+ if (nullColumns.contains(col.getKey())
+ /* DELETE IS IN KEY NOW && !HLogEdit.isDeleted(col.getValue().getValue())*/) {
+ return true;
+ }
+ }
+ for (byte [] col : equalsMap.keySet()) {
+ if (!columns.containsKey(col)) {
+ return true;
+ }
+ }
+ return false;
+ }
+
+ // THIS METHOD IS HORRIDLY EXPENSIVE TO RUN. NEEDS FIXUP.
+ public boolean filterRow(List<KeyValue> kvs) {
+ for (KeyValue kv: kvs) {
+ byte [] column = kv.getColumn();
+ if (nullColumns.contains(column) && !kv.isDeleteType()) {
+ return true;
+ }
+ if (!equalsMap.containsKey(column)) {
+ return true;
+ }
+ }
+ return false;
+ }
+
+ private boolean filtersByColumnValue() {
+ return equalsMap != null && equalsMap.size() > 0;
+ }
+
+ private boolean filtersByRowKey() {
+ return null != rowKeyPattern || null != rowKeyRegExp;
+ }
+
+ private String getRowKeyRegExp() {
+ if (null == rowKeyRegExp && rowKeyPattern != null) {
+ rowKeyRegExp = rowKeyPattern.toString();
+ }
+ return rowKeyRegExp;
+ }
+
+ private Pattern getRowKeyPattern() {
+ if (rowKeyPattern == null && rowKeyRegExp != null) {
+ rowKeyPattern = Pattern.compile(rowKeyRegExp);
+ }
+ return rowKeyPattern;
+ }
+
+ public void readFields(final DataInput in) throws IOException {
+ boolean hasRowKeyPattern = in.readBoolean();
+ if (hasRowKeyPattern) {
+ rowKeyRegExp = in.readUTF();
+ }
+ // equals map
+ equalsMap.clear();
+ int size = in.readInt();
+ for (int i = 0; i < size; i++) {
+ byte [] key = Bytes.readByteArray(in);
+ int len = in.readInt();
+ byte[] value = null;
+ if (len >= 0) {
+ value = new byte[len];
+ in.readFully(value);
+ }
+ setColumnFilter(key, value);
+ }
+ // nullColumns
+ nullColumns.clear();
+ size = in.readInt();
+ for (int i = 0; i < size; i++) {
+ setColumnFilter(Bytes.readByteArray(in), null);
+ }
+ }
+
+ public void validate(final byte [][] columns) {
+ Set<byte []> invalids = new TreeSet<byte []>(Bytes.BYTES_COMPARATOR);
+ for (byte [] colKey : getFilterColumns()) {
+ boolean found = false;
+ for (byte [] col : columns) {
+ if (Bytes.equals(col, colKey)) {
+ found = true;
+ break;
+ }
+ }
+ if (!found) {
+ invalids.add(colKey);
+ }
+ }
+
+ if (invalids.size() > 0) {
+ throw new InvalidRowFilterException(String.format(
+ "RowFilter contains criteria on columns %s not in %s", invalids,
+ Arrays.toString(columns)));
+ }
+ }
+
+ @Deprecated
+ private Set<byte []> getFilterColumns() {
+ Set<byte []> cols = new TreeSet<byte []>(Bytes.BYTES_COMPARATOR);
+ cols.addAll(equalsMap.keySet());
+ cols.addAll(nullColumns);
+ return cols;
+ }
+
+ public void write(final DataOutput out) throws IOException {
+ if (!filtersByRowKey()) {
+ out.writeBoolean(false);
+ } else {
+ out.writeBoolean(true);
+ out.writeUTF(getRowKeyRegExp());
+ }
+
+ // equalsMap
+ out.writeInt(equalsMap.size());
+ for (Entry<byte [], byte[]> entry : equalsMap.entrySet()) {
+ Bytes.writeByteArray(out, entry.getKey());
+ byte[] value = entry.getValue();
+ out.writeInt(value.length);
+ out.write(value);
+ }
+
+ // null columns
+ out.writeInt(nullColumns.size());
+ for (byte [] col : nullColumns) {
+ Bytes.writeByteArray(out, col);
+ }
+ }
+}
\ No newline at end of file
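
Even though this class is deprecated by the patch, a small sketch of the row-key regular-expression behaviour may help (illustrative only, not part of the patch; the regex and keys are made up):

import org.apache.hadoop.hbase.filter.RegExpRowFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class RegExpRowFilterExample {
  public static void main(String[] args) {
    // Keep only rows whose key matches "user-<digits>".
    RegExpRowFilter filter = new RegExpRowFilter("user-[0-9]+");
    System.out.println(filter.filterRowKey(Bytes.toBytes("user-42")));  // false (kept)
    System.out.println(filter.filterRowKey(Bytes.toBytes("user-abc"))); // true (filtered)
  }
}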
diff --git a/src/java/org/apache/hadoop/hbase/filter/RegexStringComparator.java b/src/java/org/apache/hadoop/hbase/filter/RegexStringComparator.java
new file mode 100644
index 0000000..32bdffc
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/filter/RegexStringComparator.java
@@ -0,0 +1,82 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.regex.Pattern;
+
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * This comparator is for use with ColumnValueFilter, for filtering based on
+ * the value of a given column. Use it to test if a given regular expression
+ * matches a cell value in the column.
+ * <p>
+ * Only EQUAL or NOT_EQUAL tests are valid with this comparator.
+ * <p>
+ * For example:
+ * <p>
+ * <pre>
+ * ColumnValueFilter cvf =
+ * new ColumnValueFilter("col",
+ * ColumnValueFilter.CompareOp.EQUAL,
+ * new RegexStringComparator(
+ * // v4 IP address
+ * "(((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3,3}" +
+ * "(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))(\\/[0-9]+)?" +
+ * "|" +
+ * // v6 IP address
+ * "((([\\dA-Fa-f]{1,4}:){7}[\\dA-Fa-f]{1,4})(:([\\d]{1,3}.)" +
+ * "{3}[\\d]{1,3})?)(\\/[0-9]+)?"));
+ * </pre>
+ */
+public class RegexStringComparator implements WritableByteArrayComparable {
+
+ private Pattern pattern;
+
+ /** Nullary constructor for Writable */
+ public RegexStringComparator() {
+ }
+
+ /**
+ * Constructor
+ * @param expr a valid regular expression
+ */
+ public RegexStringComparator(String expr) {
+ this.pattern = Pattern.compile(expr);
+ }
+
+ public int compareTo(byte[] value) {
+ // Use find() for subsequence match instead of matches() (full sequence
+ // match) to adhere to the principle of least surprise.
+ return pattern.matcher(Bytes.toString(value)).find() ? 0 : 1;
+ }
+
+ public void readFields(DataInput in) throws IOException {
+ this.pattern = Pattern.compile(in.readUTF());
+ }
+
+ public void write(DataOutput out) throws IOException {
+ out.writeUTF(pattern.toString());
+ }
+
+}
diff --git a/src/java/org/apache/hadoop/hbase/filter/RowFilterInterface.java b/src/java/org/apache/hadoop/hbase/filter/RowFilterInterface.java
new file mode 100644
index 0000000..0db5f45
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/filter/RowFilterInterface.java
@@ -0,0 +1,170 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import java.util.List;
+import java.util.SortedMap;
+
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.io.Writable;
+
+/**
+ *
+ * Interface used for row-level filters applied to HRegion.HScanner scan
+ * results during calls to next().
+ * TODO: Make Filters use proper comparator comparing rows.
+ */
+public interface RowFilterInterface extends Writable {
+ /**
+ * Resets the state of the filter. Used prior to the start of a Region scan.
+ *
+ */
+ void reset();
+
+ /**
+ * Called to let the filter know the final decision (to pass or filter) on a
+ * given row. Without HScanner calling this, the filter cannot know whether a
+ * row it passed was ultimately filtered, because other filters may have
+ * failed the row, e.g. when this filter is a member of a RowFilterSet with
+ * an OR operator.
+ *
+ * @see RowFilterSet
+ * @param filtered
+ * @param key
+ * @deprecated Use {@link #rowProcessed(boolean, byte[], int, int)} instead.
+ */
+ void rowProcessed(boolean filtered, byte [] key);
+
+ /**
+ * Called to let the filter know the final decision (to pass or filter) on a
+ * given row. Without HScanner calling this, the filter cannot know whether a
+ * row it passed was ultimately filtered, because other filters may have
+ * failed the row, e.g. when this filter is a member of a RowFilterSet with
+ * an OR operator.
+ *
+ * @see RowFilterSet
+ * @param filtered
+ * @param key
+ * @param offset
+ * @param length
+ */
+ void rowProcessed(boolean filtered, byte [] key, int offset, int length);
+
+ /**
+ * Returns whether or not the filter should always be processed in any
+ * filtering call. This precaution is necessary for filters that maintain
+ * state and need to be updated according to their response to filtering
+ * calls (see WhileMatchRowFilter for an example). At times, filters nested
+ * in RowFilterSets may or may not be called because the RowFilterSet
+ * determines a result as fast as possible. Returning true for
+ * processAlways() ensures that the filter will always be called.
+ *
+ * @return whether or not to always process the filter
+ */
+ boolean processAlways();
+
+ /**
+ * Determines if the filter has decided that all remaining results should be
+ * filtered (skipped). This is used to prevent the scanner from scanning
+ * the rest of the HRegion when the filter is certain to exclude all
+ * remaining rows.
+ *
+ * @return true if the filter intends to filter all remaining rows.
+ */
+ boolean filterAllRemaining();
+
+ /**
+ * Filters on just a row key. This is the first chance to stop a row.
+ *
+ * @param rowKey
+ * @return true if given row key is filtered and row should not be processed.
+ * @deprecated Use {@link #filterRowKey(byte[], int, int)} instead.
+ */
+ boolean filterRowKey(final byte [] rowKey);
+
+ /**
+ * Filters on just a row key. This is the first chance to stop a row.
+ *
+ * @param rowKey
+ * @param offset
+ * @param length
+ * @return true if given row key is filtered and row should not be processed.
+ */
+ boolean filterRowKey(final byte [] rowKey, final int offset, final int length);
+
+ /**
+ * Filters on row key, column name, and column value. This will take individual columns out of a row,
+ * but the rest of the row will still get through.
+ *
+ * @param rowKey row key to filter on.
+ * @param columnName column name to filter on
+ * @param columnValue column value to filter on
+ * @return true if row filtered and should not be processed.
+ * @deprecated Use {@link #filterColumn(byte[], int, int, byte[], int, int, byte[], int, int)}
+ * instead.
+ */
+ @Deprecated
+ boolean filterColumn(final byte [] rowKey, final byte [] columnName,
+ final byte [] columnValue);
+
+ /**
+ * Filters on row key, column name, and column value. This will take individual columns out of a row,
+ * but the rest of the row will still get through.
+ *
+ * @param rowKey row key to filter on.
+ * @param colunmName column name to filter on
+ * @param columnValue column value to filter on
+ * @return true if row filtered and should not be processed.
+ */
+ boolean filterColumn(final byte [] rowKey, final int roffset,
+ final int rlength, final byte [] colunmName, final int coffset,
+ final int clength, final byte [] columnValue, final int voffset,
+ final int vlength);
+
+ /**
+ * Filter on the fully assembled row. This is the last chance to stop a row.
+ *
+ * @param columns
+ * @return true if row filtered and should not be processed.
+ */
+ boolean filterRow(final SortedMap<byte [], Cell> columns);
+
+ /**
+ * Filter on the fully assembled row. This is the last chance to stop a row.
+ *
+ * @param results
+ * @return true if row filtered and should not be processed.
+ */
+ boolean filterRow(final List<KeyValue> results);
+
+ /**
+ * Validates that this filter applies only to a subset of the given columns.
+ * This check is done prior to opening a scanner because
+ * filtering of columns is dependent on the retrieval of those columns within
+ * the HRegion. Criteria on columns that are not part of a scanner's column
+ * list will be ignored. In the case of null value filters, all rows will pass
+ * the filter. This behavior should be 'undefined' for the user and therefore
+ * not permitted.
+ *
+ * @param columns
+ */
+ void validate(final byte [][] columns);
+}
\ No newline at end of file
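
The concrete filters in this patch all follow the same shape: no-op implementations of the callbacks they do not care about, plus Writable (de)serialization. A pass-everything skeleton, shown purely as a sketch of the contract (the class name is illustrative and it is not part of the patch):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.List;
import java.util.SortedMap;

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.filter.RowFilterInterface;
import org.apache.hadoop.hbase.io.Cell;

/** Stateless filter that lets every row through; every method is a no-op. */
public class PassEverythingFilter implements RowFilterInterface {
  public void reset() { /* no state to reset */ }
  public void rowProcessed(boolean filtered, byte [] key) {
    rowProcessed(filtered, key, 0, key.length);
  }
  public void rowProcessed(boolean filtered, byte [] key, int offset, int length) {
    // Doesn't care about the scanner's final decision.
  }
  public boolean processAlways() { return false; }
  public boolean filterAllRemaining() { return false; }
  public boolean filterRowKey(byte [] rowKey) { return false; }
  public boolean filterRowKey(byte [] rowKey, int offset, int length) { return false; }
  public boolean filterColumn(byte [] rowKey, byte [] columnName, byte [] columnValue) {
    return false;
  }
  public boolean filterColumn(byte [] rowKey, int roffset, int rlength,
      byte [] colunmName, int coffset, int clength, byte [] columnValue,
      int voffset, int vlength) {
    return false;
  }
  public boolean filterRow(SortedMap<byte [], Cell> columns) { return false; }
  public boolean filterRow(List<KeyValue> results) { return false; }
  public void validate(byte [][] columns) { /* no column criteria */ }
  public void readFields(DataInput in) throws IOException { /* nothing to read */ }
  public void write(DataOutput out) throws IOException { /* nothing to write */ }
}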
diff --git a/src/java/org/apache/hadoop/hbase/filter/RowFilterSet.java b/src/java/org/apache/hadoop/hbase/filter/RowFilterSet.java
new file mode 100644
index 0000000..6816845
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/filter/RowFilterSet.java
@@ -0,0 +1,291 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.SortedMap;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.io.ObjectWritable;
+
+/**
+ * Implementation of RowFilterInterface that represents a set of RowFilters
+ * which will be evaluated with a specified boolean operator MUST_PASS_ALL
+ * (!AND) or MUST_PASS_ONE (!OR). Since you can use RowFilterSets as children
+ * of RowFilterSet, you can create a hierarchy of filters to be evaluated.
+ */
+public class RowFilterSet implements RowFilterInterface {
+
+ /** set operator */
+ public static enum Operator {
+ /** !AND */
+ MUST_PASS_ALL,
+ /** !OR */
+ MUST_PASS_ONE
+ }
+
+ private Operator operator = Operator.MUST_PASS_ALL;
+ private Set<RowFilterInterface> filters = new HashSet<RowFilterInterface>();
+
+ /**
+ * Default constructor, filters nothing. Required though for RPC
+ * deserialization.
+ */
+ public RowFilterSet() {
+ super();
+ }
+
+ /**
+ * Constructor that takes a set of RowFilters. The default operator
+ * MUST_PASS_ALL is assumed.
+ *
+ * @param rowFilters
+ */
+ public RowFilterSet(final Set<RowFilterInterface> rowFilters) {
+ this.filters = rowFilters;
+ }
+
+ /**
+ * Constructor that takes a set of RowFilters and an operator.
+ *
+ * @param operator Operator to process filter set with.
+ * @param rowFilters Set of row filters.
+ */
+ public RowFilterSet(final Operator operator,
+ final Set<RowFilterInterface> rowFilters) {
+ this.filters = rowFilters;
+ this.operator = operator;
+ }
+
+ /** Get the operator.
+ *
+ * @return operator
+ */
+ public Operator getOperator() {
+ return operator;
+ }
+
+ /** Get the filters.
+ *
+ * @return filters
+ */
+ public Set<RowFilterInterface> getFilters() {
+ return filters;
+ }
+
+ /** Add a filter.
+ *
+ * @param filter
+ */
+ public void addFilter(RowFilterInterface filter) {
+ this.filters.add(filter);
+ }
+
+ public void validate(final byte [][] columns) {
+ for (RowFilterInterface filter : filters) {
+ filter.validate(columns);
+ }
+ }
+
+ public void reset() {
+ for (RowFilterInterface filter : filters) {
+ filter.reset();
+ }
+ }
+
+ public void rowProcessed(boolean filtered, byte [] rowKey) {
+ rowProcessed(filtered, rowKey, 0, rowKey.length);
+ }
+
+ public void rowProcessed(boolean filtered, byte[] key, int offset, int length) {
+ for (RowFilterInterface filter : filters) {
+ filter.rowProcessed(filtered, key, offset, length);
+ }
+ }
+
+ public boolean processAlways() {
+ for (RowFilterInterface filter : filters) {
+ if (filter.processAlways()) {
+ return true;
+ }
+ }
+ return false;
+ }
+
+ public boolean filterAllRemaining() {
+ boolean result = operator == Operator.MUST_PASS_ONE;
+ for (RowFilterInterface filter : filters) {
+ if (operator == Operator.MUST_PASS_ALL) {
+ if (filter.filterAllRemaining()) {
+ return true;
+ }
+ } else if (operator == Operator.MUST_PASS_ONE) {
+ if (!filter.filterAllRemaining()) {
+ return false;
+ }
+ }
+ }
+ return result;
+ }
+
+ public boolean filterRowKey(final byte [] rowKey) {
+ return filterRowKey(rowKey, 0, rowKey.length);
+ }
+
+
+ public boolean filterRowKey(byte[] rowKey, int offset, int length) {
+ boolean resultFound = false;
+ boolean result = operator == Operator.MUST_PASS_ONE;
+ for (RowFilterInterface filter : filters) {
+ if (!resultFound) {
+ if (operator == Operator.MUST_PASS_ALL) {
+ if (filter.filterAllRemaining() ||
+ filter.filterRowKey(rowKey, offset, length)) {
+ result = true;
+ resultFound = true;
+ }
+ } else if (operator == Operator.MUST_PASS_ONE) {
+ if (!filter.filterAllRemaining() &&
+ !filter.filterRowKey(rowKey, offset, length)) {
+ result = false;
+ resultFound = true;
+ }
+ }
+ } else if (filter.processAlways()) {
+ filter.filterRowKey(rowKey, offset, length);
+ }
+ }
+ return result;
+ }
+
+ public boolean filterColumn(final byte [] rowKey, final byte [] colKey,
+ final byte[] data) {
+ return filterColumn(rowKey, 0, rowKey.length, colKey, 0, colKey.length,
+ data, 0, data.length);
+ }
+
+ public boolean filterColumn(byte[] rowKey, int roffset, int rlength,
+ byte[] columnName, int coffset, int clength, byte[] columnValue,
+ int voffset, int vlength) {
+ boolean resultFound = false;
+ boolean result = operator == Operator.MUST_PASS_ONE;
+ for (RowFilterInterface filter : filters) {
+ if (!resultFound) {
+ if (operator == Operator.MUST_PASS_ALL) {
+ if (filter.filterAllRemaining() ||
+ filter.filterColumn(rowKey, roffset, rlength, columnName, coffset,
+ clength, columnValue, voffset, vlength)) {
+ result = true;
+ resultFound = true;
+ }
+ } else if (operator == Operator.MUST_PASS_ONE) {
+ if (!filter.filterAllRemaining() &&
+ !filter.filterColumn(rowKey, roffset, rlength, columnName, coffset,
+ clength, columnValue, voffset, vlength)) {
+ result = false;
+ resultFound = true;
+ }
+ }
+ } else if (filter.processAlways()) {
+ filter.filterColumn(rowKey, roffset, rlength, columnName, coffset,
+ clength, columnValue, voffset, vlength);
+ }
+ }
+ return result;
+ }
+
+ public boolean filterRow(final SortedMap<byte [], Cell> columns) {
+ boolean resultFound = false;
+ boolean result = operator == Operator.MUST_PASS_ONE;
+ for (RowFilterInterface filter : filters) {
+ if (!resultFound) {
+ if (operator == Operator.MUST_PASS_ALL) {
+ if (filter.filterAllRemaining() || filter.filterRow(columns)) {
+ result = true;
+ resultFound = true;
+ }
+ } else if (operator == Operator.MUST_PASS_ONE) {
+ if (!filter.filterAllRemaining() && !filter.filterRow(columns)) {
+ result = false;
+ resultFound = true;
+ }
+ }
+ } else if (filter.processAlways()) {
+ filter.filterRow(columns);
+ }
+ }
+ return result;
+ }
+
+ public boolean filterRow(List<KeyValue> results) {
+ boolean resultFound = false;
+ boolean result = operator == Operator.MUST_PASS_ONE;
+ for (RowFilterInterface filter : filters) {
+ if (!resultFound) {
+ if (operator == Operator.MUST_PASS_ALL) {
+ if (filter.filterAllRemaining() || filter.filterRow(results)) {
+ result = true;
+ resultFound = true;
+ }
+ } else if (operator == Operator.MUST_PASS_ONE) {
+ if (!filter.filterAllRemaining() && !filter.filterRow(results)) {
+ result = false;
+ resultFound = true;
+ }
+ }
+ } else if (filter.processAlways()) {
+ filter.filterRow(results);
+ }
+ }
+ return result;
+ }
+
+ public void readFields(final DataInput in) throws IOException {
+ Configuration conf = new HBaseConfiguration();
+ byte opByte = in.readByte();
+ operator = Operator.values()[opByte];
+ int size = in.readInt();
+ if (size > 0) {
+ filters = new HashSet<RowFilterInterface>();
+ for (int i = 0; i < size; i++) {
+ RowFilterInterface filter = (RowFilterInterface) ObjectWritable
+ .readObject(in, conf);
+ filters.add(filter);
+ }
+ }
+ }
+
+ public void write(final DataOutput out) throws IOException {
+ Configuration conf = new HBaseConfiguration();
+ out.writeByte(operator.ordinal());
+ out.writeInt(filters.size());
+ for (RowFilterInterface filter : filters) {
+ ObjectWritable.writeObject(out, filter, RowFilterInterface.class, conf);
+ }
+ }
+}
\ No newline at end of file
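
A sketch of composing filters added elsewhere in this patch (illustrative only, not part of the patch; names and keys are made up):

import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.hbase.filter.PageRowFilter;
import org.apache.hadoop.hbase.filter.PrefixRowFilter;
import org.apache.hadoop.hbase.filter.RowFilterInterface;
import org.apache.hadoop.hbase.filter.RowFilterSet;
import org.apache.hadoop.hbase.util.Bytes;

public class RowFilterSetExample {
  public static void main(String[] args) {
    Set<RowFilterInterface> filters = new HashSet<RowFilterInterface>();
    filters.add(new PrefixRowFilter(Bytes.toBytes("user-")));
    filters.add(new PageRowFilter(100));
    // MUST_PASS_ALL: a row is kept only if no member filter rejects it.
    RowFilterSet set =
      new RowFilterSet(RowFilterSet.Operator.MUST_PASS_ALL, filters);
    System.out.println(set.filterRowKey(Bytes.toBytes("user-7")));  // false (kept)
    System.out.println(set.filterRowKey(Bytes.toBytes("admin-7"))); // true (filtered)
  }
}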
diff --git a/src/java/org/apache/hadoop/hbase/filter/StopRowFilter.java b/src/java/org/apache/hadoop/hbase/filter/StopRowFilter.java
new file mode 100644
index 0000000..5747178
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/filter/StopRowFilter.java
@@ -0,0 +1,144 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.List;
+import java.util.SortedMap;
+
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Implementation of RowFilterInterface that filters out rows greater than or
+ * equal to a specified rowKey.
+ */
+public class StopRowFilter implements RowFilterInterface {
+ private byte [] stopRowKey;
+
+ /**
+ * Default constructor, filters nothing. Required though for RPC
+ * deserialization.
+ */
+ public StopRowFilter() {
+ super();
+ }
+
+ /**
+ * Constructor that takes a stopRowKey on which to filter
+ *
+ * @param stopRowKey rowKey to filter on.
+ */
+ public StopRowFilter(final byte [] stopRowKey) {
+ this.stopRowKey = stopRowKey;
+ }
+
+ /**
+ * An accessor for the stopRowKey
+ *
+ * @return the filter's stopRowKey
+ */
+ public byte [] getStopRowKey() {
+ return this.stopRowKey;
+ }
+
+ public void validate(final byte [][] columns) {
+ // Doesn't filter columns
+ }
+
+ public void reset() {
+ // Nothing to reset
+ }
+
+ public void rowProcessed(boolean filtered, byte [] rowKey) {
+ // Doesn't care
+ }
+
+ public void rowProcessed(boolean filtered, byte[] key, int offset, int length) {
+ // Doesn't care
+ }
+
+ public boolean processAlways() {
+ return false;
+ }
+
+ public boolean filterAllRemaining() {
+ return false;
+ }
+
+ public boolean filterRowKey(final byte [] rowKey) {
+ return filterRowKey(rowKey, 0, rowKey.length);
+ }
+
+ public boolean filterRowKey(byte[] rowKey, int offset, int length) {
+ if (rowKey == null) {
+ if (this.stopRowKey == null) {
+ return true;
+ }
+ return false;
+ }
+ return Bytes.compareTo(stopRowKey, 0, stopRowKey.length, rowKey, offset,
+ length) <= 0;
+ }
+
+ /**
+ * Because StopRowFilter does not examine column information, this method
+ * defaults to calling the rowKey-only version of filter.
+ * @param rowKey
+ * @param colKey
+ * @param data
+ * @return boolean
+ */
+ public boolean filterColumn(final byte [] rowKey, final byte [] colKey,
+ final byte[] data) {
+ return filterRowKey(rowKey);
+ }
+
+ public boolean filterColumn(byte[] rowKey, int roffset, int rlength,
+ byte[] colunmName, int coffset, int clength, byte[] columnValue,
+ int voffset, int vlength) {
+ return filterRowKey(rowKey, roffset, rlength);
+ }
+
+ /**
+ * Because StopRowFilter does not examine column information, this method
+ * defaults to calling filterAllRemaining().
+ * @param columns
+ * @return boolean
+ */
+ public boolean filterRow(final SortedMap<byte [], Cell> columns) {
+ return filterAllRemaining();
+ }
+
+ public boolean filterRow(List<KeyValue> results) {
+ return filterAllRemaining();
+ }
+
+ public void readFields(DataInput in) throws IOException {
+ this.stopRowKey = Bytes.readByteArray(in);
+ }
+
+ public void write(DataOutput out) throws IOException {
+ Bytes.writeByteArray(out, this.stopRowKey);
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/filter/SubstringComparator.java b/src/java/org/apache/hadoop/hbase/filter/SubstringComparator.java
new file mode 100644
index 0000000..0bb76f1
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/filter/SubstringComparator.java
@@ -0,0 +1,71 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * This comparator is for use with ColumnValueFilter, for filtering based on
+ * the value of a given column. Use it to test if a given substring appears
+ * in a cell value in the column. The comparison is case insensitive.
+ * <p>
+ * Only EQUAL or NOT_EQUAL tests are valid with this comparator.
+ * <p>
+ * For example:
+ * <p>
+ * <pre>
+ * ColumnValueFilter cvf =
+ * new ColumnValueFilter("col", ColumnValueFilter.CompareOp.EQUAL,
+ * new SubstringComparator("substr"));
+ * </pre>
+ */
+public class SubstringComparator implements WritableByteArrayComparable {
+
+ private String substr;
+
+ /** Nullary constructor for Writable */
+ public SubstringComparator() {
+ }
+
+ /**
+ * Constructor
+ * @param substr the substring
+ */
+ public SubstringComparator(String substr) {
+ this.substr = substr.toLowerCase();
+ }
+
+ public int compareTo(byte[] value) {
+ return Bytes.toString(value).toLowerCase().contains(substr) ? 0 : 1;
+ }
+
+ public void readFields(DataInput in) throws IOException {
+ substr = in.readUTF();
+ }
+
+ public void write(DataOutput out) throws IOException {
+ out.writeUTF(substr);
+ }
+
+}
diff --git a/src/java/org/apache/hadoop/hbase/filter/WhileMatchRowFilter.java b/src/java/org/apache/hadoop/hbase/filter/WhileMatchRowFilter.java
new file mode 100644
index 0000000..584b780
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/filter/WhileMatchRowFilter.java
@@ -0,0 +1,167 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.List;
+import java.util.SortedMap;
+
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.io.Cell;
+
+/**
+ * WhileMatchRowFilter is a wrapper filter that filters everything after the
+ * first filtered row. Once the nested filter returns true from any of its
+ * filterRowKey(..), filterColumn(..), or filterRow(..) methods, this wrapper's
+ * filterAllRemaining() will return true. All filtering methods will
+ * thereafter defer to the result of filterAllRemaining().
+ */
+public class WhileMatchRowFilter implements RowFilterInterface {
+ private boolean filterAllRemaining = false;
+ private RowFilterInterface filter;
+
+ /**
+ * Default constructor, filters nothing. Required though for RPC
+ * deserialization.
+ */
+ public WhileMatchRowFilter() {
+ super();
+ }
+
+ /**
+ * Constructor
+ * @param filter
+ */
+ public WhileMatchRowFilter(RowFilterInterface filter) {
+ this.filter = filter;
+ }
+
+ /**
+ * Returns the internal filter being wrapped
+ *
+ * @return the internal filter
+ */
+ public RowFilterInterface getInternalFilter() {
+ return this.filter;
+ }
+
+ public void reset() {
+ this.filterAllRemaining = false;
+ this.filter.reset();
+ }
+
+ public boolean processAlways() {
+ return true;
+ }
+
+ /**
+ * Returns true once the nested filter has filtered out a row (returned true
+ * on a call to one of its filtering methods). Until then it returns false.
+ *
+ * @return true/false whether the nested filter has returned true on a filter
+ * call.
+ */
+ public boolean filterAllRemaining() {
+ return this.filterAllRemaining || this.filter.filterAllRemaining();
+ }
+
+ public boolean filterRowKey(final byte [] rowKey) {
+ changeFAR(this.filter.filterRowKey(rowKey, 0, rowKey.length));
+ return filterAllRemaining();
+ }
+
+ public boolean filterRowKey(byte[] rowKey, int offset, int length) {
+ changeFAR(this.filter.filterRowKey(rowKey, offset, length));
+ return filterAllRemaining();
+ }
+
+ public boolean filterColumn(final byte [] rowKey, final byte [] colKey,
+ final byte[] data) {
+ changeFAR(this.filter.filterColumn(rowKey, colKey, data));
+ return filterAllRemaining();
+ }
+
+ public boolean filterRow(final SortedMap<byte [], Cell> columns) {
+ changeFAR(this.filter.filterRow(columns));
+ return filterAllRemaining();
+ }
+
+ public boolean filterRow(List<KeyValue> results) {
+ changeFAR(this.filter.filterRow(results));
+ return filterAllRemaining();
+ }
+
+ /**
+ * Change filterAllRemaining from false to true if value is true, otherwise
+ * leave as is.
+ *
+ * @param value
+ */
+ private void changeFAR(boolean value) {
+ this.filterAllRemaining = this.filterAllRemaining || value;
+ }
+
+ public void rowProcessed(boolean filtered, byte [] rowKey) {
+ this.filter.rowProcessed(filtered, rowKey, 0, rowKey.length);
+ }
+
+ public void rowProcessed(boolean filtered, byte[] key, int offset, int length) {
+ this.filter.rowProcessed(filtered, key, offset, length);
+ }
+
+ public void validate(final byte [][] columns) {
+ this.filter.validate(columns);
+ }
+
+ public void readFields(DataInput in) throws IOException {
+ String className = in.readUTF();
+
+ try {
+ this.filter = (RowFilterInterface)(Class.forName(className).
+ newInstance());
+ this.filter.readFields(in);
+ } catch (InstantiationException e) {
+ throw new RuntimeException("Failed to deserialize WhileMatchRowFilter.",
+ e);
+ } catch (IllegalAccessException e) {
+ throw new RuntimeException("Failed to deserialize WhileMatchRowFilter.",
+ e);
+ } catch (ClassNotFoundException e) {
+ throw new RuntimeException("Failed to deserialize WhileMatchRowFilter.",
+ e);
+ }
+ }
+
+ public void write(DataOutput out) throws IOException {
+ out.writeUTF(this.filter.getClass().getName());
+ this.filter.write(out);
+ }
+
+ public boolean filterColumn(byte[] rowKey, int roffset, int rlength,
+ byte[] colunmName, int coffset, int clength, byte[] columnValue,
+ int voffset, int vlength) {
+ // Delegate to the wrapped filter, same as the other filtering methods.
+ changeFAR(this.filter.filterColumn(rowKey, roffset, rlength, colunmName,
+ coffset, clength, columnValue, voffset, vlength));
+ return filterAllRemaining();
+ }
+}
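
As the package documentation added below notes, a plain StopRowFilter only rejects individual rows; wrapping it in a WhileMatchRowFilter is what lets filterAllRemaining() cut the scan short. A small sketch (illustrative only, not part of the patch; the row keys are made up):

import org.apache.hadoop.hbase.filter.StopRowFilter;
import org.apache.hadoop.hbase.filter.WhileMatchRowFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class WhileMatchRowFilterExample {
  public static void main(String[] args) {
    WhileMatchRowFilter filter =
      new WhileMatchRowFilter(new StopRowFilter(Bytes.toBytes("row-5")));
    System.out.println(filter.filterRowKey(Bytes.toBytes("row-3"))); // false (kept)
    System.out.println(filter.filterAllRemaining());                 // false: keep scanning
    System.out.println(filter.filterRowKey(Bytes.toBytes("row-5"))); // true (filtered)
    System.out.println(filter.filterAllRemaining());                 // true: scan can stop
  }
}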
diff --git a/src/java/org/apache/hadoop/hbase/filter/WritableByteArrayComparable.java b/src/java/org/apache/hadoop/hbase/filter/WritableByteArrayComparable.java
new file mode 100644
index 0000000..e9f10f9
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/filter/WritableByteArrayComparable.java
@@ -0,0 +1,28 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import org.apache.hadoop.io.Writable;
+
+/** Interface for both Comparable<byte []> and Writable. */
+public interface WritableByteArrayComparable extends Writable,
+ Comparable<byte[]> {
+ // Not methods, just tie the two interfaces together.
+}
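
A hypothetical comparator implementing this interface, in the style of RegexStringComparator and SubstringComparator above (the class name is illustrative and it is not part of the patch):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.hbase.filter.WritableByteArrayComparable;
import org.apache.hadoop.hbase.util.Bytes;

/** Returns 0 only when the cell value is byte-for-byte equal to the expected value. */
public class ExactBytesComparator implements WritableByteArrayComparable {

  private byte [] expected;

  /** Nullary constructor for Writable */
  public ExactBytesComparator() {
  }

  /**
   * Constructor
   * @param expected the value a cell must equal to compare as 0
   */
  public ExactBytesComparator(byte [] expected) {
    this.expected = expected;
  }

  public int compareTo(byte[] value) {
    return Bytes.compareTo(expected, value);
  }

  public void readFields(DataInput in) throws IOException {
    this.expected = Bytes.readByteArray(in);
  }

  public void write(DataOutput out) throws IOException {
    Bytes.writeByteArray(out, this.expected);
  }
}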
diff --git a/src/java/org/apache/hadoop/hbase/filter/package-info.java b/src/java/org/apache/hadoop/hbase/filter/package-info.java
new file mode 100644
index 0000000..81dc032
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/filter/package-info.java
@@ -0,0 +1,27 @@
+/*
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+/** Provides row-level filters applied to HRegion scan results during calls to {@link org.apache.hadoop.hbase.client.Scanner#next()}.
+
+<p>Use {@link org.apache.hadoop.hbase.filter.StopRowFilter} to stop the scan once rows exceed the supplied row key.
+Filters will not stop the scan unless hosted inside of a {@link org.apache.hadoop.hbase.filter.WhileMatchRowFilter}.
+Supply a set of filters to apply using {@link org.apache.hadoop.hbase.filter.RowFilterSet}.
+</p>
+*/
+package org.apache.hadoop.hbase.filter;
diff --git a/src/java/org/apache/hadoop/hbase/io/BatchOperation.java b/src/java/org/apache/hadoop/hbase/io/BatchOperation.java
new file mode 100644
index 0000000..66c07aa
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/io/BatchOperation.java
@@ -0,0 +1,149 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.Writable;
+
+/**
+ * Batch update operation.
+ *
+ * If value is null, it's a DELETE operation. If it's non-null, it's a PUT.
+ * This object is purposely bare-bones because many instances are created
+ * during bulk uploads. We have one class for DELETEs and PUTs rather than
+ * a class per type because it makes the serialization easier.
+ * @see BatchUpdate
+ */
+public class BatchOperation implements Writable, HeapSize {
+ /**
+ * Estimated size of this object.
+ */
+ // JHat says this is 32 bytes.
+ public final int ESTIMATED_HEAP_TAX = 36;
+
+ private byte [] column = null;
+
+ // A null value defines DELETE operations.
+ private byte [] value = null;
+
+ /**
+ * Default constructor
+ */
+ public BatchOperation() {
+ this((byte [])null);
+ }
+
+ /**
+ * Creates a DELETE batch operation.
+ * @param column column name
+ */
+ public BatchOperation(final byte [] column) {
+ this(column, null);
+ }
+
+ /**
+ * Creates a DELETE batch operation.
+ * @param column column name
+ */
+ public BatchOperation(final String column) {
+ this(Bytes.toBytes(column), null);
+ }
+
+ /**
+ * Create a batch operation.
+ * @param column column name
+ * @param value column value. If non-null, this is a PUT operation.
+ */
+ public BatchOperation(final String column, String value) {
+ this(Bytes.toBytes(column), Bytes.toBytes(value));
+ }
+
+ /**
+ * Create a batch operation.
+ * @param column column name
+ * @param value column value. If non-null, this is a PUT operation.
+ */
+ public BatchOperation(final byte [] column, final byte [] value) {
+ this.column = column;
+ this.value = value;
+ }
+
+ /**
+ * @return the column
+ */
+ public byte [] getColumn() {
+ return this.column;
+ }
+
+ /**
+ * @return the value
+ */
+ public byte[] getValue() {
+ return this.value;
+ }
+
+ /**
+ * @return True if this is a PUT operation (this.value is not null).
+ */
+ public boolean isPut() {
+ return this.value != null;
+ }
+
+ /**
+ * @see java.lang.Object#toString()
+ */
+ @Override
+ public String toString() {
+ return "column => " + Bytes.toString(this.column) + ", value => '...'";
+ }
+
+ // Writable methods
+
+ // This is a hotspot when deserializing incoming client submissions.
+ // In Performance Evaluation sequentialWrite, 70% of object allocations are
+ // done in here.
+ public void readFields(final DataInput in) throws IOException {
+ this.column = Bytes.readByteArray(in);
+ // Is there a value to read?
+ if (in.readBoolean()) {
+ this.value = new byte[in.readInt()];
+ in.readFully(this.value);
+ }
+ }
+
+ public void write(final DataOutput out) throws IOException {
+ Bytes.writeByteArray(out, this.column);
+ boolean p = isPut();
+ out.writeBoolean(p);
+ if (p) {
+ out.writeInt(value.length);
+ out.write(value);
+ }
+ }
+
+ public long heapSize() {
+ return Bytes.ESTIMATED_HEAP_TAX * 2 + this.column.length +
+ this.value.length + ESTIMATED_HEAP_TAX;
+ }
+}
\ No newline at end of file
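
A small sketch of the PUT/DELETE convention described above (illustrative only, not part of the patch; the column name is made up):

import org.apache.hadoop.hbase.io.BatchOperation;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchOperationExample {
  public static void main(String[] args) {
    // A non-null value means PUT ...
    BatchOperation put =
      new BatchOperation(Bytes.toBytes("info:name"), Bytes.toBytes("stack"));
    // ... and a null value means DELETE.
    BatchOperation delete = new BatchOperation(Bytes.toBytes("info:name"));
    System.out.println(put.isPut());    // true
    System.out.println(delete.isPut()); // false
  }
}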
diff --git a/src/java/org/apache/hadoop/hbase/io/BatchUpdate.java b/src/java/org/apache/hadoop/hbase/io/BatchUpdate.java
new file mode 100644
index 0000000..3c7ad0e
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/io/BatchUpdate.java
@@ -0,0 +1,404 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.lang.management.ManagementFactory;
+import java.lang.management.RuntimeMXBean;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Iterator;
+import java.util.Map;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.WritableComparable;
+
+/**
+ * A Writable object that contains a series of BatchOperations
+ *
+ * There is one BatchUpdate object per server, so a series of batch operations
+ * can result in multiple BatchUpdate objects if the batch contains rows that
+ * are served by multiple region servers.
+ */
+public class BatchUpdate
+implements WritableComparable<BatchUpdate>, Iterable<BatchOperation>, HeapSize {
+ private static final Log LOG = LogFactory.getLog(BatchUpdate.class);
+
+ /**
+ * Estimated 'shallow size' of this object not counting payload.
+ */
+ // Shallow size is 56. Add 32 for the arraylist below.
+ public static final int ESTIMATED_HEAP_TAX = 56 + 32;
+
+ // the row being updated
+ private byte [] row = null;
+ private long size = 0;
+
+ // the batched operations
+ private ArrayList<BatchOperation> operations =
+ new ArrayList<BatchOperation>();
+
+ private long timestamp = HConstants.LATEST_TIMESTAMP;
+
+ private long rowLock = -1l;
+
+ /**
+ * Default constructor used when deserializing. Do not use directly.
+ */
+ public BatchUpdate() {
+ this ((byte [])null);
+ }
+
+ /**
+ * Initialize a BatchUpdate operation on a row. Timestamp is assumed to be
+ * now.
+ *
+ * @param row
+ */
+ public BatchUpdate(final String row) {
+ this(Bytes.toBytes(row), HConstants.LATEST_TIMESTAMP);
+ }
+
+ /**
+ * Initialize a BatchUpdate operation on a row. Timestamp is assumed to be
+ * now.
+ *
+ * @param row
+ */
+ public BatchUpdate(final byte [] row) {
+ this(row, HConstants.LATEST_TIMESTAMP);
+ }
+
+ /**
+ * Initialize a BatchUpdate operation on a row with a specific timestamp.
+ *
+ * @param row
+ * @param timestamp
+ */
+ public BatchUpdate(final String row, long timestamp){
+ this(Bytes.toBytes(row), timestamp);
+ }
+
+ /**
+ * Copy constructor.
+ * @param buToCopy BatchUpdate to copy
+ */
+ public BatchUpdate(BatchUpdate buToCopy) {
+ this(buToCopy.getRow(), buToCopy.getTimestamp());
+ for(BatchOperation bo : buToCopy) {
+ byte [] val = bo.getValue();
+ if (val == null) {
+ // Presume a delete is intended.
+ this.delete(bo.getColumn());
+ } else {
+ this.put(bo.getColumn(), val);
+ }
+ }
+ }
+
+ /**
+ * Initialize a BatchUpdate operation on a row with a specific timestamp.
+ *
+ * @param row
+ * @param timestamp
+ */
+ public BatchUpdate(final byte [] row, long timestamp){
+ this.row = row;
+ this.timestamp = timestamp;
+ this.operations = new ArrayList<BatchOperation>();
+ this.size = (row == null)? 0: row.length;
+ }
+
+ /**
+ * Create a batch operation.
+ * @param rr the RowResult
+ */
+ public BatchUpdate(final RowResult rr) {
+ this(rr.getRow());
+ for(Map.Entry<byte[], Cell> entry : rr.entrySet()){
+ this.put(entry.getKey(), entry.getValue().getValue());
+ }
+ }
+
+ /**
+ * Get the row lock associated with this update
+ * @return the row lock
+ */
+ public long getRowLock() {
+ return rowLock;
+ }
+
+ /**
+ * Set the lock to be used for this update
+ * @param rowLock the row lock
+ */
+ public void setRowLock(long rowLock) {
+ this.rowLock = rowLock;
+ }
+
+
+ /** @return the row */
+ public byte [] getRow() {
+ return row;
+ }
+
+ /**
+ * @return the timestamp this BatchUpdate will be committed with.
+ */
+ public long getTimestamp() {
+ return timestamp;
+ }
+
+ /**
+ * Set this BatchUpdate's timestamp.
+ *
+ * @param timestamp
+ */
+ public void setTimestamp(long timestamp) {
+ this.timestamp = timestamp;
+ }
+
+ /**
+ * Get the current value of the specified column
+ *
+ * @param column column name
+ * @return byte[] the cell value, returns null if the column does not exist.
+ */
+ public synchronized byte[] get(final String column) {
+ return get(Bytes.toBytes(column));
+ }
+
+ /**
+ * Get the current value of the specified column
+ *
+ * @param column column name
+ * @return byte[] the cell value, returns null if the column does not exist.
+ */
+ public synchronized byte[] get(final byte[] column) {
+ for (BatchOperation operation: operations) {
+ if (Arrays.equals(column, operation.getColumn())) {
+ return operation.getValue();
+ }
+ }
+ return null;
+ }
+
+ /**
+ * Get the current columns
+ *
+ * @return byte[][] an array of byte[] columns
+ */
+ public synchronized byte[][] getColumns() {
+ byte[][] columns = new byte[operations.size()][];
+ for (int i = 0; i < operations.size(); i++) {
+ columns[i] = operations.get(i).getColumn();
+ }
+ return columns;
+ }
+
+ /**
+ * Check if the specified column is currently assigned a value
+ *
+ * @param column column to check for
+ * @return boolean true if the given column exists
+ */
+ public synchronized boolean hasColumn(String column) {
+ return hasColumn(Bytes.toBytes(column));
+ }
+
+ /**
+ * Check if the specified column is currently assigned a value
+ *
+ * @param column column to check for
+ * @return boolean true if the given column exists
+ */
+ public synchronized boolean hasColumn(byte[] column) {
+ return get(column) != null;
+ }
+
+ /**
+ * Change a value for the specified column
+ *
+ * @param column column whose value is being set
+ * @param val new value for column. Cannot be null (can be empty).
+ */
+ public synchronized void put(final String column, final byte val[]) {
+ put(Bytes.toBytes(column), val);
+ }
+
+ /**
+ * Change a value for the specified column
+ *
+ * @param column column whose value is being set
+ * @param val new value for column. Cannot be null (can be empty).
+ */
+ public synchronized void put(final byte [] column, final byte val[]) {
+ if (val == null) {
+ // Null values are not allowed; use delete(column) to remove a cell.
+ throw new IllegalArgumentException("Passed value cannot be null");
+ }
+ BatchOperation bo = new BatchOperation(column, val);
+ this.size += bo.heapSize();
+ operations.add(bo);
+ }
+
+ /**
+ * Delete the value for a column
+ * Deletes the cell whose row/column/commit-timestamp match those of the
+ * delete.
+ * @param column name of column whose value is to be deleted
+ */
+ public void delete(final String column) {
+ delete(Bytes.toBytes(column));
+ }
+
+ /**
+ * Delete the value for a column
+ * Deletes the cell whose row/column/commit-timestamp match those of the
+ * delete.
+ * @param column name of column whose value is to be deleted
+ */
+ public synchronized void delete(final byte [] column) {
+ operations.add(new BatchOperation(column));
+ }
+
+ //
+ // Iterable
+ //
+
+ /**
+ * @return Iterator<BatchOperation>
+ */
+ public Iterator<BatchOperation> iterator() {
+ return operations.iterator();
+ }
+
+ /**
+ * @see java.lang.Object#toString()
+ */
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder();
+ sb.append("row => ");
+ sb.append(row == null? "": Bytes.toString(row));
+ sb.append(", {");
+ boolean morethanone = false;
+ for (BatchOperation bo: this.operations) {
+ if (morethanone) {
+ sb.append(", ");
+ }
+ morethanone = true;
+ sb.append(bo.toString());
+ }
+ sb.append("}");
+ return sb.toString();
+ }
+
+ //
+ // Writable
+ //
+
+ public void readFields(final DataInput in) throws IOException {
+ // Clear any existing operations; may be hangovers from previous use of
+ // this instance.
+ if (this.operations.size() != 0) {
+ this.operations.clear();
+ }
+ this.row = Bytes.readByteArray(in);
+ timestamp = in.readLong();
+ this.size = in.readLong();
+ int nOps = in.readInt();
+ for (int i = 0; i < nOps; i++) {
+ BatchOperation op = new BatchOperation();
+ op.readFields(in);
+ this.operations.add(op);
+ }
+ this.rowLock = in.readLong();
+ }
+
+ public void write(final DataOutput out) throws IOException {
+ Bytes.writeByteArray(out, this.row);
+ out.writeLong(timestamp);
+ out.writeLong(this.size);
+ out.writeInt(operations.size());
+ for (BatchOperation op: operations) {
+ op.write(out);
+ }
+ out.writeLong(this.rowLock);
+ }
+
+ public int compareTo(BatchUpdate o) {
+ return Bytes.compareTo(this.row, o.getRow());
+ }
+
+ public long heapSize() {
+ return this.row.length + Bytes.ESTIMATED_HEAP_TAX + this.size +
+ ESTIMATED_HEAP_TAX;
+ }
+
+ /**
+ * Code to test sizes of BatchUpdate arrays.
+ * @param args
+ * @throws InterruptedException
+ */
+ public static void main(String[] args) throws InterruptedException {
+ RuntimeMXBean runtime = ManagementFactory.getRuntimeMXBean();
+ LOG.info("vmName=" + runtime.getVmName() + ", vmVendor="
+ + runtime.getVmVendor() + ", vmVersion=" + runtime.getVmVersion());
+ LOG.info("vmInputArguments=" + runtime.getInputArguments());
+ final int count = 10000;
+ BatchUpdate[] batch1 = new BatchUpdate[count];
+ // TODO: x32 vs x64
+ long size = 0;
+ for (int i = 0; i < count; i++) {
+ BatchUpdate bu = new BatchUpdate(HConstants.EMPTY_BYTE_ARRAY);
+ bu.put(HConstants.EMPTY_BYTE_ARRAY, HConstants.EMPTY_BYTE_ARRAY);
+ batch1[i] = bu;
+ size += bu.heapSize();
+ }
+ LOG.info("batch1 estimated size=" + size);
+ // Make a variably sized memcache.
+ size = 0;
+ BatchUpdate[] batch2 = new BatchUpdate[count];
+ for (int i = 0; i < count; i++) {
+ BatchUpdate bu = new BatchUpdate(Bytes.toBytes(i));
+ bu.put(Bytes.toBytes(i), new byte[i]);
+ batch2[i] = bu;
+ size += bu.heapSize();
+ }
+ LOG.info("batch2 estimated size=" + size);
+ final int seconds = 30;
+ LOG.info("Waiting " + seconds + " seconds while heap dump is taken");
+ for (int i = 0; i < seconds; i++) {
+ Thread.sleep(1000);
+ }
+ LOG.info("Exiting.");
+ }
+}
\ No newline at end of file
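
As an editorial illustration of the BatchUpdate/BatchOperation API added above, here is a minimal sketch (not part of the patch) that builds an update, serializes it through the Writable methods shown, and reads it back. The row key "row1" and the column names "info:name" and "info:stale" are hypothetical; DataOutputBuffer is the class added later in this patch and DataInputBuffer is the stock Hadoop one.

import org.apache.hadoop.hbase.io.BatchOperation;
import org.apache.hadoop.hbase.io.BatchUpdate;
import org.apache.hadoop.hbase.io.DataOutputBuffer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.DataInputBuffer;

public class BatchUpdateSketch {
  public static void main(String[] args) throws Exception {
    BatchUpdate bu = new BatchUpdate("row1");
    bu.put("info:name", Bytes.toBytes("value"));   // put carries a value
    bu.delete("info:stale");                       // delete carries no value

    // Serialize with write(DataOutput) ...
    DataOutputBuffer out = new DataOutputBuffer();
    bu.write(out);

    // ... and read it back with readFields(DataInput).
    BatchUpdate copy = new BatchUpdate();
    DataInputBuffer in = new DataInputBuffer();
    in.reset(out.getData(), out.getLength());
    copy.readFields(in);

    for (BatchOperation op : copy) {
      System.out.println(Bytes.toString(op.getColumn()) +
          (op.isPut() ? " put" : " delete"));
    }
  }
}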
diff --git a/src/java/org/apache/hadoop/hbase/io/BlockFSInputStream.java b/src/java/org/apache/hadoop/hbase/io/BlockFSInputStream.java
new file mode 100644
index 0000000..920ad3a
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/io/BlockFSInputStream.java
@@ -0,0 +1,284 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ScheduledFuture;
+import java.util.concurrent.ThreadFactory;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FSInputStream;
+import org.apache.hadoop.fs.PositionedReadable;
+import org.apache.hadoop.fs.Seekable;
+import org.apache.hadoop.hbase.util.SoftValueMap;
+import org.apache.hadoop.io.DataInputBuffer;
+
+/**
+ * An implementation of {@link FSInputStream} that reads the stream in blocks
+ * of a fixed, configurable size. The blocks are stored in a memory-sensitive
+ * cache. A scheduled task runs periodically to clean up soft references
+ * from the reference queue.
+ */
+public class BlockFSInputStream extends FSInputStream {
+ static final Log LOG = LogFactory.getLog(BlockFSInputStream.class);
+ /*
+ * Set up scheduled execution of cleanup of soft references. Run with one
+ * thread for now. May need more when many files. Should be an option but
+ * also want BlockFSInputStream to be self-contained.
+ */
+ private static final ScheduledExecutorService EXECUTOR =
+ Executors.newSingleThreadScheduledExecutor(new ThreadFactory() {
+ public Thread newThread(Runnable r) {
+ Thread t = new Thread(r);
+ t.setDaemon(true);
+ t.setName("BlockFSInputStreamReferenceQueueChecker");
+ return t;
+ }
+ });
+
+ /*
+ * The registration of this object in EXECUTOR.
+ */
+ private final ScheduledFuture<?> registration;
+
+ private final InputStream in;
+
+ private final long fileLength;
+
+ private final int blockSize;
+ private final SoftValueMap<Long, byte[]> blocks;
+
+ private boolean closed;
+
+ private DataInputBuffer blockStream = new DataInputBuffer();
+
+ private long blockEnd = -1;
+
+ private long pos = 0;
+
+ /**
+ * @param in
+ * @param fileLength
+ * @param blockSize the size of each block in bytes.
+ */
+ public BlockFSInputStream(InputStream in, long fileLength, int blockSize) {
+ this.in = in;
+ if (!(in instanceof Seekable) || !(in instanceof PositionedReadable)) {
+ throw new IllegalArgumentException(
+ "In is not an instance of Seekable or PositionedReadable");
+ }
+ this.fileLength = fileLength;
+ this.blockSize = blockSize;
+ // A memory-sensitive map that has soft references to values
+ this.blocks = new SoftValueMap<Long, byte []>() {
+ private long hits, misses;
+
+ @Override
+ public byte [] get(Object key) {
+ byte [] value = super.get(key);
+ if (value == null) {
+ misses++;
+ } else {
+ hits++;
+ }
+ if (LOG.isDebugEnabled() && ((hits + misses) % 10000) == 0) {
+ long hitRate = (100 * hits) / (hits + misses);
+ LOG.debug("Hit rate for cache " + hashCode() + ": " + hitRate + "%");
+ }
+ return value;
+ }
+ };
+ // Register a Runnable that runs checkReferences on a period.
+ final int hashcode = hashCode();
+ this.registration = EXECUTOR.scheduleWithFixedDelay(new Runnable() {
+ public void run() {
+ int cleared = checkReferences();
+ if (LOG.isDebugEnabled() && cleared > 0) {
+ LOG.debug("Checker cleared " + cleared + " in " + hashcode);
+ }
+ }
+ }, 1, 1, TimeUnit.SECONDS);
+ }
+
+ /**
+ * @see org.apache.hadoop.fs.FSInputStream#getPos()
+ */
+ @Override
+ public synchronized long getPos() {
+ return pos;
+ }
+
+ /**
+ * @see java.io.InputStream#available()
+ */
+ @Override
+ public synchronized int available() {
+ return (int) (fileLength - pos);
+ }
+
+ /**
+ * @see org.apache.hadoop.fs.FSInputStream#seek(long)
+ */
+ @Override
+ public synchronized void seek(long targetPos) throws IOException {
+ if (targetPos > fileLength) {
+ throw new IOException("Cannot seek after EOF");
+ }
+ pos = targetPos;
+ blockEnd = -1;
+ }
+
+ /**
+ * @see org.apache.hadoop.fs.FSInputStream#seekToNewSource(long)
+ */
+ @Override
+ public synchronized boolean seekToNewSource(long targetPos)
+ throws IOException {
+ return false;
+ }
+
+ /**
+ * @see java.io.InputStream#read()
+ */
+ @Override
+ public synchronized int read() throws IOException {
+ if (closed) {
+ throw new IOException("Stream closed");
+ }
+ int result = -1;
+ if (pos < fileLength) {
+ if (pos > blockEnd) {
+ blockSeekTo(pos);
+ }
+ result = blockStream.read();
+ if (result >= 0) {
+ pos++;
+ }
+ }
+ return result;
+ }
+
+ /**
+ * @see java.io.InputStream#read(byte[], int, int)
+ */
+ @Override
+ public synchronized int read(byte buf[], int off, int len) throws IOException {
+ if (closed) {
+ throw new IOException("Stream closed");
+ }
+ if (pos < fileLength) {
+ if (pos > blockEnd) {
+ blockSeekTo(pos);
+ }
+ int realLen = Math.min(len, (int) (blockEnd - pos + 1));
+ int result = blockStream.read(buf, off, realLen);
+ if (result >= 0) {
+ pos += result;
+ }
+ return result;
+ }
+ return -1;
+ }
+
+ private synchronized void blockSeekTo(long target) throws IOException {
+ long targetBlock = target/blockSize;
+ long targetBlockStart = targetBlock * blockSize;
+ long targetBlockEnd = Math.min(targetBlockStart + blockSize, fileLength) - 1;
+ long blockLength = targetBlockEnd - targetBlockStart + 1;
+ long offsetIntoBlock = target - targetBlockStart;
+
+ byte[] block = blocks.get(Long.valueOf(targetBlockStart));
+ if (block == null) {
+ block = new byte[blockSize];
+ ((PositionedReadable) in).readFully(targetBlockStart, block, 0,
+ (int) blockLength);
+ blocks.put(Long.valueOf(targetBlockStart), block);
+ }
+
+ this.pos = target;
+ this.blockEnd = targetBlockEnd;
+ this.blockStream.reset(block, (int) offsetIntoBlock,
+ (int) (blockLength - offsetIntoBlock));
+
+ }
+
+ /**
+ * @see java.io.InputStream#close()
+ */
+ @Override
+ public void close() throws IOException {
+ if (closed) {
+ throw new IOException("Stream closed");
+ }
+ if (!this.registration.cancel(false)) {
+ LOG.warn("Failed cancel of " + this.registration);
+ }
+ int cleared = checkReferences();
+ if (LOG.isDebugEnabled() && cleared > 0) {
+ LOG.debug("Close cleared " + cleared + " in " + hashCode());
+ }
+ if (blockStream != null) {
+ blockStream.close();
+ blockStream = null;
+ }
+ super.close();
+ closed = true;
+ }
+
+ /**
+ * We don't support marks.
+ */
+ @Override
+ public boolean markSupported() {
+ return false;
+ }
+
+ /**
+ * @see java.io.InputStream#mark(int)
+ */
+ @Override
+ public void mark(int readLimit) {
+ // Do nothing
+ }
+
+ /**
+ * @see java.io.InputStream#reset()
+ */
+ @Override
+ public void reset() throws IOException {
+ throw new IOException("Mark not supported");
+ }
+
+ /**
+ * Call periodically to clean up entries whose SoftReferences have been cleared.
+ * @return Count of references cleared.
+ */
+ public synchronized int checkReferences() {
+ if (this.closed) {
+ return 0;
+ }
+ return this.blocks.checkReferences();
+ }
+}
\ No newline at end of file
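
A minimal usage sketch (not part of the patch) for BlockFSInputStream, assuming a hypothetical local file at /tmp/some-data-file and a 64KB block size. The wrapped stream must implement both Seekable and PositionedReadable, which FSDataInputStream does; repeated reads of the same block come out of the soft-reference cache.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.io.BlockFSInputStream;

public class BlockReadSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.getLocal(conf);
    Path p = new Path("/tmp/some-data-file");   // hypothetical file
    long len = fs.getFileStatus(p).getLen();
    FSDataInputStream raw = fs.open(p);

    // Read through a cache of 64KB blocks.
    BlockFSInputStream in = new BlockFSInputStream(raw, len, 64 * 1024);
    byte[] buf = new byte[1024];
    int n = in.read(buf, 0, buf.length);
    System.out.println("read " + n + " bytes, pos=" + in.getPos());
    in.close();
    raw.close();
  }
}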
diff --git a/src/java/org/apache/hadoop/hbase/io/Cell.java b/src/java/org/apache/hadoop/hbase/io/Cell.java
new file mode 100644
index 0000000..e3daccb
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/io/Cell.java
@@ -0,0 +1,278 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.Comparator;
+import java.util.Iterator;
+import java.util.List;
+import java.util.ListIterator;
+import java.util.Map;
+import java.util.SortedMap;
+import java.util.TreeMap;
+import java.util.Map.Entry;
+
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+import org.apache.hadoop.hbase.rest.serializer.IRestSerializer;
+import org.apache.hadoop.hbase.rest.serializer.ISerializable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.Writable;
+
+import agilejson.TOJSON;
+
+/**
+ * Cell - Used to transport a cell value (byte[]) and the timestamp it was
+ * stored with together as a result for get and getRow methods. This promotes
+ * the timestamp of a cell to a first-class value, making it easy to take note
+ * of temporal data. Cell is used all the way from HStore up to HTable.
+ */
+public class Cell implements Writable, Iterable<Map.Entry<Long, byte[]>>,
+ ISerializable {
+ protected final SortedMap<Long, byte[]> valueMap = new TreeMap<Long, byte[]>(
+ new Comparator<Long>() {
+ public int compare(Long l1, Long l2) {
+ return l2.compareTo(l1);
+ }
+ });
+
+ /** For Writable compatibility */
+ public Cell() {
+ super();
+ }
+
+ /**
+ * Create a new Cell with a given value and timestamp. Used by HStore.
+ *
+ * @param value
+ * @param timestamp
+ */
+ public Cell(String value, long timestamp) {
+ this(Bytes.toBytes(value), timestamp);
+ }
+
+ /**
+ * Create a new Cell with a given value and timestamp. Used by HStore.
+ *
+ * @param value
+ * @param timestamp
+ */
+ public Cell(byte[] value, long timestamp) {
+ valueMap.put(timestamp, value);
+ }
+
+ /**
+ * Create a new Cell with a given value and timestamp. Used by HStore.
+ *
+ * @param bb
+ * @param timestamp
+ */
+ public Cell(final ByteBuffer bb, long timestamp) {
+ this.valueMap.put(timestamp, Bytes.toBytes(bb));
+ }
+
+ /**
+ * @param vals
+ * array of values
+ * @param ts
+ * array of timestamps
+ */
+ public Cell(String [] vals, long[] ts) {
+ this(Bytes.toByteArrays(vals), ts);
+ }
+
+ /**
+ * @param vals
+ * array of values
+ * @param ts
+ * array of timestamps
+ */
+ public Cell(byte[][] vals, long[] ts) {
+ if (vals.length != ts.length) {
+ throw new IllegalArgumentException(
+ "number of values must be the same as the number of timestamps");
+ }
+ for (int i = 0; i < vals.length; i++) {
+ valueMap.put(ts[i], vals[i]);
+ }
+ }
+
+ /** @return the current cell's value */
+ @TOJSON(base64=true)
+ public byte[] getValue() {
+ return valueMap.get(valueMap.firstKey());
+ }
+
+ /** @return the current cell's timestamp */
+ @TOJSON
+ public long getTimestamp() {
+ return valueMap.firstKey();
+ }
+
+ /** @return the number of values this cell holds */
+ public int getNumValues() {
+ return valueMap.size();
+ }
+
+ /**
+ * Add a new timestamp and value to this cell, provided the timestamp does
+ * not already exist.
+ *
+ * @param val
+ * @param ts
+ */
+ public void add(byte[] val, long ts) {
+ if (!valueMap.containsKey(ts)) {
+ valueMap.put(ts, val);
+ }
+ }
+
+ /**
+ * @see java.lang.Object#toString()
+ */
+ @Override
+ public String toString() {
+ if (valueMap.size() == 1) {
+ return "timestamp=" + getTimestamp() + ", value="
+ + Bytes.toString(getValue());
+ }
+ StringBuilder s = new StringBuilder("{ ");
+ int i = 0;
+ for (Map.Entry<Long, byte[]> entry : valueMap.entrySet()) {
+ if (i > 0) {
+ s.append(", ");
+ }
+ s.append("[timestamp=");
+ s.append(entry.getKey());
+ s.append(", value=");
+ s.append(Bytes.toString(entry.getValue()));
+ s.append("]");
+ i++;
+ }
+ s.append(" }");
+ return s.toString();
+ }
+
+ //
+ // Writable
+ //
+
+ public void readFields(final DataInput in) throws IOException {
+ int nvalues = in.readInt();
+ for (int i = 0; i < nvalues; i++) {
+ long timestamp = in.readLong();
+ byte[] value = Bytes.readByteArray(in);
+ valueMap.put(timestamp, value);
+ }
+ }
+
+ public void write(final DataOutput out) throws IOException {
+ out.writeInt(valueMap.size());
+ for (Map.Entry<Long, byte[]> entry : valueMap.entrySet()) {
+ out.writeLong(entry.getKey());
+ Bytes.writeByteArray(out, entry.getValue());
+ }
+ }
+
+ //
+ // Iterable
+ //
+
+ public Iterator<Entry<Long, byte[]>> iterator() {
+ return new CellIterator();
+ }
+
+ private class CellIterator implements Iterator<Entry<Long, byte[]>> {
+ private Iterator<Entry<Long, byte[]>> it;
+
+ CellIterator() {
+ it = valueMap.entrySet().iterator();
+ }
+
+ public boolean hasNext() {
+ return it.hasNext();
+ }
+
+ public Entry<Long, byte[]> next() {
+ return it.next();
+ }
+
+ public void remove() throws UnsupportedOperationException {
+ throw new UnsupportedOperationException("remove is not supported");
+ }
+ }
+
+ /**
+ * @param results
+ * @return Map of Cells keyed by column name.
+ * TODO: This is the glue between old way of doing things and the new.
+ * Herein we are converting our clean KeyValues to Map of Cells.
+ */
+ public static HbaseMapWritable<byte [], Cell> createCells(final List<KeyValue> results) {
+ HbaseMapWritable<byte [], Cell> cells =
+ new HbaseMapWritable<byte [], Cell>();
+ // Walking backward through the list of results though it has no effect
+ // because we're inserting into a sorted map.
+ for (ListIterator<KeyValue> i = results.listIterator(results.size());
+ i.hasPrevious();) {
+ KeyValue kv = i.previous();
+ byte [] column = kv.getColumn();
+ Cell c = cells.get(column);
+ if (c == null) {
+ c = new Cell(kv.getValue(), kv.getTimestamp());
+ cells.put(column, c);
+ } else {
+ c.add(kv.getValue(), kv.getTimestamp());
+ }
+ }
+ return cells;
+ }
+
+ /**
+ * @param results
+ * @return Array of Cells.
+ * TODO: This is the glue between old way of doing things and the new.
+ * Herein we are converting our clean KeyValues to Map of Cells.
+ */
+ public static Cell [] createSingleCellArray(final List<KeyValue> results) {
+ if (results == null) return null;
+ int index = 0;
+ Cell [] cells = new Cell[results.size()];
+ for (KeyValue kv: results) {
+ cells[index++] = new Cell(kv.getValue(), kv.getTimestamp());
+ }
+ return cells;
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.serializer.ISerializable#restSerialize(org
+ * .apache.hadoop.hbase.rest.serializer.IRestSerializer)
+ */
+ public void restSerialize(IRestSerializer serializer)
+ throws HBaseRestException {
+ serializer.serializeCell(this);
+ }
+}
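
A small sketch (not part of the patch) of the versioning behaviour described in the Cell javadoc: timestamps sort newest-first, getValue()/getTimestamp() return the newest version, and add() ignores a duplicate timestamp. The values and timestamps below are made up.

import java.util.Map;

import org.apache.hadoop.hbase.io.Cell;
import org.apache.hadoop.hbase.util.Bytes;

public class CellSketch {
  public static void main(String[] args) {
    Cell c = new Cell(Bytes.toBytes("v1"), 100L);
    c.add(Bytes.toBytes("v2"), 200L);        // newer version
    c.add(Bytes.toBytes("ignored"), 200L);   // duplicate timestamp is ignored

    // getValue()/getTimestamp() return the newest version: "v2" @ 200.
    System.out.println(Bytes.toString(c.getValue()) + " @ " + c.getTimestamp());

    // Iterate all versions, newest first.
    for (Map.Entry<Long, byte[]> e : c) {
      System.out.println(e.getKey() + " -> " + Bytes.toString(e.getValue()));
    }
  }
}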
diff --git a/src/java/org/apache/hadoop/hbase/io/CodeToClassAndBack.java b/src/java/org/apache/hadoop/hbase/io/CodeToClassAndBack.java
new file mode 100644
index 0000000..a9997a5
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/io/CodeToClassAndBack.java
@@ -0,0 +1,71 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.io;
+
+import java.util.*;
+
+/**
+ * A static interface.
+ * Instead of keeping this code in HbaseMapWritable, where it would prevent
+ * altering the variables and changing their types, it lives in this static
+ * interface where the static final Maps are loaded once. Only byte[] and
+ * Cell are supported at this time.
+ */
+public interface CodeToClassAndBack {
+ /**
+ * Static map that contains mapping from code to class
+ */
+ public static final Map<Byte, Class<?>> CODE_TO_CLASS =
+ new HashMap<Byte, Class<?>>();
+
+ /**
+ * Static map that contains mapping from class to code
+ */
+ public static final Map<Class<?>, Byte> CLASS_TO_CODE =
+ new HashMap<Class<?>, Byte>();
+
+ /**
+ * Class list for supported classes
+ */
+ public Class[] classList = {byte[].class, Cell.class};
+
+ /**
+ * The static loader that is used instead of the static constructor in
+ * HbaseMapWritable.
+ */
+ public InternalStaticLoader sl =
+ new InternalStaticLoader(classList, CODE_TO_CLASS, CLASS_TO_CODE);
+
+ /**
+ * Class that loads the static maps with their values.
+ */
+ public class InternalStaticLoader{
+ InternalStaticLoader(Class[] classList, Map<Byte, Class<?>> CODE_TO_CLASS,
+ Map<Class<?>, Byte> CLASS_TO_CODE){
+ byte code = 1;
+ for(int i=0; i<classList.length; i++){
+ CLASS_TO_CODE.put(classList[i], code);
+ CODE_TO_CLASS.put(code, classList[i]);
+ code++;
+ }
+ }
+ }
+}
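
For illustration only (not part of the patch): with classList = {byte[].class, Cell.class} and codes starting at 1, the loader maps byte[] to 1 and Cell to 2, and the two maps are inverses of each other.

import org.apache.hadoop.hbase.io.Cell;
import org.apache.hadoop.hbase.io.CodeToClassAndBack;

public class CodeMapSketch {
  public static void main(String[] args) {
    // Touching either map triggers initialization of the interface, which
    // runs InternalStaticLoader and fills both maps.
    byte cellCode = CodeToClassAndBack.CLASS_TO_CODE.get(Cell.class);
    Class<?> clazz = CodeToClassAndBack.CODE_TO_CLASS.get(cellCode);
    System.out.println("Cell <-> code " + cellCode + " <-> " + clazz.getName());
  }
}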
diff --git a/src/java/org/apache/hadoop/hbase/io/DataOutputBuffer.java b/src/java/org/apache/hadoop/hbase/io/DataOutputBuffer.java
new file mode 100644
index 0000000..acbd742
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/io/DataOutputBuffer.java
@@ -0,0 +1,135 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.io;
+
+import java.io.*;
+
+/** A reusable {@link DataOutput} implementation that writes to an in-memory
+ * buffer.
+ *
+ * <p>This is a copy brought local from Hadoop, along with SequenceFile, so we can fix bugs;
+ * e.g. hbase-1097</p>
+ *
+ * <p>This saves memory over creating a new DataOutputStream and
+ * ByteArrayOutputStream each time data is written.
+ *
+ * <p>Typical usage is something like the following:<pre>
+ *
+ * DataOutputBuffer buffer = new DataOutputBuffer();
+ * while (... loop condition ...) {
+ * buffer.reset();
+ * ... write buffer using DataOutput methods ...
+ * byte[] data = buffer.getData();
+ * int dataLength = buffer.getLength();
+ * ... write data to its ultimate destination ...
+ * }
+ * </pre>
+ *
+ */
+public class DataOutputBuffer extends DataOutputStream {
+
+ private static class Buffer extends ByteArrayOutputStream {
+ public byte[] getData() { return buf; }
+ public int getLength() { return count; }
+ // Keep the initial buffer around so we can put it back in place on reset.
+ private final byte [] initialBuffer;
+
+ public Buffer() {
+ super();
+ this.initialBuffer = this.buf;
+ }
+
+ public Buffer(int size) {
+ super(size);
+ this.initialBuffer = this.buf;
+ }
+
+ public void write(DataInput in, int len) throws IOException {
+ int newcount = count + len;
+ if (newcount > buf.length) {
+ byte newbuf[] = new byte[Math.max(buf.length << 1, newcount)];
+ System.arraycopy(buf, 0, newbuf, 0, count);
+ buf = newbuf;
+ }
+ in.readFully(buf, count, len);
+ count = newcount;
+ }
+
+ @Override
+ public synchronized void reset() {
+ // Reset to the initial buffer so we don't keep around the shape of the
+ // biggest value ever read.
+ this.buf = this.initialBuffer;
+ super.reset();
+ }
+ }
+
+ private Buffer buffer;
+
+ /** Constructs a new empty buffer. */
+ public DataOutputBuffer() {
+ this(new Buffer());
+ }
+
+ public DataOutputBuffer(int size) {
+ this(new Buffer(size));
+ }
+
+ private DataOutputBuffer(Buffer buffer) {
+ super(buffer);
+ this.buffer = buffer;
+ }
+
+ /** Returns the current contents of the buffer.
+ * Data is only valid to {@link #getLength()}.
+ * @return byte[]
+ */
+ public byte[] getData() { return buffer.getData(); }
+
+ /** Returns the length of the valid data currently in the buffer.
+ * @return int
+ */
+ public int getLength() { return buffer.getLength(); }
+
+ /** Resets the buffer to empty.
+ * @return DataOutputBuffer
+ */
+ public DataOutputBuffer reset() {
+ this.written = 0;
+ buffer.reset();
+ return this;
+ }
+
+ /** Writes bytes from a DataInput directly into the buffer.
+ * @param in
+ * @param length
+ * @throws IOException
+ */
+ public void write(DataInput in, int length) throws IOException {
+ buffer.write(in, length);
+ }
+
+ /** Write to a file stream
+ * @param out
+ * @throws IOException
+ */
+ public void writeTo(OutputStream out) throws IOException {
+ buffer.writeTo(out);
+ }
+}
\ No newline at end of file
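
A short sketch (not part of the patch) of the reuse pattern from the javadoc plus write(DataInput, int), which copies bytes straight from a DataInput into the buffer without an intermediate array. The record contents are arbitrary; DataInputBuffer is the stock Hadoop class.

import org.apache.hadoop.hbase.io.DataOutputBuffer;
import org.apache.hadoop.io.DataInputBuffer;

public class BufferReuseSketch {
  public static void main(String[] args) throws Exception {
    // Source data to copy from: a single long (8 bytes).
    DataOutputBuffer src = new DataOutputBuffer();
    src.writeLong(42L);

    DataInputBuffer in = new DataInputBuffer();
    DataOutputBuffer buffer = new DataOutputBuffer();
    for (int i = 0; i < 3; i++) {
      buffer.reset();                      // restores the initial backing array
      in.reset(src.getData(), src.getLength());
      buffer.write(in, src.getLength());   // copy 8 bytes, no temp array
      System.out.println("record " + i + ": " + buffer.getLength() + " bytes");
    }
  }
}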
diff --git a/src/java/org/apache/hadoop/hbase/io/HBaseMapFile.java b/src/java/org/apache/hadoop/hbase/io/HBaseMapFile.java
new file mode 100644
index 0000000..73ad535
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/io/HBaseMapFile.java
@@ -0,0 +1,132 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io;
+
+import java.io.IOException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HStoreKey;
+import org.apache.hadoop.io.Writable;
+
+/**
+ * HBase customizations of MapFile.
+ */
+public class HBaseMapFile extends MapFile {
+ // TODO not used. remove?!
+ // private static final Log LOG = LogFactory.getLog(HBaseMapFile.class);
+
+ /**
+ * Values are instances of this class.
+ */
+ public static final Class<? extends Writable> VALUE_CLASS =
+ ImmutableBytesWritable.class;
+
+ /**
+ * A reader capable of reading and caching blocks of the data file.
+ */
+ public static class HBaseReader extends MapFile.Reader {
+ private final boolean blockCacheEnabled;
+
+ /**
+ * @param fs
+ * @param dirName
+ * @param conf
+ * @param hri
+ * @throws IOException
+ */
+ public HBaseReader(FileSystem fs, String dirName, Configuration conf,
+ HRegionInfo hri)
+ throws IOException {
+ this(fs, dirName, conf, false, hri);
+ }
+
+ /**
+ * @param fs
+ * @param dirName
+ * @param conf
+ * @param blockCacheEnabled
+ * @param hri
+ * @throws IOException
+ */
+ public HBaseReader(FileSystem fs, String dirName, Configuration conf,
+ boolean blockCacheEnabled, HRegionInfo hri)
+ throws IOException {
+ super(fs, dirName, new HStoreKey.HStoreKeyComparator(),
+ conf, false); // defer opening streams
+ this.blockCacheEnabled = blockCacheEnabled;
+ open(fs, dirName, new HStoreKey.HStoreKeyComparator(), conf);
+
+ // Force reading of the mapfile index by calling midKey. Reading the
+ // index will bring the index into memory over here on the client and
+ // then close the index file freeing up socket connection and resources
+ // in the datanode. Usually, the first access on a MapFile.Reader will
+ // load the index; we force the issue in HStoreFile MapFiles because an
+ // access may not happen for some time; meantime we're using up datanode
+ // resources (See HADOOP-2341). midKey() goes to index. Does not seek.
+ midKey();
+ }
+
+ @Override
+ protected org.apache.hadoop.hbase.io.SequenceFile.Reader createDataFileReader(
+ FileSystem fs, Path dataFile, Configuration conf)
+ throws IOException {
+ if (!blockCacheEnabled) {
+ return super.createDataFileReader(fs, dataFile, conf);
+ }
+ final int blockSize = conf.getInt("hbase.hstore.blockCache.blockSize",
+ 64 * 1024);
+ return new SequenceFile.Reader(fs, dataFile, conf) {
+ @Override
+ protected FSDataInputStream openFile(FileSystem fs, Path file,
+ int bufferSize, long length)
+ throws IOException {
+ return new FSDataInputStream(new BlockFSInputStream(
+ super.openFile(fs, file, bufferSize, length), length,
+ blockSize));
+ }
+ };
+ }
+ }
+
+ public static class HBaseWriter extends MapFile.Writer {
+ /**
+ * @param conf
+ * @param fs
+ * @param dirName
+ * @param compression
+ * @param hri
+ * @throws IOException
+ */
+ public HBaseWriter(Configuration conf, FileSystem fs, String dirName,
+ SequenceFile.CompressionType compression, final HRegionInfo hri)
+ throws IOException {
+ super(conf, fs, dirName, new HStoreKey.HStoreKeyComparator(),
+ VALUE_CLASS, compression);
+ // Default for mapfiles is 128. Makes random reads faster if we
+ // have more keys indexed and we're not 'next'-ing around in the
+ // mapfile.
+ setIndexInterval(conf.getInt("hbase.io.index.interval", 128));
+ }
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/io/HalfHFileReader.java b/src/java/org/apache/hadoop/hbase/io/HalfHFileReader.java
new file mode 100644
index 0000000..ab15477
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/io/HalfHFileReader.java
@@ -0,0 +1,222 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.io.hfile.BlockCache;
+import org.apache.hadoop.hbase.io.hfile.HFile;
+import org.apache.hadoop.hbase.io.hfile.HFileScanner;
+import org.apache.hadoop.hbase.io.hfile.HFile.Reader;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * A facade for a {@link org.apache.hadoop.hbase.io.hfile.HFile.Reader} that serves up
+ * either the top or bottom half of a HFile where 'bottom' is the first half
+ * of the file containing the keys that sort lowest and 'top' is the second half
+ * of the file with keys that sort greater than those of the bottom half.
+ * The top includes the split file's midkey, or the key that follows if the
+ * midkey does not exist in the file.
+ *
+ * <p>This type works in tandem with the {@link Reference} type. This class
+ * is used reading while Reference is used writing.
+ *
+ * <p>This file is not splittable. Calls to {@link #midkey()} return null.
+ */
+public class HalfHFileReader extends HFile.Reader {
+ final Log LOG = LogFactory.getLog(HalfHFileReader.class);
+ final boolean top;
+ // This is the key we split around. Its the first possible entry on a row:
+ // i.e. empty column and a timestamp of LATEST_TIMESTAMP.
+ final byte [] splitkey;
+
+ /**
+ * @param fs
+ * @param p
+ * @param c
+ * @param r
+ * @throws IOException
+ */
+ public HalfHFileReader(final FileSystem fs, final Path p, final BlockCache c,
+ final Reference r)
+ throws IOException {
+ super(fs, p, c);
+ // This is not the actual midkey for this half-file; it's just the border
+ // around which we split top and bottom. Have to look in files to find
+ // actual last and first keys for bottom and top halves. Half-files don't
+ // have an actual midkey themselves. No midkey is how we indicate file is
+ // not splittable.
+ this.splitkey = r.getSplitKey();
+ // Is it top or bottom half?
+ this.top = Reference.isTopFileRegion(r.getFileRegion());
+ }
+
+ protected boolean isTop() {
+ return this.top;
+ }
+
+ @Override
+ public HFileScanner getScanner() {
+ final HFileScanner s = super.getScanner();
+ return new HFileScanner() {
+ final HFileScanner delegate = s;
+
+ public ByteBuffer getKey() {
+ return delegate.getKey();
+ }
+
+ public String getKeyString() {
+ return delegate.getKeyString();
+ }
+
+ public ByteBuffer getValue() {
+ return delegate.getValue();
+ }
+
+ public String getValueString() {
+ return delegate.getValueString();
+ }
+
+ public KeyValue getKeyValue() {
+ return delegate.getKeyValue();
+ }
+
+ public boolean next() throws IOException {
+ boolean b = delegate.next();
+ if (!b) {
+ return b;
+ }
+ if (!top) {
+ ByteBuffer bb = getKey();
+ if (getComparator().compare(bb.array(), bb.arrayOffset(), bb.limit(),
+ splitkey, 0, splitkey.length) >= 0) {
+ return false;
+ }
+ }
+ return true;
+ }
+
+ public boolean seekBefore(byte[] key) throws IOException {
+ return seekBefore(key, 0, key.length);
+ }
+
+ public boolean seekBefore(byte [] key, int offset, int length)
+ throws IOException {
+ if (top) {
+ if (getComparator().compare(key, offset, length, splitkey, 0,
+ splitkey.length) < 0) {
+ return false;
+ }
+ } else {
+ if (getComparator().compare(key, offset, length, splitkey, 0,
+ splitkey.length) >= 0) {
+ return seekBefore(splitkey, 0, splitkey.length);
+ }
+ }
+ return this.delegate.seekBefore(key, offset, length);
+ }
+
+ public boolean seekTo() throws IOException {
+ if (top) {
+ int r = this.delegate.seekTo(splitkey);
+ if (r < 0) {
+ // midkey is < first key in file
+ return this.delegate.seekTo();
+ }
+ if (r > 0) {
+ return this.delegate.next();
+ }
+ return true;
+ }
+
+ boolean b = delegate.seekTo();
+ if (!b) {
+ return b;
+ }
+ // Check key.
+ ByteBuffer k = this.delegate.getKey();
+ return this.delegate.getReader().getComparator().
+ compare(k.array(), k.arrayOffset(), k.limit(),
+ splitkey, 0, splitkey.length) < 0;
+ }
+
+ public int seekTo(byte[] key) throws IOException {
+ return seekTo(key, 0, key.length);
+ }
+
+ public int seekTo(byte[] key, int offset, int length) throws IOException {
+ if (top) {
+ if (getComparator().compare(key, offset, length, splitkey, 0,
+ splitkey.length) < 0) {
+ return -1;
+ }
+ } else {
+ if (getComparator().compare(key, offset, length, splitkey, 0,
+ splitkey.length) >= 0) {
+ // we would place the scanner in the second half.
+ // it might be an error to return false here ever...
+ boolean res = delegate.seekBefore(splitkey, 0, splitkey.length);
+ if (!res) {
+ throw new IOException("Seeking for a key in bottom of file, but key exists in top of file, failed on seekBefore(midkey)");
+ }
+ return 1;
+ }
+ }
+ return delegate.seekTo(key, offset, length);
+ }
+
+ public Reader getReader() {
+ return this.delegate.getReader();
+ }
+
+ public boolean isSeeked() {
+ return this.delegate.isSeeked();
+ }
+ };
+ }
+
+ @Override
+ public byte[] getLastKey() {
+ if (top) {
+ return super.getLastKey();
+ }
+ HFileScanner scanner = getScanner();
+ try {
+ if (scanner.seekBefore(this.splitkey)) {
+ return Bytes.toBytes(scanner.getKey());
+ }
+ } catch (IOException e) {
+ LOG.warn("Failed seekBefore " + Bytes.toString(this.splitkey), e);
+ }
+ return null;
+ }
+
+ @Override
+ public byte[] midkey() throws IOException {
+ // Returns null to indicate file is not splittable.
+ return null;
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/io/HalfMapFileReader.java b/src/java/org/apache/hadoop/hbase/io/HalfMapFileReader.java
new file mode 100644
index 0000000..aff3fc0
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/io/HalfMapFileReader.java
@@ -0,0 +1,213 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io;
+
+import java.io.IOException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HStoreKey;
+import org.apache.hadoop.hbase.io.Reference.Range;
+import org.apache.hadoop.hbase.util.Writables;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+
+/**
+ * A facade for a {@link org.apache.hadoop.io.MapFile.Reader} that serves up
+ * either the top or bottom half of a MapFile where 'bottom' is the first half
+ * of the file containing the keys that sort lowest and 'top' is the second half
+ * of the file with keys that sort greater than those of the bottom half.
+ * The top includes the split file's midkey, or the key that follows if the
+ * midkey does not exist in the file.
+ *
+ * <p>This type works in tandem with the {@link Reference} type. This class
+ * is used reading while Reference is used writing.
+ *
+ * <p>This file is not splittable. Calls to {@link #midKey()} return null.
+ */
+//TODO should be fixed generic warnings from MapFile methods
+public class HalfMapFileReader extends HBaseMapFile.HBaseReader {
+ private final boolean top;
+ private final HStoreKey midkey;
+ private boolean firstNextCall = true;
+
+ /**
+ * @param fs
+ * @param dirName
+ * @param conf
+ * @param r
+ * @param mk
+ * @param hri
+ * @throws IOException
+ */
+ public HalfMapFileReader(final FileSystem fs, final String dirName,
+ final Configuration conf, final Range r,
+ final WritableComparable<HStoreKey> mk,
+ final HRegionInfo hri)
+ throws IOException {
+ this(fs, dirName, conf, r, mk, false, hri);
+ }
+
+ /**
+ * @param fs
+ * @param dirName
+ * @param conf
+ * @param r
+ * @param mk
+ * @param blockCacheEnabled
+ * @param hri
+ * @throws IOException
+ */
+ public HalfMapFileReader(final FileSystem fs, final String dirName,
+ final Configuration conf, final Range r,
+ final WritableComparable<HStoreKey> mk,
+ final boolean blockCacheEnabled,
+ final HRegionInfo hri)
+ throws IOException {
+ super(fs, dirName, conf, blockCacheEnabled, hri);
+ // This is not the actual midkey for this half-file; it's just the border
+ // around which we split top and bottom. Have to look in files to find
+ // actual last and first keys for bottom and top halves. Half-files don't
+ // have an actual midkey themselves. No midkey is how we indicate file is
+ // not splittable.
+ this.midkey = new HStoreKey((HStoreKey)mk);
+ // Is it top or bottom half?
+ this.top = Reference.isTopFileRegion(r);
+ }
+
+ /*
+ * Check key is not bleeding into wrong half of the file.
+ * @param key
+ * @throws IOException
+ */
+ private void checkKey(final WritableComparable<HStoreKey> key)
+ throws IOException {
+ if (top) {
+ if (key.compareTo(midkey) < 0) {
+ throw new IOException("Illegal Access: Key is less than midKey of " +
+ "backing mapfile");
+ }
+ } else if (key.compareTo(midkey) >= 0) {
+ throw new IOException("Illegal Access: Key is greater than or equal " +
+ "to midKey of backing mapfile");
+ }
+ }
+
+ @SuppressWarnings("unchecked")
+ @Override
+ public synchronized void finalKey(WritableComparable key)
+ throws IOException {
+ if (top) {
+ super.finalKey(key);
+ } else {
+ Writable value = new ImmutableBytesWritable();
+ WritableComparable found = super.getClosest(midkey, value, true);
+ Writables.copyWritable(found, key);
+ }
+ }
+
+ @SuppressWarnings("unchecked")
+ @Override
+ public synchronized Writable get(WritableComparable key, Writable val)
+ throws IOException {
+ checkKey(key);
+ return super.get(key, val);
+ }
+
+ @SuppressWarnings("unchecked")
+ @Override
+ public synchronized WritableComparable getClosest(WritableComparable key,
+ Writable val)
+ throws IOException {
+ WritableComparable closest = null;
+ if (top) {
+ // If top, the lowest possible key is first key. Do not have to check
+ // what comes back from super getClosest. Will return exact match or
+ // greater.
+ closest = (key.compareTo(this.midkey) < 0)?
+ this.midkey: super.getClosest(key, val);
+ } else {
+ // We're serving bottom of the file.
+ if (key.compareTo(this.midkey) < 0) {
+ // Check key is within range for bottom.
+ closest = super.getClosest(key, val);
+ // midkey was made against largest store file at time of split. Smaller
+ // store files could have anything in them. Check return value is
+ // not beyond the midkey (getClosest returns exact match or next after)
+ if (closest != null && closest.compareTo(this.midkey) >= 0) {
+ // Don't let this value out.
+ closest = null;
+ }
+ }
+ // Else, key is > midkey so let out closest = null.
+ }
+ return closest;
+ }
+
+ @SuppressWarnings("unchecked")
+ @Override
+ public synchronized WritableComparable midKey() throws IOException {
+ // Returns null to indicate file is not splittable.
+ return null;
+ }
+
+ @SuppressWarnings("unchecked")
+ @Override
+ public synchronized boolean next(WritableComparable key, Writable val)
+ throws IOException {
+ if (firstNextCall) {
+ firstNextCall = false;
+ if (this.top) {
+ // Seek to midkey. Midkey may not exist in this file. That should be
+ // fine. Then we'll either be positioned at end or start of file.
+ WritableComparable nearest = getClosest(this.midkey, val);
+ // Now copy the midkey into the passed key.
+ if (nearest != null) {
+ Writables.copyWritable(nearest, key);
+ return true;
+ }
+ return false;
+ }
+ }
+ boolean result = super.next(key, val);
+ if (!top && key.compareTo(midkey) >= 0) {
+ result = false;
+ }
+ return result;
+ }
+
+ @Override
+ public synchronized void reset() throws IOException {
+ if (top) {
+ firstNextCall = true;
+ return;
+ }
+ super.reset();
+ }
+
+ @SuppressWarnings("unchecked")
+ @Override
+ public synchronized boolean seek(WritableComparable key)
+ throws IOException {
+ checkKey(key);
+ return super.seek(key);
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/io/HbaseMapWritable.java b/src/java/org/apache/hadoop/hbase/io/HbaseMapWritable.java
new file mode 100644
index 0000000..a549913
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/io/HbaseMapWritable.java
@@ -0,0 +1,221 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Comparator;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+import java.util.concurrent.atomic.AtomicReference;
+
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.util.ReflectionUtils;
+
+/**
+ * A Writable Map.
+ * Like {@link org.apache.hadoop.io.MapWritable} but dumb. It will fail
+ * if passed a value type that it has not already been told about. It has been
+ * primed with hbase Writables and byte []. Keys are always byte arrays.
+ *
+ * @param <K> <byte []> key TODO: Parameter K is never used, could be removed.
+ * @param <V> value Expects a Writable or byte [].
+ */
+public class HbaseMapWritable <K,V>
+implements SortedMap<byte[],V>, Configurable, Writable, CodeToClassAndBack{
+ private AtomicReference<Configuration> conf = null;
+ protected SortedMap<byte [], V> instance = null;
+
+ /**
+ * The default constructor, where a TreeMap is used
+ **/
+ public HbaseMapWritable(){
+ this (new TreeMap<byte [], V>(Bytes.BYTES_COMPARATOR));
+ }
+
+ /**
+ * Constructor where another SortedMap can be used
+ *
+ * @param map the SortedMap to be used
+ */
+ public HbaseMapWritable(SortedMap<byte[], V> map){
+ conf = new AtomicReference<Configuration>();
+ instance = map;
+ }
+
+
+ /** @return the conf */
+ public Configuration getConf() {
+ return conf.get();
+ }
+
+ /** @param conf the conf to set */
+ public void setConf(Configuration conf) {
+ this.conf.set(conf);
+ }
+
+ public void clear() {
+ instance.clear();
+ }
+
+ public boolean containsKey(Object key) {
+ return instance.containsKey(key);
+ }
+
+ public boolean containsValue(Object value) {
+ return instance.containsValue(value);
+ }
+
+ public Set<Entry<byte [], V>> entrySet() {
+ return instance.entrySet();
+ }
+
+ public V get(Object key) {
+ return instance.get(key);
+ }
+
+ public boolean isEmpty() {
+ return instance.isEmpty();
+ }
+
+ public Set<byte []> keySet() {
+ return instance.keySet();
+ }
+
+ public int size() {
+ return instance.size();
+ }
+
+ public Collection<V> values() {
+ return instance.values();
+ }
+
+ public void putAll(Map<? extends byte [], ? extends V> m) {
+ this.instance.putAll(m);
+ }
+
+ public V remove(Object key) {
+ return this.instance.remove(key);
+ }
+
+ public V put(byte [] key, V value) {
+ return this.instance.put(key, value);
+ }
+
+ public Comparator<? super byte[]> comparator() {
+ return this.instance.comparator();
+ }
+
+ public byte[] firstKey() {
+ return this.instance.firstKey();
+ }
+
+ public SortedMap<byte[], V> headMap(byte[] toKey) {
+ return this.instance.headMap(toKey);
+ }
+
+ public byte[] lastKey() {
+ return this.instance.lastKey();
+ }
+
+ public SortedMap<byte[], V> subMap(byte[] fromKey, byte[] toKey) {
+ return this.instance.subMap(fromKey, toKey);
+ }
+
+ public SortedMap<byte[], V> tailMap(byte[] fromKey) {
+ return this.instance.tailMap(fromKey);
+ }
+
+ // Writable
+
+ /** @return the Class class for the specified id */
+ @SuppressWarnings("boxing")
+ protected Class<?> getClass(byte id) {
+ return CODE_TO_CLASS.get(id);
+ }
+
+ /** @return the id for the specified Class */
+ @SuppressWarnings("boxing")
+ protected byte getId(Class<?> clazz) {
+ Byte b = CLASS_TO_CODE.get(clazz);
+ if (b == null) {
+ throw new NullPointerException("Nothing for : " + clazz);
+ }
+ return b;
+ }
+
+ /**
+ * @see java.lang.Object#toString()
+ */
+ @Override
+ public String toString() {
+ return this.instance.toString();
+ }
+
+ public void write(DataOutput out) throws IOException {
+ // Write out the number of entries in the map
+ out.writeInt(this.instance.size());
+ // Then write out each key/value pair
+ for (Map.Entry<byte [], V> e: instance.entrySet()) {
+ Bytes.writeByteArray(out, e.getKey());
+ Byte id = getId(e.getValue().getClass());
+ out.writeByte(id);
+ Object value = e.getValue();
+ if (value instanceof byte []) {
+ Bytes.writeByteArray(out, (byte [])value);
+ } else {
+ ((Writable)value).write(out);
+ }
+ }
+ }
+
+ @SuppressWarnings("unchecked")
+ public void readFields(DataInput in) throws IOException {
+ // First clear the map. Otherwise we will just accumulate
+ // entries every time this method is called.
+ this.instance.clear();
+ // Read the number of entries in the map
+ int entries = in.readInt();
+ // Then read each key/value pair
+ for (int i = 0; i < entries; i++) {
+ byte [] key = Bytes.readByteArray(in);
+ byte id = in.readByte();
+ Class clazz = getClass(id);
+ V value = null;
+ if (clazz.equals(byte [].class)) {
+ byte [] bytes = Bytes.readByteArray(in);
+ value = (V)bytes;
+ } else {
+ Writable w = (Writable)ReflectionUtils.
+ newInstance(clazz, getConf());
+ w.readFields(in);
+ value = (V)w;
+ }
+ this.instance.put(key, value);
+ }
+ }
+}
\ No newline at end of file
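
A round-trip sketch (not part of the patch) for HbaseMapWritable. Per CodeToClassAndBack above, only byte[] and Cell value types are supported; anything else fails in getId() during write(). The column name and cell contents are hypothetical, and DataOutputBuffer is the class added earlier in this patch.

import org.apache.hadoop.hbase.io.Cell;
import org.apache.hadoop.hbase.io.DataOutputBuffer;
import org.apache.hadoop.hbase.io.HbaseMapWritable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.DataInputBuffer;

public class MapWritableSketch {
  public static void main(String[] args) throws Exception {
    HbaseMapWritable<byte[], Cell> map = new HbaseMapWritable<byte[], Cell>();
    map.put(Bytes.toBytes("info:name"), new Cell(Bytes.toBytes("value"), 100L));

    // Serialize: the value's class is encoded as a one-byte code.
    DataOutputBuffer out = new DataOutputBuffer();
    map.write(out);

    // Deserialize into a fresh instance.
    HbaseMapWritable<byte[], Cell> copy = new HbaseMapWritable<byte[], Cell>();
    DataInputBuffer in = new DataInputBuffer();
    in.reset(out.getData(), out.getLength());
    copy.readFields(in);

    Cell cell = copy.get(Bytes.toBytes("info:name"));
    System.out.println(copy.size() + " entry, value=" + Bytes.toString(cell.getValue()));
  }
}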
diff --git a/src/java/org/apache/hadoop/hbase/io/HbaseObjectWritable.java b/src/java/org/apache/hadoop/hbase/io/HbaseObjectWritable.java
new file mode 100644
index 0000000..c2a3d4a
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/io/HbaseObjectWritable.java
@@ -0,0 +1,416 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.io;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.lang.reflect.Array;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HMsg;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.filter.RowFilterInterface;
+import org.apache.hadoop.hbase.filter.RowFilterSet;
+import org.apache.hadoop.hbase.io.HbaseMapWritable;
+import org.apache.hadoop.io.MapWritable;
+import org.apache.hadoop.io.ObjectWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableFactories;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * This is a customized version of the polymorphic hadoop
+ * {@link ObjectWritable}. It removes UTF8 (HADOOP-414).
+ * Using {@link Text} instead of UTF-8 saves ~2% CPU between reading and writing
+ * objects running a short sequentialWrite Performance Evaluation test just in
+ * ObjectWritable alone; more when we're doing randomRead-ing. Other
+ * optimizations include our passing codes for classes instead of the
+ * actual class names themselves. This means this class needs amending whenever
+ * new classes are introduced -- if passed a Writable for which we have no code,
+ * we fall back to the old-school passing of the class name, etc. -- but with
+ * codes the savings are large, particularly when cell data is small (if under a
+ * couple of kilobytes, encoding/decoding the class name and using reflection to
+ * instantiate the class was costing more than the cell handling itself).
+ */
+public class HbaseObjectWritable implements Writable, Configurable {
+ protected final static Log LOG = LogFactory.getLog(HbaseObjectWritable.class);
+
+ // Here we maintain two static maps of classes to code and vice versa.
+ // Add new classes+codes as wanted or figure way to auto-generate these
+ // maps from the HMasterInterface.
+ static final Map<Byte, Class<?>> CODE_TO_CLASS =
+ new HashMap<Byte, Class<?>>();
+ static final Map<Class<?>, Byte> CLASS_TO_CODE =
+ new HashMap<Class<?>, Byte>();
+ // Special code that means 'not-encoded'; in this case we do old school
+ // sending of the class name using reflection, etc.
+ private static final byte NOT_ENCODED = 0;
+ static {
+ byte code = NOT_ENCODED + 1;
+ // Primitive types.
+ addToMap(Boolean.TYPE, code++);
+ addToMap(Byte.TYPE, code++);
+ addToMap(Character.TYPE, code++);
+ addToMap(Short.TYPE, code++);
+ addToMap(Integer.TYPE, code++);
+ addToMap(Long.TYPE, code++);
+ addToMap(Float.TYPE, code++);
+ addToMap(Double.TYPE, code++);
+ addToMap(Void.TYPE, code++);
+ // Other java types
+ addToMap(String.class, code++);
+ addToMap(byte [].class, code++);
+ addToMap(byte [][].class, code++);
+ // Hadoop types
+ addToMap(Text.class, code++);
+ addToMap(Writable.class, code++);
+ addToMap(Writable [].class, code++);
+ addToMap(HbaseMapWritable.class, code++);
+ addToMap(NullInstance.class, code++);
+ try {
+ addToMap(Class.forName("[Lorg.apache.hadoop.io.Text;"), code++);
+ } catch (ClassNotFoundException e) {
+ e.printStackTrace();
+ }
+ // Hbase types
+ addToMap(HServerInfo.class, code++);
+ addToMap(HMsg.class, code++);
+ addToMap(HTableDescriptor.class, code++);
+ addToMap(HColumnDescriptor.class, code++);
+ addToMap(RowFilterInterface.class, code++);
+ addToMap(RowFilterSet.class, code++);
+ addToMap(HRegionInfo.class, code++);
+ addToMap(BatchUpdate.class, code++);
+ addToMap(HServerAddress.class, code++);
+ try {
+ addToMap(Class.forName("[Lorg.apache.hadoop.hbase.HMsg;"), code++);
+ } catch (ClassNotFoundException e) {
+ e.printStackTrace();
+ }
+ addToMap(Cell.class, code++);
+ try {
+ addToMap(Class.forName("[Lorg.apache.hadoop.hbase.io.Cell;"), code++);
+ } catch (ClassNotFoundException e) {
+ e.printStackTrace();
+ }
+ addToMap(RowResult.class, code++);
+ addToMap(HRegionInfo[].class, code++);
+ addToMap(MapWritable.class, code++);
+ try {
+ addToMap(Class.forName("[Lorg.apache.hadoop.hbase.io.RowResult;"), code++);
+ } catch (ClassNotFoundException e) {
+ e.printStackTrace();
+ }
+ addToMap(BatchUpdate[].class, code++);
+ }
+
+ private Class<?> declaredClass;
+ private Object instance;
+ private Configuration conf;
+
+ /** default constructor for writable */
+ public HbaseObjectWritable() {
+ super();
+ }
+
+ /**
+ * @param instance
+ */
+ public HbaseObjectWritable(Object instance) {
+ set(instance);
+ }
+
+ /**
+ * @param declaredClass
+ * @param instance
+ */
+ public HbaseObjectWritable(Class<?> declaredClass, Object instance) {
+ this.declaredClass = declaredClass;
+ this.instance = instance;
+ }
+
+ /** @return the instance, or null if none. */
+ public Object get() { return instance; }
+
+ /** @return the class this is meant to be. */
+ public Class<?> getDeclaredClass() { return declaredClass; }
+
+ /**
+ * Reset the instance.
+ * @param instance
+ */
+ public void set(Object instance) {
+ this.declaredClass = instance.getClass();
+ this.instance = instance;
+ }
+
+ /**
+ * @see java.lang.Object#toString()
+ */
+ @Override
+ public String toString() {
+ return "OW[class=" + declaredClass + ",value=" + instance + "]";
+ }
+
+
+ public void readFields(DataInput in) throws IOException {
+ readObject(in, this, this.conf);
+ }
+
+ public void write(DataOutput out) throws IOException {
+ writeObject(out, instance, declaredClass, conf);
+ }
+
+ private static class NullInstance extends Configured implements Writable {
+ Class<?> declaredClass;
+ /** default constructor for writable */
+ public NullInstance() { super(null); }
+
+ /**
+ * @param declaredClass
+ * @param conf
+ */
+ public NullInstance(Class<?> declaredClass, Configuration conf) {
+ super(conf);
+ this.declaredClass = declaredClass;
+ }
+
+ public void readFields(DataInput in) throws IOException {
+ this.declaredClass = CODE_TO_CLASS.get(in.readByte());
+ }
+
+ public void write(DataOutput out) throws IOException {
+ writeClassCode(out, this.declaredClass);
+ }
+ }
+
+ /**
+ * Write out the code byte for passed Class.
+ * @param out
+ * @param c
+ * @throws IOException
+ */
+ static void writeClassCode(final DataOutput out, final Class<?> c)
+ throws IOException {
+ Byte code = CLASS_TO_CODE.get(c);
+ if (code == null) {
+ LOG.error("Unsupported type " + c);
+ throw new UnsupportedOperationException("No code for unexpected " + c);
+ }
+ out.writeByte(code);
+ }
+
+ /**
+ * Write a {@link Writable}, {@link String}, primitive type, or an array of
+ * the preceding.
+ * @param out
+ * @param instance
+ * @param declaredClass
+ * @param conf
+ * @throws IOException
+ */
+ @SuppressWarnings("unchecked")
+ public static void writeObject(DataOutput out, Object instance,
+ Class declaredClass,
+ Configuration conf)
+ throws IOException {
+
+ Object instanceObj = instance;
+ Class declClass = declaredClass;
+
+ if (instanceObj == null) { // null
+ instanceObj = new NullInstance(declClass, conf);
+ declClass = Writable.class;
+ }
+ writeClassCode(out, declClass);
+ if (declClass.isArray()) { // array
+ // If bytearray, just dump it out -- avoid the recursion and
+ // byte-at-a-time we were previously doing.
+ if (declClass.equals(byte [].class)) {
+ Bytes.writeByteArray(out, (byte [])instanceObj);
+ } else {
+ int length = Array.getLength(instanceObj);
+ out.writeInt(length);
+ for (int i = 0; i < length; i++) {
+ writeObject(out, Array.get(instanceObj, i),
+ declClass.getComponentType(), conf);
+ }
+ }
+ } else if (declClass == String.class) { // String
+ Text.writeString(out, (String)instanceObj);
+ } else if (declClass.isPrimitive()) { // primitive type
+ if (declClass == Boolean.TYPE) { // boolean
+ out.writeBoolean(((Boolean)instanceObj).booleanValue());
+ } else if (declClass == Character.TYPE) { // char
+ out.writeChar(((Character)instanceObj).charValue());
+ } else if (declClass == Byte.TYPE) { // byte
+ out.writeByte(((Byte)instanceObj).byteValue());
+ } else if (declClass == Short.TYPE) { // short
+ out.writeShort(((Short)instanceObj).shortValue());
+ } else if (declClass == Integer.TYPE) { // int
+ out.writeInt(((Integer)instanceObj).intValue());
+ } else if (declClass == Long.TYPE) { // long
+ out.writeLong(((Long)instanceObj).longValue());
+ } else if (declClass == Float.TYPE) { // float
+ out.writeFloat(((Float)instanceObj).floatValue());
+ } else if (declClass == Double.TYPE) { // double
+ out.writeDouble(((Double)instanceObj).doubleValue());
+ } else if (declClass == Void.TYPE) { // void
+ } else {
+ throw new IllegalArgumentException("Not a primitive: "+declClass);
+ }
+ } else if (declClass.isEnum()) { // enum
+ Text.writeString(out, ((Enum)instanceObj).name());
+ } else if (Writable.class.isAssignableFrom(declClass)) { // Writable
+ Class <?> c = instanceObj.getClass();
+ Byte code = CLASS_TO_CODE.get(c);
+ if (code == null) {
+ out.writeByte(NOT_ENCODED);
+ Text.writeString(out, c.getName());
+ } else {
+ writeClassCode(out, c);
+ }
+ ((Writable)instanceObj).write(out);
+ } else {
+ throw new IOException("Can't write: "+instanceObj+" as "+declClass);
+ }
+ }
+
+
+ /**
+ * Read a {@link Writable}, {@link String}, primitive type, or an array of
+ * the preceding.
+ * @param in
+ * @param conf
+ * @return the object
+ * @throws IOException
+ */
+ public static Object readObject(DataInput in, Configuration conf)
+ throws IOException {
+ return readObject(in, null, conf);
+ }
+
+ /**
+ * Read a {@link Writable}, {@link String}, primitive type, or an array of
+ * the preceding.
+ * @param in
+ * @param objectWritable
+ * @param conf
+ * @return the object
+ * @throws IOException
+ */
+ @SuppressWarnings("unchecked")
+ public static Object readObject(DataInput in,
+ HbaseObjectWritable objectWritable, Configuration conf)
+ throws IOException {
+ Class<?> declaredClass = CODE_TO_CLASS.get(in.readByte());
+ Object instance;
+ if (declaredClass.isPrimitive()) { // primitive types
+ if (declaredClass == Boolean.TYPE) { // boolean
+ instance = Boolean.valueOf(in.readBoolean());
+ } else if (declaredClass == Character.TYPE) { // char
+ instance = Character.valueOf(in.readChar());
+ } else if (declaredClass == Byte.TYPE) { // byte
+ instance = Byte.valueOf(in.readByte());
+ } else if (declaredClass == Short.TYPE) { // short
+ instance = Short.valueOf(in.readShort());
+ } else if (declaredClass == Integer.TYPE) { // int
+ instance = Integer.valueOf(in.readInt());
+ } else if (declaredClass == Long.TYPE) { // long
+ instance = Long.valueOf(in.readLong());
+ } else if (declaredClass == Float.TYPE) { // float
+ instance = Float.valueOf(in.readFloat());
+ } else if (declaredClass == Double.TYPE) { // double
+ instance = Double.valueOf(in.readDouble());
+ } else if (declaredClass == Void.TYPE) { // void
+ instance = null;
+ } else {
+ throw new IllegalArgumentException("Not a primitive: "+declaredClass);
+ }
+ } else if (declaredClass.isArray()) { // array
+ if (declaredClass.equals(byte [].class)) {
+ instance = Bytes.readByteArray(in);
+ } else {
+ int length = in.readInt();
+ instance = Array.newInstance(declaredClass.getComponentType(), length);
+ for (int i = 0; i < length; i++) {
+ Array.set(instance, i, readObject(in, conf));
+ }
+ }
+ } else if (declaredClass == String.class) { // String
+ instance = Text.readString(in);
+ } else if (declaredClass.isEnum()) { // enum
+ instance = Enum.valueOf((Class<? extends Enum>) declaredClass,
+ Text.readString(in));
+ } else { // Writable
+ Class instanceClass = null;
+ Byte b = in.readByte();
+ if (b.byteValue() == NOT_ENCODED) {
+ String className = Text.readString(in);
+ try {
+ instanceClass = conf.getClassByName(className);
+ } catch (ClassNotFoundException e) {
+ throw new RuntimeException("Can't find class " + className);
+ }
+ } else {
+ instanceClass = CODE_TO_CLASS.get(b);
+ }
+ Writable writable = WritableFactories.newInstance(instanceClass, conf);
+ writable.readFields(in);
+ instance = writable;
+ if (instanceClass == NullInstance.class) { // null
+ declaredClass = ((NullInstance)instance).declaredClass;
+ instance = null;
+ }
+ }
+ if (objectWritable != null) { // store values
+ objectWritable.declaredClass = declaredClass;
+ objectWritable.instance = instance;
+ }
+ return instance;
+ }
+
+ private static void addToMap(final Class<?> clazz, final byte code) {
+ CLASS_TO_CODE.put(clazz, code);
+ CODE_TO_CLASS.put(code, clazz);
+ }
+
+ public void setConf(Configuration conf) {
+ this.conf = conf;
+ }
+
+ public Configuration getConf() {
+ return this.conf;
+ }
+}
\ No newline at end of file
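
Not part of the patch, but a minimal sketch of the class-code scheme described in the javadoc above: a registered class such as String is prefixed by a single code byte rather than its full class name.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.io.HbaseObjectWritable;

public class ObjectWritableDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    // String.class has a registered code, so only one byte of type
    // information precedes the payload.
    HbaseObjectWritable.writeObject(new DataOutputStream(bos), "a row key",
        String.class, conf);

    DataInputStream in = new DataInputStream(
        new ByteArrayInputStream(bos.toByteArray()));
    String decoded = (String) HbaseObjectWritable.readObject(in, conf);
    System.out.println(decoded); // prints "a row key"
  }
}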
diff --git a/src/java/org/apache/hadoop/hbase/io/HeapSize.java b/src/java/org/apache/hadoop/hbase/io/HeapSize.java
new file mode 100644
index 0000000..13858e9
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/io/HeapSize.java
@@ -0,0 +1,65 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io;
+
+/**
+ * Implementations can be asked for an estimate of their size in bytes.
+ * Useful for sizing caches. It's a given that implementation approximations
+ * probably do not account for 32 vs 64 bit nor for different VM implementations.
+ */
+public interface HeapSize {
+
+ /** Reference size is 8 bytes on 64-bit, 4 bytes on 32-bit */
+ static final int REFERENCE = 8;
+
+ /** Object overhead is minimum 2 * reference size (8 bytes on 64-bit) */
+ static final int OBJECT = 2 * REFERENCE;
+
+ /**
+ * The following types are always allocated in blocks of 8 bytes (on 64-bit).
+ * For example, if you have two ints in a class, it will use 8 bytes.
+ * If you have three ints in a class, it will use 16 bytes.
+ */
+ static final int SHORT = 4;
+ static final int INT = 4;
+ static final int FLOAT = 4;
+ static final int BOOLEAN = 4;
+ static final int CHAR = 4;
+ static final int BYTE = 1;
+
+ /** These types are always 8 bytes */
+ static final int DOUBLE = 8;
+ static final int LONG = 8;
+
+ /** Array overhead */
+ static final int BYTE_ARRAY = REFERENCE;
+ static final int ARRAY = 3 * REFERENCE;
+ static final int MULTI_ARRAY = (4 * REFERENCE) + ARRAY;
+
+ static final int BLOCK_SIZE_TAX = 8;
+
+
+
+ /**
+ * @return Approximate 'exclusive deep size' of the implementing object. Includes
+ * the size of the payload and of the hosting object itself.
+ */
+ public long heapSize();
+}
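
A minimal sketch (not part of the patch) of how an implementor might combine the constants above into an estimate. CachedBlock is a hypothetical class, and the arithmetic is only an approximation, as the interface javadoc warns.

import org.apache.hadoop.hbase.io.HeapSize;

public class CachedBlock implements HeapSize {
  private final byte [] payload;
  private final long accessTime;

  public CachedBlock(byte [] payload, long accessTime) {
    this.payload = payload;
    this.accessTime = accessTime;
  }

  public long heapSize() {
    // object header + reference to the array + one long field,
    // plus the array overhead and the payload bytes themselves
    return OBJECT + REFERENCE + LONG + BYTE_ARRAY + payload.length;
  }
}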
diff --git a/src/java/org/apache/hadoop/hbase/io/ImmutableBytesWritable.java b/src/java/org/apache/hadoop/hbase/io/ImmutableBytesWritable.java
new file mode 100644
index 0000000..36235ee
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/io/ImmutableBytesWritable.java
@@ -0,0 +1,247 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.io;
+
+import java.io.IOException;
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.util.List;
+
+import org.apache.hadoop.io.BytesWritable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.io.WritableComparator;
+
+/**
+ * A byte sequence that is usable as a key or value. Based on
+ * {@link org.apache.hadoop.io.BytesWritable} only this class is NOT resizable
+ * and DOES NOT distinguish between the size of the seqeunce and the current
+ * capacity as {@link org.apache.hadoop.io.BytesWritable} does. Hence its
+ * comparatively 'immutable'. When creating a new instance of this class,
+ * the underlying byte [] is not copied, just referenced. The backing
+ * buffer is accessed when we go to serialize.
+ */
+public class ImmutableBytesWritable
+implements WritableComparable<ImmutableBytesWritable> {
+ private byte[] bytes;
+ private int offset;
+ private int length;
+
+ /**
+ * Create a zero-size sequence.
+ */
+ public ImmutableBytesWritable() {
+ super();
+ }
+
+ /**
+ * Create a ImmutableBytesWritable using the byte array as the initial value.
+ * @param bytes This array becomes the backing storage for the object.
+ */
+ public ImmutableBytesWritable(byte[] bytes) {
+ this(bytes, 0, bytes.length);
+ }
+
+ /**
+ * Set the new ImmutableBytesWritable to the contents of the passed
+ * <code>ibw</code>.
+ * @param ibw the value to set this ImmutableBytesWritable to.
+ */
+ public ImmutableBytesWritable(final ImmutableBytesWritable ibw) {
+ this(ibw.get(), 0, ibw.getSize());
+ }
+
+ /**
+ * Set the value to a given byte range
+ * @param bytes the new byte range to set to
+ * @param offset the offset in newData to start at
+ * @param length the number of bytes in the range
+ */
+ public ImmutableBytesWritable(final byte[] bytes, final int offset,
+ final int length) {
+ this.bytes = bytes;
+ this.offset = offset;
+ this.length = length;
+ }
+
+ /**
+ * Get the data from the BytesWritable.
+ * @return The data is only valid between 0 and getSize() - 1.
+ */
+ public byte [] get() {
+ if (this.bytes == null) {
+ throw new IllegalStateException("Uninitialiized. Null constructor " +
+ "called w/o accompaying readFields invocation");
+ }
+ return this.bytes;
+ }
+
+ /**
+ * @param b Use passed bytes as backing array for this instance.
+ */
+ public void set(final byte [] b) {
+ set(b, 0, b.length);
+ }
+
+ /**
+ * @param b Use passed bytes as backing array for this instance.
+ * @param offset
+ * @param length
+ */
+ public void set(final byte [] b, final int offset, final int length) {
+ this.bytes = b;
+ this.offset = offset;
+ this.length = length;
+ }
+
+ /**
+ * @return the current size of the buffer.
+ */
+ public int getSize() {
+ if (this.bytes == null) {
+ throw new IllegalStateException("Uninitialiized. Null constructor " +
+ "called w/o accompaying readFields invocation");
+ }
+ return this.length;
+ }
+
+ public int getLength() {
+ return getSize();
+ }
+
+ public int getOffset(){
+ return this.offset;
+ }
+
+ public void readFields(final DataInput in) throws IOException {
+ this.length = in.readInt();
+ this.bytes = new byte[this.length];
+ in.readFully(this.bytes, 0, this.length);
+ this.offset = 0;
+ }
+
+ public void write(final DataOutput out) throws IOException {
+ out.writeInt(this.length);
+ out.write(this.bytes, this.offset, this.length);
+ }
+
+ // Below methods copied from BytesWritable
+
+ @Override
+ public int hashCode() {
+ return WritableComparator.hashBytes(bytes, this.length);
+ }
+
+ /**
+ * Define the sort order of the BytesWritable.
+ * @param right_obj The other bytes writable
+ * @return Positive if left is bigger than right, 0 if they are equal, and
+ * negative if left is smaller than right.
+ */
+ public int compareTo(ImmutableBytesWritable right_obj) {
+ return compareTo(right_obj.get());
+ }
+
+ /**
+ * Compares the bytes in this object to the specified byte array
+ * @param that
+ * @return Positive if left is bigger than right, 0 if they are equal, and
+ * negative if left is smaller than right.
+ */
+ public int compareTo(final byte [] that) {
+ int diff = this.length - that.length;
+ return (diff != 0)?
+ diff:
+ WritableComparator.compareBytes(this.bytes, 0, this.length, that,
+ 0, that.length);
+ }
+
+ /**
+ * @see java.lang.Object#equals(java.lang.Object)
+ */
+ @Override
+ public boolean equals(Object right_obj) {
+ if (right_obj instanceof byte []) {
+ return compareTo((byte [])right_obj) == 0;
+ }
+ if (right_obj instanceof ImmutableBytesWritable) {
+ return compareTo((ImmutableBytesWritable)right_obj) == 0;
+ }
+ return false;
+ }
+
+ /**
+ * @see java.lang.Object#toString()
+ */
+ @Override
+ public String toString() {
+ StringBuffer sb = new StringBuffer(3*this.bytes.length);
+ for (int idx = 0; idx < this.bytes.length; idx++) {
+ // if not the first, put a blank separator in
+ if (idx != 0) {
+ sb.append(' ');
+ }
+ String num = Integer.toHexString(0xff & bytes[idx]); // mask to avoid sign extension
+ // if it is only one digit, add a leading 0.
+ if (num.length() < 2) {
+ sb.append('0');
+ }
+ sb.append(num);
+ }
+ return sb.toString();
+ }
+
+ /** A Comparator optimized for ImmutableBytesWritable.
+ */
+ public static class Comparator extends WritableComparator {
+ private BytesWritable.Comparator comparator =
+ new BytesWritable.Comparator();
+
+ /** constructor */
+ public Comparator() {
+ super(ImmutableBytesWritable.class);
+ }
+
+ /**
+ * @see org.apache.hadoop.io.WritableComparator#compare(byte[], int, int, byte[], int, int)
+ */
+ @Override
+ public int compare(byte[] b1, int s1, int l1, byte[] b2, int s2, int l2) {
+ return comparator.compare(b1, s1, l1, b2, s2, l2);
+ }
+ }
+
+ static { // register this comparator
+ WritableComparator.define(ImmutableBytesWritable.class, new Comparator());
+ }
+
+ /**
+ * @param array List of byte [].
+ * @return Array of byte [].
+ */
+ public static byte [][] toArray(final List<byte []> array) {
+ // List#toArray doesn't work on lists of byte [].
+ byte[][] results = new byte[array.size()][];
+ for (int i = 0; i < array.size(); i++) {
+ results[i] = array.get(i);
+ }
+ return results;
+ }
+}
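
A minimal usage sketch, not part of the patch: it shows that the constructor keeps a reference to the caller's array rather than copying it, and that compareTo orders first by length and then by content.

import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;

public class ImmutableBytesWritableDemo {
  public static void main(String[] args) {
    byte [] row = Bytes.toBytes("row-0001");
    ImmutableBytesWritable key = new ImmutableBytesWritable(row);
    // get() hands back the very array that was passed in, not a copy.
    assert key.get() == row;
    // Equal lengths, so ordering falls through to a byte-wise comparison.
    System.out.println(key.compareTo(Bytes.toBytes("row-0002")) < 0); // true
  }
}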
diff --git a/src/java/org/apache/hadoop/hbase/io/MapFile.java b/src/java/org/apache/hadoop/hbase/io/MapFile.java
new file mode 100644
index 0000000..a2c752d
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/io/MapFile.java
@@ -0,0 +1,781 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.io;
+
+import java.io.*;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.*;
+import org.apache.hadoop.conf.*;
+import org.apache.hadoop.util.Progressable;
+import org.apache.hadoop.util.ReflectionUtils;
+import org.apache.hadoop.hbase.io.SequenceFile.CompressionType;
+import org.apache.hadoop.io.DataInputBuffer;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.io.WritableComparator;
+import org.apache.hadoop.io.compress.CompressionCodec;
+import org.apache.hadoop.io.compress.DefaultCodec;
+
+/** A file-based map from keys to values.
+ *
+ * <p>This is a copy of the Hadoop MapFile brought local so we can fix bugs;
+ * e.g. hbase-1097</p>
+ *
+ * <p>A map is a directory containing two files, the <code>data</code> file,
+ * containing all keys and values in the map, and a smaller <code>index</code>
+ * file, containing a fraction of the keys. The fraction is determined by
+ * {@link Writer#getIndexInterval()}.
+ *
+ * <p>The index file is read entirely into memory. Thus key implementations
+ * should try to keep themselves small.
+ *
+ * <p>Map files are created by adding entries in-order. To maintain a large
+ * database, perform updates by copying the previous version of a database and
+ * merging in a sorted change list, to create a new version of the database in
+ * a new file. Sorting large change lists can be done with {@link
+ * SequenceFile.Sorter}.
+ */
+public class MapFile {
+ protected static final Log LOG = LogFactory.getLog(MapFile.class);
+
+ /** The name of the index file. */
+ public static final String INDEX_FILE_NAME = "index";
+
+ /** The name of the data file. */
+ public static final String DATA_FILE_NAME = "data";
+
+ protected MapFile() {} // no public ctor
+
+ /** Writes a new map. */
+ public static class Writer implements java.io.Closeable {
+ private SequenceFile.Writer data;
+ private SequenceFile.Writer index;
+
+ final private static String INDEX_INTERVAL = "io.map.index.interval";
+ private int indexInterval = 128;
+
+ private long size;
+ private LongWritable position = new LongWritable();
+
+ // the following fields are used only for checking key order
+ private WritableComparator comparator;
+ private DataInputBuffer inBuf = new DataInputBuffer();
+ private DataOutputBuffer outBuf = new DataOutputBuffer();
+ private WritableComparable lastKey;
+
+
+ /** Create the named map for keys of the named class.
+ * @param conf
+ * @param fs
+ * @param dirName
+ * @param keyClass
+ * @param valClass
+ * @throws IOException
+ */
+ public Writer(Configuration conf, FileSystem fs, String dirName,
+ Class<? extends WritableComparable> keyClass, Class valClass)
+ throws IOException {
+ this(conf, fs, dirName,
+ WritableComparator.get(keyClass), valClass,
+ SequenceFile.getCompressionType(conf));
+ }
+
+ /** Create the named map for keys of the named class.
+ * @param conf
+ * @param fs
+ * @param dirName
+ * @param keyClass
+ * @param valClass
+ * @param compress
+ * @param progress
+ * @throws IOException
+ */
+ public Writer(Configuration conf, FileSystem fs, String dirName,
+ Class<? extends WritableComparable> keyClass, Class valClass,
+ CompressionType compress, Progressable progress)
+ throws IOException {
+ this(conf, fs, dirName, WritableComparator.get(keyClass), valClass,
+ compress, progress);
+ }
+
+ /** Create the named map for keys of the named class. */
+ public Writer(Configuration conf, FileSystem fs, String dirName,
+ Class<? extends WritableComparable> keyClass, Class valClass,
+ CompressionType compress, CompressionCodec codec,
+ Progressable progress)
+ throws IOException {
+ this(conf, fs, dirName, WritableComparator.get(keyClass), valClass,
+ compress, codec, progress);
+ }
+
+ /** Create the named map for keys of the named class.
+ * @param conf
+ * @param fs
+ * @param dirName
+ * @param keyClass
+ * @param valClass
+ * @param compress
+ * @throws IOException
+ */
+ public Writer(Configuration conf, FileSystem fs, String dirName,
+ Class<? extends WritableComparable> keyClass, Class valClass,
+ CompressionType compress)
+ throws IOException {
+ this(conf, fs, dirName, WritableComparator.get(keyClass), valClass, compress);
+ }
+
+ /** Create the named map using the named key comparator.
+ * @param conf
+ * @param fs
+ * @param dirName
+ * @param comparator
+ * @param valClass
+ * @throws IOException
+ */
+ public Writer(Configuration conf, FileSystem fs, String dirName,
+ WritableComparator comparator, Class valClass)
+ throws IOException {
+ this(conf, fs, dirName, comparator, valClass,
+ SequenceFile.getCompressionType(conf));
+ }
+ /** Create the named map using the named key comparator.
+ * @param conf
+ * @param fs
+ * @param dirName
+ * @param comparator
+ * @param valClass
+ * @param compress
+ * @throws IOException
+ */
+ public Writer(Configuration conf, FileSystem fs, String dirName,
+ WritableComparator comparator, Class valClass,
+ SequenceFile.CompressionType compress)
+ throws IOException {
+ this(conf, fs, dirName, comparator, valClass, compress, null);
+ }
+ /** Create the named map using the named key comparator.
+ * @param conf
+ * @param fs
+ * @param dirName
+ * @param comparator
+ * @param valClass
+ * @param compress
+ * @param progress
+ * @throws IOException
+ */
+ public Writer(Configuration conf, FileSystem fs, String dirName,
+ WritableComparator comparator, Class valClass,
+ SequenceFile.CompressionType compress,
+ Progressable progress)
+ throws IOException {
+ this(conf, fs, dirName, comparator, valClass,
+ compress, new DefaultCodec(), progress);
+ }
+ /** Create the named map using the named key comparator.
+ * @param conf
+ * @param fs
+ * @param dirName
+ * @param comparator
+ * @param valClass
+ * @param compress
+ * @param codec
+ * @param progress
+ * @throws IOException
+ */
+ public Writer(Configuration conf, FileSystem fs, String dirName,
+ WritableComparator comparator, Class valClass,
+ SequenceFile.CompressionType compress, CompressionCodec codec,
+ Progressable progress)
+ throws IOException {
+
+ this.indexInterval = conf.getInt(INDEX_INTERVAL, this.indexInterval);
+
+ this.comparator = comparator;
+ this.lastKey = comparator.newKey();
+
+ Path dir = new Path(dirName);
+ if (!fs.mkdirs(dir)) {
+ throw new IOException("Mkdirs failed to create directory " + dir.toString());
+ }
+ Path dataFile = new Path(dir, DATA_FILE_NAME);
+ Path indexFile = new Path(dir, INDEX_FILE_NAME);
+
+ Class keyClass = comparator.getKeyClass();
+ this.data =
+ SequenceFile.createWriter
+ (fs, conf, dataFile, keyClass, valClass, compress, codec, progress);
+ this.index =
+ SequenceFile.createWriter
+ (fs, conf, indexFile, keyClass, LongWritable.class,
+ CompressionType.BLOCK, progress);
+ }
+
+ /* (non-Javadoc)
+ * @see org.apache.hadoop.hbase.io.StoreFileWriter#getIndexInterval()
+ */
+ public int getIndexInterval() { return indexInterval; }
+
+ /* (non-Javadoc)
+ * @see org.apache.hadoop.hbase.io.StoreFileWriter#setIndexInterval(int)
+ */
+ public void setIndexInterval(int interval) { indexInterval = interval; }
+
+ /** Sets the index interval and stores it in conf
+ * @param conf
+ * @param interval
+ * @see #getIndexInterval()
+ */
+ public static void setIndexInterval(Configuration conf, int interval) {
+ conf.setInt(INDEX_INTERVAL, interval);
+ }
+
+ /* (non-Javadoc)
+ * @see org.apache.hadoop.hbase.io.StoreFileWriter#close()
+ */
+ public synchronized void close() throws IOException {
+ data.close();
+ index.close();
+ }
+
+ /* (non-Javadoc)
+ * @see org.apache.hadoop.hbase.io.StoreFileWriter#append(org.apache.hadoop.io.WritableComparable, org.apache.hadoop.io.Writable)
+ */
+ public synchronized void append(WritableComparable key, Writable val)
+ throws IOException {
+
+ checkKey(key);
+
+ if (size % indexInterval == 0) { // add an index entry
+ position.set(data.getLength()); // point to current eof
+ index.append(key, position);
+ }
+
+ data.append(key, val); // append key/value to data
+ size++;
+ }
+
+ private void checkKey(WritableComparable key) throws IOException {
+ // check that keys are well-ordered
+ if (size != 0 && comparator.compare(lastKey, key) > 0)
+ throw new IOException("key out of order: "+key+" after "+lastKey);
+
+ // update lastKey with a copy of key by writing and reading
+ outBuf.reset();
+ key.write(outBuf); // write new key
+
+ inBuf.reset(outBuf.getData(), outBuf.getLength());
+ lastKey.readFields(inBuf); // read into lastKey
+ }
+
+ }
+
+ /** Provide access to an existing map. */
+ public static class Reader implements java.io.Closeable {
+
+ /** Number of index entries to skip between each entry. Zero by default.
+ * Setting this to values larger than zero can facilitate opening large map
+ * files using less memory. */
+ private int INDEX_SKIP = 0;
+
+ private WritableComparator comparator;
+
+ private WritableComparable nextKey;
+ private long seekPosition = -1;
+ private int seekIndex = -1;
+ private long firstPosition;
+
+ // the data, on disk
+ private SequenceFile.Reader data;
+ private SequenceFile.Reader index;
+
+ // whether the index Reader was closed
+ private boolean indexClosed = false;
+
+ // the index, in memory
+ private int count = -1;
+ private WritableComparable[] keys;
+ private long[] positions;
+
+ /* (non-Javadoc)
+ * @see org.apache.hadoop.hbase.io.StoreFileReader#getKeyClass()
+ */
+ public Class<?> getKeyClass() { return data.getKeyClass(); }
+
+ /* (non-Javadoc)
+ * @see org.apache.hadoop.hbase.io.StoreFileReader#getValueClass()
+ */
+ public Class<?> getValueClass() { return data.getValueClass(); }
+
+ /** Construct a map reader for the named map.
+ * @param fs
+ * @param dirName
+ * @param conf
+ * @throws IOException
+ */
+ public Reader(FileSystem fs, String dirName, Configuration conf) throws IOException {
+ this(fs, dirName, null, conf);
+ INDEX_SKIP = conf.getInt("io.map.index.skip", 0);
+ }
+
+ /** Construct a map reader for the named map using the named comparator.
+ * @param fs
+ * @param dirName
+ * @param comparator
+ * @param conf
+ * @throws IOException
+ */
+ public Reader(FileSystem fs, String dirName, WritableComparator comparator, Configuration conf)
+ throws IOException {
+ this(fs, dirName, comparator, conf, true);
+ }
+
+ /**
+ * Hook to allow subclasses to defer opening streams until further
+ * initialization is complete.
+ * @see #createDataFileReader(FileSystem, Path, Configuration)
+ */
+ protected Reader(FileSystem fs, String dirName,
+ WritableComparator comparator, Configuration conf, boolean open)
+ throws IOException {
+
+ if (open) {
+ open(fs, dirName, comparator, conf);
+ }
+ }
+
+ protected synchronized void open(FileSystem fs, String dirName,
+ WritableComparator comparator, Configuration conf) throws IOException {
+ Path dir = new Path(dirName);
+ Path dataFile = new Path(dir, DATA_FILE_NAME);
+ Path indexFile = new Path(dir, INDEX_FILE_NAME);
+
+ // open the data
+ this.data = createDataFileReader(fs, dataFile, conf);
+ this.firstPosition = data.getPosition();
+
+ if (comparator == null)
+ this.comparator = WritableComparator.get(data.getKeyClass().asSubclass(WritableComparable.class));
+ else
+ this.comparator = comparator;
+
+ // open the index
+ this.index = new SequenceFile.Reader(fs, indexFile, conf);
+ }
+
+ /**
+ * Override this method to specialize the type of
+ * {@link SequenceFile.Reader} returned.
+ */
+ protected SequenceFile.Reader createDataFileReader(FileSystem fs,
+ Path dataFile, Configuration conf) throws IOException {
+ return new SequenceFile.Reader(fs, dataFile, conf);
+ }
+
+ private void readIndex() throws IOException {
+ // read the index entirely into memory
+ if (this.keys != null)
+ return;
+ this.count = 0;
+ this.keys = new WritableComparable[1024];
+ this.positions = new long[1024];
+ try {
+ int skip = INDEX_SKIP;
+ LongWritable position = new LongWritable();
+ WritableComparable lastKey = null;
+ while (true) {
+ WritableComparable k = comparator.newKey();
+
+ if (!index.next(k, position))
+ break;
+
+ // check order to make sure comparator is compatible
+ if (lastKey != null && comparator.compare(lastKey, k) > 0)
+ throw new IOException("key out of order: "+k+" after "+lastKey);
+ lastKey = k;
+
+ if (skip > 0) {
+ skip--;
+ continue; // skip this entry
+ }
+ skip = INDEX_SKIP; // reset skip
+
+ if (count == keys.length) { // time to grow arrays
+ int newLength = (keys.length*3)/2;
+ WritableComparable[] newKeys = new WritableComparable[newLength];
+ long[] newPositions = new long[newLength];
+ System.arraycopy(keys, 0, newKeys, 0, count);
+ System.arraycopy(positions, 0, newPositions, 0, count);
+ keys = newKeys;
+ positions = newPositions;
+ }
+
+ keys[count] = k;
+ positions[count] = position.get();
+ count++;
+ }
+ } catch (EOFException e) {
+ LOG.warn("Unexpected EOF reading " + index +
+ " at entry #" + count + ". Ignoring.");
+ } finally {
+ indexClosed = true;
+ index.close();
+ }
+ }
+
+ /* (non-Javadoc)
+ * @see org.apache.hadoop.hbase.io.StoreFileReader#reset()
+ */
+ public synchronized void reset() throws IOException {
+ data.seek(firstPosition);
+ }
+
+ /* (non-Javadoc)
+ * @see org.apache.hadoop.hbase.io.StoreFileReader#midKey()
+ */
+ public synchronized WritableComparable midKey() throws IOException {
+
+ readIndex();
+ int pos = ((count - 1) / 2); // middle of the index
+ if (pos < 0) {
+ throw new IOException("MapFile empty");
+ }
+
+ return keys[pos];
+ }
+
+ /* (non-Javadoc)
+ * @see org.apache.hadoop.hbase.io.StoreFileReader#finalKey(org.apache.hadoop.io.WritableComparable)
+ */
+ public synchronized void finalKey(WritableComparable key)
+ throws IOException {
+
+ long originalPosition = data.getPosition(); // save position
+ try {
+ readIndex(); // make sure index is valid
+ if (count > 0) {
+ data.seek(positions[count-1]); // skip to last indexed entry
+ } else {
+ reset(); // start at the beginning
+ }
+ while (data.next(key)) {} // scan to eof
+
+ } finally {
+ data.seek(originalPosition); // restore position
+ }
+ }
+
+ /* (non-Javadoc)
+ * @see org.apache.hadoop.hbase.io.StoreFileReader#seek(org.apache.hadoop.io.WritableComparable)
+ */
+ public synchronized boolean seek(WritableComparable key) throws IOException {
+ return seekInternal(key) == 0;
+ }
+
+ /**
+ * Positions the reader at the named key or, if no such key exists, at the
+ * first entry after the named key.
+ *
+ * @return 0 - exact match found
+ * < 0 - positioned at next record
+ * 1 - no more records in file
+ */
+ private synchronized int seekInternal(WritableComparable key)
+ throws IOException {
+ return seekInternal(key, false);
+ }
+
+ /**
+ * Positions the reader at the named key or, if no such key exists, at the
+ * key that falls just before or just after, depending on how the
+ * <code>before</code> parameter is set.
+ *
+ * @param before - IF true, and <code>key</code> does not exist, position
+ * file at entry that falls just before <code>key</code>. Otherwise,
+ * position file at record that sorts just after.
+ * @return 0 - exact match found
+ * < 0 - positioned at next record
+ * 1 - no more records in file
+ */
+ private synchronized int seekInternal(WritableComparable key,
+ final boolean before)
+ throws IOException {
+ readIndex(); // make sure index is read
+
+ if (seekIndex != -1 // seeked before
+ && seekIndex+1 < count
+ && comparator.compare(key, keys[seekIndex+1])<0 // before next indexed
+ && comparator.compare(key, nextKey)
+ >= 0) { // but after last seeked
+ // do nothing
+ } else {
+ seekIndex = binarySearch(key);
+ if (seekIndex < 0) // decode insertion point
+ seekIndex = -seekIndex-2;
+
+ if (seekIndex == -1) // belongs before first entry
+ seekPosition = firstPosition; // use beginning of file
+ else
+ seekPosition = positions[seekIndex]; // else use index
+ }
+ data.seek(seekPosition);
+
+ if (nextKey == null)
+ nextKey = comparator.newKey();
+
+ // If we're looking for the key before, we need to keep track
+ // of the position we got the current key as well as the position
+ // of the key before it.
+ long prevPosition = -1;
+ long curPosition = seekPosition;
+
+ while (data.next(nextKey)) {
+ int c = comparator.compare(key, nextKey);
+ if (c <= 0) { // at or beyond desired
+ if (before && c != 0) {
+ if (prevPosition == -1) {
+ // We're on the first record of this index block
+ // and we've already passed the search key. Therefore
+ // we must be at the beginning of the file, so seek
+ // to the beginning of this block and return c
+ data.seek(curPosition);
+ } else {
+ // We have a previous record to back up to
+ data.seek(prevPosition);
+ data.next(nextKey);
+ // now that we've rewound, the search key must be greater than this key
+ return 1;
+ }
+ }
+ return c;
+ }
+ if (before) {
+ prevPosition = curPosition;
+ curPosition = data.getPosition();
+ }
+ }
+
+ return 1;
+ }
+
+ private int binarySearch(WritableComparable key) {
+ int low = 0;
+ int high = count-1;
+
+ while (low <= high) {
+ int mid = (low + high) >>> 1;
+ WritableComparable midVal = keys[mid];
+ int cmp = comparator.compare(midVal, key);
+
+ if (cmp < 0)
+ low = mid + 1;
+ else if (cmp > 0)
+ high = mid - 1;
+ else
+ return mid; // key found
+ }
+ return -(low + 1); // key not found.
+ }
+
+ /* (non-Javadoc)
+ * @see org.apache.hadoop.hbase.io.StoreFileReader#next(org.apache.hadoop.io.WritableComparable, org.apache.hadoop.io.Writable)
+ */
+ public synchronized boolean next(WritableComparable key, Writable val)
+ throws IOException {
+ return data.next(key, val);
+ }
+
+ /* (non-Javadoc)
+ * @see org.apache.hadoop.hbase.io.StoreFileReader#get(org.apache.hadoop.io.WritableComparable, org.apache.hadoop.io.Writable)
+ */
+ public synchronized Writable get(WritableComparable key, Writable val)
+ throws IOException {
+ if (seek(key)) {
+ data.getCurrentValue(val);
+ return val;
+ }
+ return null;
+ }
+
+ /* (non-Javadoc)
+ * @see org.apache.hadoop.hbase.io.StoreFileReader#getClosest(org.apache.hadoop.io.WritableComparable, org.apache.hadoop.io.Writable)
+ */
+ public synchronized WritableComparable getClosest(WritableComparable key,
+ Writable val)
+ throws IOException {
+ return getClosest(key, val, false);
+ }
+
+ /* (non-Javadoc)
+ * @see org.apache.hadoop.hbase.io.StoreFileReader#getClosest(org.apache.hadoop.io.WritableComparable, org.apache.hadoop.io.Writable, boolean)
+ */
+ public synchronized WritableComparable getClosest(WritableComparable key,
+ Writable val, final boolean before)
+ throws IOException {
+
+ int c = seekInternal(key, before);
+
+ // If we didn't get an exact match, and we ended up in the wrong
+ // direction relative to the query key, return null since we
+ // must be at the beginning or end of the file.
+ if ((!before && c > 0) ||
+ (before && c < 0)) {
+ return null;
+ }
+
+ data.getCurrentValue(val);
+ return nextKey;
+ }
+
+ /* (non-Javadoc)
+ * @see org.apache.hadoop.hbase.io.StoreFileReader#close()
+ */
+ public synchronized void close() throws IOException {
+ if (!indexClosed) {
+ index.close();
+ }
+ data.close();
+ }
+
+ }
+
+ /** Renames an existing map directory.
+ * @param fs
+ * @param oldName
+ * @param newName
+ * @throws IOException
+ */
+ public static void rename(FileSystem fs, String oldName, String newName)
+ throws IOException {
+ Path oldDir = new Path(oldName);
+ Path newDir = new Path(newName);
+ if (!fs.rename(oldDir, newDir)) {
+ throw new IOException("Could not rename " + oldDir + " to " + newDir);
+ }
+ }
+
+ /** Deletes the named map file.
+ * @param fs
+ * @param name
+ * @throws IOException
+ */
+ public static void delete(FileSystem fs, String name) throws IOException {
+ Path dir = new Path(name);
+ Path data = new Path(dir, DATA_FILE_NAME);
+ Path index = new Path(dir, INDEX_FILE_NAME);
+
+ fs.delete(data, true);
+ fs.delete(index, true);
+ fs.delete(dir, true);
+ }
+
+ /**
+ * This method attempts to fix a corrupt MapFile by re-creating its index.
+ * @param fs filesystem
+ * @param dir directory containing the MapFile data and index
+ * @param keyClass key class (has to be a subclass of Writable)
+ * @param valueClass value class (has to be a subclass of Writable)
+ * @param dryrun do not perform any changes, just report what needs to be done
+ * @param conf
+ * @return number of valid entries in this MapFile, or -1 if no fixing was needed
+ * @throws Exception
+ */
+ public static long fix(FileSystem fs, Path dir,
+ Class<? extends Writable> keyClass,
+ Class<? extends Writable> valueClass, boolean dryrun,
+ Configuration conf) throws Exception {
+ String dr = (dryrun ? "[DRY RUN ] " : "");
+ Path data = new Path(dir, DATA_FILE_NAME);
+ Path index = new Path(dir, INDEX_FILE_NAME);
+ int indexInterval = 128;
+ if (!fs.exists(data)) {
+ // there's nothing we can do to fix this!
+ throw new Exception(dr + "Missing data file in " + dir + ", impossible to fix this.");
+ }
+ if (fs.exists(index)) {
+ // no fixing needed
+ return -1;
+ }
+ SequenceFile.Reader dataReader = new SequenceFile.Reader(fs, data, conf);
+ if (!dataReader.getKeyClass().equals(keyClass)) {
+ throw new Exception(dr + "Wrong key class in " + dir + ", expected" + keyClass.getName() +
+ ", got " + dataReader.getKeyClass().getName());
+ }
+ if (!dataReader.getValueClass().equals(valueClass)) {
+ throw new Exception(dr + "Wrong value class in " + dir + ", expected" + valueClass.getName() +
+ ", got " + dataReader.getValueClass().getName());
+ }
+ long cnt = 0L;
+ Writable key = (Writable) ReflectionUtils.newInstance(keyClass, conf);
+ Writable value = (Writable) ReflectionUtils.newInstance(valueClass, conf);
+ SequenceFile.Writer indexWriter = null;
+ if (!dryrun) indexWriter = SequenceFile.createWriter(fs, conf, index, keyClass, LongWritable.class);
+ try {
+ long pos = 0L;
+ LongWritable position = new LongWritable();
+ while(dataReader.next(key, value)) {
+ cnt++;
+ if (cnt % indexInterval == 0) {
+ position.set(pos);
+ if (!dryrun) indexWriter.append(key, position);
+ }
+ pos = dataReader.getPosition();
+ }
+ } catch(Throwable t) {
+ // truncated data file. swallow it.
+ }
+ dataReader.close();
+ if (!dryrun) indexWriter.close();
+ return cnt;
+ }
+
+
+ public static void main(String[] args) throws Exception {
+ String usage = "Usage: MapFile inFile outFile";
+
+ if (args.length != 2) {
+ System.err.println(usage);
+ System.exit(-1);
+ }
+
+ String in = args[0];
+ String out = args[1];
+
+ Configuration conf = new Configuration();
+ FileSystem fs = FileSystem.getLocal(conf);
+ MapFile.Reader reader = new MapFile.Reader(fs, in, conf);
+ MapFile.Writer writer =
+ new MapFile.Writer(conf, fs, out,
+ reader.getKeyClass().asSubclass(WritableComparable.class),
+ reader.getValueClass());
+
+ WritableComparable key = (WritableComparable)
+ ReflectionUtils.newInstance(reader.getKeyClass().asSubclass(WritableComparable.class), conf);
+ Writable value = (Writable)
+ ReflectionUtils.newInstance(reader.getValueClass().asSubclass(Writable.class), conf);
+
+ while (reader.next(key, value)) // copy all entries
+ writer.append(key, value);
+
+ writer.close();
+ }
+
+}
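
A minimal sketch of the Writer/Reader pairing described above, not part of the patch; the path and key/value types are arbitrary choices for illustration.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hbase.io.MapFile;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;

public class MapFileDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.getLocal(conf);
    String dir = "/tmp/demo-mapfile";

    MapFile.Writer writer =
        new MapFile.Writer(conf, fs, dir, Text.class, LongWritable.class);
    // Keys must be appended in sorted order or append() throws IOException.
    writer.append(new Text("alpha"), new LongWritable(1));
    writer.append(new Text("beta"), new LongWritable(2));
    writer.close();

    MapFile.Reader reader = new MapFile.Reader(fs, dir, conf);
    LongWritable val = new LongWritable();
    reader.get(new Text("beta"), val); // positions via the index, then scans
    System.out.println(val.get());     // 2
    reader.close();
  }
}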
diff --git a/src/java/org/apache/hadoop/hbase/io/Reference.java b/src/java/org/apache/hadoop/hbase/io/Reference.java
new file mode 100644
index 0000000..d7d32cc
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/io/Reference.java
@@ -0,0 +1,133 @@
+/**
+ *
+ */
+package org.apache.hadoop.hbase.io;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.io.Writable;
+
+/**
+ * A reference to the top or bottom half of a store file. The file referenced
+ * lives under a different region. References are made at region split time.
+ *
+ * <p>References work with a special half store file type. References know how
+ * to write out the reference format in the file system and are what gets juggled
+ * when references are mixed in with direct store files. The half store file
+ * type is used when reading the referred-to file.
+ *
+ * <p>References to store files located over in some other region look like
+ * this in the file system
+ * <code>1278437856009925445.3323223323</code>:
+ * i.e. an id followed by hash of the referenced region.
+ * Note, a region is itself not splittable if it has instances of store file
+ * references. References are cleaned up by compactions.
+ */
+public class Reference implements Writable {
+ private byte [] splitkey;
+ private Range region;
+
+ /**
+ * For split HStoreFiles, it specifies if the file covers the lower half or
+ * the upper half of the key range
+ */
+ public static enum Range {
+ /** HStoreFile contains upper half of key range */
+ top,
+ /** HStoreFile contains lower half of key range */
+ bottom
+ }
+
+ /**
+ * Constructor
+ * @param splitRow This is the row we are splitting around.
+ * @param fr
+ */
+ public Reference(final byte [] splitRow, final Range fr) {
+ this.splitkey = splitRow == null?
+ null: KeyValue.createFirstOnRow(splitRow).getKey();
+ this.region = fr;
+ }
+
+ /**
+ * Used by serializations.
+ */
+ public Reference() {
+ this(null, Range.bottom);
+ }
+
+ public Range getFileRegion() {
+ return this.region;
+ }
+
+ public byte [] getSplitKey() {
+ return splitkey;
+ }
+
+ /**
+ * @see java.lang.Object#toString()
+ */
+ @Override
+ public String toString() {
+ return "" + this.region;
+ }
+
+ // Make it serializable.
+
+ public void write(DataOutput out) throws IOException {
+ // Write true if we're doing top of the file.
+ out.writeBoolean(isTopFileRegion(this.region));
+ Bytes.writeByteArray(out, this.splitkey);
+ }
+
+ public void readFields(DataInput in) throws IOException {
+ boolean tmp = in.readBoolean();
+ // If true, set region to top.
+ this.region = tmp? Range.top: Range.bottom;
+ this.splitkey = Bytes.readByteArray(in);
+ }
+
+ public static boolean isTopFileRegion(final Range r) {
+ return r.equals(Range.top);
+ }
+
+ public Path write(final FileSystem fs, final Path p)
+ throws IOException {
+ FSUtils.create(fs, p);
+ FSDataOutputStream out = fs.create(p);
+ try {
+ write(out);
+ } finally {
+ out.close();
+ }
+ return p;
+ }
+
+ /**
+ * Read a Reference from FileSystem.
+ * @param fs
+ * @param p
+ * @return New Reference made from passed <code>p</code>
+ * @throws IOException
+ */
+ public static Reference read(final FileSystem fs, final Path p)
+ throws IOException {
+ FSDataInputStream in = fs.open(p);
+ try {
+ Reference r = new Reference();
+ r.readFields(in);
+ return r;
+ } finally {
+ in.close();
+ }
+ }
+}
\ No newline at end of file
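
A minimal sketch, not part of the patch, of persisting and re-reading a Reference with the write/read helpers above; the path and split row are arbitrary.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.io.Reference;
import org.apache.hadoop.hbase.util.Bytes;

public class ReferenceDemo {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.getLocal(new Configuration());
    // A reference to the top half of a parent store file, split at "splitrow".
    Reference top = new Reference(Bytes.toBytes("splitrow"), Reference.Range.top);

    Path p = top.write(fs, new Path("/tmp/demo-reference"));
    Reference copy = Reference.read(fs, p);
    // The round trip preserves which half of the key range is referenced.
    System.out.println(Reference.isTopFileRegion(copy.getFileRegion())); // true
  }
}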
diff --git a/src/java/org/apache/hadoop/hbase/io/RowResult.java b/src/java/org/apache/hadoop/hbase/io/RowResult.java
new file mode 100644
index 0000000..e61bd08
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/io/RowResult.java
@@ -0,0 +1,326 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeSet;
+
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.rest.descriptors.RestCell;
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+import org.apache.hadoop.hbase.rest.serializer.IRestSerializer;
+import org.apache.hadoop.hbase.rest.serializer.ISerializable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Writables;
+import org.apache.hadoop.io.Writable;
+
+import agilejson.TOJSON;
+
+/**
+ * Holds row name and then a map of columns to cells.
+ */
+public class RowResult implements Writable, SortedMap<byte [], Cell>,
+ Comparable<RowResult>, ISerializable {
+ private byte [] row = null;
+ private final HbaseMapWritable<byte [], Cell> cells;
+
+ /** default constructor for writable */
+ public RowResult() {
+ this(null, new HbaseMapWritable<byte [], Cell>());
+ }
+
+ /**
+ * Create a RowResult from a row and Cell map
+ * @param row
+ * @param m
+ */
+ public RowResult (final byte [] row,
+ final HbaseMapWritable<byte [], Cell> m) {
+ this.row = row;
+ this.cells = m;
+ }
+
+ /**
+ * Get the row for this RowResult
+ * @return the row
+ */
+ @TOJSON(base64=true)
+ public byte [] getRow() {
+ return row;
+ }
+
+ //
+ // Map interface
+ //
+ public Cell put(byte [] key,
+ Cell value) {
+ throw new UnsupportedOperationException("RowResult is read-only!");
+ }
+
+ @SuppressWarnings("unchecked")
+ public void putAll(Map map) {
+ throw new UnsupportedOperationException("RowResult is read-only!");
+ }
+
+ public Cell get(Object key) {
+ return this.cells.get(key);
+ }
+
+ public Cell remove(Object key) {
+ throw new UnsupportedOperationException("RowResult is read-only!");
+ }
+
+ public boolean containsKey(Object key) {
+ return cells.containsKey(key);
+ }
+
+ public boolean containsKey(String key) {
+ return cells.containsKey(Bytes.toBytes(key));
+ }
+
+ public boolean containsValue(Object value) {
+ throw new UnsupportedOperationException("Don't support containsValue!");
+ }
+
+ public boolean isEmpty() {
+ return cells.isEmpty();
+ }
+
+ public int size() {
+ return cells.size();
+ }
+
+ public void clear() {
+ throw new UnsupportedOperationException("RowResult is read-only!");
+ }
+
+ public Set<byte []> keySet() {
+ Set<byte []> result = new TreeSet<byte []>(Bytes.BYTES_COMPARATOR);
+ for (byte [] w : cells.keySet()) {
+ result.add(w);
+ }
+ return result;
+ }
+
+ public Set<Map.Entry<byte [], Cell>> entrySet() {
+ return Collections.unmodifiableSet(this.cells.entrySet());
+ }
+
+ /**
+ * This method is used solely for REST serialization.
+ *
+ * @return Cells
+ */
+ @TOJSON
+ public RestCell[] getCells() {
+ RestCell[] restCells = new RestCell[this.cells.size()];
+ int i = 0;
+ for (Map.Entry<byte[], Cell> entry : this.cells.entrySet()) {
+ restCells[i] = new RestCell(entry.getKey(), entry.getValue());
+ i++;
+ }
+ return restCells;
+ }
+
+ public Collection<Cell> values() {
+ ArrayList<Cell> result = new ArrayList<Cell>();
+ for (Writable w : cells.values()) {
+ result.add((Cell)w);
+ }
+ return result;
+ }
+
+ /**
+ * Get the Cell that corresponds to column
+ * @param column
+ * @return the Cell
+ */
+ public Cell get(byte [] column) {
+ return this.cells.get(column);
+ }
+
+ /**
+ * Get the Cell that corresponds to column, using a String key
+ * @param key
+ * @return the Cell
+ */
+ public Cell get(String key) {
+ return get(Bytes.toBytes(key));
+ }
+
+
+ public Comparator<? super byte[]> comparator() {
+ return this.cells.comparator();
+ }
+
+ public byte[] firstKey() {
+ return this.cells.firstKey();
+ }
+
+ public SortedMap<byte[], Cell> headMap(byte[] toKey) {
+ return this.cells.headMap(toKey);
+ }
+
+ public byte[] lastKey() {
+ return this.cells.lastKey();
+ }
+
+ public SortedMap<byte[], Cell> subMap(byte[] fromKey, byte[] toKey) {
+ return this.cells.subMap(fromKey, toKey);
+ }
+
+ public SortedMap<byte[], Cell> tailMap(byte[] fromKey) {
+ return this.cells.tailMap(fromKey);
+ }
+
+ /**
+ * Row entry.
+ */
+ public class Entry implements Map.Entry<byte [], Cell> {
+ private final byte [] column;
+ private final Cell cell;
+
+ Entry(byte [] row, Cell cell) {
+ this.column = row;
+ this.cell = cell;
+ }
+
+ public Cell setValue(Cell c) {
+ throw new UnsupportedOperationException("RowResult is read-only!");
+ }
+
+ public byte [] getKey() {
+ return column;
+ }
+
+ public Cell getValue() {
+ return cell;
+ }
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder();
+ sb.append("row=");
+ sb.append(Bytes.toString(this.row));
+ sb.append(", cells={");
+ boolean moreThanOne = false;
+ for (Map.Entry<byte [], Cell> e: this.cells.entrySet()) {
+ if (moreThanOne) {
+ sb.append(", ");
+ } else {
+ moreThanOne = true;
+ }
+ sb.append("(column=");
+ sb.append(Bytes.toString(e.getKey()));
+ sb.append(", timestamp=");
+ sb.append(Long.toString(e.getValue().getTimestamp()));
+ sb.append(", value=");
+ byte [] v = e.getValue().getValue();
+ if (Bytes.equals(e.getKey(), HConstants.COL_REGIONINFO)) {
+ try {
+ sb.append(Writables.getHRegionInfo(v).toString());
+ } catch (IOException ioe) {
+ sb.append(ioe.toString());
+ }
+ } else {
+ sb.append(v);
+ }
+ sb.append(")");
+ }
+ sb.append("}");
+ return sb.toString();
+ }
+
+ /* (non-Javadoc)
+ * @see org.apache.hadoop.hbase.rest.xml.IOutputXML#toXML()
+ */
+ public void restSerialize(IRestSerializer serializer) throws HBaseRestException {
+ serializer.serializeRowResult(this);
+ }
+
+ /**
+   * @param l list of KeyValue lists, one list per row
+   * @return an array of RowResults, one per row
+   * TODO: This is the glue between the old way of doing things and the new.
+   * Herein we are converting our clean KeyValues to the old RowResult.
+ */
+ public static RowResult [] createRowResultArray(final List<List<KeyValue>> l) {
+ RowResult [] results = new RowResult[l.size()];
+ int i = 0;
+ for (List<KeyValue> kvl: l) {
+ results[i++] = createRowResult(kvl);
+ }
+ return results;
+ }
+
+ /**
+   * @param results list of KeyValues making up a single row
+   * @return a RowResult for the row, or null if <code>results</code> is empty
+   * TODO: This is the glue between the old way of doing things and the new.
+   * Herein we are converting our clean KeyValues to the old RowResult.
+ */
+ public static RowResult createRowResult(final List<KeyValue> results) {
+ if (results.isEmpty()) {
+ return null;
+ }
+ HbaseMapWritable<byte [], Cell> cells = Cell.createCells(results);
+ byte [] row = results.get(0).getRow();
+ return new RowResult(row, cells);
+ }
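+
+  // A hedged usage sketch of the two helpers above (variable names are
+  // illustrative only): converting per-row KeyValue lists, e.g. gathered from
+  // a scanner, into the legacy RowResult form.
+  //
+  //   List<List<KeyValue>> rows = ...;  // one inner list per row
+  //   RowResult [] legacy = RowResult.createRowResultArray(rows);
+  //   RowResult first = RowResult.createRowResult(rows.get(0));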
+
+ //
+ // Writable
+ //
+
+ public void readFields(final DataInput in) throws IOException {
+ this.row = Bytes.readByteArray(in);
+ this.cells.readFields(in);
+ }
+
+ public void write(final DataOutput out) throws IOException {
+ Bytes.writeByteArray(out, this.row);
+ this.cells.write(out);
+ }
+
+ //
+ // Comparable
+ //
+ /**
+   * Compares this RowResult with another by comparing their rows.
+   * @param o the RowResult to compare to
+   * @return the result of comparing the two rows
+ */
+ public int compareTo(RowResult o){
+ return Bytes.compareTo(this.row, o.getRow());
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/io/SequenceFile.java b/src/java/org/apache/hadoop/hbase/io/SequenceFile.java
new file mode 100644
index 0000000..2f3aaed
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/io/SequenceFile.java
@@ -0,0 +1,3367 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.io;
+
+import java.io.*;
+import java.util.*;
+import java.rmi.server.UID;
+import java.security.MessageDigest;
+import org.apache.commons.logging.*;
+import org.apache.hadoop.fs.*;
+import org.apache.hadoop.io.DataInputBuffer;
+import org.apache.hadoop.io.IntWritable;
+import org.apache.hadoop.io.RawComparator;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.UTF8;
+import org.apache.hadoop.io.VersionMismatchException;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.io.WritableComparator;
+import org.apache.hadoop.io.WritableName;
+import org.apache.hadoop.io.WritableUtils;
+import org.apache.hadoop.io.compress.CodecPool;
+import org.apache.hadoop.io.compress.CompressionCodec;
+import org.apache.hadoop.io.compress.CompressionInputStream;
+import org.apache.hadoop.io.compress.CompressionOutputStream;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.Decompressor;
+import org.apache.hadoop.io.compress.DefaultCodec;
+import org.apache.hadoop.io.compress.GzipCodec;
+import org.apache.hadoop.io.compress.zlib.ZlibFactory;
+import org.apache.hadoop.io.serializer.Deserializer;
+import org.apache.hadoop.io.serializer.SerializationFactory;
+import org.apache.hadoop.io.serializer.Serializer;
+import org.apache.hadoop.conf.*;
+import org.apache.hadoop.util.Progressable;
+import org.apache.hadoop.util.Progress;
+import org.apache.hadoop.util.ReflectionUtils;
+import org.apache.hadoop.util.NativeCodeLoader;
+import org.apache.hadoop.util.MergeSort;
+import org.apache.hadoop.util.PriorityQueue;
+
+/**
+ * <code>SequenceFile</code>s are flat files consisting of binary key/value
+ * pairs.
+ *
+ * <p>This is copy of Hadoop SequenceFile brought local so we can fix bugs;
+ * e.g. hbase-1097</p>
+ *
+ * <p><code>SequenceFile</code> provides {@link Writer}, {@link Reader} and
+ * {@link Sorter} classes for writing, reading and sorting respectively.</p>
+ *
+ * There are three <code>SequenceFile</code> <code>Writer</code>s based on the
+ * {@link CompressionType} used to compress key/value pairs:
+ * <ol>
+ * <li>
+ * <code>Writer</code> : Uncompressed records.
+ * </li>
+ * <li>
+ * <code>RecordCompressWriter</code> : Record-compressed files, only compress
+ * values.
+ * </li>
+ * <li>
+ *   <code>BlockCompressWriter</code> : Block-compressed files, both keys &amp;
+ * values are collected in 'blocks'
+ * separately and compressed. The size of
+ * the 'block' is configurable.
+ * </ol>
+ *
+ * <p>The actual compression algorithm used to compress key and/or values can be
+ * specified by using the appropriate {@link CompressionCodec}.</p>
+ *
+ * <p>The recommended way is to use the static <tt>createWriter</tt> methods
+ * provided by the <code>SequenceFile</code> to choose the preferred format.</p>
+ *
+ * <p>The {@link Reader} acts as the bridge and can read any of the above
+ * <code>SequenceFile</code> formats.</p>
+ *
+ * <h4 id="Formats">SequenceFile Formats</h4>
+ *
+ * <p>Essentially there are 3 different formats for <code>SequenceFile</code>s
+ * depending on the <code>CompressionType</code> specified. All of them share a
+ * <a href="#Header">common header</a> described below.
+ *
+ * <h5 id="Header">SequenceFile Header</h5>
+ * <ul>
+ * <li>
+ * version - 3 bytes of magic header <b>SEQ</b>, followed by 1 byte of actual
+ * version number (e.g. SEQ4 or SEQ6)
+ * </li>
+ * <li>
+ *   keyClassName - key class
+ * </li>
+ * <li>
+ * valueClassName - value class
+ * </li>
+ * <li>
+ * compression - A boolean which specifies if compression is turned on for
+ * keys/values in this file.
+ * </li>
+ * <li>
+ * blockCompression - A boolean which specifies if block-compression is
+ * turned on for keys/values in this file.
+ * </li>
+ * <li>
+ * compression codec - <code>CompressionCodec</code> class which is used for
+ * compression of keys and/or values (if compression is
+ * enabled).
+ * </li>
+ * <li>
+ * metadata - {@link Metadata} for this file.
+ * </li>
+ * <li>
+ * sync - A sync marker to denote end of the header.
+ * </li>
+ * </ul>
+ *
+ * <h5 id="#UncompressedFormat">Uncompressed SequenceFile Format</h5>
+ * <ul>
+ * <li>
+ * <a href="#Header">Header</a>
+ * </li>
+ * <li>
+ * Record
+ * <ul>
+ * <li>Record length</li>
+ * <li>Key length</li>
+ * <li>Key</li>
+ * <li>Value</li>
+ * </ul>
+ * </li>
+ * <li>
+ * A sync-marker every few <code>100</code> bytes or so.
+ * </li>
+ * </ul>
+ *
+ * <h5 id="#RecordCompressedFormat">Record-Compressed SequenceFile Format</h5>
+ * <ul>
+ * <li>
+ * <a href="#Header">Header</a>
+ * </li>
+ * <li>
+ * Record
+ * <ul>
+ * <li>Record length</li>
+ * <li>Key length</li>
+ * <li>Key</li>
+ * <li><i>Compressed</i> Value</li>
+ * </ul>
+ * </li>
+ * <li>
+ * A sync-marker every few <code>100</code> bytes or so.
+ * </li>
+ * </ul>
+ *
+ * <h5 id="#BlockCompressedFormat">Block-Compressed SequenceFile Format</h5>
+ * <ul>
+ * <li>
+ * <a href="#Header">Header</a>
+ * </li>
+ * <li>
+ * Record <i>Block</i>
+ * <ul>
+ * <li>Compressed key-lengths block-size</li>
+ * <li>Compressed key-lengths block</li>
+ * <li>Compressed keys block-size</li>
+ * <li>Compressed keys block</li>
+ * <li>Compressed value-lengths block-size</li>
+ * <li>Compressed value-lengths block</li>
+ * <li>Compressed values block-size</li>
+ * <li>Compressed values block</li>
+ * </ul>
+ * </li>
+ * <li>
+ * A sync-marker every few <code>100</code> bytes or so.
+ * </li>
+ * </ul>
+ *
+ * <p>The compressed blocks of key lengths and value lengths consist of the
+ * actual lengths of individual keys/values encoded in ZeroCompressedInteger
+ * format.</p>
+ *
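+ *
+ * <p>A minimal usage sketch (error handling elided; <code>fs</code>,
+ * <code>conf</code> and <code>path</code> are assumed to be supplied by the
+ * caller, and <code>Text</code> keys/values are used purely for illustration):
+ * <pre>
+ *   SequenceFile.Writer writer =
+ *     SequenceFile.createWriter(fs, conf, path, Text.class, Text.class);
+ *   writer.append(new Text("row"), new Text("value"));
+ *   writer.close();
+ *
+ *   SequenceFile.Reader reader = new SequenceFile.Reader(fs, path, conf);
+ *   Text key = new Text();
+ *   Text value = new Text();
+ *   while (reader.next(key)) {
+ *     reader.getCurrentValue(value);
+ *     // use key/value
+ *   }
+ *   reader.close();
+ * </pre>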
+ * @see CompressionCodec
+ */
+public class SequenceFile {
+ private static final Log LOG = LogFactory.getLog(SequenceFile.class);
+
+ private SequenceFile() {} // no public ctor
+
+ private static final byte BLOCK_COMPRESS_VERSION = (byte)4;
+ private static final byte CUSTOM_COMPRESS_VERSION = (byte)5;
+ private static final byte VERSION_WITH_METADATA = (byte)6;
+ protected static byte[] VERSION = new byte[] {
+ (byte)'S', (byte)'E', (byte)'Q', VERSION_WITH_METADATA
+ };
+
+ private static final int SYNC_ESCAPE = -1; // "length" of sync entries
+ private static final int SYNC_HASH_SIZE = 16; // number of bytes in hash
+ private static final int SYNC_SIZE = 4+SYNC_HASH_SIZE; // escape + hash
+
+ /** The number of bytes between sync points.*/
+ public static final int SYNC_INTERVAL = 100*SYNC_SIZE;
+
+ /**
+ * The compression type used to compress key/value pairs in the
+ * {@link SequenceFile}.
+ *
+ * @see SequenceFile.Writer
+ */
+ public static enum CompressionType {
+ /** Do not compress records. */
+ NONE,
+ /** Compress values only, each separately. */
+ RECORD,
+ /** Compress sequences of records together in blocks. */
+ BLOCK
+ }
+
+ /**
+ * Get the compression type for the reduce outputs
+ * @param job the job config to look in
+ * @return the kind of compression to use
+ * @deprecated Use
+ * {@link org.apache.hadoop.mapred.SequenceFileOutputFormat#getOutputCompressionType(org.apache.hadoop.mapred.JobConf)}
+ * to get {@link CompressionType} for job-outputs.
+ */
+ @Deprecated
+ static public CompressionType getCompressionType(Configuration job) {
+ String name = job.get("io.seqfile.compression.type");
+ return name == null ? CompressionType.RECORD :
+ CompressionType.valueOf(name);
+ }
+
+ /**
+ * Set the compression type for sequence files.
+ * @param job the configuration to modify
+ * @param val the new compression type (none, block, record)
+   * @deprecated Use one of the many SequenceFile.createWriter methods to specify
+   * the {@link CompressionType} while creating the {@link SequenceFile} or
+   * {@link org.apache.hadoop.mapred.SequenceFileOutputFormat#setOutputCompressionType(org.apache.hadoop.mapred.JobConf, org.apache.hadoop.io.SequenceFile.CompressionType)}
+   * to specify the {@link CompressionType} for job-outputs.
+ */
+ @Deprecated
+ static public void setCompressionType(Configuration job,
+ CompressionType val) {
+ job.set("io.seqfile.compression.type", val.toString());
+ }
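+
+  // A hedged sketch of the deprecated, config-driven selection above: the
+  // compression type is stored under the "io.seqfile.compression.type" key
+  // and defaults to RECORD when unset. Variable names are illustrative only.
+  //
+  //   Configuration job = new Configuration();
+  //   SequenceFile.setCompressionType(job, CompressionType.BLOCK);
+  //   CompressionType type = SequenceFile.getCompressionType(job);  // BLOCK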
+
+ /**
+ * Construct the preferred type of SequenceFile Writer.
+ * @param fs The configured filesystem.
+ * @param conf The configuration.
+ * @param name The name of the file.
+ * @param keyClass The 'key' type.
+ * @param valClass The 'value' type.
+ * @return Returns the handle to the constructed SequenceFile Writer.
+ * @throws IOException
+ */
+ public static Writer
+ createWriter(FileSystem fs, Configuration conf, Path name,
+ Class keyClass, Class valClass)
+ throws IOException {
+ return createWriter(fs, conf, name, keyClass, valClass,
+ getCompressionType(conf));
+ }
+
+ /**
+ * Construct the preferred type of SequenceFile Writer.
+ * @param fs The configured filesystem.
+ * @param conf The configuration.
+ * @param name The name of the file.
+ * @param keyClass The 'key' type.
+ * @param valClass The 'value' type.
+ * @param compressionType The compression type.
+ * @return Returns the handle to the constructed SequenceFile Writer.
+ * @throws IOException
+ */
+ public static Writer
+ createWriter(FileSystem fs, Configuration conf, Path name,
+ Class keyClass, Class valClass, CompressionType compressionType)
+ throws IOException {
+ return createWriter(fs, conf, name, keyClass, valClass,
+ fs.getConf().getInt("io.file.buffer.size", 4096),
+ fs.getDefaultReplication(), fs.getDefaultBlockSize(),
+ compressionType, new DefaultCodec(), null, new Metadata());
+ }
+
+ /**
+ * Construct the preferred type of SequenceFile Writer.
+ * @param fs The configured filesystem.
+ * @param conf The configuration.
+ * @param name The name of the file.
+ * @param keyClass The 'key' type.
+ * @param valClass The 'value' type.
+ * @param compressionType The compression type.
+ * @param progress The Progressable object to track progress.
+ * @return Returns the handle to the constructed SequenceFile Writer.
+ * @throws IOException
+ */
+ public static Writer
+ createWriter(FileSystem fs, Configuration conf, Path name,
+ Class keyClass, Class valClass, CompressionType compressionType,
+ Progressable progress) throws IOException {
+ return createWriter(fs, conf, name, keyClass, valClass,
+ fs.getConf().getInt("io.file.buffer.size", 4096),
+ fs.getDefaultReplication(), fs.getDefaultBlockSize(),
+ compressionType, new DefaultCodec(), progress, new Metadata());
+ }
+
+ /**
+ * Construct the preferred type of SequenceFile Writer.
+ * @param fs The configured filesystem.
+ * @param conf The configuration.
+ * @param name The name of the file.
+ * @param keyClass The 'key' type.
+ * @param valClass The 'value' type.
+ * @param compressionType The compression type.
+ * @param codec The compression codec.
+ * @return Returns the handle to the constructed SequenceFile Writer.
+ * @throws IOException
+ */
+ public static Writer
+ createWriter(FileSystem fs, Configuration conf, Path name,
+ Class keyClass, Class valClass,
+ CompressionType compressionType, CompressionCodec codec)
+ throws IOException {
+ return createWriter(fs, conf, name, keyClass, valClass,
+ fs.getConf().getInt("io.file.buffer.size", 4096),
+ fs.getDefaultReplication(), fs.getDefaultBlockSize(),
+ compressionType, codec, null, new Metadata());
+ }
+
+ /**
+ * Construct the preferred type of SequenceFile Writer.
+ * @param fs The configured filesystem.
+ * @param conf The configuration.
+ * @param name The name of the file.
+ * @param keyClass The 'key' type.
+ * @param valClass The 'value' type.
+ * @param compressionType The compression type.
+ * @param codec The compression codec.
+ * @param progress The Progressable object to track progress.
+ * @param metadata The metadata of the file.
+ * @return Returns the handle to the constructed SequenceFile Writer.
+ * @throws IOException
+ */
+ public static Writer
+ createWriter(FileSystem fs, Configuration conf, Path name,
+ Class keyClass, Class valClass,
+ CompressionType compressionType, CompressionCodec codec,
+ Progressable progress, Metadata metadata) throws IOException {
+ return createWriter(fs, conf, name, keyClass, valClass,
+ fs.getConf().getInt("io.file.buffer.size", 4096),
+ fs.getDefaultReplication(), fs.getDefaultBlockSize(),
+ compressionType, codec, progress, metadata);
+ }
+
+ /**
+ * Construct the preferred type of SequenceFile Writer.
+ * @param fs The configured filesystem.
+ * @param conf The configuration.
+ * @param name The name of the file.
+ * @param keyClass The 'key' type.
+ * @param valClass The 'value' type.
+   * @param bufferSize buffer size for the underlying output stream.
+ * @param replication replication factor for the file.
+ * @param blockSize block size for the file.
+ * @param compressionType The compression type.
+ * @param codec The compression codec.
+ * @param progress The Progressable object to track progress.
+ * @param metadata The metadata of the file.
+ * @return Returns the handle to the constructed SequenceFile Writer.
+ * @throws IOException
+ */
+ public static Writer
+ createWriter(FileSystem fs, Configuration conf, Path name,
+ Class keyClass, Class valClass, int bufferSize,
+ short replication, long blockSize,
+ CompressionType compressionType, CompressionCodec codec,
+ Progressable progress, Metadata metadata) throws IOException {
+ if ((codec instanceof GzipCodec) &&
+ !NativeCodeLoader.isNativeCodeLoaded() &&
+ !ZlibFactory.isNativeZlibLoaded(conf)) {
+ throw new IllegalArgumentException("SequenceFile doesn't work with " +
+ "GzipCodec without native-hadoop code!");
+ }
+
+ Writer writer = null;
+
+ if (compressionType == CompressionType.NONE) {
+ writer = new Writer(fs, conf, name, keyClass, valClass,
+ bufferSize, replication, blockSize,
+ progress, metadata);
+ } else if (compressionType == CompressionType.RECORD) {
+ writer = new RecordCompressWriter(fs, conf, name, keyClass, valClass,
+ bufferSize, replication, blockSize,
+ codec, progress, metadata);
+ } else if (compressionType == CompressionType.BLOCK){
+ writer = new BlockCompressWriter(fs, conf, name, keyClass, valClass,
+ bufferSize, replication, blockSize,
+ codec, progress, metadata);
+ }
+
+ return writer;
+ }
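+
+  // A hedged sketch of selecting a writer variant through this factory:
+  // CompressionType.NONE yields a plain Writer, RECORD a RecordCompressWriter
+  // and BLOCK a BlockCompressWriter. fs, conf and path are assumed to be
+  // supplied by the caller; DefaultCodec is used purely for illustration.
+  //
+  //   SequenceFile.Writer w = SequenceFile.createWriter(fs, conf, path,
+  //       Text.class, Text.class, CompressionType.RECORD, new DefaultCodec());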
+
+ /**
+ * Construct the preferred type of SequenceFile Writer.
+ * @param fs The configured filesystem.
+ * @param conf The configuration.
+ * @param name The name of the file.
+ * @param keyClass The 'key' type.
+ * @param valClass The 'value' type.
+ * @param compressionType The compression type.
+ * @param codec The compression codec.
+ * @param progress The Progressable object to track progress.
+ * @return Returns the handle to the constructed SequenceFile Writer.
+ * @throws IOException
+ */
+ public static Writer
+ createWriter(FileSystem fs, Configuration conf, Path name,
+ Class keyClass, Class valClass,
+ CompressionType compressionType, CompressionCodec codec,
+ Progressable progress) throws IOException {
+ Writer writer = createWriter(fs, conf, name, keyClass, valClass,
+ compressionType, codec, progress, new Metadata());
+ return writer;
+ }
+
+ /**
+ * Construct the preferred type of 'raw' SequenceFile Writer.
+   * @param conf The configuration.
+   * @param out The stream on top of which the writer is to be constructed.
+   * @param keyClass The 'key' type.
+   * @param valClass The 'value' type.
+   * @param compress Compress data?
+   * @param blockCompress Compress blocks?
+   * @param codec The compression codec.
+   * @param metadata The metadata of the file.
+ * @return Returns the handle to the constructed SequenceFile Writer.
+ * @throws IOException
+ */
+ private static Writer
+ createWriter(Configuration conf, FSDataOutputStream out,
+ Class keyClass, Class valClass, boolean compress, boolean blockCompress,
+ CompressionCodec codec, Metadata metadata)
+ throws IOException {
+ if (codec != null && (codec instanceof GzipCodec) &&
+ !NativeCodeLoader.isNativeCodeLoaded() &&
+ !ZlibFactory.isNativeZlibLoaded(conf)) {
+ throw new IllegalArgumentException("SequenceFile doesn't work with " +
+ "GzipCodec without native-hadoop code!");
+ }
+
+ Writer writer = null;
+
+ if (!compress) {
+ writer = new Writer(conf, out, keyClass, valClass, metadata);
+ } else if (compress && !blockCompress) {
+ writer = new RecordCompressWriter(conf, out, keyClass, valClass, codec, metadata);
+ } else {
+ writer = new BlockCompressWriter(conf, out, keyClass, valClass, codec, metadata);
+ }
+
+ return writer;
+ }
+
+ /**
+ * Construct the preferred type of 'raw' SequenceFile Writer.
+ * @param fs The configured filesystem.
+ * @param conf The configuration.
+ * @param file The name of the file.
+ * @param keyClass The 'key' type.
+ * @param valClass The 'value' type.
+ * @param compress Compress data?
+ * @param blockCompress Compress blocks?
+ * @param codec The compression codec.
+   * @param progress The Progressable object to track progress.
+ * @param metadata The metadata of the file.
+ * @return Returns the handle to the constructed SequenceFile Writer.
+ * @throws IOException
+ */
+ private static Writer
+ createWriter(FileSystem fs, Configuration conf, Path file,
+ Class keyClass, Class valClass,
+ boolean compress, boolean blockCompress,
+ CompressionCodec codec, Progressable progress, Metadata metadata)
+ throws IOException {
+ if (codec != null && (codec instanceof GzipCodec) &&
+ !NativeCodeLoader.isNativeCodeLoaded() &&
+ !ZlibFactory.isNativeZlibLoaded(conf)) {
+ throw new IllegalArgumentException("SequenceFile doesn't work with " +
+ "GzipCodec without native-hadoop code!");
+ }
+
+ Writer writer = null;
+
+ if (!compress) {
+ writer = new Writer(fs, conf, file, keyClass, valClass, progress, metadata);
+ } else if (compress && !blockCompress) {
+ writer = new RecordCompressWriter(fs, conf, file, keyClass, valClass,
+ codec, progress, metadata);
+ } else {
+ writer = new BlockCompressWriter(fs, conf, file, keyClass, valClass,
+ codec, progress, metadata);
+ }
+
+ return writer;
+  }
+
+ /**
+ * Construct the preferred type of 'raw' SequenceFile Writer.
+ * @param conf The configuration.
+   * @param out The stream on top of which the writer is to be constructed.
+ * @param keyClass The 'key' type.
+ * @param valClass The 'value' type.
+ * @param compressionType The compression type.
+ * @param codec The compression codec.
+ * @param metadata The metadata of the file.
+ * @return Returns the handle to the constructed SequenceFile Writer.
+ * @throws IOException
+ */
+ public static Writer
+ createWriter(Configuration conf, FSDataOutputStream out,
+ Class keyClass, Class valClass, CompressionType compressionType,
+ CompressionCodec codec, Metadata metadata)
+ throws IOException {
+ if ((codec instanceof GzipCodec) &&
+ !NativeCodeLoader.isNativeCodeLoaded() &&
+ !ZlibFactory.isNativeZlibLoaded(conf)) {
+ throw new IllegalArgumentException("SequenceFile doesn't work with " +
+ "GzipCodec without native-hadoop code!");
+ }
+
+ Writer writer = null;
+
+ if (compressionType == CompressionType.NONE) {
+ writer = new Writer(conf, out, keyClass, valClass, metadata);
+ } else if (compressionType == CompressionType.RECORD) {
+ writer = new RecordCompressWriter(conf, out, keyClass, valClass, codec, metadata);
+ } else if (compressionType == CompressionType.BLOCK){
+ writer = new BlockCompressWriter(conf, out, keyClass, valClass, codec, metadata);
+ }
+
+ return writer;
+ }
+
+ /**
+ * Construct the preferred type of 'raw' SequenceFile Writer.
+ * @param conf The configuration.
+   * @param out The stream on top of which the writer is to be constructed.
+ * @param keyClass The 'key' type.
+ * @param valClass The 'value' type.
+ * @param compressionType The compression type.
+ * @param codec The compression codec.
+ * @return Returns the handle to the constructed SequenceFile Writer.
+ * @throws IOException
+ */
+ public static Writer
+ createWriter(Configuration conf, FSDataOutputStream out,
+ Class keyClass, Class valClass, CompressionType compressionType,
+ CompressionCodec codec)
+ throws IOException {
+ Writer writer = createWriter(conf, out, keyClass, valClass, compressionType,
+ codec, new Metadata());
+ return writer;
+ }
+
+
+ /** The interface to 'raw' values of SequenceFiles. */
+ public static interface ValueBytes {
+
+ /** Writes the uncompressed bytes to the outStream.
+ * @param outStream : Stream to write uncompressed bytes into.
+ * @throws IOException
+ */
+ public void writeUncompressedBytes(DataOutputStream outStream)
+ throws IOException;
+
+ /** Write compressed bytes to outStream.
+     * Note that it will NOT compress the bytes if they are not already compressed.
+ * @param outStream : Stream to write compressed bytes into.
+ * @throws IllegalArgumentException
+ * @throws IOException
+ */
+ public void writeCompressedBytes(DataOutputStream outStream)
+ throws IllegalArgumentException, IOException;
+
+ /**
+ * Size of stored data.
+ * @return int
+ */
+ public int getSize();
+ }
+
+ private static class UncompressedBytes implements ValueBytes {
+ private int dataSize;
+ private byte[] data;
+
+ private UncompressedBytes() {
+ data = null;
+ dataSize = 0;
+ }
+
+ private void reset(DataInputStream in, int length) throws IOException {
+ data = new byte[length];
+ dataSize = -1;
+
+ in.readFully(data);
+ dataSize = data.length;
+ }
+
+ public int getSize() {
+ return dataSize;
+ }
+
+ public void writeUncompressedBytes(DataOutputStream outStream)
+ throws IOException {
+ outStream.write(data, 0, dataSize);
+ }
+
+ public void writeCompressedBytes(DataOutputStream outStream)
+ throws IllegalArgumentException, IOException {
+ throw
+ new IllegalArgumentException("UncompressedBytes cannot be compressed!");
+ }
+
+ } // UncompressedBytes
+
+ private static class CompressedBytes implements ValueBytes {
+ private int dataSize;
+ private byte[] data;
+ DataInputBuffer rawData = null;
+ CompressionCodec codec = null;
+ CompressionInputStream decompressedStream = null;
+
+ private CompressedBytes(CompressionCodec codec) {
+ data = null;
+ dataSize = 0;
+ this.codec = codec;
+ }
+
+ private void reset(DataInputStream in, int length) throws IOException {
+ data = new byte[length];
+ dataSize = -1;
+
+ in.readFully(data);
+ dataSize = data.length;
+ }
+
+ public int getSize() {
+ return dataSize;
+ }
+
+ public void writeUncompressedBytes(DataOutputStream outStream)
+ throws IOException {
+ if (decompressedStream == null) {
+ rawData = new DataInputBuffer();
+ decompressedStream = codec.createInputStream(rawData);
+ } else {
+ decompressedStream.resetState();
+ }
+ rawData.reset(data, 0, dataSize);
+
+ byte[] buffer = new byte[8192];
+ int bytesRead = 0;
+ while ((bytesRead = decompressedStream.read(buffer, 0, 8192)) != -1) {
+ outStream.write(buffer, 0, bytesRead);
+ }
+ }
+
+ public void writeCompressedBytes(DataOutputStream outStream)
+ throws IllegalArgumentException, IOException {
+ outStream.write(data, 0, dataSize);
+ }
+
+ } // CompressedBytes
+
+ /**
+   * The class encapsulating the metadata of a file.
+ * The metadata of a file is a list of attribute name/value
+ * pairs of Text type.
+ *
+ */
+ public static class Metadata implements Writable {
+
+ private TreeMap<Text, Text> theMetadata;
+
+ public Metadata() {
+ this(new TreeMap<Text, Text>());
+ }
+
+ public Metadata(TreeMap<Text, Text> arg) {
+ if (arg == null) {
+ this.theMetadata = new TreeMap<Text, Text>();
+ } else {
+ this.theMetadata = arg;
+ }
+ }
+
+ public Text get(Text name) {
+ return this.theMetadata.get(name);
+ }
+
+ public void set(Text name, Text value) {
+ this.theMetadata.put(name, value);
+ }
+
+ public TreeMap<Text, Text> getMetadata() {
+ return new TreeMap<Text, Text>(this.theMetadata);
+ }
+
+ public void write(DataOutput out) throws IOException {
+ out.writeInt(this.theMetadata.size());
+ Iterator<Map.Entry<Text, Text>> iter =
+ this.theMetadata.entrySet().iterator();
+ while (iter.hasNext()) {
+ Map.Entry<Text, Text> en = iter.next();
+ en.getKey().write(out);
+ en.getValue().write(out);
+ }
+ }
+
+ public void readFields(DataInput in) throws IOException {
+ int sz = in.readInt();
+ if (sz < 0) throw new IOException("Invalid size: " + sz + " for file metadata object");
+ this.theMetadata = new TreeMap<Text, Text>();
+ for (int i = 0; i < sz; i++) {
+ Text key = new Text();
+ Text val = new Text();
+ key.readFields(in);
+ val.readFields(in);
+ this.theMetadata.put(key, val);
+ }
+ }
+
+ public boolean equals(Metadata other) {
+ if (other == null) return false;
+ if (this.theMetadata.size() != other.theMetadata.size()) {
+ return false;
+ }
+ Iterator<Map.Entry<Text, Text>> iter1 =
+ this.theMetadata.entrySet().iterator();
+ Iterator<Map.Entry<Text, Text>> iter2 =
+ other.theMetadata.entrySet().iterator();
+ while (iter1.hasNext() && iter2.hasNext()) {
+ Map.Entry<Text, Text> en1 = iter1.next();
+ Map.Entry<Text, Text> en2 = iter2.next();
+ if (!en1.getKey().equals(en2.getKey())) {
+ return false;
+ }
+ if (!en1.getValue().equals(en2.getValue())) {
+ return false;
+ }
+ }
+ if (iter1.hasNext() || iter2.hasNext()) {
+ return false;
+ }
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ assert false : "hashCode not designed";
+ return 42; // any arbitrary constant will do
+ }
+
+ @Override
+ public String toString() {
+ StringBuffer sb = new StringBuffer();
+ sb.append("size: ").append(this.theMetadata.size()).append("\n");
+ Iterator<Map.Entry<Text, Text>> iter =
+ this.theMetadata.entrySet().iterator();
+ while (iter.hasNext()) {
+ Map.Entry<Text, Text> en = iter.next();
+ sb.append("\t").append(en.getKey().toString()).append("\t").append(en.getValue().toString());
+ sb.append("\n");
+ }
+ return sb.toString();
+ }
+ }
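+
+  // A hedged sketch of attaching user metadata to a file: the name/value
+  // pairs are Text instances and are written into the file header (see
+  // Writer#writeFileHeader). The attribute name below is purely illustrative.
+  //
+  //   SequenceFile.Metadata meta = new SequenceFile.Metadata();
+  //   meta.set(new Text("created.by"), new Text("hbase"));
+  //   SequenceFile.Writer w = SequenceFile.createWriter(fs, conf, path,
+  //       Text.class, Text.class, CompressionType.NONE, new DefaultCodec(),
+  //       null, meta);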
+
+ /** Write key/value pairs to a sequence-format file. */
+ public static class Writer implements java.io.Closeable {
+ Configuration conf;
+ FSDataOutputStream out;
+ boolean ownOutputStream = true;
+ DataOutputBuffer buffer = new DataOutputBuffer();
+
+ Class keyClass;
+ Class valClass;
+
+ private boolean compress;
+ CompressionCodec codec = null;
+ CompressionOutputStream deflateFilter = null;
+ DataOutputStream deflateOut = null;
+ Metadata metadata = null;
+ Compressor compressor = null;
+
+ protected Serializer keySerializer;
+ protected Serializer uncompressedValSerializer;
+ protected Serializer compressedValSerializer;
+
+ // Insert a globally unique 16-byte value every few entries, so that one
+ // can seek into the middle of a file and then synchronize with record
+ // starts and ends by scanning for this value.
+ long lastSyncPos; // position of last sync
+ byte[] sync; // 16 random bytes
+ {
+ try {
+ MessageDigest digester = MessageDigest.getInstance("MD5");
+ long time = System.currentTimeMillis();
+ digester.update((new UID()+"@"+time).getBytes());
+ sync = digester.digest();
+ } catch (Exception e) {
+ throw new RuntimeException(e);
+ }
+ }
+
+ /** Implicit constructor: needed for the period of transition!*/
+ Writer()
+ {}
+
+ /** Create the named file.
+ * @param fs
+ * @param conf
+ * @param name
+ * @param keyClass
+ * @param valClass
+ * @throws IOException
+ */
+ public Writer(FileSystem fs, Configuration conf, Path name,
+ Class keyClass, Class valClass)
+ throws IOException {
+ this(fs, conf, name, keyClass, valClass, null, new Metadata());
+ }
+
+ /** Create the named file with write-progress reporter.
+ * @param fs
+ * @param conf
+ * @param name
+ * @param keyClass
+ * @param valClass
+ * @param progress
+ * @param metadata
+ * @throws IOException
+ */
+ public Writer(FileSystem fs, Configuration conf, Path name,
+ Class keyClass, Class valClass,
+ Progressable progress, Metadata metadata)
+ throws IOException {
+ this(fs, conf, name, keyClass, valClass,
+ fs.getConf().getInt("io.file.buffer.size", 4096),
+ fs.getDefaultReplication(), fs.getDefaultBlockSize(),
+ progress, metadata);
+ }
+
+ /** Create the named file with write-progress reporter.
+ * @param fs
+ * @param conf
+ * @param name
+ * @param keyClass
+ * @param valClass
+ * @param bufferSize
+ * @param replication
+ * @param blockSize
+ * @param progress
+ * @param metadata
+ * @throws IOException
+ */
+ public Writer(FileSystem fs, Configuration conf, Path name,
+ Class keyClass, Class valClass,
+ int bufferSize, short replication, long blockSize,
+ Progressable progress, Metadata metadata)
+ throws IOException {
+ init(conf,
+ fs.create(name, true, bufferSize, replication, blockSize, progress),
+ keyClass, valClass, false, null, metadata);
+ initializeFileHeader();
+ writeFileHeader();
+ finalizeFileHeader();
+ }
+
+ /** Write to an arbitrary stream using a specified buffer size. */
+ protected Writer(Configuration conf, FSDataOutputStream out,
+ Class keyClass, Class valClass, Metadata metadata)
+ throws IOException {
+ this.ownOutputStream = false;
+ init(conf, out, keyClass, valClass, false, null, metadata);
+
+ initializeFileHeader();
+ writeFileHeader();
+ finalizeFileHeader();
+ }
+
+ /** Write the initial part of file header. */
+ void initializeFileHeader()
+ throws IOException{
+ out.write(VERSION);
+ }
+
+ /** Write the final part of file header. */
+ void finalizeFileHeader()
+ throws IOException{
+ out.write(sync); // write the sync bytes
+ out.flush(); // flush header
+ }
+
+ boolean isCompressed() { return compress; }
+ boolean isBlockCompressed() { return false; }
+
+ /** Write and flush the file header. */
+ void writeFileHeader()
+ throws IOException {
+ Text.writeString(out, keyClass.getName());
+ Text.writeString(out, valClass.getName());
+
+ out.writeBoolean(this.isCompressed());
+ out.writeBoolean(this.isBlockCompressed());
+
+ if (this.isCompressed()) {
+ Text.writeString(out, (codec.getClass()).getName());
+ }
+ this.metadata.write(out);
+ }
+
+ /** Initialize. */
+ @SuppressWarnings("unchecked")
+ void init(Configuration conf, FSDataOutputStream out,
+ Class keyClass, Class valClass,
+ boolean compress, CompressionCodec codec, Metadata metadata)
+ throws IOException {
+ this.conf = conf;
+ this.out = out;
+ this.keyClass = keyClass;
+ this.valClass = valClass;
+ this.compress = compress;
+ this.codec = codec;
+ this.metadata = metadata;
+ SerializationFactory serializationFactory = new SerializationFactory(conf);
+ this.keySerializer = serializationFactory.getSerializer(keyClass);
+ this.keySerializer.open(buffer);
+ this.uncompressedValSerializer = serializationFactory.getSerializer(valClass);
+ this.uncompressedValSerializer.open(buffer);
+ if (this.codec != null) {
+ ReflectionUtils.setConf(this.codec, this.conf);
+ this.compressor = CodecPool.getCompressor(this.codec);
+ this.deflateFilter = this.codec.createOutputStream(buffer, compressor);
+ this.deflateOut =
+ new DataOutputStream(new BufferedOutputStream(deflateFilter));
+ this.compressedValSerializer = serializationFactory.getSerializer(valClass);
+ this.compressedValSerializer.open(deflateOut);
+ }
+ }
+
+ /** Returns the class of keys in this file.
+ * @return Class
+ */
+ public Class getKeyClass() { return keyClass; }
+
+ /** Returns the class of values in this file.
+ * @return Class
+ */
+ public Class getValueClass() { return valClass; }
+
+ /** Returns the compression codec of data in this file.
+ * @return CompressionCodec
+ */
+ public CompressionCodec getCompressionCodec() { return codec; }
+
+    /** Create a sync point.
+ * @throws IOException
+ */
+ public void sync() throws IOException {
+ if (sync != null && lastSyncPos != out.getPos()) {
+ out.writeInt(SYNC_ESCAPE); // mark the start of the sync
+ out.write(sync); // write sync
+ lastSyncPos = out.getPos(); // update lastSyncPos
+ }
+ }
+
+ /** Returns the configuration of this file. */
+ Configuration getConf() { return conf; }
+
+ /** Close the file.
+ * @throws IOException
+ */
+ public synchronized void close() throws IOException {
+ keySerializer.close();
+ uncompressedValSerializer.close();
+ if (compressedValSerializer != null) {
+ compressedValSerializer.close();
+ }
+
+ CodecPool.returnCompressor(compressor);
+ compressor = null;
+
+ if (out != null) {
+
+ // Close the underlying stream iff we own it...
+ if (ownOutputStream) {
+ out.close();
+ } else {
+ out.flush();
+ }
+ out = null;
+ }
+ }
+
+ synchronized void checkAndWriteSync() throws IOException {
+ if (sync != null &&
+ out.getPos() >= lastSyncPos+SYNC_INTERVAL) { // time to emit sync
+ sync();
+ }
+ }
+
+ /** Append a key/value pair.
+ * @param key
+ * @param val
+ * @throws IOException
+ */
+ public synchronized void append(Writable key, Writable val)
+ throws IOException {
+ append((Object) key, (Object) val);
+ }
+
+ /** Append a key/value pair.
+ * @param key
+ * @param val
+ * @throws IOException
+ */
+ @SuppressWarnings("unchecked")
+ public synchronized void append(Object key, Object val)
+ throws IOException {
+ if (key.getClass() != keyClass)
+ throw new IOException("wrong key class: "+key.getClass().getName()
+ +" is not "+keyClass);
+ if (val.getClass() != valClass)
+ throw new IOException("wrong value class: "+val.getClass().getName()
+ +" is not "+valClass);
+
+ buffer.reset();
+
+ // Append the 'key'
+ keySerializer.serialize(key);
+ int keyLength = buffer.getLength();
+ if (keyLength < 0)
+ throw new IOException("negative length keys not allowed: " + key);
+
+ // Append the 'value'
+ if (compress) {
+ deflateFilter.resetState();
+ compressedValSerializer.serialize(val);
+ deflateOut.flush();
+ deflateFilter.finish();
+ } else {
+ uncompressedValSerializer.serialize(val);
+ }
+
+ // Write the record out
+ checkAndWriteSync(); // sync
+ out.writeInt(buffer.getLength()); // total record length
+ out.writeInt(keyLength); // key portion length
+ out.write(buffer.getData(), 0, buffer.getLength()); // data
+ }
+
+ public synchronized void appendRaw(byte[] keyData, int keyOffset,
+ int keyLength, ValueBytes val) throws IOException {
+ if (keyLength < 0)
+ throw new IOException("negative length keys not allowed: " + keyLength);
+
+ int valLength = val.getSize();
+
+ checkAndWriteSync();
+
+ out.writeInt(keyLength+valLength); // total record length
+ out.writeInt(keyLength); // key portion length
+ out.write(keyData, keyOffset, keyLength); // key
+ val.writeUncompressedBytes(out); // value
+ }
+
+ /** Returns the current length of the output file.
+ *
+ * <p>This always returns a synchronized position. In other words,
+ * immediately after calling {@link SequenceFile.Reader#seek(long)} with a position
+ * returned by this method, {@link SequenceFile.Reader#next(Writable)} may be called. However
+     * the key may be earlier in the file than the key last written when this
+ * method was called (e.g., with block-compression, it may be the first key
+ * in the block that was being written when this method was called).
+ */
+ public synchronized long getLength() throws IOException {
+ return out.getPos();
+ }
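+
+    // A hedged sketch of the seek-and-resume pattern described above; the
+    // position returned by getLength() can later be handed to
+    // SequenceFile.Reader#seek(long) and reading resumed with next(Writable).
+    // Names are illustrative only.
+    //
+    //   long pos = writer.getLength();  // a synchronized position
+    //   ...
+    //   reader.seek(pos);
+    //   Text key = new Text();
+    //   while (reader.next(key)) { /* ... */ }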
+
+ } // class Writer
+
+ /** Write key/compressed-value pairs to a sequence-format file. */
+ static class RecordCompressWriter extends Writer {
+
+ /** Create the named file.
+ * @param fs
+ * @param conf
+ * @param name
+ * @param keyClass
+ * @param valClass
+ * @param codec
+ * @throws IOException
+ */
+ public RecordCompressWriter(FileSystem fs, Configuration conf, Path name,
+ Class keyClass, Class valClass, CompressionCodec codec)
+ throws IOException {
+ this(conf, fs.create(name), keyClass, valClass, codec, new Metadata());
+ }
+
+ /** Create the named file with write-progress reporter.
+ * @param fs
+ * @param conf
+ * @param name
+ * @param keyClass
+ * @param valClass
+ * @param codec
+ * @param progress
+ * @param metadata
+ * @throws IOException
+ */
+ public RecordCompressWriter(FileSystem fs, Configuration conf, Path name,
+ Class keyClass, Class valClass, CompressionCodec codec,
+ Progressable progress, Metadata metadata)
+ throws IOException {
+ this(fs, conf, name, keyClass, valClass,
+ fs.getConf().getInt("io.file.buffer.size", 4096),
+ fs.getDefaultReplication(), fs.getDefaultBlockSize(), codec,
+ progress, metadata);
+ }
+
+ /** Create the named file with write-progress reporter.
+ * @param fs
+ * @param conf
+ * @param name
+ * @param keyClass
+ * @param valClass
+ * @param bufferSize
+ * @param replication
+ * @param blockSize
+ * @param codec
+ * @param progress
+ * @param metadata
+ * @throws IOException
+ */
+ public RecordCompressWriter(FileSystem fs, Configuration conf, Path name,
+ Class keyClass, Class valClass,
+ int bufferSize, short replication, long blockSize,
+ CompressionCodec codec,
+ Progressable progress, Metadata metadata)
+ throws IOException {
+ super.init(conf,
+ fs.create(name, true, bufferSize, replication, blockSize, progress),
+ keyClass, valClass, true, codec, metadata);
+
+ initializeFileHeader();
+ writeFileHeader();
+ finalizeFileHeader();
+ }
+
+ /** Create the named file with write-progress reporter.
+ * @param fs
+ * @param conf
+ * @param name
+ * @param keyClass
+ * @param valClass
+ * @param codec
+ * @param progress
+ * @throws IOException
+ */
+ public RecordCompressWriter(FileSystem fs, Configuration conf, Path name,
+ Class keyClass, Class valClass, CompressionCodec codec,
+ Progressable progress)
+ throws IOException {
+ this(fs, conf, name, keyClass, valClass, codec, progress, new Metadata());
+ }
+
+ /** Write to an arbitrary stream using a specified buffer size. */
+ protected RecordCompressWriter(Configuration conf, FSDataOutputStream out,
+ Class keyClass, Class valClass, CompressionCodec codec, Metadata metadata)
+ throws IOException {
+ this.ownOutputStream = false;
+ super.init(conf, out, keyClass, valClass, true, codec, metadata);
+
+ initializeFileHeader();
+ writeFileHeader();
+ finalizeFileHeader();
+
+ }
+
+ @Override
+ boolean isCompressed() { return true; }
+ @Override
+ boolean isBlockCompressed() { return false; }
+
+ /** Append a key/value pair. */
+ @Override
+ @SuppressWarnings("unchecked")
+ public synchronized void append(Object key, Object val)
+ throws IOException {
+ if (key.getClass() != keyClass)
+ throw new IOException("wrong key class: "+key.getClass().getName()
+ +" is not "+keyClass);
+ if (val.getClass() != valClass)
+ throw new IOException("wrong value class: "+val.getClass().getName()
+ +" is not "+valClass);
+
+ buffer.reset();
+
+ // Append the 'key'
+ keySerializer.serialize(key);
+ int keyLength = buffer.getLength();
+ if (keyLength < 0)
+ throw new IOException("negative length keys not allowed: " + key);
+
+ // Compress 'value' and append it
+ deflateFilter.resetState();
+ compressedValSerializer.serialize(val);
+ deflateOut.flush();
+ deflateFilter.finish();
+
+ // Write the record out
+ checkAndWriteSync(); // sync
+ out.writeInt(buffer.getLength()); // total record length
+ out.writeInt(keyLength); // key portion length
+ out.write(buffer.getData(), 0, buffer.getLength()); // data
+ }
+
+ /** Append a key/value pair. */
+ @Override
+ public synchronized void appendRaw(byte[] keyData, int keyOffset,
+ int keyLength, ValueBytes val) throws IOException {
+
+ if (keyLength < 0)
+ throw new IOException("negative length keys not allowed: " + keyLength);
+
+ int valLength = val.getSize();
+
+ checkAndWriteSync(); // sync
+ out.writeInt(keyLength+valLength); // total record length
+ out.writeInt(keyLength); // key portion length
+ out.write(keyData, keyOffset, keyLength); // 'key' data
+ val.writeCompressedBytes(out); // 'value' data
+ }
+
+ } // RecordCompressionWriter
+
+ /** Write compressed key/value blocks to a sequence-format file. */
+ static class BlockCompressWriter extends Writer {
+
+ private int noBufferedRecords = 0;
+
+ private DataOutputBuffer keyLenBuffer = new DataOutputBuffer();
+ private DataOutputBuffer keyBuffer = new DataOutputBuffer();
+
+ private DataOutputBuffer valLenBuffer = new DataOutputBuffer();
+ private DataOutputBuffer valBuffer = new DataOutputBuffer();
+
+ private int compressionBlockSize;
+
+ /** Create the named file.
+ * @param fs
+ * @param conf
+ * @param name
+ * @param keyClass
+ * @param valClass
+ * @param codec
+ * @throws IOException
+ */
+ public BlockCompressWriter(FileSystem fs, Configuration conf, Path name,
+ Class keyClass, Class valClass, CompressionCodec codec)
+ throws IOException {
+ this(fs, conf, name, keyClass, valClass,
+ fs.getConf().getInt("io.file.buffer.size", 4096),
+ fs.getDefaultReplication(), fs.getDefaultBlockSize(), codec,
+ null, new Metadata());
+ }
+
+ /** Create the named file with write-progress reporter.
+ * @param fs
+ * @param conf
+ * @param name
+ * @param keyClass
+ * @param valClass
+ * @param codec
+ * @param progress
+ * @param metadata
+ * @throws IOException
+ */
+ public BlockCompressWriter(FileSystem fs, Configuration conf, Path name,
+ Class keyClass, Class valClass, CompressionCodec codec,
+ Progressable progress, Metadata metadata)
+ throws IOException {
+ this(fs, conf, name, keyClass, valClass,
+ fs.getConf().getInt("io.file.buffer.size", 4096),
+ fs.getDefaultReplication(), fs.getDefaultBlockSize(), codec,
+ progress, metadata);
+ }
+
+ /** Create the named file with write-progress reporter.
+ * @param fs
+ * @param conf
+ * @param name
+ * @param keyClass
+ * @param valClass
+ * @param bufferSize
+ * @param replication
+ * @param blockSize
+ * @param codec
+ * @param progress
+ * @param metadata
+ * @throws IOException
+ */
+ public BlockCompressWriter(FileSystem fs, Configuration conf, Path name,
+ Class keyClass, Class valClass,
+ int bufferSize, short replication, long blockSize,
+ CompressionCodec codec,
+ Progressable progress, Metadata metadata)
+ throws IOException {
+ super.init(conf,
+ fs.create(name, true, bufferSize, replication, blockSize, progress),
+ keyClass, valClass, true, codec, metadata);
+ init(conf.getInt("io.seqfile.compress.blocksize", 1000000));
+
+ initializeFileHeader();
+ writeFileHeader();
+ finalizeFileHeader();
+ }
+
+ /** Create the named file with write-progress reporter.
+ * @param fs
+ * @param conf
+ * @param name
+ * @param keyClass
+ * @param valClass
+ * @param codec
+ * @param progress
+ * @throws IOException
+ */
+ public BlockCompressWriter(FileSystem fs, Configuration conf, Path name,
+ Class keyClass, Class valClass, CompressionCodec codec,
+ Progressable progress)
+ throws IOException {
+ this(fs, conf, name, keyClass, valClass, codec, progress, new Metadata());
+ }
+
+ /** Write to an arbitrary stream using a specified buffer size. */
+ protected BlockCompressWriter(Configuration conf, FSDataOutputStream out,
+ Class keyClass, Class valClass, CompressionCodec codec, Metadata metadata)
+ throws IOException {
+ this.ownOutputStream = false;
+ super.init(conf, out, keyClass, valClass, true, codec, metadata);
+ init(1000000);
+
+ initializeFileHeader();
+ writeFileHeader();
+ finalizeFileHeader();
+ }
+
+ @Override
+ boolean isCompressed() { return true; }
+ @Override
+ boolean isBlockCompressed() { return true; }
+
+ /** Initialize */
+ void init(int compressionBlockSize) throws IOException {
+ this.compressionBlockSize = compressionBlockSize;
+ keySerializer.close();
+ keySerializer.open(keyBuffer);
+ uncompressedValSerializer.close();
+ uncompressedValSerializer.open(valBuffer);
+ }
+
+ /** Workhorse to check and write out compressed data/lengths */
+ private synchronized
+ void writeBuffer(DataOutputBuffer uncompressedDataBuffer)
+ throws IOException {
+ deflateFilter.resetState();
+ buffer.reset();
+ deflateOut.write(uncompressedDataBuffer.getData(), 0,
+ uncompressedDataBuffer.getLength());
+ deflateOut.flush();
+ deflateFilter.finish();
+
+ WritableUtils.writeVInt(out, buffer.getLength());
+ out.write(buffer.getData(), 0, buffer.getLength());
+ }
+
+ /** Compress and flush contents to dfs */
+ @Override
+ public synchronized void sync() throws IOException {
+ if (noBufferedRecords > 0) {
+ super.sync();
+
+ // No. of records
+ WritableUtils.writeVInt(out, noBufferedRecords);
+
+ // Write 'keys' and lengths
+ writeBuffer(keyLenBuffer);
+ writeBuffer(keyBuffer);
+
+ // Write 'values' and lengths
+ writeBuffer(valLenBuffer);
+ writeBuffer(valBuffer);
+
+ // Flush the file-stream
+ out.flush();
+
+ // Reset internal states
+ keyLenBuffer.reset();
+ keyBuffer.reset();
+ valLenBuffer.reset();
+ valBuffer.reset();
+ noBufferedRecords = 0;
+ }
+
+ }
+
+ /** Close the file. */
+ public synchronized void close() throws IOException {
+ if (out != null) {
+ sync();
+ }
+ super.close();
+ }
+
+ /** Append a key/value pair. */
+ @Override
+ @SuppressWarnings("unchecked")
+ public synchronized void append(Object key, Object val)
+ throws IOException {
+ if (key.getClass() != keyClass)
+ throw new IOException("wrong key class: "+key+" is not "+keyClass);
+ if (val.getClass() != valClass)
+ throw new IOException("wrong value class: "+val+" is not "+valClass);
+
+ // Save key/value into respective buffers
+ int oldKeyLength = keyBuffer.getLength();
+ keySerializer.serialize(key);
+ int keyLength = keyBuffer.getLength() - oldKeyLength;
+ if (keyLength < 0)
+ throw new IOException("negative length keys not allowed: " + key);
+ WritableUtils.writeVInt(keyLenBuffer, keyLength);
+
+ int oldValLength = valBuffer.getLength();
+ uncompressedValSerializer.serialize(val);
+ int valLength = valBuffer.getLength() - oldValLength;
+ WritableUtils.writeVInt(valLenBuffer, valLength);
+
+ // Added another key/value pair
+ ++noBufferedRecords;
+
+ // Compress and flush?
+ int currentBlockSize = keyBuffer.getLength() + valBuffer.getLength();
+ if (currentBlockSize >= compressionBlockSize) {
+ sync();
+ }
+ }
+
+ /** Append a key/value pair. */
+ @Override
+ public synchronized void appendRaw(byte[] keyData, int keyOffset,
+ int keyLength, ValueBytes val) throws IOException {
+
+ if (keyLength < 0)
+ throw new IOException("negative length keys not allowed");
+
+ int valLength = val.getSize();
+
+ // Save key/value data in relevant buffers
+ WritableUtils.writeVInt(keyLenBuffer, keyLength);
+ keyBuffer.write(keyData, keyOffset, keyLength);
+ WritableUtils.writeVInt(valLenBuffer, valLength);
+ val.writeUncompressedBytes(valBuffer);
+
+ // Added another key/value pair
+ ++noBufferedRecords;
+
+ // Compress and flush?
+ int currentBlockSize = keyBuffer.getLength() + valBuffer.getLength();
+ if (currentBlockSize >= compressionBlockSize) {
+ sync();
+ }
+ }
+
+ } // BlockCompressionWriter
+
+ /** Reads key/value pairs from a sequence-format file. */
+ public static class Reader implements java.io.Closeable {
+ private Path file;
+ private FSDataInputStream in;
+ private DataOutputBuffer outBuf = new DataOutputBuffer(32);
+
+ private byte version;
+
+ private String keyClassName;
+ private String valClassName;
+ private Class keyClass;
+ private Class valClass;
+
+ private CompressionCodec codec = null;
+ private Metadata metadata = null;
+
+ private byte[] sync = new byte[SYNC_HASH_SIZE];
+ private byte[] syncCheck = new byte[SYNC_HASH_SIZE];
+ private boolean syncSeen;
+
+ private long end;
+ private int keyLength;
+ private int recordLength;
+
+ private boolean decompress;
+ private boolean blockCompressed;
+
+ private Configuration conf;
+
+ private int noBufferedRecords = 0;
+ private boolean lazyDecompress = true;
+ private boolean valuesDecompressed = true;
+
+ private int noBufferedKeys = 0;
+ private int noBufferedValues = 0;
+
+ private DataInputBuffer keyLenBuffer = null;
+ private CompressionInputStream keyLenInFilter = null;
+ private DataInputStream keyLenIn = null;
+ private Decompressor keyLenDecompressor = null;
+ private DataInputBuffer keyBuffer = null;
+ private CompressionInputStream keyInFilter = null;
+ private DataInputStream keyIn = null;
+ private Decompressor keyDecompressor = null;
+
+ private DataInputBuffer valLenBuffer = null;
+ private CompressionInputStream valLenInFilter = null;
+ private DataInputStream valLenIn = null;
+ private Decompressor valLenDecompressor = null;
+ private DataInputBuffer valBuffer = null;
+ private CompressionInputStream valInFilter = null;
+ private DataInputStream valIn = null;
+ private Decompressor valDecompressor = null;
+
+ private Deserializer keyDeserializer;
+ private Deserializer valDeserializer;
+
+ /** Open the named file.
+ * @param fs
+ * @param file
+ * @param conf
+ * @throws IOException
+ */
+ public Reader(FileSystem fs, Path file, Configuration conf)
+ throws IOException {
+ this(fs, file, conf.getInt("io.file.buffer.size", 4096), conf, false);
+ }
+
+ private Reader(FileSystem fs, Path file, int bufferSize,
+ Configuration conf, boolean tempReader) throws IOException {
+ this(fs, file, bufferSize, 0, fs.getLength(file), conf, tempReader);
+ }
+
+ private Reader(FileSystem fs, Path file, int bufferSize, long start,
+ long length, Configuration conf, boolean tempReader)
+ throws IOException {
+ this.file = file;
+ this.in = openFile(fs, file, bufferSize, length);
+ this.conf = conf;
+ seek(start);
+ this.end = in.getPos() + length;
+ init(tempReader);
+ }
+
+ /**
+ * Override this method to specialize the type of
+ * {@link FSDataInputStream} returned.
+ */
+ protected FSDataInputStream openFile(FileSystem fs, Path file,
+ int bufferSize, long length) throws IOException {
+ return fs.open(file, bufferSize);
+ }
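+
+    // A hedged sketch of the specialization the javadoc above invites; the
+    // subclass name is hypothetical and the override simply delegates, where a
+    // real subclass might wrap or instrument the returned stream.
+    //
+    //   class AuditingReader extends SequenceFile.Reader {
+    //     AuditingReader(FileSystem fs, Path file, Configuration conf)
+    //         throws IOException {
+    //       super(fs, file, conf);
+    //     }
+    //     @Override
+    //     protected FSDataInputStream openFile(FileSystem fs, Path file,
+    //         int bufferSize, long length) throws IOException {
+    //       return fs.open(file, bufferSize);  // e.g. wrap with accounting here
+    //     }
+    //   }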
+
+ /**
+ * Initialize the {@link Reader}
+     * @param tempReader <code>true</code> if we are constructing a temporary
+     * reader (see {@link SequenceFile.Sorter#cloneFileAttributes}),
+ * and hence do not initialize every component;
+ * <code>false</code> otherwise.
+ * @throws IOException
+ */
+ private void init(boolean tempReader) throws IOException {
+ byte[] versionBlock = new byte[VERSION.length];
+ in.readFully(versionBlock);
+
+ if ((versionBlock[0] != VERSION[0]) ||
+ (versionBlock[1] != VERSION[1]) ||
+ (versionBlock[2] != VERSION[2]))
+ throw new IOException(file + " not a SequenceFile");
+
+ // Set 'version'
+ version = versionBlock[3];
+ if (version > VERSION[3])
+ throw new VersionMismatchException(VERSION[3], version);
+
+ if (version < BLOCK_COMPRESS_VERSION) {
+ UTF8 className = new UTF8();
+
+ className.readFields(in);
+ keyClassName = className.toString(); // key class name
+
+ className.readFields(in);
+ valClassName = className.toString(); // val class name
+ } else {
+ keyClassName = Text.readString(in);
+ valClassName = Text.readString(in);
+ }
+
+ if (version > 2) { // if version > 2
+ this.decompress = in.readBoolean(); // is compressed?
+ } else {
+ decompress = false;
+ }
+
+ if (version >= BLOCK_COMPRESS_VERSION) { // if version >= 4
+ this.blockCompressed = in.readBoolean(); // is block-compressed?
+ } else {
+ blockCompressed = false;
+ }
+
+ // if version >= 5
+ // setup the compression codec
+ if (decompress) {
+ if (version >= CUSTOM_COMPRESS_VERSION) {
+ String codecClassname = Text.readString(in);
+ try {
+ Class<? extends CompressionCodec> codecClass
+ = conf.getClassByName(codecClassname).asSubclass(CompressionCodec.class);
+ this.codec = (CompressionCodec) ReflectionUtils.newInstance(codecClass, conf);
+ } catch (ClassNotFoundException cnfe) {
+ throw new IllegalArgumentException("Unknown codec: " +
+ codecClassname, cnfe);
+ }
+ } else {
+ codec = new DefaultCodec();
+ ((Configurable)codec).setConf(conf);
+ }
+ }
+
+ this.metadata = new Metadata();
+ if (version >= VERSION_WITH_METADATA) { // if version >= 6
+ this.metadata.readFields(in);
+ }
+
+ if (version > 1) { // if version > 1
+ in.readFully(sync); // read sync bytes
+ }
+
+      // Initialize... *not* if we are constructing a temporary Reader
+ if (!tempReader) {
+ valBuffer = new DataInputBuffer();
+ if (decompress) {
+ valDecompressor = CodecPool.getDecompressor(codec);
+ valInFilter = codec.createInputStream(valBuffer, valDecompressor);
+ valIn = new DataInputStream(valInFilter);
+ } else {
+ valIn = valBuffer;
+ }
+
+ if (blockCompressed) {
+ keyLenBuffer = new DataInputBuffer();
+ keyBuffer = new DataInputBuffer();
+ valLenBuffer = new DataInputBuffer();
+
+ keyLenDecompressor = CodecPool.getDecompressor(codec);
+ keyLenInFilter = codec.createInputStream(keyLenBuffer,
+ keyLenDecompressor);
+ keyLenIn = new DataInputStream(keyLenInFilter);
+
+ keyDecompressor = CodecPool.getDecompressor(codec);
+ keyInFilter = codec.createInputStream(keyBuffer, keyDecompressor);
+ keyIn = new DataInputStream(keyInFilter);
+
+ valLenDecompressor = CodecPool.getDecompressor(codec);
+ valLenInFilter = codec.createInputStream(valLenBuffer,
+ valLenDecompressor);
+ valLenIn = new DataInputStream(valLenInFilter);
+ }
+
+ SerializationFactory serializationFactory =
+ new SerializationFactory(conf);
+ this.keyDeserializer =
+ getDeserializer(serializationFactory, getKeyClass());
+ if (!blockCompressed) {
+ this.keyDeserializer.open(valBuffer);
+ } else {
+ this.keyDeserializer.open(keyIn);
+ }
+ this.valDeserializer =
+ getDeserializer(serializationFactory, getValueClass());
+ this.valDeserializer.open(valIn);
+ }
+ }
+
+ @SuppressWarnings("unchecked")
+ private Deserializer getDeserializer(SerializationFactory sf, Class c) {
+ return sf.getDeserializer(c);
+ }
+
+ /** Close the file.
+ * @throws IOException
+ */
+ public synchronized void close() throws IOException {
+ // Return the decompressors to the pool
+ CodecPool.returnDecompressor(keyLenDecompressor);
+ CodecPool.returnDecompressor(keyDecompressor);
+ CodecPool.returnDecompressor(valLenDecompressor);
+ CodecPool.returnDecompressor(valDecompressor);
+ keyLenDecompressor = keyDecompressor = null;
+ valLenDecompressor = valDecompressor = null;
+
+ if (keyDeserializer != null) {
+ keyDeserializer.close();
+ }
+ if (valDeserializer != null) {
+ valDeserializer.close();
+ }
+
+ // Close the input-stream
+ in.close();
+ }
+
+ /** Returns the name of the key class.
+ * @return String
+ */
+ public String getKeyClassName() {
+ return keyClassName;
+ }
+
+ /** Returns the class of keys in this file.
+ * @return Class
+ */
+ public synchronized Class<?> getKeyClass() {
+ if (null == keyClass) {
+ try {
+ keyClass = WritableName.getClass(getKeyClassName(), conf);
+ } catch (IOException e) {
+ throw new RuntimeException(e);
+ }
+ }
+ return keyClass;
+ }
+
+ /** Returns the name of the value class.
+ * @return String
+ */
+ public String getValueClassName() {
+ return valClassName;
+ }
+
+ /** Returns the class of values in this file.
+ * @return Class
+ */
+ public synchronized Class<?> getValueClass() {
+ if (null == valClass) {
+ try {
+ valClass = WritableName.getClass(getValueClassName(), conf);
+ } catch (IOException e) {
+ throw new RuntimeException(e);
+ }
+ }
+ return valClass;
+ }
+
+ /** @return true if values are compressed. */
+ public boolean isCompressed() { return decompress; }
+
+ /** @return true if records are block-compressed. */
+ public boolean isBlockCompressed() { return blockCompressed; }
+
+ /** @return the compression codec of data in this file. */
+ public CompressionCodec getCompressionCodec() { return codec; }
+
+ /** @return the metadata object of the file */
+ public Metadata getMetadata() {
+ return this.metadata;
+ }
+
+ /** Returns the configuration used for this file. */
+ Configuration getConf() { return conf; }
+
+ /** Read a compressed buffer */
+ private synchronized void readBuffer(DataInputBuffer buffer,
+ CompressionInputStream filter) throws IOException {
+ // Read data into a temporary buffer
+ DataOutputBuffer dataBuffer = new DataOutputBuffer();
+
+ try {
+ int dataBufferLength = WritableUtils.readVInt(in);
+ dataBuffer.write(in, dataBufferLength);
+
+ // Set up 'buffer' connected to the input-stream
+ buffer.reset(dataBuffer.getData(), 0, dataBuffer.getLength());
+ } finally {
+ dataBuffer.close();
+ }
+
+ // Reset the codec
+ filter.resetState();
+ }
+
+ /** Read the next 'compressed' block */
+ private synchronized void readBlock() throws IOException {
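+      // On-disk layout of a record block: a sync marker (escape int + hash),
+      // then a vint record count, then four length-prefixed compressed
+      // buffers in order: key lengths, keys, value lengths, values. The two
+      // value buffers may be skipped here and read later when
+      // 'lazy decompression' is enabled.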
+ // Check if we need to throw away a whole block of
+ // 'values' due to 'lazy decompression'
+ if (lazyDecompress && !valuesDecompressed) {
+ in.seek(WritableUtils.readVInt(in)+in.getPos());
+ in.seek(WritableUtils.readVInt(in)+in.getPos());
+ }
+
+ // Reset internal states
+ noBufferedKeys = 0; noBufferedValues = 0; noBufferedRecords = 0;
+ valuesDecompressed = false;
+
+ //Process sync
+ if (sync != null) {
+ in.readInt();
+ in.readFully(syncCheck); // read syncCheck
+ if (!Arrays.equals(sync, syncCheck)) // check it
+ throw new IOException("File is corrupt!");
+ }
+ syncSeen = true;
+
+ // Read number of records in this block
+ noBufferedRecords = WritableUtils.readVInt(in);
+
+ // Read key lengths and keys
+ readBuffer(keyLenBuffer, keyLenInFilter);
+ readBuffer(keyBuffer, keyInFilter);
+ noBufferedKeys = noBufferedRecords;
+
+ // Read value lengths and values
+ if (!lazyDecompress) {
+ readBuffer(valLenBuffer, valLenInFilter);
+ readBuffer(valBuffer, valInFilter);
+ noBufferedValues = noBufferedRecords;
+ valuesDecompressed = true;
+ }
+ }
+
+ /**
+ * Position valLenIn/valIn to the 'value'
+ * corresponding to the 'current' key
+ */
+ private synchronized void seekToCurrentValue() throws IOException {
+ if (!blockCompressed) {
+ if (decompress) {
+ valInFilter.resetState();
+ }
+ valBuffer.reset();
+ } else {
+ // Check if this is the first value in the 'block' to be read
+ if (lazyDecompress && !valuesDecompressed) {
+ // Read the value lengths and values
+ readBuffer(valLenBuffer, valLenInFilter);
+ readBuffer(valBuffer, valInFilter);
+ noBufferedValues = noBufferedRecords;
+ valuesDecompressed = true;
+ }
+
+ // Calculate the no. of bytes to skip
+ // Note: 'current' key has already been read!
+ int skipValBytes = 0;
+ int currentKey = noBufferedKeys + 1;
+ for (int i=noBufferedValues; i > currentKey; --i) {
+ skipValBytes += WritableUtils.readVInt(valLenIn);
+ --noBufferedValues;
+ }
+
+ // Skip to the 'val' corresponding to 'current' key
+ if (skipValBytes > 0) {
+ if (valIn.skipBytes(skipValBytes) != skipValBytes) {
+ throw new IOException("Failed to seek to " + currentKey +
+ "(th) value!");
+ }
+ }
+ }
+ }
+
+ /**
+ * Get the 'value' corresponding to the last read 'key'.
+ * @param val : The 'value' to be read.
+ * @throws IOException
+ */
+ public synchronized void getCurrentValue(Writable val)
+ throws IOException {
+ if (val instanceof Configurable) {
+ ((Configurable) val).setConf(this.conf);
+ }
+
+ // Position stream to 'current' value
+ seekToCurrentValue();
+
+ if (!blockCompressed) {
+ val.readFields(valIn);
+
+ if (valIn.read() > 0) {
+ LOG.info("available bytes: " + valIn.available());
+ throw new IOException(val+" read "+(valBuffer.getPosition()-keyLength)
+ + " bytes, should read " +
+ (valBuffer.getLength()-keyLength));
+ }
+ } else {
+ // Get the value
+ int valLength = WritableUtils.readVInt(valLenIn);
+ val.readFields(valIn);
+
+ // Read another compressed 'value'
+ --noBufferedValues;
+
+ // Sanity check
+ if (valLength < 0) {
+ LOG.debug(val + " is a zero-length value");
+ }
+ }
+
+ }
+
+ /**
+ * Get the 'value' corresponding to the last read 'key'.
+ * @param val : The 'value' to be read.
+ * @throws IOException
+ */
+ public synchronized Object getCurrentValue(Object val)
+ throws IOException {
+ if (val instanceof Configurable) {
+ ((Configurable) val).setConf(this.conf);
+ }
+
+ // Position stream to 'current' value
+ seekToCurrentValue();
+
+ if (!blockCompressed) {
+ val = deserializeValue(val);
+
+ if (valIn.read() > 0) {
+ LOG.info("available bytes: " + valIn.available());
+ throw new IOException(val+" read "+(valBuffer.getPosition()-keyLength)
+ + " bytes, should read " +
+ (valBuffer.getLength()-keyLength));
+ }
+ } else {
+ // Get the value
+ int valLength = WritableUtils.readVInt(valLenIn);
+ val = deserializeValue(val);
+
+ // Read another compressed 'value'
+ --noBufferedValues;
+
+ // Sanity check
+ if (valLength < 0) {
+ LOG.debug(val + " is a zero-length value");
+ }
+ }
+ return val;
+
+ }
+
+ @SuppressWarnings("unchecked")
+ private Object deserializeValue(Object val) throws IOException {
+ return valDeserializer.deserialize(val);
+ }
+
+ /** Read the next key in the file into <code>key</code>, skipping its
+ * value. True if another entry exists, and false at end of file. */
+ public synchronized boolean next(Writable key) throws IOException {
+ if (key.getClass() != getKeyClass())
+ throw new IOException("wrong key class: "+key.getClass().getName()
+ +" is not "+keyClass);
+
+ if (!blockCompressed) {
+ outBuf.reset();
+
+ keyLength = next(outBuf);
+ if (keyLength < 0)
+ return false;
+
+ valBuffer.reset(outBuf.getData(), outBuf.getLength());
+
+ key.readFields(valBuffer);
+ valBuffer.mark(0);
+ if (valBuffer.getPosition() != keyLength)
+ throw new IOException(key + " read " + valBuffer.getPosition()
+ + " bytes, should read " + keyLength);
+ } else {
+ //Reset syncSeen
+ syncSeen = false;
+
+ if (noBufferedKeys == 0) {
+ try {
+ readBlock();
+ } catch (EOFException eof) {
+ return false;
+ }
+ }
+
+ int keyLength = WritableUtils.readVInt(keyLenIn);
+
+ // Sanity check
+ if (keyLength < 0) {
+ return false;
+ }
+
+ //Read another compressed 'key'
+ key.readFields(keyIn);
+ --noBufferedKeys;
+ }
+
+ return true;
+ }
+
+ /** Read the next key/value pair in the file into <code>key</code> and
+ * <code>val</code>. Returns true if such a pair exists and false when at
+ * end of file */
+ public synchronized boolean next(Writable key, Writable val)
+ throws IOException {
+ if (val.getClass() != getValueClass())
+ throw new IOException("wrong value class: "+val+" is not "+valClass);
+
+ boolean more = next(key);
+
+ if (more) {
+ getCurrentValue(val);
+ }
+
+ return more;
+ }
+
+ /**
+ * Read and return the next record length, potentially skipping over
+ * a sync block.
+ * @return the length of the next record or -1 if there is no next record
+ * @throws IOException
+ */
+ private synchronized int readRecordLength() throws IOException {
+ if (in.getPos() >= end) {
+ return -1;
+ }
+ int length = in.readInt();
+ if (version > 1 && sync != null &&
+ length == SYNC_ESCAPE) { // process a sync entry
+ in.readFully(syncCheck); // read syncCheck
+ if (!Arrays.equals(sync, syncCheck)) // check it
+ throw new IOException("File is corrupt!");
+ syncSeen = true;
+ if (in.getPos() >= end) {
+ return -1;
+ }
+ length = in.readInt(); // re-read length
+ } else {
+ syncSeen = false;
+ }
+
+ return length;
+ }
+
+ /** Read the next key/value pair in the file into <code>buffer</code>.
+ * Returns the length of the key read, or -1 if at end of file. The length
+ * of the value may be computed by calling buffer.getLength() before and
+     * after calls to this method.
+     * @deprecated Call {@link #nextRaw(DataOutputBuffer,SequenceFile.ValueBytes)}.
+     */
+ public synchronized int next(DataOutputBuffer buffer) throws IOException {
+ // Unsupported for block-compressed sequence files
+ if (blockCompressed) {
+ throw new IOException("Unsupported call for block-compressed" +
+            " SequenceFiles - use SequenceFile.Reader.nextRaw(DataOutputBuffer, ValueBytes)");
+ }
+ try {
+ int length = readRecordLength();
+ if (length == -1) {
+ return -1;
+ }
+ int keyLength = in.readInt();
+ buffer.write(in, length);
+ return keyLength;
+ } catch (ChecksumException e) { // checksum failure
+ handleChecksumException(e);
+ return next(buffer);
+ }
+ }
+
+ public ValueBytes createValueBytes() {
+ ValueBytes val = null;
+ if (!decompress || blockCompressed) {
+ val = new UncompressedBytes();
+ } else {
+ val = new CompressedBytes(codec);
+ }
+ return val;
+ }
+
+ /**
+ * Read 'raw' records.
+ * @param key - The buffer into which the key is read
+ * @param val - The 'raw' value
+ * @return Returns the total record length or -1 for end of file
+ * @throws IOException
+ */
+ public synchronized int nextRaw(DataOutputBuffer key, ValueBytes val)
+ throws IOException {
+ if (!blockCompressed) {
+ int length = readRecordLength();
+ if (length == -1) {
+ return -1;
+ }
+ int keyLength = in.readInt();
+ int valLength = length - keyLength;
+ key.write(in, keyLength);
+ if (decompress) {
+ CompressedBytes value = (CompressedBytes)val;
+ value.reset(in, valLength);
+ } else {
+ UncompressedBytes value = (UncompressedBytes)val;
+ value.reset(in, valLength);
+ }
+
+ return length;
+ } else {
+ //Reset syncSeen
+ syncSeen = false;
+
+ // Read 'key'
+ if (noBufferedKeys == 0) {
+ if (in.getPos() >= end)
+ return -1;
+
+ try {
+ readBlock();
+ } catch (EOFException eof) {
+ return -1;
+ }
+ }
+ int keyLength = WritableUtils.readVInt(keyLenIn);
+ if (keyLength < 0) {
+ throw new IOException("zero length key found!");
+ }
+ key.write(keyIn, keyLength);
+ --noBufferedKeys;
+
+ // Read raw 'value'
+ seekToCurrentValue();
+ int valLength = WritableUtils.readVInt(valLenIn);
+ UncompressedBytes rawValue = (UncompressedBytes)val;
+ rawValue.reset(valIn, valLength);
+ --noBufferedValues;
+
+ return (keyLength+valLength);
+ }
+
+ }
+
+ /**
+ * Read 'raw' keys.
+ * @param key - The buffer into which the key is read
+ * @return Returns the key length or -1 for end of file
+ * @throws IOException
+ */
+ public int nextRawKey(DataOutputBuffer key)
+ throws IOException {
+ if (!blockCompressed) {
+ recordLength = readRecordLength();
+ if (recordLength == -1) {
+ return -1;
+ }
+ keyLength = in.readInt();
+ key.write(in, keyLength);
+ return keyLength;
+ } else {
+ //Reset syncSeen
+ syncSeen = false;
+
+ // Read 'key'
+ if (noBufferedKeys == 0) {
+ if (in.getPos() >= end)
+ return -1;
+
+ try {
+ readBlock();
+ } catch (EOFException eof) {
+ return -1;
+ }
+ }
+ int keyLength = WritableUtils.readVInt(keyLenIn);
+ if (keyLength < 0) {
+ throw new IOException("zero length key found!");
+ }
+ key.write(keyIn, keyLength);
+ --noBufferedKeys;
+
+ return keyLength;
+ }
+
+ }
+
+ /** Read the next key in the file, skipping its
+ * value. Return null at end of file. */
+ public synchronized Object next(Object key) throws IOException {
+ if (key != null && key.getClass() != getKeyClass()) {
+ throw new IOException("wrong key class: "+key.getClass().getName()
+ +" is not "+keyClass);
+ }
+
+ if (!blockCompressed) {
+ outBuf.reset();
+
+ keyLength = next(outBuf);
+ if (keyLength < 0)
+ return null;
+
+ valBuffer.reset(outBuf.getData(), outBuf.getLength());
+
+ key = deserializeKey(key);
+ valBuffer.mark(0);
+ if (valBuffer.getPosition() != keyLength)
+ throw new IOException(key + " read " + valBuffer.getPosition()
+ + " bytes, should read " + keyLength);
+ } else {
+ //Reset syncSeen
+ syncSeen = false;
+
+ if (noBufferedKeys == 0) {
+ try {
+ readBlock();
+ } catch (EOFException eof) {
+ return null;
+ }
+ }
+
+ int keyLength = WritableUtils.readVInt(keyLenIn);
+
+ // Sanity check
+ if (keyLength < 0) {
+ return null;
+ }
+
+ //Read another compressed 'key'
+ key = deserializeKey(key);
+ --noBufferedKeys;
+ }
+
+ return key;
+ }
+
+ @SuppressWarnings("unchecked")
+ private Object deserializeKey(Object key) throws IOException {
+ return keyDeserializer.deserialize(key);
+ }
+
+ /**
+ * Read 'raw' values.
+ * @param val - The 'raw' value
+ * @return Returns the value length
+ * @throws IOException
+ */
+ public synchronized int nextRawValue(ValueBytes val)
+ throws IOException {
+
+ // Position stream to current value
+ seekToCurrentValue();
+
+ if (!blockCompressed) {
+ int valLength = recordLength - keyLength;
+ if (decompress) {
+ CompressedBytes value = (CompressedBytes)val;
+ value.reset(in, valLength);
+ } else {
+ UncompressedBytes value = (UncompressedBytes)val;
+ value.reset(in, valLength);
+ }
+
+ return valLength;
+ } else {
+ int valLength = WritableUtils.readVInt(valLenIn);
+ UncompressedBytes rawValue = (UncompressedBytes)val;
+ rawValue.reset(valIn, valLength);
+ --noBufferedValues;
+ return valLength;
+ }
+
+ }
+
+ private void handleChecksumException(ChecksumException e)
+ throws IOException {
+ if (this.conf.getBoolean("io.skip.checksum.errors", false)) {
+ LOG.warn("Bad checksum at "+getPosition()+". Skipping entries.");
+ sync(getPosition()+this.conf.getInt("io.bytes.per.checksum", 512));
+ } else {
+ throw e;
+ }
+ }
+
+ /** Set the current byte position in the input file.
+ *
+ * <p>The position passed must be a position returned by {@link
+ * SequenceFile.Writer#getLength()} when writing this file. To seek to an arbitrary
+ * position, use {@link SequenceFile.Reader#sync(long)}.
+ */
+ public synchronized void seek(long position) throws IOException {
+ in.seek(position);
+ if (blockCompressed) { // trigger block read
+ noBufferedKeys = 0;
+ valuesDecompressed = true;
+ }
+ }
+
+ /** Seek to the next sync mark past a given position.*/
+ public synchronized void sync(long position) throws IOException {
+ if (position+SYNC_SIZE >= end) {
+ seek(end);
+ return;
+ }
+
+ try {
+ seek(position+4); // skip escape
+ in.readFully(syncCheck);
+ int syncLen = sync.length;
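+        // Scan the stream a byte at a time, treating syncCheck as a circular
+        // buffer of the last syncLen bytes read; once it matches the file's
+        // sync marker, back up so the reader is positioned just before it.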
+ for (int i = 0; in.getPos() < end; i++) {
+ int j = 0;
+ for (; j < syncLen; j++) {
+ if (sync[j] != syncCheck[(i+j)%syncLen])
+ break;
+ }
+ if (j == syncLen) {
+ in.seek(in.getPos() - SYNC_SIZE); // position before sync
+ return;
+ }
+ syncCheck[i%syncLen] = in.readByte();
+ }
+ } catch (ChecksumException e) { // checksum failure
+ handleChecksumException(e);
+ }
+ }
+
+ /** Returns true iff the previous call to next passed a sync mark.*/
+ public boolean syncSeen() { return syncSeen; }
+
+ /** Return the current byte position in the input file. */
+ public synchronized long getPosition() throws IOException {
+ return in.getPos();
+ }
+
+ /** Returns the name of the file. */
+ public String toString() {
+ return file.toString();
+ }
+
+ }
+
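As a quick illustration of the Reader surface defined above, here is a minimal, hypothetical sketch that dumps every key/value pair in a file. It assumes Writable keys and values and the three-argument Reader constructor used by the Sorter below; the import for this forked SequenceFile class is omitted because its final package location is not shown in this hunk.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.util.ReflectionUtils;

public class SequenceFileDumpSketch {
  public static void dump(Configuration conf, Path file) throws IOException {
    FileSystem fs = FileSystem.get(conf);
    SequenceFile.Reader reader = new SequenceFile.Reader(fs, file, conf);
    try {
      // Instantiate key/value holders from the classes recorded in the file header.
      Writable key = (Writable) ReflectionUtils.newInstance(reader.getKeyClass(), conf);
      Writable val = (Writable) ReflectionUtils.newInstance(reader.getValueClass(), conf);
      while (reader.next(key, val)) {   // returns false at end of file
        System.out.println(key + "\t" + val);
      }
    } finally {
      reader.close();
    }
  }
}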
+ /** Sorts key/value pairs in a sequence-format file.
+ *
+ * <p>For best performance, applications should make sure that the {@link
+ * Writable#readFields(DataInput)} implementation of their keys is
+ * very efficient. In particular, it should avoid allocating memory.
+ */
+ public static class Sorter {
+
+ private RawComparator comparator;
+
+ private MergeSort mergeSort; //the implementation of merge sort
+
+ private Path[] inFiles; // when merging or sorting
+
+ private Path outFile;
+
+ private int memory; // bytes
+ private int factor; // merged per pass
+
+ private FileSystem fs = null;
+
+ private Class keyClass;
+ private Class valClass;
+
+ private Configuration conf;
+
+ private Progressable progressable = null;
+
+ /** Sort and merge files containing the named classes. */
+ public Sorter(FileSystem fs, Class<? extends WritableComparable> keyClass,
+ Class valClass, Configuration conf) {
+ this(fs, WritableComparator.get(keyClass), keyClass, valClass, conf);
+ }
+
+ /** Sort and merge using an arbitrary {@link RawComparator}. */
+ public Sorter(FileSystem fs, RawComparator comparator, Class keyClass,
+ Class valClass, Configuration conf) {
+ this.fs = fs;
+ this.comparator = comparator;
+ this.keyClass = keyClass;
+ this.valClass = valClass;
+ this.memory = conf.getInt("io.sort.mb", 100) * 1024 * 1024;
+ this.factor = conf.getInt("io.sort.factor", 100);
+ this.conf = conf;
+ }
+
+ /** Set the number of streams to merge at once.*/
+ public void setFactor(int factor) { this.factor = factor; }
+
+ /** Get the number of streams to merge at once.*/
+ public int getFactor() { return factor; }
+
+ /** Set the total amount of buffer memory, in bytes.*/
+ public void setMemory(int memory) { this.memory = memory; }
+
+ /** Get the total amount of buffer memory, in bytes.*/
+ public int getMemory() { return memory; }
+
+ /** Set the progressable object in order to report progress. */
+ public void setProgressable(Progressable progressable) {
+ this.progressable = progressable;
+ }
+
+ /**
+ * Perform a file sort from a set of input files into an output file.
+ * @param inFiles the files to be sorted
+ * @param outFile the sorted output file
+ * @param deleteInput should the input files be deleted as they are read?
+ */
+ public void sort(Path[] inFiles, Path outFile,
+ boolean deleteInput) throws IOException {
+ if (fs.exists(outFile)) {
+ throw new IOException("already exists: " + outFile);
+ }
+
+ this.inFiles = inFiles;
+ this.outFile = outFile;
+
+ int segments = sortPass(deleteInput);
+ if (segments > 1) {
+ mergePass(outFile.getParent());
+ }
+ }
+
+ /**
+ * Perform a file sort from a set of input files and return an iterator.
+ * @param inFiles the files to be sorted
+ * @param tempDir the directory where temp files are created during sort
+ * @param deleteInput should the input files be deleted as they are read?
+ * @return iterator the RawKeyValueIterator
+ */
+ public RawKeyValueIterator sortAndIterate(Path[] inFiles, Path tempDir,
+ boolean deleteInput) throws IOException {
+ Path outFile = new Path(tempDir + Path.SEPARATOR + "all.2");
+ if (fs.exists(outFile)) {
+ throw new IOException("already exists: " + outFile);
+ }
+ this.inFiles = inFiles;
+ //outFile will basically be used as prefix for temp files in the cases
+ //where sort outputs multiple sorted segments. For the single segment
+ //case, the outputFile itself will contain the sorted data for that
+ //segment
+ this.outFile = outFile;
+
+ int segments = sortPass(deleteInput);
+ if (segments > 1)
+ return merge(outFile.suffix(".0"), outFile.suffix(".0.index"),
+ tempDir);
+ else if (segments == 1)
+ return merge(new Path[]{outFile}, true, tempDir);
+ else return null;
+ }
+
+ /**
+ * The backwards compatible interface to sort.
+ * @param inFile the input file to sort
+ * @param outFile the sorted output file
+ */
+ public void sort(Path inFile, Path outFile) throws IOException {
+ sort(new Path[]{inFile}, outFile, false);
+ }
+
+ private int sortPass(boolean deleteInput) throws IOException {
+ LOG.debug("running sort pass");
+ SortPass sortPass = new SortPass(); // make the SortPass
+ sortPass.setProgressable(progressable);
+ mergeSort = new MergeSort(sortPass.new SeqFileComparator());
+ try {
+ return sortPass.run(deleteInput); // run it
+ } finally {
+ sortPass.close(); // close it
+ }
+ }
+
+ private class SortPass {
+ private int memoryLimit = memory/4;
+ private int recordLimit = 1000000;
+
+ private DataOutputBuffer rawKeys = new DataOutputBuffer();
+ private byte[] rawBuffer;
+
+ private int[] keyOffsets = new int[1024];
+ private int[] pointers = new int[keyOffsets.length];
+ private int[] pointersCopy = new int[keyOffsets.length];
+ private int[] keyLengths = new int[keyOffsets.length];
+ private ValueBytes[] rawValues = new ValueBytes[keyOffsets.length];
+
+ private ArrayList segmentLengths = new ArrayList();
+
+ private Reader in = null;
+ private FSDataOutputStream out = null;
+ private FSDataOutputStream indexOut = null;
+ private Path outName;
+
+ private Progressable progressable = null;
+
+ public int run(boolean deleteInput) throws IOException {
+ int segments = 0;
+ int currentFile = 0;
+ boolean atEof = (currentFile >= inFiles.length);
+ boolean isCompressed = false;
+ boolean isBlockCompressed = false;
+ CompressionCodec codec = null;
+ segmentLengths.clear();
+ if (atEof) {
+ return 0;
+ }
+
+ // Initialize
+ in = new Reader(fs, inFiles[currentFile], conf);
+ isCompressed = in.isCompressed();
+ isBlockCompressed = in.isBlockCompressed();
+ codec = in.getCompressionCodec();
+
+ for (int i=0; i < rawValues.length; ++i) {
+ rawValues[i] = null;
+ }
+
+ while (!atEof) {
+ int count = 0;
+ int bytesProcessed = 0;
+ rawKeys.reset();
+ while (!atEof &&
+ bytesProcessed < memoryLimit && count < recordLimit) {
+
+ // Read a record into buffer
+ // Note: Attempt to re-use 'rawValue' as far as possible
+ int keyOffset = rawKeys.getLength();
+ ValueBytes rawValue =
+ (count == keyOffsets.length || rawValues[count] == null) ?
+ in.createValueBytes() :
+ rawValues[count];
+ int recordLength = in.nextRaw(rawKeys, rawValue);
+ if (recordLength == -1) {
+ in.close();
+ if (deleteInput) {
+ fs.delete(inFiles[currentFile], true);
+ }
+ currentFile += 1;
+ atEof = currentFile >= inFiles.length;
+ if (!atEof) {
+ in = new Reader(fs, inFiles[currentFile], conf);
+ } else {
+ in = null;
+ }
+ continue;
+ }
+
+ int keyLength = rawKeys.getLength() - keyOffset;
+
+ if (count == keyOffsets.length)
+ grow();
+
+ keyOffsets[count] = keyOffset; // update pointers
+ pointers[count] = count;
+ keyLengths[count] = keyLength;
+ rawValues[count] = rawValue;
+
+ bytesProcessed += recordLength;
+ count++;
+ }
+
+ // buffer is full -- sort & flush it
+ LOG.debug("flushing segment " + segments);
+ rawBuffer = rawKeys.getData();
+ sort(count);
+ // indicate we're making progress
+ if (progressable != null) {
+ progressable.progress();
+ }
+ flush(count, bytesProcessed, isCompressed, isBlockCompressed, codec,
+ segments==0 && atEof);
+ segments++;
+ }
+ return segments;
+ }
+
+ public void close() throws IOException {
+ if (in != null) {
+ in.close();
+ }
+ if (out != null) {
+ out.close();
+ }
+ if (indexOut != null) {
+ indexOut.close();
+ }
+ }
+
+ private void grow() {
+ int newLength = keyOffsets.length * 3 / 2;
+ keyOffsets = grow(keyOffsets, newLength);
+ pointers = grow(pointers, newLength);
+ pointersCopy = new int[newLength];
+ keyLengths = grow(keyLengths, newLength);
+ rawValues = grow(rawValues, newLength);
+ }
+
+ private int[] grow(int[] old, int newLength) {
+ int[] result = new int[newLength];
+ System.arraycopy(old, 0, result, 0, old.length);
+ return result;
+ }
+
+ private ValueBytes[] grow(ValueBytes[] old, int newLength) {
+ ValueBytes[] result = new ValueBytes[newLength];
+ System.arraycopy(old, 0, result, 0, old.length);
+ for (int i=old.length; i < newLength; ++i) {
+ result[i] = null;
+ }
+ return result;
+ }
+
+ private void flush(int count, int bytesProcessed, boolean isCompressed,
+ boolean isBlockCompressed, CompressionCodec codec, boolean done)
+ throws IOException {
+ if (out == null) {
+ outName = done ? outFile : outFile.suffix(".0");
+ out = fs.create(outName);
+ if (!done) {
+ indexOut = fs.create(outName.suffix(".index"));
+ }
+ }
+
+ long segmentStart = out.getPos();
+ Writer writer = createWriter(conf, out, keyClass, valClass,
+ isCompressed, isBlockCompressed, codec,
+ new Metadata());
+
+ if (!done) {
+ writer.sync = null; // disable sync on temp files
+ }
+
+ for (int i = 0; i < count; i++) { // write in sorted order
+ int p = pointers[i];
+ writer.appendRaw(rawBuffer, keyOffsets[p], keyLengths[p], rawValues[p]);
+ }
+ writer.close();
+
+ if (!done) {
+ // Save the segment length
+ WritableUtils.writeVLong(indexOut, segmentStart);
+ WritableUtils.writeVLong(indexOut, (out.getPos()-segmentStart));
+ indexOut.flush();
+ }
+ }
+
+ private void sort(int count) {
+ System.arraycopy(pointers, 0, pointersCopy, 0, count);
+ mergeSort.mergeSort(pointersCopy, pointers, 0, count);
+ }
+ class SeqFileComparator implements Comparator<IntWritable> {
+ public int compare(IntWritable I, IntWritable J) {
+ return comparator.compare(rawBuffer, keyOffsets[I.get()],
+ keyLengths[I.get()], rawBuffer,
+ keyOffsets[J.get()], keyLengths[J.get()]);
+ }
+ }
+
+ /** set the progressable object in order to report progress */
+ public void setProgressable(Progressable progressable)
+ {
+ this.progressable = progressable;
+ }
+
+ } // SequenceFile.Sorter.SortPass
+
+ /** The interface to iterate over raw keys/values of SequenceFiles. */
+ public static interface RawKeyValueIterator {
+ /** Gets the current raw key
+ * @return DataOutputBuffer
+ * @throws IOException
+ */
+ DataOutputBuffer getKey() throws IOException;
+ /** Gets the current raw value
+ * @return ValueBytes
+ * @throws IOException
+ */
+ ValueBytes getValue() throws IOException;
+ /** Sets up the current key and value (for getKey and getValue)
+ * @return true if there exists a key/value, false otherwise
+ * @throws IOException
+ */
+ boolean next() throws IOException;
+ /** closes the iterator so that the underlying streams can be closed
+ * @throws IOException
+ */
+ void close() throws IOException;
+ /** Gets the Progress object; this has a float (0.0 - 1.0)
+ * indicating the bytes processed by the iterator so far
+ */
+ Progress getProgress();
+ }
+
+ /**
+ * Merges the list of segments of type <code>SegmentDescriptor</code>
+ * @param segments the list of SegmentDescriptors
+ * @param tmpDir the directory to write temporary files into
+ * @return RawKeyValueIterator
+ * @throws IOException
+ */
+ public RawKeyValueIterator merge(List <SegmentDescriptor> segments,
+ Path tmpDir)
+ throws IOException {
+ // pass in object to report progress, if present
+ MergeQueue mQueue = new MergeQueue(segments, tmpDir, progressable);
+ return mQueue.merge();
+ }
+
+ /**
+ * Merges the contents of files passed in Path[] using a max factor value
+ * that is already set
+ * @param inNames the array of path names
+ * @param deleteInputs true if the input files should be deleted when
+ * unnecessary
+ * @param tmpDir the directory to write temporary files into
+     * @return RawKeyValueIterator
+ * @throws IOException
+ */
+ public RawKeyValueIterator merge(Path [] inNames, boolean deleteInputs,
+ Path tmpDir)
+ throws IOException {
+ return merge(inNames, deleteInputs,
+ (inNames.length < factor) ? inNames.length : factor,
+ tmpDir);
+ }
+
+ /**
+ * Merges the contents of files passed in Path[]
+ * @param inNames the array of path names
+ * @param deleteInputs true if the input files should be deleted when
+ * unnecessary
+ * @param factor the factor that will be used as the maximum merge fan-in
+ * @param tmpDir the directory to write temporary files into
+     * @return RawKeyValueIterator
+ * @throws IOException
+ */
+ public RawKeyValueIterator merge(Path [] inNames, boolean deleteInputs,
+ int factor, Path tmpDir)
+ throws IOException {
+ //get the segments from inNames
+ ArrayList <SegmentDescriptor> a = new ArrayList <SegmentDescriptor>();
+ for (int i = 0; i < inNames.length; i++) {
+ SegmentDescriptor s = new SegmentDescriptor(0,
+ fs.getLength(inNames[i]), inNames[i]);
+ s.preserveInput(!deleteInputs);
+ s.doSync();
+ a.add(s);
+ }
+ this.factor = factor;
+ MergeQueue mQueue = new MergeQueue(a, tmpDir, progressable);
+ return mQueue.merge();
+ }
+
+ /**
+ * Merges the contents of files passed in Path[]
+ * @param inNames the array of path names
+ * @param tempDir the directory for creating temp files during merge
+ * @param deleteInputs true if the input files should be deleted when
+ * unnecessary
+     * @return RawKeyValueIterator
+ * @throws IOException
+ */
+ public RawKeyValueIterator merge(Path [] inNames, Path tempDir,
+ boolean deleteInputs)
+ throws IOException {
+ //outFile will basically be used as prefix for temp files for the
+ //intermediate merge outputs
+ this.outFile = new Path(tempDir + Path.SEPARATOR + "merged");
+ //get the segments from inNames
+ ArrayList <SegmentDescriptor> a = new ArrayList <SegmentDescriptor>();
+ for (int i = 0; i < inNames.length; i++) {
+ SegmentDescriptor s = new SegmentDescriptor(0,
+ fs.getLength(inNames[i]), inNames[i]);
+ s.preserveInput(!deleteInputs);
+ s.doSync();
+ a.add(s);
+ }
+ factor = (inNames.length < factor) ? inNames.length : factor;
+ // pass in object to report progress, if present
+ MergeQueue mQueue = new MergeQueue(a, tempDir, progressable);
+ return mQueue.merge();
+ }
+
+ /**
+     * Clones the attributes (like compression) of the input file and creates a
+ * corresponding Writer
+ * @param inputFile the path of the input file whose attributes should be
+ * cloned
+ * @param outputFile the path of the output file
+ * @param prog the Progressable to report status during the file write
+ * @return Writer
+ * @throws IOException
+ */
+ public Writer cloneFileAttributes(Path inputFile, Path outputFile,
+ Progressable prog)
+ throws IOException {
+ FileSystem srcFileSys = inputFile.getFileSystem(conf);
+ Reader reader = new Reader(srcFileSys, inputFile, 4096, conf, true);
+ boolean compress = reader.isCompressed();
+ boolean blockCompress = reader.isBlockCompressed();
+ CompressionCodec codec = reader.getCompressionCodec();
+ reader.close();
+
+ Writer writer = createWriter(outputFile.getFileSystem(conf), conf,
+ outputFile, keyClass, valClass, compress,
+ blockCompress, codec, prog,
+ new Metadata());
+ return writer;
+ }
+
+ /**
+ * Writes records from RawKeyValueIterator into a file represented by the
+ * passed writer
+ * @param records the RawKeyValueIterator
+ * @param writer the Writer created earlier
+ * @throws IOException
+ */
+ public void writeFile(RawKeyValueIterator records, Writer writer)
+ throws IOException {
+ while(records.next()) {
+ writer.appendRaw(records.getKey().getData(), 0,
+ records.getKey().getLength(), records.getValue());
+ }
+ writer.sync();
+ }
+
+ /** Merge the provided files.
+ * @param inFiles the array of input path names
+ * @param outFile the final output file
+ * @throws IOException
+ */
+ public void merge(Path[] inFiles, Path outFile) throws IOException {
+ if (fs.exists(outFile)) {
+ throw new IOException("already exists: " + outFile);
+ }
+ RawKeyValueIterator r = merge(inFiles, false, outFile.getParent());
+ Writer writer = cloneFileAttributes(inFiles[0], outFile, null);
+
+ writeFile(r, writer);
+
+ writer.close();
+ }
+
+ /** sort calls this to generate the final merged output */
+ private int mergePass(Path tmpDir) throws IOException {
+ LOG.debug("running merge pass");
+ Writer writer = cloneFileAttributes(
+ outFile.suffix(".0"), outFile, null);
+ RawKeyValueIterator r = merge(outFile.suffix(".0"),
+ outFile.suffix(".0.index"), tmpDir);
+ writeFile(r, writer);
+
+ writer.close();
+ return 0;
+ }
+
+ /** Used by mergePass to merge the output of the sort
+ * @param inName the name of the input file containing sorted segments
+ * @param indexIn the offsets of the sorted segments
+ * @param tmpDir the relative directory to store intermediate results in
+ * @return RawKeyValueIterator
+ * @throws IOException
+ */
+ private RawKeyValueIterator merge(Path inName, Path indexIn, Path tmpDir)
+ throws IOException {
+ //get the segments from indexIn
+ //we create a SegmentContainer so that we can track segments belonging to
+ //inName and delete inName as soon as we see that we have looked at all
+ //the contained segments during the merge process & hence don't need
+ //them anymore
+ SegmentContainer container = new SegmentContainer(inName, indexIn);
+ MergeQueue mQueue = new MergeQueue(container.getSegmentList(), tmpDir, progressable);
+ return mQueue.merge();
+ }
+
+ /** This class implements the core of the merge logic */
+ private class MergeQueue extends PriorityQueue
+ implements RawKeyValueIterator {
+ private boolean compress;
+ private boolean blockCompress;
+ private DataOutputBuffer rawKey = new DataOutputBuffer();
+ private ValueBytes rawValue;
+ private long totalBytesProcessed;
+ private float progPerByte;
+ private Progress mergeProgress = new Progress();
+ private Path tmpDir;
+ private Progressable progress = null; //handle to the progress reporting object
+ private SegmentDescriptor minSegment;
+
+ //a TreeMap used to store the segments sorted by size (segment offset and
+ //segment path name is used to break ties between segments of same sizes)
+ private Map<SegmentDescriptor, Void> sortedSegmentSizes =
+ new TreeMap<SegmentDescriptor, Void>();
+
+ @SuppressWarnings("unchecked")
+ public void put(SegmentDescriptor stream) throws IOException {
+ if (size() == 0) {
+ compress = stream.in.isCompressed();
+ blockCompress = stream.in.isBlockCompressed();
+ } else if (compress != stream.in.isCompressed() ||
+ blockCompress != stream.in.isBlockCompressed()) {
+ throw new IOException("All merged files must be compressed or not.");
+ }
+ super.put(stream);
+ }
+
+ /**
+ * A queue of file segments to merge
+ * @param segments the file segments to merge
+ * @param tmpDir a relative local directory to save intermediate files in
+ * @param progress the reference to the Progressable object
+ */
+ public MergeQueue(List <SegmentDescriptor> segments,
+ Path tmpDir, Progressable progress) {
+ int size = segments.size();
+ for (int i = 0; i < size; i++) {
+ sortedSegmentSizes.put(segments.get(i), null);
+ }
+ this.tmpDir = tmpDir;
+ this.progress = progress;
+ }
+ protected boolean lessThan(Object a, Object b) {
+ // indicate we're making progress
+ if (progress != null) {
+ progress.progress();
+ }
+ SegmentDescriptor msa = (SegmentDescriptor)a;
+ SegmentDescriptor msb = (SegmentDescriptor)b;
+ return comparator.compare(msa.getKey().getData(), 0,
+ msa.getKey().getLength(), msb.getKey().getData(), 0,
+ msb.getKey().getLength()) < 0;
+ }
+ public void close() throws IOException {
+ SegmentDescriptor ms; // close inputs
+ while ((ms = (SegmentDescriptor)pop()) != null) {
+ ms.cleanup();
+ }
+ minSegment = null;
+ }
+ public DataOutputBuffer getKey() throws IOException {
+ return rawKey;
+ }
+ public ValueBytes getValue() throws IOException {
+ return rawValue;
+ }
+ public boolean next() throws IOException {
+ if (size() == 0)
+ return false;
+ if (minSegment != null) {
+ //minSegment is non-null for all invocations of next except the first
+ //one. For the first invocation, the priority queue is ready for use
+ //but for the subsequent invocations, first adjust the queue
+ adjustPriorityQueue(minSegment);
+ if (size() == 0) {
+ minSegment = null;
+ return false;
+ }
+ }
+ minSegment = (SegmentDescriptor)top();
+ long startPos = minSegment.in.getPosition(); // Current position in stream
+ //save the raw key reference
+ rawKey = minSegment.getKey();
+ //load the raw value. Re-use the existing rawValue buffer
+ if (rawValue == null) {
+ rawValue = minSegment.in.createValueBytes();
+ }
+ minSegment.nextRawValue(rawValue);
+ long endPos = minSegment.in.getPosition(); // End position after reading value
+ updateProgress(endPos - startPos);
+ return true;
+ }
+
+ public Progress getProgress() {
+ return mergeProgress;
+ }
+
+ private void adjustPriorityQueue(SegmentDescriptor ms) throws IOException{
+ long startPos = ms.in.getPosition(); // Current position in stream
+ boolean hasNext = ms.nextRawKey();
+ long endPos = ms.in.getPosition(); // End position after reading key
+ updateProgress(endPos - startPos);
+ if (hasNext) {
+ adjustTop();
+ } else {
+ pop();
+ ms.cleanup();
+ }
+ }
+
+ private void updateProgress(long bytesProcessed) {
+ totalBytesProcessed += bytesProcessed;
+ if (progPerByte > 0) {
+ mergeProgress.set(totalBytesProcessed * progPerByte);
+ }
+ }
+
+ /** This is the single level merge that is called multiple times
+ * depending on the factor size and the number of segments
+ * @return RawKeyValueIterator
+ * @throws IOException
+ */
+ public RawKeyValueIterator merge() throws IOException {
+ //create the MergeStreams from the sorted map created in the constructor
+ //and dump the final output to a file
+ int numSegments = sortedSegmentSizes.size();
+ int origFactor = factor;
+ int passNo = 1;
+ LocalDirAllocator lDirAlloc = new LocalDirAllocator("mapred.local.dir");
+ do {
+ //get the factor for this pass of merge
+ factor = getPassFactor(passNo, numSegments);
+ List<SegmentDescriptor> segmentsToMerge =
+ new ArrayList<SegmentDescriptor>();
+ int segmentsConsidered = 0;
+ int numSegmentsToConsider = factor;
+ while (true) {
+ //extract the smallest 'factor' number of segment pointers from the
+ //TreeMap. Call cleanup on the empty segments (no key/value data)
+ SegmentDescriptor[] mStream =
+ getSegmentDescriptors(numSegmentsToConsider);
+ for (int i = 0; i < mStream.length; i++) {
+ if (mStream[i].nextRawKey()) {
+ segmentsToMerge.add(mStream[i]);
+ segmentsConsidered++;
+ // Count the fact that we read some bytes in calling nextRawKey()
+ updateProgress(mStream[i].in.getPosition());
+ }
+ else {
+ mStream[i].cleanup();
+ numSegments--; //we ignore this segment for the merge
+ }
+ }
+ //if we have the desired number of segments
+ //or looked at all available segments, we break
+ if (segmentsConsidered == factor ||
+ sortedSegmentSizes.size() == 0) {
+ break;
+ }
+
+ numSegmentsToConsider = factor - segmentsConsidered;
+ }
+ //feed the streams to the priority queue
+ initialize(segmentsToMerge.size()); clear();
+ for (int i = 0; i < segmentsToMerge.size(); i++) {
+ put(segmentsToMerge.get(i));
+ }
+ //if we have lesser number of segments remaining, then just return the
+ //iterator, else do another single level merge
+ if (numSegments <= factor) {
+ //calculate the length of the remaining segments. Required for
+ //calculating the merge progress
+ long totalBytes = 0;
+ for (int i = 0; i < segmentsToMerge.size(); i++) {
+ totalBytes += segmentsToMerge.get(i).segmentLength;
+ }
+ if (totalBytes != 0) //being paranoid
+ progPerByte = 1.0f / (float)totalBytes;
+ //reset factor to what it originally was
+ factor = origFactor;
+ return this;
+ } else {
+ //we want to spread the creation of temp files on multiple disks if
+ //available under the space constraints
+ long approxOutputSize = 0;
+ for (SegmentDescriptor s : segmentsToMerge) {
+ approxOutputSize += s.segmentLength +
+ ChecksumFileSystem.getApproxChkSumLength(
+ s.segmentLength);
+ }
+ Path tmpFilename =
+ new Path(tmpDir, "intermediate").suffix("." + passNo);
+
+ Path outputFile = lDirAlloc.getLocalPathForWrite(
+ tmpFilename.toString(),
+ approxOutputSize, conf);
+ LOG.debug("writing intermediate results to " + outputFile);
+ Writer writer = cloneFileAttributes(
+ fs.makeQualified(segmentsToMerge.get(0).segmentPathName),
+ fs.makeQualified(outputFile), null);
+ writer.sync = null; //disable sync for temp files
+ writeFile(this, writer);
+ writer.close();
+
+ //we finished one single level merge; now clean up the priority
+ //queue
+ this.close();
+
+ SegmentDescriptor tempSegment =
+ new SegmentDescriptor(0, fs.getLength(outputFile), outputFile);
+ //put the segment back in the TreeMap
+ sortedSegmentSizes.put(tempSegment, null);
+ numSegments = sortedSegmentSizes.size();
+ passNo++;
+ }
+ //we are worried about only the first pass merge factor. So reset the
+ //factor to what it originally was
+ factor = origFactor;
+ } while(true);
+ }
+
+ //Hadoop-591
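+      // Choose the first-pass merge factor so that every later pass,
+      // including the final one, merges a full 'factor' segments: e.g. with
+      // 15 segments and factor 10, the first pass merges (15-1)%(10-1)+1 = 6
+      // segments, leaving exactly 10 for one last full-factor merge.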
+ public int getPassFactor(int passNo, int numSegments) {
+ if (passNo > 1 || numSegments <= factor || factor == 1)
+ return factor;
+ int mod = (numSegments - 1) % (factor - 1);
+ if (mod == 0)
+ return factor;
+ return mod + 1;
+ }
+
+ /** Return (& remove) the requested number of segment descriptors from the
+ * sorted map.
+ */
+ public SegmentDescriptor[] getSegmentDescriptors(int numDescriptors) {
+ if (numDescriptors > sortedSegmentSizes.size())
+ numDescriptors = sortedSegmentSizes.size();
+ SegmentDescriptor[] SegmentDescriptors =
+ new SegmentDescriptor[numDescriptors];
+ Iterator iter = sortedSegmentSizes.keySet().iterator();
+ int i = 0;
+ while (i < numDescriptors) {
+ SegmentDescriptors[i++] = (SegmentDescriptor)iter.next();
+ iter.remove();
+ }
+ return SegmentDescriptors;
+ }
+ } // SequenceFile.Sorter.MergeQueue
+
+ /** This class defines a merge segment. This class can be subclassed to
+ * provide a customized cleanup method implementation. In this
+ * implementation, cleanup closes the file handle and deletes the file
+ */
+ public class SegmentDescriptor implements Comparable {
+
+ long segmentOffset; //the start of the segment in the file
+ long segmentLength; //the length of the segment
+ Path segmentPathName; //the path name of the file containing the segment
+ boolean ignoreSync = true; //set to true for temp files
+ private Reader in = null;
+ private DataOutputBuffer rawKey = null; //this will hold the current key
+ private boolean preserveInput = false; //delete input segment files?
+
+ /** Constructs a segment
+ * @param segmentOffset the offset of the segment in the file
+ * @param segmentLength the length of the segment
+ * @param segmentPathName the path name of the file containing the segment
+ */
+ public SegmentDescriptor (long segmentOffset, long segmentLength,
+ Path segmentPathName) {
+ this.segmentOffset = segmentOffset;
+ this.segmentLength = segmentLength;
+ this.segmentPathName = segmentPathName;
+ }
+
+ /** Do the sync checks */
+ public void doSync() {ignoreSync = false;}
+
+      /** Whether the input segment files should be preserved (i.e. not deleted) when no longer needed */
+ public void preserveInput(boolean preserve) {
+ preserveInput = preserve;
+ }
+
+ public boolean shouldPreserveInput() {
+ return preserveInput;
+ }
+
+ public int compareTo(Object o) {
+ SegmentDescriptor that = (SegmentDescriptor)o;
+ if (this.segmentLength != that.segmentLength) {
+ return (this.segmentLength < that.segmentLength ? -1 : 1);
+ }
+ if (this.segmentOffset != that.segmentOffset) {
+ return (this.segmentOffset < that.segmentOffset ? -1 : 1);
+ }
+ return (this.segmentPathName.toString()).
+ compareTo(that.segmentPathName.toString());
+ }
+
+ public boolean equals(Object o) {
+ if (!(o instanceof SegmentDescriptor)) {
+ return false;
+ }
+ SegmentDescriptor that = (SegmentDescriptor)o;
+ if (this.segmentLength == that.segmentLength &&
+ this.segmentOffset == that.segmentOffset &&
+ this.segmentPathName.toString().equals(
+ that.segmentPathName.toString())) {
+ return true;
+ }
+ return false;
+ }
+
+ public int hashCode() {
+ return 37 * 17 + (int) (segmentOffset^(segmentOffset>>>32));
+ }
+
+ /** Fills up the rawKey object with the key returned by the Reader
+ * @return true if there is a key returned; false, otherwise
+ * @throws IOException
+ */
+ public boolean nextRawKey() throws IOException {
+ if (in == null) {
+ int bufferSize = conf.getInt("io.file.buffer.size", 4096);
+ if (fs.getUri().getScheme().startsWith("ramfs")) {
+ bufferSize = conf.getInt("io.bytes.per.checksum", 512);
+ }
+ Reader reader = new Reader(fs, segmentPathName,
+ bufferSize, segmentOffset,
+ segmentLength, conf, false);
+
+ //sometimes we ignore syncs especially for temp merge files
+ if (ignoreSync) reader.sync = null;
+
+ if (reader.getKeyClass() != keyClass)
+ throw new IOException("wrong key class: " + reader.getKeyClass() +
+ " is not " + keyClass);
+ if (reader.getValueClass() != valClass)
+ throw new IOException("wrong value class: "+reader.getValueClass()+
+ " is not " + valClass);
+ this.in = reader;
+ rawKey = new DataOutputBuffer();
+ }
+ rawKey.reset();
+ int keyLength =
+ in.nextRawKey(rawKey);
+ return (keyLength >= 0);
+ }
+
+ /** Fills up the passed rawValue with the value corresponding to the key
+ * read earlier
+ * @param rawValue
+ * @return the length of the value
+ * @throws IOException
+ */
+ public int nextRawValue(ValueBytes rawValue) throws IOException {
+ int valLength = in.nextRawValue(rawValue);
+ return valLength;
+ }
+
+ /** Returns the stored rawKey */
+ public DataOutputBuffer getKey() {
+ return rawKey;
+ }
+
+ /** closes the underlying reader */
+ private void close() throws IOException {
+ this.in.close();
+ this.in = null;
+ }
+
+ /** The default cleanup. Subclasses can override this with a custom
+ * cleanup
+ */
+ public void cleanup() throws IOException {
+ close();
+ if (!preserveInput) {
+ fs.delete(segmentPathName, true);
+ }
+ }
+ } // SequenceFile.Sorter.SegmentDescriptor
+
+ /** This class provisions multiple segments contained within a single
+ * file
+ */
+ private class LinkedSegmentsDescriptor extends SegmentDescriptor {
+
+ SegmentContainer parentContainer = null;
+
+ /** Constructs a segment
+ * @param segmentOffset the offset of the segment in the file
+ * @param segmentLength the length of the segment
+ * @param segmentPathName the path name of the file containing the segment
+ * @param parent the parent SegmentContainer that holds the segment
+ */
+ public LinkedSegmentsDescriptor (long segmentOffset, long segmentLength,
+ Path segmentPathName, SegmentContainer parent) {
+ super(segmentOffset, segmentLength, segmentPathName);
+ this.parentContainer = parent;
+ }
+ /** The default cleanup. Subclasses can override this with a custom
+ * cleanup
+ */
+ public void cleanup() throws IOException {
+ super.close();
+ if (super.shouldPreserveInput()) return;
+ parentContainer.cleanup();
+ }
+ } //SequenceFile.Sorter.LinkedSegmentsDescriptor
+
+ /** The class that defines a container for segments to be merged. Primarily
+ * required to delete temp files as soon as all the contained segments
+ * have been looked at */
+ private class SegmentContainer {
+ private int numSegmentsCleanedUp = 0; //track the no. of segment cleanups
+ private int numSegmentsContained; //# of segments contained
+ private Path inName; //input file from where segments are created
+
+ //the list of segments read from the file
+ private ArrayList <SegmentDescriptor> segments =
+ new ArrayList <SegmentDescriptor>();
+ /** This constructor is there primarily to serve the sort routine that
+ * generates a single output file with an associated index file */
+ public SegmentContainer(Path inName, Path indexIn) throws IOException {
+ //get the segments from indexIn
+ FSDataInputStream fsIndexIn = fs.open(indexIn);
+ long end = fs.getLength(indexIn);
+ while (fsIndexIn.getPos() < end) {
+ long segmentOffset = WritableUtils.readVLong(fsIndexIn);
+ long segmentLength = WritableUtils.readVLong(fsIndexIn);
+ Path segmentName = inName;
+ segments.add(new LinkedSegmentsDescriptor(segmentOffset,
+ segmentLength, segmentName, this));
+ }
+ fsIndexIn.close();
+ fs.delete(indexIn, true);
+ numSegmentsContained = segments.size();
+ this.inName = inName;
+ }
+
+ public List <SegmentDescriptor> getSegmentList() {
+ return segments;
+ }
+ public void cleanup() throws IOException {
+ numSegmentsCleanedUp++;
+ if (numSegmentsCleanedUp == numSegmentsContained) {
+ fs.delete(inName, true);
+ }
+ }
+ } //SequenceFile.Sorter.SegmentContainer
+
+ } // SequenceFile.Sorter
+
+} // SequenceFile
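To make the Sorter usage concrete, here is a small, hypothetical driver; the paths are invented for illustration, Text keys/values are assumed, and the import for this forked SequenceFile class is again omitted. It relies on the io.sort.mb / io.sort.factor defaults read in the Sorter constructor above.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;

public class SorterSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Sort two Text/Text sequence files into one sorted output file.
    SequenceFile.Sorter sorter =
        new SequenceFile.Sorter(fs, Text.class, Text.class, conf);
    sorter.setFactor(50);            // merge at most 50 segments per pass
    Path[] inputs = { new Path("/tmp/in-0"), new Path("/tmp/in-1") };
    sorter.sort(inputs, new Path("/tmp/sorted"), false /* keep the inputs */);

    // Already-sorted files can be combined without a sort pass:
    sorter.merge(inputs, new Path("/tmp/merged"));
  }
}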
diff --git a/src/java/org/apache/hadoop/hbase/io/hfile/BlockCache.java b/src/java/org/apache/hadoop/hbase/io/hfile/BlockCache.java
new file mode 100644
index 0000000..b2ed866
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/io/hfile/BlockCache.java
@@ -0,0 +1,42 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.nio.ByteBuffer;
+
+/**
+ * Block cache interface.
+ * TODO: Add filename or hash of filename to block cache key.
+ */
+public interface BlockCache {
+ /**
+ * Add block to cache.
+ * @param blockName Zero-based file block number.
+ * @param buf The block contents wrapped in a ByteBuffer.
+ */
+ public void cacheBlock(String blockName, ByteBuffer buf);
+
+ /**
+ * Fetch block from cache.
+ * @param blockName Block number to fetch.
+ * @return Block or null if block is not in the cache.
+ */
+ public ByteBuffer getBlock(String blockName);
+}
\ No newline at end of file
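The interface is small enough that a toy implementation shows the whole contract. The sketch below is purely illustrative (the class name MapBlockCache is invented here): an unbounded synchronized map with no eviction, which a real block cache would of course need.

package org.apache.hadoop.hbase.io.hfile;

import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;

/** Illustrative only: an unbounded, synchronized BlockCache with no eviction. */
public class MapBlockCache implements BlockCache {
  private final Map<String, ByteBuffer> blocks = new HashMap<String, ByteBuffer>();

  public synchronized void cacheBlock(String blockName, ByteBuffer buf) {
    blocks.put(blockName, buf);
  }

  public synchronized ByteBuffer getBlock(String blockName) {
    // Per the interface javadoc, a miss is signalled by returning null.
    return blocks.get(blockName);
  }
}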
diff --git a/src/java/org/apache/hadoop/hbase/io/hfile/BoundedRangeFileInputStream.java b/src/java/org/apache/hadoop/hbase/io/hfile/BoundedRangeFileInputStream.java
new file mode 100644
index 0000000..ae7734a
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/io/hfile/BoundedRangeFileInputStream.java
@@ -0,0 +1,143 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.io.IOException;
+import java.io.InputStream;
+
+import org.apache.hadoop.fs.FSDataInputStream;
+
+/**
+ * BoundedRangeFileInputStream abstracts a contiguous region of a Hadoop
+ * FSDataInputStream as a regular input stream. One can create multiple
+ * BoundedRangeFileInputStream on top of the same FSDataInputStream and they
+ * would not interfere with each other.
+ * Copied from hadoop-3315 tfile.
+ */
+class BoundedRangeFileInputStream extends InputStream {
+
+ private FSDataInputStream in;
+ private long pos;
+ private long end;
+ private long mark;
+ private final byte[] oneByte = new byte[1];
+
+ /**
+ * Constructor
+ *
+ * @param in
+ * The FSDataInputStream we connect to.
+ * @param offset
+ * Beginning offset of the region.
+ * @param length
+ * Length of the region.
+ *
+   *          The actual length of the region may be smaller if (offset +
+   *          length) goes beyond the end of the FS input stream.
+ */
+ public BoundedRangeFileInputStream(FSDataInputStream in, long offset,
+ long length) {
+ if (offset < 0 || length < 0) {
+ throw new IndexOutOfBoundsException("Invalid offset/length: " + offset
+ + "/" + length);
+ }
+
+ this.in = in;
+ this.pos = offset;
+ this.end = offset + length;
+ this.mark = -1;
+ }
+
+ @Override
+ public int available() throws IOException {
+ int avail = in.available();
+ if (pos + avail > end) {
+ avail = (int) (end - pos);
+ }
+
+ return avail;
+ }
+
+ @Override
+ public int read() throws IOException {
+ int ret = read(oneByte);
+ if (ret == 1) return oneByte[0] & 0xff;
+ return -1;
+ }
+
+ @Override
+ public int read(byte[] b) throws IOException {
+ return read(b, 0, b.length);
+ }
+
+ @Override
+ public int read(byte[] b, int off, int len) throws IOException {
+ if ((off | len | (off + len) | (b.length - (off + len))) < 0) {
+ throw new IndexOutOfBoundsException();
+ }
+
+ int n = (int) Math.min(Integer.MAX_VALUE, Math.min(len, (end - pos)));
+ if (n == 0) return -1;
+ int ret = 0;
+ synchronized (in) {
+ in.seek(pos);
+ ret = in.read(b, off, n);
+ }
+ // / ret = in.read(pos, b, off, n);
+ if (ret < 0) {
+ end = pos;
+ return -1;
+ }
+ pos += ret;
+ return ret;
+ }
+
+ @Override
+ /*
+ * We may skip beyond the end of the file.
+ */
+ public long skip(long n) throws IOException {
+ long len = Math.min(n, end - pos);
+ pos += len;
+ return len;
+ }
+
+ @Override
+ public void mark(int readlimit) {
+ mark = pos;
+ }
+
+ @Override
+ public void reset() throws IOException {
+ if (mark < 0) throw new IOException("Resetting to invalid mark");
+ pos = mark;
+ }
+
+ @Override
+ public boolean markSupported() {
+ return true;
+ }
+
+ @Override
+ public void close() {
+ // Invalidate the state of the stream.
+ in = null;
+ pos = end;
+ mark = -1;
+ }
+}
\ No newline at end of file
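A short usage sketch (hypothetical paths, offsets and class name): because BoundedRangeFileInputStream is package-private, the caller below is assumed to live in the same org.apache.hadoop.hbase.io.hfile package. In practice the offset and length would come from a block index rather than being passed in directly.

package org.apache.hadoop.hbase.io.hfile;

import java.io.DataInputStream;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BoundedRangeSketch {
  /** Read exactly 'length' bytes starting at 'offset' from 'file'. */
  public static byte[] readRange(Configuration conf, Path file, long offset,
      int length) throws IOException {
    FileSystem fs = FileSystem.get(conf);
    FSDataInputStream fsin = fs.open(file);
    try {
      // The wrapper confines reads to [offset, offset + length) and seeks the
      // shared FSDataInputStream under a lock, so several wrappers can safely
      // share one underlying stream.
      DataInputStream in = new DataInputStream(
          new BoundedRangeFileInputStream(fsin, offset, length));
      byte[] block = new byte[length];
      in.readFully(block);
      return block;
    } finally {
      // BoundedRangeFileInputStream.close() only invalidates the wrapper, so
      // the caller still owns (and closes) the underlying stream.
      fsin.close();
    }
  }
}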
diff --git a/src/java/org/apache/hadoop/hbase/io/hfile/Compression.java b/src/java/org/apache/hadoop/hbase/io/hfile/Compression.java
new file mode 100644
index 0000000..e27261d
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/io/hfile/Compression.java
@@ -0,0 +1,277 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.io.BufferedInputStream;
+import java.io.BufferedOutputStream;
+import java.io.FilterOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.io.compress.CodecPool;
+import org.apache.hadoop.io.compress.CompressionCodec;
+import org.apache.hadoop.io.compress.CompressionInputStream;
+import org.apache.hadoop.io.compress.CompressionOutputStream;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.Decompressor;
+import org.apache.hadoop.io.compress.GzipCodec;
+import org.apache.hadoop.io.compress.DefaultCodec;
+import org.apache.hadoop.util.ReflectionUtils;
+
+/**
+ * Compression utilities: codec lookup plus pooled compressor/decompressor and
+ * compression/decompression stream helpers.
+ * Copied from hadoop-3315 tfile.
+ */
+public final class Compression {
+ static final Log LOG = LogFactory.getLog(Compression.class);
+
+ /**
+   * Prevent instantiation of this class.
+ */
+ private Compression() {
+ super();
+ }
+
+ static class FinishOnFlushCompressionStream extends FilterOutputStream {
+ public FinishOnFlushCompressionStream(CompressionOutputStream cout) {
+ super(cout);
+ }
+
+ @Override
+ public void write(byte b[], int off, int len) throws IOException {
+ out.write(b, off, len);
+ }
+
+ @Override
+ public void flush() throws IOException {
+ CompressionOutputStream cout = (CompressionOutputStream) out;
+ cout.finish();
+ cout.flush();
+ cout.resetState();
+ }
+ }
+
+ /**
+ * Compression algorithms.
+ */
+ public static enum Algorithm {
+ LZO("lzo") {
+ // Use base type to avoid compile-time dependencies.
+ private CompressionCodec lzoCodec;
+
+ @Override
+ CompressionCodec getCodec() {
+ if (lzoCodec == null) {
+ Configuration conf = new Configuration();
+ conf.setBoolean("hadoop.native.lib", true);
+ try {
+ Class<?> externalCodec =
+ ClassLoader.getSystemClassLoader().loadClass("com.hadoop.compression.lzo.LzoCodec");
+ lzoCodec = (CompressionCodec) ReflectionUtils.newInstance(externalCodec, conf);
+ } catch (ClassNotFoundException e) {
+ throw new RuntimeException(e);
+ }
+ }
+ return lzoCodec;
+ }
+ },
+ GZ("gz") {
+ private GzipCodec codec;
+
+ @Override
+ DefaultCodec getCodec() {
+ if (codec == null) {
+ Configuration conf = new Configuration();
+ conf.setBoolean("hadoop.native.lib", true);
+ codec = new GzipCodec();
+ codec.setConf(conf);
+ }
+
+ return codec;
+ }
+ },
+
+ NONE("none") {
+ @Override
+ DefaultCodec getCodec() {
+ return null;
+ }
+
+ @Override
+ public synchronized InputStream createDecompressionStream(
+ InputStream downStream, Decompressor decompressor,
+ int downStreamBufferSize) throws IOException {
+ if (downStreamBufferSize > 0) {
+ return new BufferedInputStream(downStream, downStreamBufferSize);
+ }
+ // else {
+ // Make sure we bypass FSInputChecker buffer.
+ // return new BufferedInputStream(downStream, 1024);
+ // }
+ // }
+ return downStream;
+ }
+
+ @Override
+ public synchronized OutputStream createCompressionStream(
+ OutputStream downStream, Compressor compressor,
+ int downStreamBufferSize) throws IOException {
+ if (downStreamBufferSize > 0) {
+ return new BufferedOutputStream(downStream, downStreamBufferSize);
+ }
+
+ return downStream;
+ }
+ };
+
+ private final String compressName;
+ // data input buffer size to absorb small reads from application.
+ private static final int DATA_IBUF_SIZE = 1 * 1024;
+ // data output buffer size to absorb small writes from application.
+ private static final int DATA_OBUF_SIZE = 4 * 1024;
+
+ Algorithm(String name) {
+ this.compressName = name;
+ }
+
+ abstract CompressionCodec getCodec();
+
+ public InputStream createDecompressionStream(
+ InputStream downStream, Decompressor decompressor,
+ int downStreamBufferSize) throws IOException {
+ CompressionCodec codec = getCodec();
+ // Set the internal buffer size to read from down stream.
+ if (downStreamBufferSize > 0) {
+ Configurable c = (Configurable) codec;
+ c.getConf().setInt("io.file.buffer.size", downStreamBufferSize);
+ }
+ CompressionInputStream cis =
+ codec.createInputStream(downStream, decompressor);
+ BufferedInputStream bis2 = new BufferedInputStream(cis, DATA_IBUF_SIZE);
+ return bis2;
+
+ }
+
+ public OutputStream createCompressionStream(
+ OutputStream downStream, Compressor compressor, int downStreamBufferSize)
+ throws IOException {
+ CompressionCodec codec = getCodec();
+ OutputStream bos1 = null;
+ if (downStreamBufferSize > 0) {
+ bos1 = new BufferedOutputStream(downStream, downStreamBufferSize);
+ }
+ else {
+ bos1 = downStream;
+ }
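+ // Size the codec's own stream buffer, then wrap the compressed stream so
+ // that flush() finishes the current block (see
+ // FinishOnFlushCompressionStream) and small writes are absorbed by a
+ // DATA_OBUF_SIZE-byte buffer.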
+ Configurable c = (Configurable) codec;
+ c.getConf().setInt("io.file.buffer.size", 32 * 1024);
+ CompressionOutputStream cos =
+ codec.createOutputStream(bos1, compressor);
+ BufferedOutputStream bos2 =
+ new BufferedOutputStream(new FinishOnFlushCompressionStream(cos),
+ DATA_OBUF_SIZE);
+ return bos2;
+ }
+
+ public Compressor getCompressor() {
+ CompressionCodec codec = getCodec();
+ if (codec != null) {
+ Compressor compressor = CodecPool.getCompressor(codec);
+ if (compressor != null) {
+ if (compressor.finished()) {
+ // Somebody returns the compressor to CodecPool but is still using
+ // it.
+ LOG
+ .warn("Compressor obtained from CodecPool is already finished()");
+ // throw new AssertionError(
+ // "Compressor obtained from CodecPool is already finished()");
+ }
+ compressor.reset();
+ }
+ return compressor;
+ }
+ return null;
+ }
+
+ public void returnCompressor(Compressor compressor) {
+ if (compressor != null) {
+ CodecPool.returnCompressor(compressor);
+ }
+ }
+
+ public Decompressor getDecompressor() {
+ CompressionCodec codec = getCodec();
+ if (codec != null) {
+ Decompressor decompressor = CodecPool.getDecompressor(codec);
+ if (decompressor != null) {
+ if (decompressor.finished()) {
+ // Somebody returns the decompressor to CodecPool but is still using
+ // it.
+ LOG
+ .warn("Deompressor obtained from CodecPool is already finished()");
+ // throw new AssertionError(
+ // "Decompressor obtained from CodecPool is already finished()");
+ }
+ decompressor.reset();
+ }
+ return decompressor;
+ }
+
+ return null;
+ }
+
+ public void returnDecompressor(Decompressor decompressor) {
+ if (decompressor != null) {
+ CodecPool.returnDecompressor(decompressor);
+ }
+ }
+
+ public String getName() {
+ return compressName;
+ }
+ }
+
+ public static Algorithm getCompressionAlgorithmByName(String compressName) {
+ Algorithm[] algos = Algorithm.class.getEnumConstants();
+
+ for (Algorithm a : algos) {
+ if (a.getName().equals(compressName)) {
+ return a;
+ }
+ }
+
+ throw new IllegalArgumentException(
+ "Unsupported compression algorithm name: " + compressName);
+ }
+
+ static String[] getSupportedAlgorithms() {
+ Algorithm[] algos = Algorithm.class.getEnumConstants();
+
+ String[] ret = new String[algos.length];
+ int i = 0;
+ for (Algorithm a : algos) {
+ ret[i++] = a.getName();
+ }
+
+ return ret;
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/io/hfile/HFile.java b/src/java/org/apache/hadoop/hbase/io/hfile/HFile.java
new file mode 100644
index 0000000..12cd124
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/io/hfile/HFile.java
@@ -0,0 +1,1544 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.io.Closeable;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.io.HbaseMapWritable;
+import org.apache.hadoop.hbase.io.HeapSize;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.io.RawComparator;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.Decompressor;
+
+/**
+ * File format for hbase.
+ * A file of sorted key/value pairs. Both keys and values are byte arrays.
+ * <p>
+ * The memory footprint of a HFile includes the following (below is taken from
+ * <a
+ * href=https://issues.apache.org/jira/browse/HADOOP-3315>Hadoop-3315 tfile</a>
+ * but applies also to HFile):
+ * <ul>
+ * <li>Some constant overhead of reading or writing a compressed block.
+ * <ul>
+ * <li>Each compressed block requires one compression/decompression codec for
+ * I/O.
+ * <li>Temporary space to buffer the key.
+ * <li>Temporary space to buffer the value.
+ * </ul>
+ * <li>HFile index, which is proportional to the total number of Data Blocks.
+ * The total amount of memory needed to hold the index can be estimated as
+ * (56+AvgKeySize)*NumBlocks.
+ * </ul>
+ * Suggestions on performance optimization.
+ * <ul>
+ * <li>Minimum block size. We recommend a setting of minimum block size between
+ * 8KB to 1MB for general usage. Larger block size is preferred if files are
+ * primarily for sequential access. However, it would lead to inefficient random
+ * access (because there are more data to decompress). Smaller blocks are good
+ * for random access, but require more memory to hold the block index, and may
+ * be slower to create (because we must flush the compressor stream at the
+ * conclusion of each data block, which leads to an FS I/O flush). Further, due
+ * to the internal caching in Compression codec, the smallest possible block
+ * size would be around 20KB-30KB.
+ * <li>The current implementation does not offer true multi-threading for
+ * reading. The implementation uses FSDataInputStream seek()+read(), which is
+ * shown to be much faster than positioned-read call in single thread mode.
+ * However, it also means that if multiple threads attempt to access the same
+ * HFile (using multiple scanners) simultaneously, the actual I/O is carried out
+ * sequentially even if they access different DFS blocks (Reexamine! pread seems
+ * to be 10% faster than seek+read in my testing -- stack).
+ * <li>Compression codec. Use "none" if the data is not very compressible (by
+ * compressible, I mean a compression ratio of at least 2:1). Generally, use
+ * "lzo" as the starting point for experimenting. "gz" offers a slightly
+ * better compression ratio than "lzo" but requires 4x the CPU to compress
+ * and 2x the CPU to decompress, compared to "lzo".
+ * </ul>
+ *
+ * For more on the background behind HFile, see <a
+ * href=https://issues.apache.org/jira/browse/HBASE-61>HBASE-61</a>.
+ * <p>
+ * File is made of data blocks followed by meta data blocks (if any), a fileinfo
+ * block, data block index, meta data block index, and a fixed size trailer
+ * which records the offsets at which file changes content type.
+ * <pre><data blocks><meta blocks><fileinfo><data index><meta index><trailer></pre>
+ * Each block has a bit of magic at its start. Block are comprised of
+ * key/values. In data blocks, they are both byte arrays. Metadata blocks are
+ * a String key and a byte array value. An empty file looks like this:
+ * <pre><fileinfo><trailer></pre>. That is, there are neither data nor meta
+ * blocks present.
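+ * <p>
+ * A minimal usage sketch, given a FileSystem <code>fs</code>; the path used
+ * below is illustrative only:
+ * <pre>
+ * HFile.Writer w = new HFile.Writer(fs, new Path("/tmp/example.hfile"));
+ * w.append(Bytes.toBytes("row1"), Bytes.toBytes("value1"));
+ * w.close();
+ * HFile.Reader r = new HFile.Reader(fs, new Path("/tmp/example.hfile"), null);
+ * r.loadFileInfo();
+ * HFileScanner s = r.getScanner();
+ * if (s.seekTo()) {
+ *   do {
+ *     System.out.println(s.getKeyString() + " => " + s.getValueString());
+ *   } while (s.next());
+ * }
+ * r.close();
+ * </pre>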
+ * <p>
+ * TODO: Bloomfilters. Need to add hadoop 0.20 first since it has bug fixes
+ * in the hadoop bf package.
+ * TODO: Use memcmp by default? Write the keys out in an order that allows
+ * using it -- reverse the timestamp.
+ * TODO: Add support for fast-gzip and for lzo.
+ * TODO: Do scanners need to be able to take a start and end row?
+ * TODO: Should BlockIndex know the name of its file? Should it have a Path
+ * that points at its file say for the case where an index lives apart from
+ * an HFile instance?
+ */
+public class HFile {
+ static final Log LOG = LogFactory.getLog(HFile.class);
+
+ /* These values are more or less arbitrary, and they are used as a
+ * form of check to make sure the file isn't completely corrupt.
+ */
+ final static byte [] DATABLOCKMAGIC =
+ {'D', 'A', 'T', 'A', 'B', 'L', 'K', 42 };
+ final static byte [] INDEXBLOCKMAGIC =
+ { 'I', 'D', 'X', 'B', 'L', 'K', 41, 43 };
+ final static byte [] METABLOCKMAGIC =
+ { 'M', 'E', 'T', 'A', 'B', 'L', 'K', 99 };
+ final static byte [] TRAILERBLOCKMAGIC =
+ { 'T', 'R', 'A', 'B', 'L', 'K', 34, 36 };
+
+ /**
+ * Maximum length of key in HFile.
+ */
+ public final static int MAXIMUM_KEY_LENGTH = 64 * 1024;
+
+ /**
+ * Default blocksize for hfile.
+ */
+ public final static int DEFAULT_BLOCKSIZE = 64 * 1024;
+
+ /**
+ * Default compression: none.
+ */
+ public final static Compression.Algorithm DEFAULT_COMPRESSION_ALGORITHM =
+ Compression.Algorithm.NONE;
+ /** Default compression name: none. */
+ public final static String DEFAULT_COMPRESSION =
+ DEFAULT_COMPRESSION_ALGORITHM.getName();
+
+ /**
+ * HFile Writer.
+ */
+ public static class Writer implements Closeable {
+ // FileSystem stream to write on.
+ private FSDataOutputStream outputStream;
+ // True if we opened the <code>outputStream</code> (and so will close it).
+ private boolean closeOutputStream;
+
+ // Name for this object used when logging or in toString. Is either
+ // the result of a toString on stream or else toString of passed file Path.
+ private String name;
+
+ // Total uncompressed bytes, maybe calculate a compression ratio later.
+ private int totalBytes = 0;
+
+ // Total # of key/value entries, ie: how many times add() was called.
+ private int entryCount = 0;
+
+ // Used when calculating average key and value lengths.
+ private long keylength = 0;
+ private long valuelength = 0;
+
+ // Used to ensure we write in order.
+ private final RawComparator<byte []> comparator;
+
+ // A stream made per block written.
+ private DataOutputStream out;
+
+ // Number of uncompressed bytes per block. Reinitialized when we start
+ // new block.
+ private int blocksize;
+
+ // Offset where the current block began.
+ private long blockBegin;
+
+ // First key in a block (Not first key in file).
+ private byte [] firstKey = null;
+
+ // Key previously appended. Becomes the last key in the file.
+ private byte [] lastKeyBuffer = null;
+ private int lastKeyOffset = -1;
+ private int lastKeyLength = -1;
+
+ // See {@link BlockIndex}. Below four fields are used to write the block
+ // index.
+ ArrayList<byte[]> blockKeys = new ArrayList<byte[]>();
+ // Block offset in backing stream.
+ ArrayList<Long> blockOffsets = new ArrayList<Long>();
+ // Raw (decompressed) data size.
+ ArrayList<Integer> blockDataSizes = new ArrayList<Integer>();
+
+ // Meta block system.
+ private ArrayList<byte []> metaNames = new ArrayList<byte []>();
+ private ArrayList<byte []> metaData = new ArrayList<byte[]>();
+
+ // Used compression. Used even if no compression -- 'none'.
+ private final Compression.Algorithm compressAlgo;
+ private Compressor compressor;
+
+ // Special datastructure to hold fileinfo.
+ private FileInfo fileinfo = new FileInfo();
+
+ // May be null if we were passed a stream.
+ private Path path = null;
+
+ /**
+ * Constructor that uses all defaults for compression and block size.
+ * @param fs
+ * @param path
+ * @throws IOException
+ */
+ public Writer(FileSystem fs, Path path)
+ throws IOException {
+ this(fs, path, DEFAULT_BLOCKSIZE, null, null, false);
+ }
+
+ /**
+ * Constructor that takes a Path.
+ * @param fs
+ * @param path
+ * @param blocksize
+ * @param compress
+ * @param comparator
+ * @throws IOException
+ */
+ public Writer(FileSystem fs, Path path, int blocksize,
+ String compress, final RawComparator<byte []> comparator)
+ throws IOException {
+ this(fs, path, blocksize,
+ compress == null? DEFAULT_COMPRESSION_ALGORITHM:
+ Compression.getCompressionAlgorithmByName(compress),
+ comparator, false);
+ }
+
+ /**
+ * Constructor that takes a Path.
+ * @param fs
+ * @param path
+ * @param blocksize
+ * @param compress
+ * @param comparator
+ * @param bloomfilter
+ * @throws IOException
+ */
+ public Writer(FileSystem fs, Path path, int blocksize,
+ Compression.Algorithm compress,
+ final RawComparator<byte []> comparator,
+ final boolean bloomfilter)
+ throws IOException {
+ this(fs.create(path), blocksize, compress, comparator, bloomfilter);
+ this.closeOutputStream = true;
+ this.name = path.toString();
+ this.path = path;
+ }
+
+ /**
+ * Constructor that takes a stream.
+ * @param ostream Stream to use.
+ * @param blocksize
+ * @param compress
+ * @param c RawComparator to use.
+ * @throws IOException
+ */
+ public Writer(final FSDataOutputStream ostream, final int blocksize,
+ final String compress, final RawComparator<byte []> c)
+ throws IOException {
+ this(ostream, blocksize,
+ compress == null? DEFAULT_COMPRESSION_ALGORITHM:
+ Compression.getCompressionAlgorithmByName(compress), c, false);
+ }
+
+ /**
+ * Constructor that takes a stream.
+ * @param ostream Stream to use.
+ * @param blocksize
+ * @param compress
+ * @param c
+ * @param bloomfilter
+ * @throws IOException
+ */
+ public Writer(final FSDataOutputStream ostream, final int blocksize,
+ final Compression.Algorithm compress,
+ final RawComparator<byte []> c,
+ final boolean bloomfilter)
+ throws IOException {
+ this.outputStream = ostream;
+ this.closeOutputStream = false;
+ this.blocksize = blocksize;
+ this.comparator = c == null? Bytes.BYTES_RAWCOMPARATOR: c;
+ this.name = this.outputStream.toString();
+ this.compressAlgo = compress == null?
+ DEFAULT_COMPRESSION_ALGORITHM: compress;
+ }
+
+ /*
+ * If at block boundary, opens new block.
+ * @throws IOException
+ */
+ private void checkBlockBoundary() throws IOException {
+ if (this.out != null && this.out.size() < blocksize) return;
+ finishBlock();
+ newBlock();
+ }
+
+ /*
+ * Do the cleanup if a current block.
+ * @throws IOException
+ */
+ private void finishBlock() throws IOException {
+ if (this.out == null) return;
+ long size = releaseCompressingStream(this.out);
+ this.out = null;
+ blockKeys.add(firstKey);
+ int written = longToInt(size);
+ blockOffsets.add(Long.valueOf(blockBegin));
+ blockDataSizes.add(Integer.valueOf(written));
+ this.totalBytes += written;
+ }
+
+ /*
+ * Ready a new block for writing.
+ * @throws IOException
+ */
+ private void newBlock() throws IOException {
+ // This is where the next block begins.
+ blockBegin = outputStream.getPos();
+ this.out = getCompressingStream();
+ this.out.write(DATABLOCKMAGIC);
+ firstKey = null;
+ }
+
+ /*
+ * Sets up a compressor and creates a compression stream on top of
+ * this.outputStream. Get one per block written.
+ * @return A compressing stream; if 'none' compression, returned stream
+ * does not compress.
+ * @throws IOException
+ * @see {@link #releaseCompressingStream(DataOutputStream)}
+ */
+ private DataOutputStream getCompressingStream() throws IOException {
+ this.compressor = compressAlgo.getCompressor();
+ // Get new DOS compression stream. In tfile, the DOS, is not closed,
+ // just finished, and that seems to be fine over there. TODO: Check
+ // no memory retention of the DOS. Should I disable the 'flush' on the
+ // DOS as the BCFile over in tfile does? It wants to make it so flushes
+ // don't go through to the underlying compressed stream. Flush on the
+ // compressed downstream should be only when done. I was going to but
+ // looks like when we call flush in here, its legitimate flush that
+ // should go through to the compressor.
+ OutputStream os =
+ this.compressAlgo.createCompressionStream(this.outputStream,
+ this.compressor, 0);
+ return new DataOutputStream(os);
+ }
+
+ /*
+ * Let go of block compressor and compressing stream gotten in call
+ * {@link #getCompressingStream}.
+ * @param dos
+ * @return How much was written on this stream since it was taken out.
+ * @see #getCompressingStream()
+ * @throws IOException
+ */
+ private int releaseCompressingStream(final DataOutputStream dos)
+ throws IOException {
+ dos.flush();
+ this.compressAlgo.returnCompressor(this.compressor);
+ this.compressor = null;
+ return dos.size();
+ }
+
+ /**
+ * Add a meta block to the end of the file. Call before close().
+ * Metadata blocks are expensive. Fill one with a bunch of serialized data
+ * rather than do a metadata block per metadata instance. If metadata is
+ * small, consider adding to file info using
+ * {@link #appendFileInfo(byte[], byte[])}
+ * @param metaBlockName name of the block
+ * @param bytes uninterpreted bytes of the block.
+ */
+ public void appendMetaBlock(String metaBlockName, byte [] bytes) {
+ metaNames.add(Bytes.toBytes(metaBlockName));
+ metaData.add(bytes);
+ }
+
+ /**
+ * Add to the file info. Added key value can be gotten out of the return
+ * from {@link Reader#loadFileInfo()}.
+ * @param k Key
+ * @param v Value
+ * @throws IOException
+ */
+ public void appendFileInfo(final byte [] k, final byte [] v)
+ throws IOException {
+ appendFileInfo(this.fileinfo, k, v, true);
+ }
+
+ FileInfo appendFileInfo(FileInfo fi, final byte [] k, final byte [] v,
+ final boolean checkPrefix)
+ throws IOException {
+ if (k == null || v == null) {
+ throw new NullPointerException("Key nor value may be null");
+ }
+ if (checkPrefix &&
+ Bytes.toString(k).toLowerCase().startsWith(FileInfo.RESERVED_PREFIX)) {
+ throw new IOException("Keys with a " + FileInfo.RESERVED_PREFIX +
+ " are reserved");
+ }
+ fi.put(k, v);
+ return fi;
+ }
+
+ /**
+ * @return Path or null if we were passed a stream rather than a Path.
+ */
+ public Path getPath() {
+ return this.path;
+ }
+
+ @Override
+ public String toString() {
+ return "writer=" + this.name + ", compression=" +
+ this.compressAlgo.getName();
+ }
+
+ /**
+ * Add key/value to file.
+ * Keys must be added in an order that agrees with the Comparator passed
+ * on construction.
+ * @param kv KeyValue to add. Cannot be empty nor null.
+ * @throws IOException
+ */
+ public void append(final KeyValue kv)
+ throws IOException {
+ append(kv.getBuffer(), kv.getKeyOffset(), kv.getKeyLength(),
+ kv.getBuffer(), kv.getValueOffset(), kv.getValueLength());
+ }
+
+ /**
+ * Add key/value to file.
+ * Keys must be added in an order that agrees with the Comparator passed
+ * on construction.
+ * @param key Key to add. Cannot be empty nor null.
+ * @param value Value to add. Cannot be empty nor null.
+ * @throws IOException
+ */
+ public void append(final byte [] key, final byte [] value)
+ throws IOException {
+ append(key, 0, key.length, value, 0, value.length);
+ }
+
+ /**
+ * Add key/value to file.
+ * Keys must be added in an order that agrees with the Comparator passed
+ * on construction.
+ * @param key Key to add. Cannot be empty nor null.
+ * @param value Value to add. Cannot be empty nor null.
+ * @throws IOException
+ */
+ public void append(final byte [] key, final int koffset, final int klength,
+ final byte [] value, final int voffset, final int vlength)
+ throws IOException {
+ checkKey(key, koffset, klength);
+ checkValue(value, voffset, vlength);
+ checkBlockBoundary();
+ // Write length of key and value and then actual key and value bytes.
+ this.out.writeInt(klength);
+ this.keylength += klength;
+ this.out.writeInt(vlength);
+ this.valuelength += vlength;
+ this.out.write(key, koffset, klength);
+ this.out.write(value, voffset, vlength);
+ // Are we the first key in this block?
+ if (this.firstKey == null) {
+ // Copy the key.
+ this.firstKey = new byte [klength];
+ System.arraycopy(key, koffset, this.firstKey, 0, klength);
+ }
+ this.lastKeyBuffer = key;
+ this.lastKeyOffset = koffset;
+ this.lastKeyLength = klength;
+ this.entryCount ++;
+ }
+
+ /*
+ * @param key Key to check.
+ * @throws IOException
+ */
+ private void checkKey(final byte [] key, final int offset, final int length)
+ throws IOException {
+ if (key == null || length <= 0) {
+ throw new IOException("Key cannot be null or empty");
+ }
+ if (length > MAXIMUM_KEY_LENGTH) {
+ throw new IOException("Key length " + length + " > " +
+ MAXIMUM_KEY_LENGTH);
+ }
+ if (this.lastKeyBuffer != null) {
+ if (this.comparator.compare(this.lastKeyBuffer, this.lastKeyOffset,
+ this.lastKeyLength, key, offset, length) > 0) {
+ throw new IOException("Added a key not lexically larger than" +
+ " previous key=" + Bytes.toString(key, offset, length) +
+ ", lastkey=" + Bytes.toString(this.lastKeyBuffer, this.lastKeyOffset,
+ this.lastKeyLength));
+ }
+ }
+ }
+
+ private void checkValue(final byte [] value,
+ final int offset,
+ final int length) throws IOException {
+ if (value == null) {
+ throw new IOException("Value cannot be null");
+ }
+ }
+
+ public void close() throws IOException {
+ if (this.outputStream == null) {
+ return;
+ }
+ // Write out the end of the data blocks, then write meta data blocks.
+ // followed by fileinfo, data block index and meta block index.
+
+ finishBlock();
+
+ FixedFileTrailer trailer = new FixedFileTrailer();
+
+ // Write out the metadata blocks if any.
+ ArrayList<Long> metaOffsets = null;
+ ArrayList<Integer> metaDataSizes = null;
+ if (metaNames.size() > 0) {
+ metaOffsets = new ArrayList<Long>(metaNames.size());
+ metaDataSizes = new ArrayList<Integer>(metaNames.size());
+ for (int i = 0 ; i < metaNames.size() ; ++ i ) {
+ metaOffsets.add(Long.valueOf(outputStream.getPos()));
+ metaDataSizes.
+ add(Integer.valueOf(METABLOCKMAGIC.length + metaData.get(i).length));
+ writeMetaBlock(metaData.get(i));
+ }
+ }
+
+ // Write fileinfo.
+ trailer.fileinfoOffset = writeFileInfo(this.outputStream);
+
+ // Write the data block index.
+ trailer.dataIndexOffset = BlockIndex.writeIndex(this.outputStream,
+ this.blockKeys, this.blockOffsets, this.blockDataSizes);
+
+ // Meta block index.
+ if (metaNames.size() > 0) {
+ trailer.metaIndexOffset = BlockIndex.writeIndex(this.outputStream,
+ this.metaNames, metaOffsets, metaDataSizes);
+ }
+
+ // Now finish off the trailer.
+ trailer.dataIndexCount = blockKeys.size();
+ trailer.metaIndexCount = metaNames.size();
+
+ trailer.totalUncompressedBytes = totalBytes;
+ trailer.entryCount = entryCount;
+
+ trailer.compressionCodec = this.compressAlgo.ordinal();
+
+ trailer.serialize(outputStream);
+
+ if (this.closeOutputStream) {
+ this.outputStream.close();
+ this.outputStream = null;
+ }
+ }
+
+ /* Write a metadata block.
+ * @param metadata
+ * @throws IOException
+ */
+ private void writeMetaBlock(final byte [] b) throws IOException {
+ DataOutputStream dos = getCompressingStream();
+ dos.write(METABLOCKMAGIC);
+ dos.write(b);
+ releaseCompressingStream(dos);
+ }
+
+ /*
+ * Add last bits of metadata to fileinfo and then write it out.
+ * Reader will be expecting to find all below.
+ * @param o Stream to write on.
+ * @return Position at which we started writing.
+ * @throws IOException
+ */
+ private long writeFileInfo(FSDataOutputStream o) throws IOException {
+ if (this.lastKeyBuffer != null) {
+ // Make a copy. The copy is stuffed into an HbaseMapWritable. Needs a
+ // clean byte buffer. Won't take a tuple.
+ byte [] b = new byte[this.lastKeyLength];
+ System.arraycopy(this.lastKeyBuffer, this.lastKeyOffset, b, 0,
+ this.lastKeyLength);
+ appendFileInfo(this.fileinfo, FileInfo.LASTKEY, b, false);
+ }
+ int avgKeyLen = this.entryCount == 0? 0:
+ (int)(this.keylength/this.entryCount);
+ appendFileInfo(this.fileinfo, FileInfo.AVG_KEY_LEN,
+ Bytes.toBytes(avgKeyLen), false);
+ int avgValueLen = this.entryCount == 0? 0:
+ (int)(this.valuelength/this.entryCount);
+ appendFileInfo(this.fileinfo, FileInfo.AVG_VALUE_LEN,
+ Bytes.toBytes(avgValueLen), false);
+ appendFileInfo(this.fileinfo, FileInfo.COMPARATOR,
+ Bytes.toBytes(this.comparator.getClass().getName()), false);
+ long pos = o.getPos();
+ this.fileinfo.write(o);
+ return pos;
+ }
+ }
+
+ /**
+ * HFile Reader.
+ */
+ public static class Reader implements Closeable {
+ // Stream to read from.
+ private FSDataInputStream istream;
+ // True if we should close istream when done. We don't close it if we
+ // didn't open it.
+ private boolean closeIStream;
+
+ // These are read in when the file info is loaded.
+ HFile.BlockIndex blockIndex;
+ private BlockIndex metaIndex;
+ FixedFileTrailer trailer;
+ private volatile boolean fileInfoLoaded = false;
+
+ // Filled when we read in the trailer.
+ private Compression.Algorithm compressAlgo;
+
+ // Last key in the file. Filled in when we read in the file info
+ private byte [] lastkey = null;
+ // Stats read in when we load file info.
+ private int avgKeyLen = -1;
+ private int avgValueLen = -1;
+
+ // Used to ensure we seek correctly.
+ RawComparator<byte []> comparator;
+
+ // Size of this file.
+ private final long fileSize;
+
+ // Block cache to use.
+ private final BlockCache cache;
+ public int cacheHits = 0;
+ public int blockLoads = 0;
+
+ // Name for this object used when logging or in toString. Is either
+ // the result of a toString on the stream or else is toString of passed
+ // file Path plus metadata key/value pairs.
+ private String name;
+
+ /*
+ * Do not expose the default constructor.
+ */
+ @SuppressWarnings("unused")
+ private Reader() throws IOException {
+ this(null, null, null);
+ }
+
+ /**
+ * Opens a HFile. You must load the file info before you can
+ * use it by calling {@link #loadFileInfo()}.
+ *
+ * @param fs filesystem to load from
+ * @param path path within said filesystem
+ * @param cache block cache. Pass null if none.
+ * @throws IOException
+ */
+ public Reader(FileSystem fs, Path path, BlockCache cache)
+ throws IOException {
+ this(fs.open(path), fs.getFileStatus(path).getLen(), cache);
+ this.closeIStream = true;
+ this.name = path.toString();
+ }
+
+ /**
+ * Opens a HFile. You must load the file info before you can
+ * use it by calling {@link #loadFileInfo()}.
+ *
+ * @param fsdis input stream. Caller is responsible for closing the passed
+ * stream.
+ * @param size Length of the stream.
+ * @param cache block cache. Pass null if none.
+ * @throws IOException
+ */
+ public Reader(final FSDataInputStream fsdis, final long size,
+ final BlockCache cache)
+ throws IOException {
+ this.cache = cache;
+ this.fileSize = size;
+ this.istream = fsdis;
+ this.closeIStream = false;
+ this.name = this.istream.toString();
+ }
+
+ @Override
+ public String toString() {
+ return "reader=" + this.name +
+ (!isFileInfoLoaded()? "":
+ ", compression=" + this.compressAlgo.getName() +
+ ", firstKey=" + toStringFirstKey() +
+ ", lastKey=" + toStringLastKey()) +
+ ", avgKeyLen=" + this.avgKeyLen +
+ ", avgValueLen=" + this.avgValueLen +
+ ", entries=" + this.trailer.entryCount +
+ ", length=" + this.fileSize;
+ }
+
+ protected String toStringFirstKey() {
+ return Bytes.toString(getFirstKey());
+ }
+
+ protected String toStringLastKey() {
+ return Bytes.toString(getLastKey());
+ }
+
+ public long length() {
+ return this.fileSize;
+ }
+
+ /**
+ * Read in the index and file info.
+ * @return A map of fileinfo data.
+ * See {@link Writer#appendFileInfo(byte[], byte[])}.
+ * @throws IOException
+ */
+ public Map<byte [], byte []> loadFileInfo() throws IOException {
+ this.trailer = readTrailer();
+
+ // Read in the fileinfo and get what we need from it.
+ this.istream.seek(this.trailer.fileinfoOffset);
+ FileInfo fi = new FileInfo();
+ fi.readFields(this.istream);
+ this.lastkey = fi.get(FileInfo.LASTKEY);
+ this.avgKeyLen = Bytes.toInt(fi.get(FileInfo.AVG_KEY_LEN));
+ this.avgValueLen = Bytes.toInt(fi.get(FileInfo.AVG_VALUE_LEN));
+ String clazzName = Bytes.toString(fi.get(FileInfo.COMPARATOR));
+ this.comparator = getComparator(clazzName);
+
+ // Read in the data index.
+ this.blockIndex = BlockIndex.readIndex(this.comparator, this.istream,
+ this.trailer.dataIndexOffset, this.trailer.dataIndexCount);
+
+ // Read in the metadata index.
+ if (trailer.metaIndexCount > 0) {
+ this.metaIndex = BlockIndex.readIndex(Bytes.BYTES_RAWCOMPARATOR,
+ this.istream, this.trailer.metaIndexOffset, trailer.metaIndexCount);
+ }
+ this.fileInfoLoaded = true;
+ return fi;
+ }
+
+ boolean isFileInfoLoaded() {
+ return this.fileInfoLoaded;
+ }
+
+ @SuppressWarnings("unchecked")
+ private RawComparator<byte []> getComparator(final String clazzName)
+ throws IOException {
+ if (clazzName == null || clazzName.length() == 0) {
+ return null;
+ }
+ try {
+ return (RawComparator<byte []>)Class.forName(clazzName).newInstance();
+ } catch (InstantiationException e) {
+ throw new IOException(e);
+ } catch (IllegalAccessException e) {
+ throw new IOException(e);
+ } catch (ClassNotFoundException e) {
+ throw new IOException(e);
+ }
+ }
+
+ /* Read the trailer off the input stream. As side effect, sets the
+ * compression algorithm.
+ * @return Populated FixedFileTrailer.
+ * @throws IOException
+ */
+ private FixedFileTrailer readTrailer() throws IOException {
+ FixedFileTrailer fft = new FixedFileTrailer();
+ long seekPoint = this.fileSize - FixedFileTrailer.trailerSize();
+ this.istream.seek(seekPoint);
+ fft.deserialize(this.istream);
+ // Set up the codec.
+ this.compressAlgo =
+ Compression.Algorithm.values()[fft.compressionCodec];
+ return fft;
+ }
+
+ /**
+ * Create a Scanner on this file. No seeks or reads are done on creation.
+ * Call {@link HFileScanner#seekTo(byte[])} to position and start the read.
+ * There is nothing to clean up in a Scanner. Letting go of your references
+ * to the scanner is sufficient.
+ * @return Scanner on this file.
+ */
+ public HFileScanner getScanner() {
+ return new Scanner(this);
+ }
+ /**
+ * @param key Key to search.
+ * @return Block number of the block containing the key or -1 if not in this
+ * file.
+ */
+ protected int blockContainingKey(final byte [] key, int offset, int length) {
+ if (blockIndex == null) {
+ throw new RuntimeException("Block index not loaded");
+ }
+ return blockIndex.blockContainingKey(key, offset, length);
+ }
+ /**
+ * @param metaBlockName
+ * @return Block wrapped in a ByteBuffer
+ * @throws IOException
+ */
+ public ByteBuffer getMetaBlock(String metaBlockName) throws IOException {
+ if (trailer.metaIndexCount == 0) {
+ return null; // there are no meta blocks
+ }
+ if (metaIndex == null) {
+ throw new IOException("Meta index not loaded");
+ }
+ byte [] mbname = Bytes.toBytes(metaBlockName);
+ int block = metaIndex.blockContainingKey(mbname, 0, mbname.length);
+ if (block == -1)
+ return null;
+ long blockSize;
+ if (block == metaIndex.count - 1) {
+ blockSize = trailer.fileinfoOffset - metaIndex.blockOffsets[block];
+ } else {
+ blockSize = metaIndex.blockOffsets[block+1] - metaIndex.blockOffsets[block];
+ }
+
+ ByteBuffer buf = decompress(metaIndex.blockOffsets[block],
+ longToInt(blockSize), metaIndex.blockDataSizes[block]);
+ byte [] magic = new byte[METABLOCKMAGIC.length];
+ buf.get(magic, 0, magic.length);
+
+ if (! Arrays.equals(magic, METABLOCKMAGIC)) {
+ throw new IOException("Meta magic is bad in block " + block);
+ }
+ // Toss the header. May have to remove later due to performance.
+ buf.compact();
+ buf.limit(buf.limit() - METABLOCKMAGIC.length);
+ buf.rewind();
+ return buf;
+ }
+ /**
+ * Read in a file block.
+ * @param block Index of block to read.
+ * @return Block wrapped in a ByteBuffer.
+ * @throws IOException
+ */
+ ByteBuffer readBlock(int block) throws IOException {
+ if (blockIndex == null) {
+ throw new IOException("Block index not loaded");
+ }
+ if (block < 0 || block > blockIndex.count) {
+ throw new IOException("Requested block is out of range: " + block +
+ ", max: " + blockIndex.count);
+ }
+
+ // For any given block from any given file, synchronize reads for that
+ // block. Without a cache the synchronization is needless overhead, but
+ // with a cache it stops concurrent readers from loading (and caching)
+ // the same block twice.
+ synchronized (blockIndex.blockKeys[block]) {
+ blockLoads++;
+ // Check cache for block. If found return.
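+ // Blocks are keyed in the cache by this reader's name plus the block
+ // ordinal.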
+ if (cache != null) {
+ ByteBuffer cachedBuf = cache.getBlock(name + block);
+ if (cachedBuf != null) {
+ // Return a distinct 'copy' of the block, so its position doesn't get
+ // changed by the scanner.
+ cacheHits++;
+ return cachedBuf.duplicate();
+ }
+ // Carry on, please load.
+ }
+
+ // Load block from filesystem.
+ long onDiskBlockSize;
+ if (block == blockIndex.count - 1) {
+ // last block! The end of data block is first meta block if there is
+ // one or if there isn't, the fileinfo offset.
+ long offset = this.metaIndex != null?
+ this.metaIndex.blockOffsets[0]: this.trailer.fileinfoOffset;
+ onDiskBlockSize = offset - blockIndex.blockOffsets[block];
+ } else {
+ onDiskBlockSize = blockIndex.blockOffsets[block+1] -
+ blockIndex.blockOffsets[block];
+ }
+ ByteBuffer buf = decompress(blockIndex.blockOffsets[block],
+ longToInt(onDiskBlockSize), this.blockIndex.blockDataSizes[block]);
+
+ byte [] magic = new byte[DATABLOCKMAGIC.length];
+ buf.get(magic, 0, magic.length);
+ if (!Arrays.equals(magic, DATABLOCKMAGIC)) {
+ throw new IOException("Data magic is bad in block " + block);
+ }
+ // Toss the header. May have to remove later due to performance.
+ buf.compact();
+ buf.limit(buf.limit() - DATABLOCKMAGIC.length);
+ buf.rewind();
+
+ // Cache a copy, not the one we are sending back, so its position doesn't
+ // get changed.
+ if (cache != null) {
+ cache.cacheBlock(name + block, buf.duplicate());
+ }
+
+ return buf;
+ }
+ }
+
+ /*
+ * Decompress <code>compressedSize</code> bytes off the backing
+ * FSDataInputStream.
+ * @param offset
+ * @param compressedSize
+ * @param decompressedSize
+ * @return
+ * @throws IOException
+ */
+ private ByteBuffer decompress(final long offset, final int compressedSize,
+ final int decompressedSize)
+ throws IOException {
+ Decompressor decompressor = this.compressAlgo.getDecompressor();
+ // My guess is that the bounded range fis is needed to stop the
+ // decompressor reading into next block -- IIRC, it just grabs a
+ // bunch of data w/o regard to whether decompressor is coming to end of a
+ // decompression.
+ InputStream is = this.compressAlgo.createDecompressionStream(
+ new BoundedRangeFileInputStream(this.istream, offset, compressedSize),
+ decompressor, 0);
+ ByteBuffer buf = ByteBuffer.allocate(decompressedSize);
+ IOUtils.readFully(is, buf.array(), 0, buf.capacity());
+ is.close();
+ this.compressAlgo.returnDecompressor(decompressor);
+ return buf;
+ }
+
+ /**
+ * @return First key in the file.
+ */
+ public byte [] getFirstKey() {
+ if (blockIndex == null) {
+ throw new RuntimeException("Block index not loaded");
+ }
+ return blockIndex.blockKeys[0];
+ }
+
+ public int getEntries() {
+ if (!this.isFileInfoLoaded()) {
+ throw new RuntimeException("File info not loaded");
+ }
+ return this.trailer.entryCount;
+ }
+
+ /**
+ * @return Last key in the file.
+ */
+ public byte [] getLastKey() {
+ if (!isFileInfoLoaded()) {
+ throw new RuntimeException("Load file info first");
+ }
+ return this.lastkey;
+ }
+
+ /**
+ * @return Comparator.
+ */
+ public RawComparator<byte []> getComparator() {
+ return this.comparator;
+ }
+
+ /**
+ * @return index size
+ */
+ public long indexSize() {
+ return (this.blockIndex != null? this.blockIndex.heapSize(): 0) +
+ ((this.metaIndex != null)? this.metaIndex.heapSize(): 0);
+ }
+
+ /**
+ * @return Midkey for this file. We work with block boundaries only so
+ * returned midkey is an approximation only.
+ * @throws IOException
+ */
+ public byte [] midkey() throws IOException {
+ if (!isFileInfoLoaded() || this.blockIndex.isEmpty()) {
+ return null;
+ }
+ return this.blockIndex.midkey();
+ }
+
+ public void close() throws IOException {
+ if (this.closeIStream && this.istream != null) {
+ this.istream.close();
+ this.istream = null;
+ }
+ }
+
+ /*
+ * Implementation of {@link HFileScanner} interface.
+ */
+ private static class Scanner implements HFileScanner {
+ private final Reader reader;
+ private ByteBuffer block;
+ private int currBlock;
+
+ private int currKeyLen = 0;
+ private int currValueLen = 0;
+
+ public int blockFetches = 0;
+
+ public Scanner(Reader r) {
+ this.reader = r;
+ }
+
+ public KeyValue getKeyValue() {
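+ // The buffer position points at the current key bytes; back up 8 bytes
+ // (the key length and value length ints) to the start of the entry,
+ // which is the layout KeyValue is constructed from.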
+ return new KeyValue(this.block.array(),
+ this.block.arrayOffset() + this.block.position() - 8);
+ }
+
+ public ByteBuffer getKey() {
+ if (this.block == null || this.currKeyLen == 0) {
+ throw new RuntimeException("you need to seekTo() before calling getKey()");
+ }
+ ByteBuffer keyBuff = this.block.slice();
+ keyBuff.limit(this.currKeyLen);
+ keyBuff.rewind();
+ // Do keyBuff.asReadOnly()?
+ return keyBuff;
+ }
+
+ public ByteBuffer getValue() {
+ if (block == null || currKeyLen == 0) {
+ throw new RuntimeException("you need to seekTo() before calling getValue()");
+ }
+ // TODO: Could this be done with one ByteBuffer rather than create two?
+ ByteBuffer valueBuff = this.block.slice();
+ valueBuff.position(this.currKeyLen);
+ valueBuff = valueBuff.slice();
+ valueBuff.limit(currValueLen);
+ valueBuff.rewind();
+ return valueBuff;
+ }
+
+ public boolean next() throws IOException {
+ // LOG.debug("rem:" + block.remaining() + " p:" + block.position() +
+ // " kl: " + currKeyLen + " kv: " + currValueLen);
+ if (block == null) {
+ throw new IOException("Next called on non-seeked scanner");
+ }
+ block.position(block.position() + currKeyLen + currValueLen);
+ if (block.remaining() <= 0) {
+ // LOG.debug("Fetch next block");
+ currBlock++;
+ if (currBlock >= reader.blockIndex.count) {
+ // damn we are at the end
+ currBlock = 0;
+ block = null;
+ return false;
+ }
+ block = reader.readBlock(currBlock);
+ currKeyLen = block.getInt();
+ currValueLen = block.getInt();
+ blockFetches++;
+ return true;
+ }
+ // LOG.debug("rem:" + block.remaining() + " p:" + block.position() +
+ // " kl: " + currKeyLen + " kv: " + currValueLen);
+
+ currKeyLen = block.getInt();
+ currValueLen = block.getInt();
+ return true;
+ }
+
+ public int seekTo(byte [] key) throws IOException {
+ return seekTo(key, 0, key.length);
+ }
+
+
+ public int seekTo(byte[] key, int offset, int length) throws IOException {
+ int b = reader.blockContainingKey(key, offset, length);
+ if (b < 0) return -1; // falls before the beginning of the file! :-(
+ // Avoid re-reading the same block (that'd be dumb).
+ loadBlock(b);
+
+ return blockSeek(key, offset, length, false);
+ }
+
+ /**
+ * Within a loaded block, seek to the entry whose key matches the passed
+ * key exactly or, failing that, to the last entry whose key sorts before
+ * it.
+ *
+ * A note on seekBefore: if seekBefore is true AND the first key in the
+ * block equals the passed key, an exception is thrown because there is
+ * no earlier entry in this block to back up to.
+ * @param key to find
+ * @param seekBefore find the key before the exact match.
+ * @return 0 on an exact match when seekBefore is false; 1 otherwise, with
+ * the scanner left on the nearest preceding entry.
+ */
+ private int blockSeek(byte[] key, int offset, int length, boolean seekBefore) {
+ int klen, vlen;
+ int lastLen = 0;
+ do {
+ klen = block.getInt();
+ vlen = block.getInt();
+ int comp = this.reader.comparator.compare(key, offset, length,
+ block.array(), block.arrayOffset() + block.position(), klen);
+ if (comp == 0) {
+ if (seekBefore) {
+ block.position(block.position() - lastLen - 16);
+ currKeyLen = block.getInt();
+ currValueLen = block.getInt();
+ return 1; // non exact match.
+ }
+ currKeyLen = klen;
+ currValueLen = vlen;
+ return 0; // indicate exact match
+ }
+ if (comp < 0) {
+ // go back one key:
+ block.position(block.position() - lastLen - 16);
+ currKeyLen = block.getInt();
+ currValueLen = block.getInt();
+ return 1;
+ }
+ block.position(block.position() + klen + vlen);
+ lastLen = klen + vlen ;
+ } while(block.remaining() > 0);
+ // We ran off the end of the block, so back up onto the last entry.
+ // The 8 below is intentionally different from the 16s above: here the
+ // position sits at the end of the last entry's value, so we only back up
+ // over that entry's data plus its two length ints; in the loop above the
+ // position also sat past the current entry's two length ints.
+ block.position(block.position() - lastLen - 8);
+ currKeyLen = block.getInt();
+ currValueLen = block.getInt();
+ return 1; // didn't exactly find it.
+ }
+
+ public boolean seekBefore(byte [] key) throws IOException {
+ return seekBefore(key, 0, key.length);
+ }
+
+ public boolean seekBefore(byte[] key, int offset, int length)
+ throws IOException {
+ int b = reader.blockContainingKey(key, offset, length);
+ if (b < 0)
+ return false; // key is before the start of the file.
+
+ // Question: does this block begin with 'key'?
+ if (this.reader.comparator.compare(reader.blockIndex.blockKeys[b],
+ 0, reader.blockIndex.blockKeys[b].length,
+ key, offset, length) == 0) {
+ // Ok the key we're interested in is the first of the block, so go back one.
+ if (b == 0) {
+ // we have a 'problem', the key we want is the first of the file.
+ return false;
+ }
+ b--;
+ // TODO shortcut: seek forward in this block to the last key of the block.
+ }
+ loadBlock(b);
+ blockSeek(key, offset, length, true);
+ return true;
+ }
+
+ public String getKeyString() {
+ return Bytes.toString(block.array(), block.arrayOffset() +
+ block.position(), currKeyLen);
+ }
+
+ public String getValueString() {
+ return Bytes.toString(block.array(), block.arrayOffset() +
+ block.position() + currKeyLen, currValueLen);
+ }
+
+ public Reader getReader() {
+ return this.reader;
+ }
+
+ public boolean isSeeked(){
+ return this.block != null;
+ }
+
+ public boolean seekTo() throws IOException {
+ if (this.reader.blockIndex.isEmpty()) {
+ return false;
+ }
+ if (block != null && currBlock == 0) {
+ // Already on the first block; just rewind and re-read the lengths.
+ block.rewind();
+ currKeyLen = block.getInt();
+ currValueLen = block.getInt();
+ return true;
+ }
+ currBlock = 0;
+ block = reader.readBlock(currBlock);
+ currKeyLen = block.getInt();
+ currValueLen = block.getInt();
+ blockFetches++;
+ return true;
+ }
+
+ private void loadBlock(int bloc) throws IOException {
+ if (block == null) {
+ block = reader.readBlock(bloc);
+ currBlock = bloc;
+ blockFetches++;
+ } else {
+ if (bloc != currBlock) {
+ block = reader.readBlock(bloc);
+ currBlock = bloc;
+ blockFetches++;
+ } else {
+ // we are already in the same block, just rewind to seek again.
+ block.rewind();
+ }
+ }
+ }
+ }
+ }
+ /*
+ * The HFile has a fixed trailer which contains offsets to other variable
+ * parts of the file. Also includes basic metadata on this file.
+ */
+ private static class FixedFileTrailer {
+ // Offset to the data block index.
+ long dataIndexOffset;
+ // Offset to the fileinfo data, a small block of vitals.
+ long fileinfoOffset;
+ // How many index counts are there (aka: block count)
+ int dataIndexCount;
+ // Offset to the meta block index.
+ long metaIndexOffset;
+ // How many meta block index entries (aka: meta block count)
+ int metaIndexCount;
+ long totalUncompressedBytes;
+ int entryCount;
+ int compressionCodec;
+ int version = 1;
+
+ FixedFileTrailer() {
+ super();
+ }
+
+ static int trailerSize() {
+ // Keep this up to date...
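+ // 4 longs: fileinfoOffset, dataIndexOffset, metaIndexOffset,
+ // totalUncompressedBytes; 5 ints: dataIndexCount, metaIndexCount,
+ // entryCount, compressionCodec, version; plus the trailer magic.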
+ final int intSize = 4;
+ final int longSize = 8;
+ return
+ ( intSize * 5 ) +
+ ( longSize * 4 ) +
+ TRAILERBLOCKMAGIC.length;
+ }
+
+ void serialize(DataOutputStream outputStream) throws IOException {
+ outputStream.write(TRAILERBLOCKMAGIC);
+ outputStream.writeLong(fileinfoOffset);
+ outputStream.writeLong(dataIndexOffset);
+ outputStream.writeInt(dataIndexCount);
+ outputStream.writeLong(metaIndexOffset);
+ outputStream.writeInt(metaIndexCount);
+ outputStream.writeLong(totalUncompressedBytes);
+ outputStream.writeInt(entryCount);
+ outputStream.writeInt(compressionCodec);
+ outputStream.writeInt(version);
+ }
+
+ void deserialize(DataInputStream inputStream) throws IOException {
+ byte [] header = new byte[TRAILERBLOCKMAGIC.length];
+ inputStream.readFully(header);
+ if ( !Arrays.equals(header, TRAILERBLOCKMAGIC)) {
+ throw new IOException("Trailer 'header' is wrong; does the trailer " +
+ "size match content?");
+ }
+ fileinfoOffset = inputStream.readLong();
+ dataIndexOffset = inputStream.readLong();
+ dataIndexCount = inputStream.readInt();
+
+ metaIndexOffset = inputStream.readLong();
+ metaIndexCount = inputStream.readInt();
+
+ totalUncompressedBytes = inputStream.readLong();
+ entryCount = inputStream.readInt();
+ compressionCodec = inputStream.readInt();
+ version = inputStream.readInt();
+
+ if (version != 1) {
+ throw new IOException("Wrong version: " + version);
+ }
+ }
+
+ @Override
+ public String toString() {
+ return "fileinfoOffset=" + fileinfoOffset +
+ ", dataIndexOffset=" + dataIndexOffset +
+ ", dataIndexCount=" + dataIndexCount +
+ ", metaIndexOffset=" + metaIndexOffset +
+ ", metaIndexCount=" + metaIndexCount +
+ ", totalBytes=" + totalUncompressedBytes +
+ ", entryCount=" + entryCount +
+ ", version=" + version;
+ }
+ }
+
+ /*
+ * The block index for an HFile.
+ * Used when reading.
+ */
+ static class BlockIndex implements HeapSize {
+ // How many actual items are there? The next insert location too.
+ int count = 0;
+ byte [][] blockKeys;
+ long [] blockOffsets;
+ int [] blockDataSizes;
+ int size = 0;
+
+ /* Needed when doing lookups on blocks.
+ */
+ final RawComparator<byte []> comparator;
+
+ /*
+ * Shutdown default constructor
+ */
+ @SuppressWarnings("unused")
+ private BlockIndex() {
+ this(null);
+ }
+
+
+ /**
+ * @param c comparator used to compare keys.
+ */
+ BlockIndex(final RawComparator<byte []>c) {
+ this.comparator = c;
+ // Guess that cost of three arrays + this object is 4 * 8 bytes.
+ this.size += (4 * 8);
+ }
+
+ /**
+ * @return True if block index is empty.
+ */
+ boolean isEmpty() {
+ return this.blockKeys.length <= 0;
+ }
+
+ /**
+ * Adds a new entry in the block index.
+ *
+ * @param key Last key in the block
+ * @param offset file offset where the block is stored
+ * @param dataSize the uncompressed data size
+ */
+ void add(final byte[] key, final long offset, final int dataSize) {
+ blockOffsets[count] = offset;
+ blockKeys[count] = key;
+ blockDataSizes[count] = dataSize;
+ count++;
+ this.size += (Bytes.SIZEOF_INT * 2 + key.length);
+ }
+
+ /**
+ * @param key Key to find
+ * @return Index of the block containing <code>key</code>, or -1 if this
+ * file does not contain the requested key.
+ */
+ int blockContainingKey(final byte[] key, int offset, int length) {
+ int pos = Bytes.binarySearch(blockKeys, key, offset, length, this.comparator);
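+ // A negative result is assumed to follow the Arrays.binarySearch
+ // convention, -(insertionPoint + 1); the arithmetic below recovers the
+ // insertion point and steps back to the block whose first key precedes
+ // the search key.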
+ if (pos < 0) {
+ pos ++;
+ pos *= -1;
+ if (pos == 0) {
+ // falls before the beginning of the file.
+ return -1;
+ }
+ // When switched to "first key in block" index, binarySearch now returns
+ // the block with a firstKey < key. This means the value we want is potentially
+ // in the next block.
+ pos --; // in previous block.
+
+ return pos;
+ }
+ // wow, a perfect hit, how unlikely?
+ return pos;
+ }
+
+ /*
+ * @return File midkey. Inexact. Operates on block boundaries. Does
+ * not go into blocks.
+ */
+ byte [] midkey() throws IOException {
+ int pos = ((this.count - 1)/2); // middle of the index
+ if (pos < 0) {
+ throw new IOException("HFile empty");
+ }
+ return this.blockKeys[pos];
+ }
+
+ /*
+ * Write out index. Whatever we write here must jibe with what
+ * BlockIndex#readIndex is expecting. Make sure the two ends of the
+ * index serialization match.
+ * @param o
+ * @param keys
+ * @param offsets
+ * @param sizes
+ * @param c
+ * @return Position at which we entered the index.
+ * @throws IOException
+ */
+ static long writeIndex(final FSDataOutputStream o,
+ final List<byte []> keys, final List<Long> offsets,
+ final List<Integer> sizes)
+ throws IOException {
+ long pos = o.getPos();
+ // Don't write an index if nothing in the index.
+ if (keys.size() > 0) {
+ o.write(INDEXBLOCKMAGIC);
+ // Write the index.
+ for (int i = 0; i < keys.size(); ++i) {
+ o.writeLong(offsets.get(i).longValue());
+ o.writeInt(sizes.get(i).intValue());
+ byte [] key = keys.get(i);
+ Bytes.writeByteArray(o, key);
+ }
+ }
+ return pos;
+ }
+
+ /*
+ * Read in the index that is at <code>indexOffset</code>
+ * Must match what was written by writeIndex in the Writer.close.
+ * @param in
+ * @param indexOffset
+ * @throws IOException
+ */
+ static BlockIndex readIndex(final RawComparator<byte []> c,
+ final FSDataInputStream in, final long indexOffset, final int indexSize)
+ throws IOException {
+ BlockIndex bi = new BlockIndex(c);
+ bi.blockOffsets = new long[indexSize];
+ bi.blockKeys = new byte[indexSize][];
+ bi.blockDataSizes = new int[indexSize];
+ // If index size is zero, no index was written.
+ if (indexSize > 0) {
+ in.seek(indexOffset);
+ byte [] magic = new byte[INDEXBLOCKMAGIC.length];
+ IOUtils.readFully(in, magic, 0, magic.length);
+ if (!Arrays.equals(magic, INDEXBLOCKMAGIC)) {
+ throw new IOException("Index block magic is wrong: " +
+ Arrays.toString(magic));
+ }
+ for (int i = 0; i < indexSize; ++i ) {
+ long offset = in.readLong();
+ int dataSize = in.readInt();
+ byte [] key = Bytes.readByteArray(in);
+ bi.add(key, offset, dataSize);
+ }
+ }
+ return bi;
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder();
+ sb.append("size=" + count);
+ for (int i = 0; i < count ; i++) {
+ sb.append(", ");
+ sb.append("key=").append(Bytes.toString(blockKeys[i])).
+ append(", offset=").append(blockOffsets[i]).
+ append(", dataSize=" + blockDataSizes[i]);
+ }
+ return sb.toString();
+ }
+
+ public long heapSize() {
+ return this.size;
+ }
+ }
+
+ /*
+ * Metadata for this file. Conjured by the writer. Read in by the reader.
+ */
+ static class FileInfo extends HbaseMapWritable<byte [], byte []> {
+ static final String RESERVED_PREFIX = "hfile.";
+ static final byte [] LASTKEY = Bytes.toBytes(RESERVED_PREFIX + "LASTKEY");
+ static final byte [] AVG_KEY_LEN =
+ Bytes.toBytes(RESERVED_PREFIX + "AVG_KEY_LEN");
+ static final byte [] AVG_VALUE_LEN =
+ Bytes.toBytes(RESERVED_PREFIX + "AVG_VALUE_LEN");
+ static final byte [] COMPARATOR =
+ Bytes.toBytes(RESERVED_PREFIX + "COMPARATOR");
+
+ /*
+ * Constructor.
+ */
+ FileInfo() {
+ super();
+ }
+ }
+
+ /**
+ * Get names of supported compression algorithms. The names are accepted by
+ * HFile.Writer.
+ *
+ * @return Array of strings, each represents a supported compression
+ * algorithm. Currently, the following compression algorithms are
+ * supported.
+ * <ul>
+ * <li>"none" - No compression.
+ * <li>"gz" - GZIP compression.
+ * </ul>
+ */
+ public static String[] getSupportedCompressionAlgorithms() {
+ return Compression.getSupportedAlgorithms();
+ }
+
+ // Utility methods.
+ /*
+ * @param l Long to convert to an int.
+ * @return <code>l</code> cast as an int.
+ */
+ static int longToInt(final long l) {
+ // Expecting the size() of a block not exceeding 4GB. Assuming the
+ // size() will wrap to negative integer if it exceeds 2GB (From tfile).
+ return (int)(l & 0x00000000ffffffffL);
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/io/hfile/HFileScanner.java b/src/java/org/apache/hadoop/hbase/io/hfile/HFileScanner.java
new file mode 100644
index 0000000..6b9673d
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/io/hfile/HFileScanner.java
@@ -0,0 +1,121 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+
+import org.apache.hadoop.hbase.KeyValue;
+
+/**
+ * A scanner allows you to position yourself within a HFile and
+ * scan through it. It allows you to reposition yourself as well.
+ *
+ * <p>A scanner doesn't always have a key/value that it is pointing to
+ * when it is first created and before
+ * {@link #seekTo()}/{@link #seekTo(byte[])} are called.
+ * In this case, {@link #getKey()}/{@link #getValue()} returns null. At most
+ * other times, a key and value will be available. The general pattern is that
+ * you position the Scanner using the seekTo variants and then getKey and
+ * getValue.
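+ * <p>
+ * A minimal scan loop, sketched against the methods declared below (the
+ * <code>reader</code> variable stands for an already-opened HFile.Reader):
+ * <pre>
+ * HFileScanner scanner = reader.getScanner();
+ * if (scanner.seekTo()) {
+ *   do {
+ *     ByteBuffer key = scanner.getKey();
+ *     ByteBuffer value = scanner.getValue();
+ *     // use key and value
+ *   } while (scanner.next());
+ * }
+ * </pre>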
+ */
+public interface HFileScanner {
+ /**
+ * SeekTo or just before the passed <code>key</code>. Examine the return
+ * code to figure whether we found the key or not.
+ * Consider the key stream of all the keys in the file,
+ * <code>k[0] .. k[n]</code>, where there are n keys in the file.
+ * @param key Key to find.
+ * @return -1, if key < k[0], no position;
+ * 0, such that k[i] = key and scanner is left in position i; and
+ * 1, such that k[i] < key, and scanner is left in position i.
+ * Furthermore, there may be a k[i+1], such that k[i] < key < k[i+1]
+ * but there may not be a k[i+1], and next() will return false (EOF).
+ * @throws IOException
+ */
+ public int seekTo(byte[] key) throws IOException;
+ public int seekTo(byte[] key, int offset, int length) throws IOException;
+ /**
+ * Consider the key stream of all the keys in the file,
+ * <code>k[0] .. k[n]</code>, where there are n keys in the file.
+ * @param key Key to find
+ * @return false if key <= k[0] or true with scanner in position 'i' such
+ * that: k[i] < key. Furthermore: there may be a k[i+1], such that
+ * k[i] < key <= k[i+1] but there may also NOT be a k[i+1], and next() will
+ * return false (EOF).
+ * @throws IOException
+ */
+ public boolean seekBefore(byte [] key) throws IOException;
+ public boolean seekBefore(byte []key, int offset, int length) throws IOException;
+ /**
+ * Positions this scanner at the start of the file.
+ * @return False if empty file; i.e. a call to next would return false and
+ * the current key and value are undefined.
+ * @throws IOException
+ */
+ public boolean seekTo() throws IOException;
+ /**
+ * Scans to the next entry in the file.
+ * @return Returns false if you are at the end, otherwise true if there is more in the file.
+ * @throws IOException
+ */
+ public boolean next() throws IOException;
+ /**
+ * Gets a buffer view to the current key. You must call
+ * {@link #seekTo(byte[])} before this method.
+ * @return byte buffer for the key. The limit is set to the key size, and the
+ * position is 0, the start of the buffer view.
+ */
+ public ByteBuffer getKey();
+ /**
+ * Gets a buffer view to the current value. You must call
+ * {@link #seekTo(byte[])} before this method.
+ *
+ * @return byte buffer for the value. The limit is set to the value size, and
+ * the position is 0, the start of the buffer view.
+ */
+ public ByteBuffer getValue();
+ /**
+ * @return Instance of {@link KeyValue}.
+ */
+ public KeyValue getKeyValue();
+ /**
+ * Convenience method to get a copy of the key as a string - interpreting the
+ * bytes as UTF8. You must call {@link #seekTo(byte[])} before this method.
+ * @return key as a string
+ */
+ public String getKeyString();
+ /**
+ * Convenience method to get a copy of the value as a string - interpreting
+ * the bytes as UTF8. You must call {@link #seekTo(byte[])} before this method.
+ * @return value as a string
+ */
+ public String getValueString();
+ /**
+ * @return Reader that underlies this Scanner instance.
+ */
+ public HFile.Reader getReader();
+ /**
+ * @return True if the scanner has had one of the seek calls invoked; i.e.
+ * {@link #seekBefore(byte[])} or {@link #seekTo()} or {@link #seekTo(byte[])}.
+ * Otherwise returns false.
+ */
+ public boolean isSeeked();
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/io/hfile/SimpleBlockCache.java b/src/java/org/apache/hadoop/hbase/io/hfile/SimpleBlockCache.java
new file mode 100644
index 0000000..7f934e1
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/io/hfile/SimpleBlockCache.java
@@ -0,0 +1,56 @@
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.lang.ref.ReferenceQueue;
+import java.lang.ref.SoftReference;
+import java.nio.ByteBuffer;
+import java.util.HashMap;
+import java.util.Map;
+
+
+/**
+ * Simple soft-reference block cache for a single HFile.
+ */
+public class SimpleBlockCache implements BlockCache {
+ private static class Ref extends SoftReference<ByteBuffer> {
+ public String blockId;
+    public Ref(String blockId, ByteBuffer buf, ReferenceQueue<ByteBuffer> q) {
+ super(buf, q);
+ this.blockId = blockId;
+ }
+ }
+ private Map<String,Ref> cache =
+ new HashMap<String,Ref>();
+
+  private ReferenceQueue<ByteBuffer> q = new ReferenceQueue<ByteBuffer>();
+ public int dumps = 0;
+
+ public SimpleBlockCache() {
+ super();
+ }
+
+ void processQueue() {
+ Ref r;
+ while ( (r = (Ref)q.poll()) != null) {
+ cache.remove(r.blockId);
+ dumps++;
+ }
+ }
+
+ public synchronized int size() {
+ processQueue();
+ return cache.size();
+ }
+ @Override
+ public synchronized ByteBuffer getBlock(String blockName) {
+    processQueue(); // drop entries whose blocks have been garbage collected.
+ Ref ref = cache.get(blockName);
+ if (ref == null)
+ return null;
+ return ref.get();
+ }
+
+ @Override
+ public synchronized void cacheBlock(String blockName, ByteBuffer buf) {
+ cache.put(blockName, new Ref(blockName, buf, q));
+ }
+}
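A short, hedged illustration of the cache contract above; the block name and buffer contents are invented for the example.

  // Hypothetical SimpleBlockCache usage; not part of the patch.
  import java.nio.ByteBuffer;
  import org.apache.hadoop.hbase.io.hfile.BlockCache;
  import org.apache.hadoop.hbase.io.hfile.SimpleBlockCache;

  public class SimpleBlockCacheExample {
    public static void main(final String [] args) {
      BlockCache cache = new SimpleBlockCache();
      cache.cacheBlock("myfile.0", ByteBuffer.wrap(new byte [] {1, 2, 3}));
      // Soft references may be cleared under memory pressure, so a later get
      // can legitimately return null and callers must be ready to reload the block.
      ByteBuffer block = cache.getBlock("myfile.0");
      System.out.println(block == null? "evicted": "cached, " + block.remaining() + " bytes");
    }
  }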
diff --git a/src/java/org/apache/hadoop/hbase/io/hfile/package.html b/src/java/org/apache/hadoop/hbase/io/hfile/package.html
new file mode 100644
index 0000000..fa9244f
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/io/hfile/package.html
@@ -0,0 +1,25 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
+<html>
+
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<head />
+<body bgcolor="white">
+Provides the HBase data+index+metadata file (HFile).
+</body>
+</html>
diff --git a/src/java/org/apache/hadoop/hbase/ipc/HBaseClient.java b/src/java/org/apache/hadoop/hbase/ipc/HBaseClient.java
new file mode 100644
index 0000000..e54f50c
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/ipc/HBaseClient.java
@@ -0,0 +1,866 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.ipc;
+
+import java.net.Socket;
+import java.net.InetSocketAddress;
+import java.net.SocketTimeoutException;
+import java.net.UnknownHostException;
+import java.net.ConnectException;
+
+import java.io.IOException;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.BufferedInputStream;
+import java.io.BufferedOutputStream;
+import java.io.FilterInputStream;
+import java.io.InputStream;
+
+import java.util.Hashtable;
+import java.util.Iterator;
+import java.util.Map.Entry;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicLong;
+
+import javax.net.SocketFactory;
+
+import org.apache.commons.logging.*;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.io.ObjectWritable;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableUtils;
+import org.apache.hadoop.io.DataOutputBuffer;
+import org.apache.hadoop.ipc.RemoteException;
+import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.util.ReflectionUtils;
+
+/** A client for an IPC service. IPC calls take a single {@link Writable} as a
+ * parameter, and return a {@link Writable} as their value. A service runs on
+ * a port and is defined by a parameter class and a value class.
+ *
+ * <p>This is the org.apache.hadoop.ipc.Client renamed as HBaseClient and
+ * moved into this package so it can access package-private methods.
+ *
+ * @see HBaseServer
+ */
+public class HBaseClient {
+
+ public static final Log LOG =
+    LogFactory.getLog("org.apache.hadoop.ipc.HBaseClient");
+ protected Hashtable<ConnectionId, Connection> connections =
+ new Hashtable<ConnectionId, Connection>();
+
+ protected Class<? extends Writable> valueClass; // class of call values
+ protected int counter; // counter for call ids
+ protected AtomicBoolean running = new AtomicBoolean(true); // if client runs
+ final protected Configuration conf;
+  final protected int maxIdleTime; // connections will be culled if idle for
+                                   // more than maxIdleTime msecs
+  final protected int maxRetries; // the max. no. of retries for socket connections
+  protected boolean tcpNoDelay; // if true, disable Nagle's algorithm
+  protected int pingInterval; // how often to send a ping to the server, in msecs
+
+ protected SocketFactory socketFactory; // how to create sockets
+ private int refCount = 1;
+
+ final private static String PING_INTERVAL_NAME = "ipc.ping.interval";
+ final static int DEFAULT_PING_INTERVAL = 60000; // 1 min
+ final static int PING_CALL_ID = -1;
+
+ /**
+ * set the ping interval value in configuration
+ *
+ * @param conf Configuration
+ * @param pingInterval the ping interval
+ */
+ final public static void setPingInterval(Configuration conf, int pingInterval) {
+ conf.setInt(PING_INTERVAL_NAME, pingInterval);
+ }
+
+ /**
+ * Get the ping interval from configuration;
+ * If not set in the configuration, return the default value.
+ *
+ * @param conf Configuration
+ * @return the ping interval
+ */
+ final static int getPingInterval(Configuration conf) {
+ return conf.getInt(PING_INTERVAL_NAME, DEFAULT_PING_INTERVAL);
+ }
+
+ /**
+ * Increment this client's reference count
+ *
+ */
+ synchronized void incCount() {
+ refCount++;
+ }
+
+ /**
+ * Decrement this client's reference count
+ *
+ */
+ synchronized void decCount() {
+ refCount--;
+ }
+
+ /**
+ * Return if this client has no reference
+ *
+ * @return true if this client has no reference; false otherwise
+ */
+ synchronized boolean isZeroReference() {
+ return refCount==0;
+ }
+
+ /** A call waiting for a value. */
+ private class Call {
+ int id; // call id
+ Writable param; // parameter
+ Writable value; // value, null if error
+ IOException error; // exception, null if value
+ boolean done; // true when call is done
+
+ protected Call(Writable param) {
+ this.param = param;
+ synchronized (HBaseClient.this) {
+ this.id = counter++;
+ }
+ }
+
+    /** Indicate that the call is complete and the
+     * value or error is available. Notifies by default. */
+ protected synchronized void callComplete() {
+ this.done = true;
+ notify(); // notify caller
+ }
+
+ /** Set the exception when there is an error.
+ * Notify the caller the call is done.
+ *
+ * @param error exception thrown by the call; either local or remote
+ */
+ public synchronized void setException(IOException error) {
+ this.error = error;
+ callComplete();
+ }
+
+ /** Set the return value when there is no error.
+ * Notify the caller the call is done.
+ *
+ * @param value return value of the call.
+ */
+ public synchronized void setValue(Writable value) {
+ this.value = value;
+ callComplete();
+ }
+ }
+
+ /** Thread that reads responses and notifies callers. Each connection owns a
+ * socket connected to a remote address. Calls are multiplexed through this
+ * socket: responses may be delivered out of order. */
+ private class Connection extends Thread {
+ private ConnectionId remoteId;
+ private Socket socket = null; // connected socket
+ private DataInputStream in;
+ private DataOutputStream out;
+
+ // currently active calls
+ private Hashtable<Integer, Call> calls = new Hashtable<Integer, Call>();
+ private AtomicLong lastActivity = new AtomicLong();// last I/O activity time
+ protected AtomicBoolean shouldCloseConnection = new AtomicBoolean(); // indicate if the connection is closed
+ private IOException closeException; // close reason
+
+ public Connection(InetSocketAddress address) throws IOException {
+ this(new ConnectionId(address, null));
+ }
+
+ public Connection(ConnectionId remoteId) throws IOException {
+ if (remoteId.getAddress().isUnresolved()) {
+ throw new UnknownHostException("unknown host: " +
+ remoteId.getAddress().getHostName());
+ }
+ this.remoteId = remoteId;
+ UserGroupInformation ticket = remoteId.getTicket();
+ this.setName("IPC Client (" + socketFactory.hashCode() +") connection to " +
+ remoteId.getAddress().toString() +
+ ((ticket==null)?" from an unknown user": (" from " + ticket.getUserName())));
+ this.setDaemon(true);
+ }
+
+ /** Update lastActivity with the current time. */
+ private void touch() {
+ lastActivity.set(System.currentTimeMillis());
+ }
+
+ /**
+ * Add a call to this connection's call queue and notify
+ * a listener; synchronized.
+ * Returns false if called during shutdown.
+ * @param call to add
+ * @return true if the call was added.
+ */
+ protected synchronized boolean addCall(Call call) {
+ if (shouldCloseConnection.get())
+ return false;
+ calls.put(call.id, call);
+ notify();
+ return true;
+ }
+
+    /** This class sends a ping to the remote side when a read times out.
+     * If no failure is detected, it retries until at least
+     * a byte is read.
+ */
+ private class PingInputStream extends FilterInputStream {
+ /* constructor */
+ protected PingInputStream(InputStream in) {
+ super(in);
+ }
+
+ /* Process timeout exception
+ * if the connection is not going to be closed, send a ping.
+ * otherwise, throw the timeout exception.
+ */
+ private void handleTimeout(SocketTimeoutException e) throws IOException {
+ if (shouldCloseConnection.get() || !running.get()) {
+ throw e;
+ }
+ sendPing();
+ }
+
+ /** Read a byte from the stream.
+ * Send a ping if timeout on read. Retries if no failure is detected
+ * until a byte is read.
+ * @throws IOException for any IO problem other than socket timeout
+ */
+ @Override
+ public int read() throws IOException {
+ do {
+ try {
+ return super.read();
+ } catch (SocketTimeoutException e) {
+ handleTimeout(e);
+ }
+ } while (true);
+ }
+
+ /** Read bytes into a buffer starting from offset <code>off</code>
+ * Send a ping if timeout on read. Retries if no failure is detected
+ * until a byte is read.
+ *
+ * @return the total number of bytes read; -1 if the connection is closed.
+ */
+ @Override
+ public int read(byte[] buf, int off, int len) throws IOException {
+ do {
+ try {
+ return super.read(buf, off, len);
+ } catch (SocketTimeoutException e) {
+ handleTimeout(e);
+ }
+ } while (true);
+ }
+ }
+
+ /** Connect to the server and set up the I/O streams. It then sends
+ * a header to the server and starts
+ * the connection thread that waits for responses.
+ */
+ protected synchronized void setupIOstreams() {
+ if (socket != null || shouldCloseConnection.get()) {
+ return;
+ }
+
+ short ioFailures = 0;
+ short timeoutFailures = 0;
+ try {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Connecting to "+remoteId.getAddress());
+ }
+ while (true) {
+ try {
+ this.socket = socketFactory.createSocket();
+ this.socket.setTcpNoDelay(tcpNoDelay);
+ // connection time out is 20s
+ this.socket.connect(remoteId.getAddress(), 20000);
+ this.socket.setSoTimeout(pingInterval);
+ break;
+ } catch (SocketTimeoutException toe) {
+ /* The max number of retries is 45,
+ * which amounts to 20s*45 = 15 minutes retries.
+ */
+ handleConnectionFailure(timeoutFailures++, 45, toe);
+ } catch (IOException ie) {
+ handleConnectionFailure(ioFailures++, maxRetries, ie);
+ }
+ }
+ this.in = new DataInputStream(new BufferedInputStream
+ (new PingInputStream(NetUtils.getInputStream(socket))));
+ this.out = new DataOutputStream
+ (new BufferedOutputStream(NetUtils.getOutputStream(socket)));
+ writeHeader();
+
+ // update last activity time
+ touch();
+
+ // start the receiver thread after the socket connection has been set up
+ start();
+ } catch (IOException e) {
+ markClosed(e);
+ close();
+ }
+ }
+
+ /* Handle connection failures
+ *
+ * If the current number of retries is equal to the max number of retries,
+ * stop retrying and throw the exception; Otherwise backoff 1 second and
+ * try connecting again.
+ *
+ * This Method is only called from inside setupIOstreams(), which is
+ * synchronized. Hence the sleep is synchronized; the locks will be retained.
+ *
+ * @param curRetries current number of retries
+ * @param maxRetries max number of retries allowed
+ * @param ioe failure reason
+ * @throws IOException if max number of retries is reached
+ */
+ private void handleConnectionFailure(
+ int curRetries, int maxRetries, IOException ioe) throws IOException {
+ // close the current connection
+ try {
+ socket.close();
+ } catch (IOException e) {
+ LOG.warn("Not able to close a socket", e);
+ }
+ // set socket to null so that the next call to setupIOstreams
+ // can start the process of connect all over again.
+ socket = null;
+
+ // throw the exception if the maximum number of retries is reached
+ if (curRetries >= maxRetries) {
+ throw ioe;
+ }
+
+ // otherwise back off and retry
+ try {
+ Thread.sleep(1000);
+ } catch (InterruptedException ignored) {}
+
+ LOG.info("Retrying connect to server: " + remoteId.getAddress() +
+ ". Already tried " + curRetries + " time(s).");
+ }
+
+    /* Write the header for each connection.
+ * Out is not synchronized because only the first thread does this.
+ */
+ private void writeHeader() throws IOException {
+ out.write(HBaseServer.HEADER.array());
+ out.write(HBaseServer.CURRENT_VERSION);
+ //When there are more fields we can have ConnectionHeader Writable.
+ DataOutputBuffer buf = new DataOutputBuffer();
+ ObjectWritable.writeObject(buf, remoteId.getTicket(),
+ UserGroupInformation.class, conf);
+ int bufLen = buf.getLength();
+ out.writeInt(bufLen);
+ out.write(buf.getData(), 0, bufLen);
+ }
+
+    /* Wait until someone signals us to start reading an RPC response,
+     * the connection has been idle for too long, it is marked to be closed,
+     * or the client is marked as not running.
+ *
+ * Return true if it is time to read a response; false otherwise.
+ */
+ private synchronized boolean waitForWork() {
+ if (calls.isEmpty() && !shouldCloseConnection.get() && running.get()) {
+ long timeout = maxIdleTime-
+ (System.currentTimeMillis()-lastActivity.get());
+ if (timeout>0) {
+ try {
+ wait(timeout);
+ } catch (InterruptedException e) {}
+ }
+ }
+
+ if (!calls.isEmpty() && !shouldCloseConnection.get() && running.get()) {
+ return true;
+ } else if (shouldCloseConnection.get()) {
+ return false;
+ } else if (calls.isEmpty()) { // idle connection closed or stopped
+ markClosed(null);
+ return false;
+      } else { // the client got stopped but there are still pending requests
+ markClosed((IOException)new IOException().initCause(
+ new InterruptedException()));
+ return false;
+ }
+ }
+
+ public InetSocketAddress getRemoteAddress() {
+ return remoteId.getAddress();
+ }
+
+ /* Send a ping to the server if the time elapsed
+ * since last I/O activity is equal to or greater than the ping interval
+ */
+ protected synchronized void sendPing() throws IOException {
+ long curTime = System.currentTimeMillis();
+ if ( curTime - lastActivity.get() >= pingInterval) {
+ lastActivity.set(curTime);
+ synchronized (out) {
+ out.writeInt(PING_CALL_ID);
+ out.flush();
+ }
+ }
+ }
+
+ @Override
+ public void run() {
+ if (LOG.isDebugEnabled())
+ LOG.debug(getName() + ": starting, having connections "
+ + connections.size());
+
+ while (waitForWork()) {//wait here for work - read or close connection
+ receiveResponse();
+ }
+
+ close();
+
+ if (LOG.isDebugEnabled())
+ LOG.debug(getName() + ": stopped, remaining connections "
+ + connections.size());
+ }
+
+ /** Initiates a call by sending the parameter to the remote server.
+ * Note: this is not called from the Connection thread, but by other
+ * threads.
+ * @param call
+ */
+ public void sendParam(Call call) {
+ if (shouldCloseConnection.get()) {
+ return;
+ }
+
+ DataOutputBuffer d=null;
+ try {
+ synchronized (this.out) {
+ if (LOG.isDebugEnabled())
+ LOG.debug(getName() + " sending #" + call.id);
+
+ //for serializing the
+ //data to be written
+ d = new DataOutputBuffer();
+ d.writeInt(call.id);
+ call.param.write(d);
+ byte[] data = d.getData();
+ int dataLength = d.getLength();
+ out.writeInt(dataLength); //first put the data length
+ out.write(data, 0, dataLength);//write the data
+ out.flush();
+ }
+ } catch(IOException e) {
+ markClosed(e);
+ } finally {
+ //the buffer is just an in-memory buffer, but it is still polite to
+ // close early
+ IOUtils.closeStream(d);
+ }
+ }
+
+ /* Receive a response.
+     * Because there is only one receiver, no synchronization on 'in' is needed.
+ */
+ private void receiveResponse() {
+ if (shouldCloseConnection.get()) {
+ return;
+ }
+ touch();
+
+ try {
+ int id = in.readInt(); // try to read an id
+
+ if (LOG.isDebugEnabled())
+ LOG.debug(getName() + " got value #" + id);
+
+ Call call = calls.remove(id);
+
+ boolean isError = in.readBoolean(); // read if error
+ if (isError) {
+ call.setException(new RemoteException( WritableUtils.readString(in),
+ WritableUtils.readString(in)));
+ } else {
+ Writable value =
+ (Writable) ReflectionUtils.newInstance(valueClass, conf);
+ value.readFields(in); // read value
+ call.setValue(value);
+ }
+ } catch (IOException e) {
+ markClosed(e);
+ }
+ }
+
+ private synchronized void markClosed(IOException e) {
+ if (shouldCloseConnection.compareAndSet(false, true)) {
+ closeException = e;
+ notifyAll();
+ }
+ }
+
+ /** Close the connection. */
+ private synchronized void close() {
+ if (!shouldCloseConnection.get()) {
+ LOG.error("The connection is not in the closed state");
+ return;
+ }
+
+ // release the resources
+      // first thing to do: take the connection out of the connection list
+ synchronized (connections) {
+ if (connections.get(remoteId) == this) {
+ connections.remove(remoteId);
+ }
+ }
+
+ // close the streams and therefore the socket
+ IOUtils.closeStream(out);
+ IOUtils.closeStream(in);
+
+ // clean up all calls
+ if (closeException == null) {
+ if (!calls.isEmpty()) {
+ LOG.warn(
+ "A connection is closed for no cause and calls are not empty");
+
+ // clean up calls anyway
+ closeException = new IOException("Unexpected closed connection");
+ cleanupCalls();
+ }
+ } else {
+ // log the info
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("closing ipc connection to " + remoteId.address + ": " +
+ closeException.getMessage(),closeException);
+ }
+
+ // cleanup calls
+ cleanupCalls();
+ }
+ if (LOG.isDebugEnabled())
+ LOG.debug(getName() + ": closed");
+ }
+
+ /* Cleanup all calls and mark them as done */
+ private void cleanupCalls() {
+ Iterator<Entry<Integer, Call>> itor = calls.entrySet().iterator() ;
+ while (itor.hasNext()) {
+ Call c = itor.next().getValue();
+ c.setException(closeException); // local exception
+ itor.remove();
+ }
+ }
+ }
+
+ /** Call implementation used for parallel calls. */
+ private class ParallelCall extends Call {
+ private ParallelResults results;
+ protected int index;
+
+ public ParallelCall(Writable param, ParallelResults results, int index) {
+ super(param);
+ this.results = results;
+ this.index = index;
+ }
+
+ /** Deliver result to result collector. */
+ @Override
+ protected void callComplete() {
+ results.callComplete(this);
+ }
+ }
+
+ /** Result collector for parallel calls. */
+ private static class ParallelResults {
+ protected Writable[] values;
+ protected int size;
+ protected int count;
+
+ public ParallelResults(int size) {
+ this.values = new Writable[size];
+ this.size = size;
+ }
+
+ /**
+ * Collect a result.
+ * @param call
+ */
+ public synchronized void callComplete(ParallelCall call) {
+ values[call.index] = call.value; // store the value
+ count++; // count it
+ if (count == size) // if all values are in
+ notify(); // then notify waiting caller
+ }
+ }
+
+ /**
+ * Construct an IPC client whose values are of the given {@link Writable}
+ * class.
+ * @param valueClass
+ * @param conf
+ * @param factory
+ */
+ public HBaseClient(Class<? extends Writable> valueClass, Configuration conf,
+ SocketFactory factory) {
+ this.valueClass = valueClass;
+ this.maxIdleTime =
+ conf.getInt("ipc.client.connection.maxidletime", 10000); //10s
+ this.maxRetries = conf.getInt("ipc.client.connect.max.retries", 10);
+ this.tcpNoDelay = conf.getBoolean("ipc.client.tcpnodelay", false);
+ this.pingInterval = getPingInterval(conf);
+ if (LOG.isDebugEnabled()) {
+      LOG.debug("The ping interval is " + this.pingInterval + " ms.");
+ }
+ this.conf = conf;
+ this.socketFactory = factory;
+ }
+
+ /**
+ * Construct an IPC client with the default SocketFactory
+ * @param valueClass
+ * @param conf
+ */
+ public HBaseClient(Class<? extends Writable> valueClass, Configuration conf) {
+ this(valueClass, conf, NetUtils.getDefaultSocketFactory(conf));
+ }
+
+ /** Return the socket factory of this client
+ *
+ * @return this client's socket factory
+ */
+ SocketFactory getSocketFactory() {
+ return socketFactory;
+ }
+
+ /** Stop all threads related to this client. No further calls may be made
+ * using this client. */
+ public void stop() {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Stopping client");
+ }
+
+ if (!running.compareAndSet(true, false)) {
+ return;
+ }
+
+ // wake up all connections
+ synchronized (connections) {
+ for (Connection conn : connections.values()) {
+ conn.interrupt();
+ }
+ }
+
+ // wait until all connections are closed
+ while (!connections.isEmpty()) {
+ try {
+ Thread.sleep(100);
+ } catch (InterruptedException e) {
+ }
+ }
+ }
+
+ /** Make a call, passing <code>param</code>, to the IPC server running at
+ * <code>address</code>, returning the value. Throws exceptions if there are
+ * network problems or if the remote code threw an exception.
+ * @param param
+ * @param address
+ * @return Writable
+ * @throws IOException
+ */
+ public Writable call(Writable param, InetSocketAddress address)
+ throws IOException {
+ return call(param, address, null);
+ }
+
+ public Writable call(Writable param, InetSocketAddress addr,
+ UserGroupInformation ticket)
+ throws IOException {
+ Call call = new Call(param);
+ Connection connection = getConnection(addr, ticket, call);
+ connection.sendParam(call); // send the parameter
+ synchronized (call) {
+ while (!call.done) {
+ try {
+ call.wait(); // wait for the result
+ } catch (InterruptedException ignored) {}
+ }
+
+ if (call.error != null) {
+ if (call.error instanceof RemoteException) {
+ call.error.fillInStackTrace();
+ throw call.error;
+ }
+ // local exception
+ throw wrapException(addr, call.error);
+ }
+ return call.value;
+ }
+ }
+
+ /**
+ * Take an IOException and the address we were trying to connect to
+ * and return an IOException with the input exception as the cause.
+ * The new exception provides the stack trace of the place where
+ * the exception is thrown and some extra diagnostics information.
+ * If the exception is ConnectException or SocketTimeoutException,
+ * return a new one of the same type; Otherwise return an IOException.
+ *
+ * @param addr target address
+ * @param exception the relevant exception
+ * @return an exception to throw
+ */
+ private IOException wrapException(InetSocketAddress addr,
+ IOException exception) {
+ if (exception instanceof ConnectException) {
+ //connection refused; include the host:port in the error
+ return (ConnectException)new ConnectException(
+ "Call to " + addr + " failed on connection exception: " + exception)
+ .initCause(exception);
+ } else if (exception instanceof SocketTimeoutException) {
+ return (SocketTimeoutException)new SocketTimeoutException(
+ "Call to " + addr + " failed on socket timeout exception: "
+ + exception).initCause(exception);
+ } else {
+ return (IOException)new IOException(
+ "Call to " + addr + " failed on local exception: " + exception)
+ .initCause(exception);
+
+ }
+ }
+
+ /** Makes a set of calls in parallel. Each parameter is sent to the
+ * corresponding address. When all values are available, or have timed out
+ * or errored, the collected results are returned in an array. The array
+ * contains nulls for calls that timed out or errored.
+ * @param params
+ * @param addresses
+ * @return Writable[]
+ * @throws IOException
+ */
+ public Writable[] call(Writable[] params, InetSocketAddress[] addresses)
+ throws IOException {
+ if (addresses.length == 0) return new Writable[0];
+
+ ParallelResults results = new ParallelResults(params.length);
+ synchronized (results) {
+ for (int i = 0; i < params.length; i++) {
+ ParallelCall call = new ParallelCall(params[i], results, i);
+ try {
+ Connection connection = getConnection(addresses[i], null, call);
+ connection.sendParam(call); // send each parameter
+ } catch (IOException e) {
+ // log errors
+ LOG.info("Calling "+addresses[i]+" caught: " +
+ e.getMessage(),e);
+ results.size--; // wait for one fewer result
+ }
+ }
+ while (results.count != results.size) {
+ try {
+ results.wait(); // wait for all results
+ } catch (InterruptedException e) {}
+ }
+
+ return results.values;
+ }
+ }
+
+ /** Get a connection from the pool, or create a new one and add it to the
+ * pool. Connections to a given host/port are reused. */
+ private Connection getConnection(InetSocketAddress addr,
+ UserGroupInformation ticket,
+ Call call)
+ throws IOException {
+ if (!running.get()) {
+ // the client is stopped
+ throw new IOException("The client is stopped");
+ }
+ Connection connection;
+    /* We could avoid this allocation for each RPC by having a reusable
+     * ConnectionId object with a set() method. We would need to manage the
+     * refs for keys in the HashMap properly. For now it's ok.
+ */
+ ConnectionId remoteId = new ConnectionId(addr, ticket);
+ do {
+ synchronized (connections) {
+ connection = connections.get(remoteId);
+ if (connection == null) {
+ connection = new Connection(remoteId);
+ connections.put(remoteId, connection);
+ }
+ }
+ } while (!connection.addCall(call));
+
+ //we don't invoke the method below inside "synchronized (connections)"
+ //block above. The reason for that is if the server happens to be slow,
+ //it will take longer to establish a connection and that will slow the
+ //entire system down.
+ connection.setupIOstreams();
+ return connection;
+ }
+
+ /**
+ * This class holds the address and the user ticket. The client connections
+   * to servers are uniquely identified by &lt;remoteAddress, ticket&gt;.
+ */
+ private static class ConnectionId {
+ InetSocketAddress address;
+ UserGroupInformation ticket;
+
+ ConnectionId(InetSocketAddress address, UserGroupInformation ticket) {
+ this.address = address;
+ this.ticket = ticket;
+ }
+
+ InetSocketAddress getAddress() {
+ return address;
+ }
+ UserGroupInformation getTicket() {
+ return ticket;
+ }
+
+ @Override
+ public boolean equals(Object obj) {
+ if (obj instanceof ConnectionId) {
+ ConnectionId id = (ConnectionId) obj;
+ return address.equals(id.address) && ticket == id.ticket;
+        // Note: ticket is a reference comparison.
+ }
+ return false;
+ }
+
+ @Override
+ public int hashCode() {
+ return address.hashCode() ^ System.identityHashCode(ticket);
+ }
+ }
+}
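For orientation, a minimal hedged sketch of driving HBaseClient by hand (normally HBaseRPC does this for you). The address, port and the Text parameter are placeholders for the example, since a real server expects an Invocation built by the RPC layer.

  // Hypothetical direct use of HBaseClient; not part of the patch.
  import java.io.IOException;
  import java.net.InetSocketAddress;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.io.HbaseObjectWritable;
  import org.apache.hadoop.hbase.ipc.HBaseClient;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.io.Writable;

  public class HBaseClientExample {
    public static void main(final String [] args) throws IOException {
      Configuration conf = new Configuration();
      HBaseClient client = new HBaseClient(HbaseObjectWritable.class, conf);
      try {
        Writable param = new Text("ping");  // placeholder; real callers send an Invocation
        Writable value = client.call(param, new InetSocketAddress("localhost", 60020));
        System.out.println("got " + value);
      } finally {
        client.stop();  // interrupts connections and waits for them to close
      }
    }
  }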
diff --git a/src/java/org/apache/hadoop/hbase/ipc/HBaseRPC.java b/src/java/org/apache/hadoop/hbase/ipc/HBaseRPC.java
new file mode 100644
index 0000000..8b16555
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/ipc/HBaseRPC.java
@@ -0,0 +1,681 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.ipc;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.lang.reflect.Array;
+import java.lang.reflect.InvocationHandler;
+import java.lang.reflect.InvocationTargetException;
+import java.lang.reflect.Method;
+import java.lang.reflect.Proxy;
+import java.net.ConnectException;
+import java.net.InetSocketAddress;
+import java.net.SocketTimeoutException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Comparator;
+import java.util.HashMap;
+import java.util.Map;
+
+import javax.net.SocketFactory;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.client.RetriesExhaustedException;
+import org.apache.hadoop.hbase.io.HbaseObjectWritable;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.ipc.VersionedProtocol;
+import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.security.UserGroupInformation;
+
+/** A simple RPC mechanism.
+ *
+ * This is a local hbase copy of the hadoop RPC so we can do things like
+ * address HADOOP-414 for hbase-only and try other hbase-specific
+ * optimizations like using our own version of ObjectWritable. Class has been
+ * renamed to avoid confusing it w/ hadoop versions.
+ * <p>
+ *
+ *
+ * A <i>protocol</i> is a Java interface. All parameters and return types must
+ * be one of:
+ *
+ * <ul> <li>a primitive type, <code>boolean</code>, <code>byte</code>,
+ * <code>char</code>, <code>short</code>, <code>int</code>, <code>long</code>,
+ * <code>float</code>, <code>double</code>, or <code>void</code>; or</li>
+ *
+ * <li>a {@link String}; or</li>
+ *
+ * <li>a {@link Writable}; or</li>
+ *
+ * <li>an array of the above types</li> </ul>
+ *
+ * All methods in the protocol should throw only IOException. No field data of
+ * the protocol instance is transmitted.
+ */
+public class HBaseRPC {
+  // The logger is left in the hadoop ipc package (but keeps this class's name)
+  // so that we don't get the logging of this class's invocations when we do a
+  // blanket enabling of DEBUG on the o.a.h.h. package.
+ protected static final Log LOG =
+ LogFactory.getLog("org.apache.hadoop.ipc.HbaseRPC");
+
+ private HBaseRPC() {
+ super();
+ } // no public ctor
+
+
+ /** A method invocation, including the method name and its parameters.*/
+ private static class Invocation implements Writable, Configurable {
+ // Here, for hbase, we maintain two static maps of method names to code and
+ // vice versa.
+ private static final Map<Byte, String> CODE_TO_METHODNAME =
+ new HashMap<Byte, String>();
+ private static final Map<String, Byte> METHODNAME_TO_CODE =
+ new HashMap<String, Byte>();
+ // Special code that means 'not-encoded'.
+ private static final byte NOT_ENCODED = 0;
+ static {
+ byte code = NOT_ENCODED + 1;
+ code = addToMap(VersionedProtocol.class, code);
+ code = addToMap(HMasterInterface.class, code);
+ code = addToMap(HMasterRegionInterface.class, code);
+ code = addToMap(TransactionalRegionInterface.class, code);
+ }
+ // End of hbase modifications.
+
+ private String methodName;
+ @SuppressWarnings("unchecked")
+ private Class[] parameterClasses;
+ private Object[] parameters;
+ private Configuration conf;
+
+ /** default constructor */
+ public Invocation() {
+ super();
+ }
+
+ /**
+ * @param method
+ * @param parameters
+ */
+ public Invocation(Method method, Object[] parameters) {
+ this.methodName = method.getName();
+ this.parameterClasses = method.getParameterTypes();
+ this.parameters = parameters;
+ }
+
+ /** @return The name of the method invoked. */
+ public String getMethodName() { return methodName; }
+
+ /** @return The parameter classes. */
+ @SuppressWarnings("unchecked")
+ public Class[] getParameterClasses() { return parameterClasses; }
+
+ /** @return The parameter instances. */
+ public Object[] getParameters() { return parameters; }
+
+ public void readFields(DataInput in) throws IOException {
+ byte code = in.readByte();
+ methodName = CODE_TO_METHODNAME.get(Byte.valueOf(code));
+ parameters = new Object[in.readInt()];
+ parameterClasses = new Class[parameters.length];
+ HbaseObjectWritable objectWritable = new HbaseObjectWritable();
+ for (int i = 0; i < parameters.length; i++) {
+ parameters[i] = HbaseObjectWritable.readObject(in, objectWritable,
+ this.conf);
+ parameterClasses[i] = objectWritable.getDeclaredClass();
+ }
+ }
+
+ public void write(DataOutput out) throws IOException {
+ writeMethodNameCode(out, this.methodName);
+ out.writeInt(parameterClasses.length);
+ for (int i = 0; i < parameterClasses.length; i++) {
+ HbaseObjectWritable.writeObject(out, parameters[i], parameterClasses[i],
+ conf);
+ }
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder buffer = new StringBuilder(256);
+ buffer.append(methodName);
+ buffer.append("(");
+ for (int i = 0; i < parameters.length; i++) {
+ if (i != 0)
+ buffer.append(", ");
+ buffer.append(parameters[i]);
+ }
+ buffer.append(")");
+ return buffer.toString();
+ }
+
+ public void setConf(Configuration conf) {
+ this.conf = conf;
+ }
+
+ public Configuration getConf() {
+ return this.conf;
+ }
+
+ // Hbase additions.
+ private static void addToMap(final String name, final byte code) {
+ if (METHODNAME_TO_CODE.containsKey(name)) {
+ return;
+ }
+ METHODNAME_TO_CODE.put(name, Byte.valueOf(code));
+ CODE_TO_METHODNAME.put(Byte.valueOf(code), name);
+ }
+
+ /*
+ * @param c Class whose methods we'll add to the map of methods to codes
+ * (and vice versa).
+ * @param code Current state of the byte code.
+ * @return State of <code>code</code> when this method is done.
+ */
+ private static byte addToMap(final Class<?> c, final byte code) {
+ byte localCode = code;
+ Method [] methods = c.getMethods();
+ // There are no guarantees about the order in which items are returned in
+ // so do a sort (Was seeing that sort was one way on one server and then
+ // another on different server).
+ Arrays.sort(methods, new Comparator<Method>() {
+ public int compare(Method left, Method right) {
+ return left.getName().compareTo(right.getName());
+ }
+ });
+ for (int i = 0; i < methods.length; i++) {
+ addToMap(methods[i].getName(), localCode++);
+ }
+ return localCode;
+ }
+
+ /*
+     * Write out the code byte for the passed method name.
+     * @param out
+     * @param methodname
+ * @throws IOException
+ */
+ static void writeMethodNameCode(final DataOutput out, final String methodname)
+ throws IOException {
+ Byte code = METHODNAME_TO_CODE.get(methodname);
+ if (code == null) {
+ LOG.error("Unsupported type " + methodname);
+ throw new UnsupportedOperationException("No code for unexpected " +
+ methodname);
+ }
+ out.writeByte(code.byteValue());
+ }
+ // End of hbase additions.
+ }
+
+ /* Cache a client using its socket factory as the hash key */
+ static private class ClientCache {
+ private Map<SocketFactory, HBaseClient> clients =
+ new HashMap<SocketFactory, HBaseClient>();
+
+ protected ClientCache() {}
+
+ /**
+ * Construct & cache an IPC client with the user-provided SocketFactory
+ * if no cached client exists.
+ *
+ * @param conf Configuration
+ * @return an IPC client
+ */
+ protected synchronized HBaseClient getClient(Configuration conf,
+ SocketFactory factory) {
+ // Construct & cache client. The configuration is only used for timeout,
+ // and Clients have connection pools. So we can either (a) lose some
+ // connection pooling and leak sockets, or (b) use the same timeout for all
+ // configurations. Since the IPC is usually intended globally, not
+ // per-job, we choose (a).
+ HBaseClient client = clients.get(factory);
+ if (client == null) {
+ // Make an hbase client instead of hadoop Client.
+ client = new HBaseClient(HbaseObjectWritable.class, conf, factory);
+ clients.put(factory, client);
+ } else {
+ client.incCount();
+ }
+ return client;
+ }
+
+ /**
+ * Construct & cache an IPC client with the default SocketFactory
+ * if no cached client exists.
+ *
+ * @param conf Configuration
+ * @return an IPC client
+ */
+ protected synchronized HBaseClient getClient(Configuration conf) {
+ return getClient(conf, SocketFactory.getDefault());
+ }
+
+ /**
+     * Stop an RPC client connection.
+     * An RPC client is closed only when its reference count becomes zero.
+ */
+ protected void stopClient(HBaseClient client) {
+ synchronized (this) {
+ client.decCount();
+ if (client.isZeroReference()) {
+ clients.remove(client.getSocketFactory());
+ }
+ }
+ if (client.isZeroReference()) {
+ client.stop();
+ }
+ }
+ }
+
+ protected final static ClientCache CLIENTS = new ClientCache();
+
+ private static class Invoker implements InvocationHandler {
+ private InetSocketAddress address;
+ private UserGroupInformation ticket;
+ private HBaseClient client;
+ private boolean isClosed = false;
+
+ /**
+ * @param address
+ * @param ticket
+ * @param conf
+ * @param factory
+ */
+ public Invoker(InetSocketAddress address, UserGroupInformation ticket,
+ Configuration conf, SocketFactory factory) {
+ this.address = address;
+ this.ticket = ticket;
+ this.client = CLIENTS.getClient(conf, factory);
+ }
+
+ public Object invoke(Object proxy, Method method, Object[] args)
+ throws Throwable {
+ final boolean logDebug = LOG.isDebugEnabled();
+ long startTime = 0;
+ if (logDebug) {
+ startTime = System.currentTimeMillis();
+ }
+ HbaseObjectWritable value = (HbaseObjectWritable)
+ client.call(new Invocation(method, args), address, ticket);
+ if (logDebug) {
+ long callTime = System.currentTimeMillis() - startTime;
+ LOG.debug("Call: " + method.getName() + " " + callTime);
+ }
+ return value.get();
+ }
+
+ /* close the IPC client that's responsible for this invoker's RPCs */
+ synchronized protected void close() {
+ if (!isClosed) {
+ isClosed = true;
+ CLIENTS.stopClient(client);
+ }
+ }
+ }
+
+ /**
+ * A version mismatch for the RPC protocol.
+ */
+ @SuppressWarnings("serial")
+ public static class VersionMismatch extends IOException {
+ private String interfaceName;
+ private long clientVersion;
+ private long serverVersion;
+
+ /**
+ * Create a version mismatch exception
+ * @param interfaceName the name of the protocol mismatch
+ * @param clientVersion the client's version of the protocol
+ * @param serverVersion the server's version of the protocol
+ */
+ public VersionMismatch(String interfaceName, long clientVersion,
+ long serverVersion) {
+ super("Protocol " + interfaceName + " version mismatch. (client = " +
+ clientVersion + ", server = " + serverVersion + ")");
+ this.interfaceName = interfaceName;
+ this.clientVersion = clientVersion;
+ this.serverVersion = serverVersion;
+ }
+
+ /**
+ * Get the interface name
+ * @return the java class name
+ * (eg. org.apache.hadoop.mapred.InterTrackerProtocol)
+ */
+ public String getInterfaceName() {
+ return interfaceName;
+ }
+
+ /**
+ * @return the client's preferred version
+ */
+ public long getClientVersion() {
+ return clientVersion;
+ }
+
+ /**
+ * @return the server's agreed to version.
+ */
+ public long getServerVersion() {
+ return serverVersion;
+ }
+ }
+
+ /**
+ * @param protocol
+ * @param clientVersion
+ * @param addr
+ * @param conf
+ * @param maxAttempts
+ * @return proxy
+ * @throws IOException
+ */
+ @SuppressWarnings("unchecked")
+ public static VersionedProtocol waitForProxy(Class protocol,
+ long clientVersion,
+ InetSocketAddress addr,
+ Configuration conf,
+ int maxAttempts,
+ long timeout
+ ) throws IOException {
+    // HBase does a limited number of reconnects, which is different from hadoop.
+ long startTime = System.currentTimeMillis();
+ IOException ioe;
+ int reconnectAttempts = 0;
+ while (true) {
+ try {
+ return getProxy(protocol, clientVersion, addr, conf);
+      } catch(ConnectException se) { // the server has not been started
+ LOG.info("Server at " + addr + " not available yet, Zzzzz...");
+ ioe = se;
+ if (maxAttempts >= 0 && ++reconnectAttempts >= maxAttempts) {
+ LOG.info("Server at " + addr + " could not be reached after " +
+ reconnectAttempts + " tries, giving up.");
+ throw new RetriesExhaustedException(addr.toString(), "unknown".getBytes(),
+ "unknown".getBytes(), reconnectAttempts - 1,
+ new ArrayList<Throwable>());
+ }
+      } catch(SocketTimeoutException te) { // the server is busy
+ LOG.info("Problem connecting to server: " + addr);
+ ioe = te;
+ }
+ // check if timed out
+ if (System.currentTimeMillis()-timeout >= startTime) {
+ throw ioe;
+ }
+
+ // wait for retry
+ try {
+ Thread.sleep(1000);
+ } catch (InterruptedException ie) {
+ // IGNORE
+ }
+ }
+ }
+
+ /**
+ * Construct a client-side proxy object that implements the named protocol,
+ * talking to a server at the named address.
+ *
+ * @param protocol
+ * @param clientVersion
+ * @param addr
+ * @param conf
+ * @param factory
+ * @return proxy
+ * @throws IOException
+ */
+ public static VersionedProtocol getProxy(Class<?> protocol,
+ long clientVersion, InetSocketAddress addr, Configuration conf,
+ SocketFactory factory) throws IOException {
+ return getProxy(protocol, clientVersion, addr, null, conf, factory);
+ }
+
+ /**
+ * Construct a client-side proxy object that implements the named protocol,
+ * talking to a server at the named address.
+ *
+ * @param protocol
+ * @param clientVersion
+ * @param addr
+ * @param ticket
+ * @param conf
+ * @param factory
+ * @return proxy
+ * @throws IOException
+ */
+ public static VersionedProtocol getProxy(Class<?> protocol,
+ long clientVersion, InetSocketAddress addr, UserGroupInformation ticket,
+ Configuration conf, SocketFactory factory)
+ throws IOException {
+ VersionedProtocol proxy =
+ (VersionedProtocol) Proxy.newProxyInstance(
+ protocol.getClassLoader(), new Class[] { protocol },
+ new Invoker(addr, ticket, conf, factory));
+ long serverVersion = proxy.getProtocolVersion(protocol.getName(),
+ clientVersion);
+ if (serverVersion == clientVersion) {
+ return proxy;
+ }
+ throw new VersionMismatch(protocol.getName(), clientVersion,
+ serverVersion);
+ }
+
+ /**
+ * Construct a client-side proxy object with the default SocketFactory
+ *
+ * @param protocol
+ * @param clientVersion
+ * @param addr
+ * @param conf
+ * @return a proxy instance
+ * @throws IOException
+ */
+ public static VersionedProtocol getProxy(Class<?> protocol,
+ long clientVersion, InetSocketAddress addr, Configuration conf)
+ throws IOException {
+
+ return getProxy(protocol, clientVersion, addr, conf, NetUtils
+ .getDefaultSocketFactory(conf));
+ }
+
+ /**
+ * Stop this proxy and release its invoker's resource
+ * @param proxy the proxy to be stopped
+ */
+ public static void stopProxy(VersionedProtocol proxy) {
+ if (proxy!=null) {
+ ((Invoker)Proxy.getInvocationHandler(proxy)).close();
+ }
+ }
+
+ /**
+ * Expert: Make multiple, parallel calls to a set of servers.
+ *
+ * @param method
+ * @param params
+ * @param addrs
+ * @param conf
+ * @return values
+ * @throws IOException
+ */
+ public static Object[] call(Method method, Object[][] params,
+ InetSocketAddress[] addrs, Configuration conf)
+ throws IOException {
+
+ Invocation[] invocations = new Invocation[params.length];
+ for (int i = 0; i < params.length; i++)
+ invocations[i] = new Invocation(method, params[i]);
+ HBaseClient client = CLIENTS.getClient(conf);
+ try {
+ Writable[] wrappedValues = client.call(invocations, addrs);
+
+ if (method.getReturnType() == Void.TYPE) {
+ return null;
+ }
+
+ Object[] values =
+ (Object[])Array.newInstance(method.getReturnType(), wrappedValues.length);
+ for (int i = 0; i < values.length; i++)
+ if (wrappedValues[i] != null)
+ values[i] = ((HbaseObjectWritable)wrappedValues[i]).get();
+
+ return values;
+ } finally {
+ CLIENTS.stopClient(client);
+ }
+ }
+
+ /**
+ * Construct a server for a protocol implementation instance listening on a
+ * port and address.
+ *
+ * @param instance
+ * @param bindAddress
+ * @param port
+ * @param conf
+ * @return Server
+ * @throws IOException
+ */
+ public static Server getServer(final Object instance, final String bindAddress, final int port, Configuration conf)
+ throws IOException {
+ return getServer(instance, bindAddress, port, 1, false, conf);
+ }
+
+ /**
+ * Construct a server for a protocol implementation instance listening on a
+ * port and address.
+ *
+ * @param instance
+ * @param bindAddress
+ * @param port
+ * @param numHandlers
+ * @param verbose
+ * @param conf
+ * @return Server
+ * @throws IOException
+ */
+ public static Server getServer(final Object instance, final String bindAddress, final int port,
+ final int numHandlers,
+ final boolean verbose, Configuration conf)
+ throws IOException {
+ return new Server(instance, conf, bindAddress, port, numHandlers, verbose);
+ }
+
+ /** An RPC Server. */
+ public static class Server extends HBaseServer {
+ private Object instance;
+ private Class<?> implementation;
+ private boolean verbose;
+
+ /**
+ * Construct an RPC server.
+ * @param instance the instance whose methods will be called
+ * @param conf the configuration to use
+ * @param bindAddress the address to bind on to listen for connection
+ * @param port the port to listen for connections on
+ * @throws IOException
+ */
+ public Server(Object instance, Configuration conf, String bindAddress, int port)
+ throws IOException {
+ this(instance, conf, bindAddress, port, 1, false);
+ }
+
+ private static String classNameBase(String className) {
+ String[] names = className.split("\\.", -1);
+ if (names == null || names.length == 0) {
+ return className;
+ }
+ return names[names.length-1];
+ }
+
+ /** Construct an RPC server.
+ * @param instance the instance whose methods will be called
+ * @param conf the configuration to use
+ * @param bindAddress the address to bind on to listen for connection
+ * @param port the port to listen for connections on
+ * @param numHandlers the number of method handler threads to run
+ * @param verbose whether each call should be logged
+ * @throws IOException
+ */
+ public Server(Object instance, Configuration conf, String bindAddress, int port,
+ int numHandlers, boolean verbose) throws IOException {
+ super(bindAddress, port, Invocation.class, numHandlers, conf, classNameBase(instance.getClass().getName()));
+ this.instance = instance;
+ this.implementation = instance.getClass();
+ this.verbose = verbose;
+ }
+
+ @Override
+ public Writable call(Writable param, long receivedTime) throws IOException {
+ try {
+ Invocation call = (Invocation)param;
+ if (verbose) log("Call: " + call);
+ Method method =
+ implementation.getMethod(call.getMethodName(),
+ call.getParameterClasses());
+
+ long startTime = System.currentTimeMillis();
+ Object value = method.invoke(instance, call.getParameters());
+ int processingTime = (int) (System.currentTimeMillis() - startTime);
+ int qTime = (int) (startTime-receivedTime);
+        if (LOG.isDebugEnabled()) {
+          LOG.debug("Served: " + call.getMethodName() +
+            " queueTime= " + qTime +
+            " processingTime= " + processingTime);
+        }
+        rpcMetrics.rpcQueueTime.inc(qTime);
+        rpcMetrics.rpcProcessingTime.inc(processingTime);
+ rpcMetrics.inc(call.getMethodName(), processingTime);
+ if (verbose) log("Return: "+value);
+
+ return new HbaseObjectWritable(method.getReturnType(), value);
+
+ } catch (InvocationTargetException e) {
+ Throwable target = e.getTargetException();
+ if (target instanceof IOException) {
+ throw (IOException)target;
+ }
+ IOException ioe = new IOException(target.toString());
+ ioe.setStackTrace(target.getStackTrace());
+ throw ioe;
+ } catch (Throwable e) {
+ IOException ioe = new IOException(e.toString());
+ ioe.setStackTrace(e.getStackTrace());
+ throw ioe;
+ }
+ }
+ }
+
+ protected static void log(String value) {
+ String v = value;
+ if (v != null && v.length() > 55)
+ v = v.substring(0, 55)+"...";
+ LOG.info(v);
+ }
+}
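Putting the pieces together, a hedged sketch of the client-side proxy life cycle. The address, port, attempt count and timeout are placeholders, and the example assumes HMasterInterface extends HBaseRPCProtocolVersion (so versionID is inherited), which the version-history javadoc suggests.

  // Hypothetical HBaseRPC proxy usage; not part of the patch.
  import java.io.IOException;
  import java.net.InetSocketAddress;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.ipc.HBaseRPC;
  import org.apache.hadoop.hbase.ipc.HMasterInterface;

  public class HBaseRPCExample {
    public static void main(final String [] args) throws IOException {
      Configuration conf = new Configuration();
      InetSocketAddress master = new InetSocketAddress("localhost", 60000);  // placeholder address
      HMasterInterface proxy = (HMasterInterface) HBaseRPC.waitForProxy(
        HMasterInterface.class, HMasterInterface.versionID, master, conf,
        10 /* maxAttempts */, 30 * 1000 /* timeout, ms */);
      try {
        // ... make protocol calls on proxy ...
      } finally {
        HBaseRPC.stopProxy(proxy);  // releases the cached HBaseClient once its refcount hits zero
      }
    }
  }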
diff --git a/src/java/org/apache/hadoop/hbase/ipc/HBaseRPCErrorHandler.java b/src/java/org/apache/hadoop/hbase/ipc/HBaseRPCErrorHandler.java
new file mode 100644
index 0000000..ed3c70f
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/ipc/HBaseRPCErrorHandler.java
@@ -0,0 +1,31 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.ipc;
+
+/**
+ * An interface for calling out of RPC for error conditions.
+ */
+public interface HBaseRPCErrorHandler {
+ /**
+ * Take actions on the event of an OutOfMemoryError.
+ * @param e the throwable
+   * @return whether the server should be shut down
+   */
+  public boolean checkOOME(final Throwable e);
+}
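A hedged sketch of one way to implement this interface; the stop flag is invented for the example, and a real server would wire this into its own abort path.

  // Hypothetical HBaseRPCErrorHandler implementation; not part of the patch.
  import org.apache.hadoop.hbase.ipc.HBaseRPCErrorHandler;

  public class AbortingErrorHandler implements HBaseRPCErrorHandler {
    private volatile boolean stopRequested = false;

    public boolean checkOOME(final Throwable e) {
      if (e instanceof OutOfMemoryError) {
        stopRequested = true;  // a real server would trigger its abort/shutdown path here
        return true;           // tell the RPC server to shut itself down
      }
      return false;
    }

    public boolean isStopRequested() {
      return stopRequested;
    }
  }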
diff --git a/src/java/org/apache/hadoop/hbase/ipc/HBaseRPCProtocolVersion.java b/src/java/org/apache/hadoop/hbase/ipc/HBaseRPCProtocolVersion.java
new file mode 100644
index 0000000..904d859
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/ipc/HBaseRPCProtocolVersion.java
@@ -0,0 +1,75 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.ipc;
+
+import org.apache.hadoop.ipc.VersionedProtocol;
+
+/**
+ * There is one version id for all the RPC interfaces. If any interface
+ * is changed, the versionID must be changed here.
+ */
+public interface HBaseRPCProtocolVersion extends VersionedProtocol {
+ /**
+ * Interface version.
+ *
+ * HMasterInterface version history:
+ * <ul>
+ * <li>Version was incremented to 2 when we brought the hadoop RPC local to
+ * hbase HADOOP-2495</li>
+ * <li>Version was incremented to 3 when we changed the RPC to send codes
+ * instead of actual class names (HADOOP-2519).</li>
+ * <li>Version 4 when we moved to all byte arrays (HBASE-42).</li>
+ * <li>Version 5 HBASE-576.</li>
+ * <li>Version 6 modifyTable.</li>
+ * </ul>
+ * <p>HMasterRegionInterface version history:
+ * <ul>
+ * <li>Version 2 was when the regionServerStartup was changed to return a
+ * MapWritable instead of a HbaseMapWritable as part of HBASE-82 changes.</li>
+ * <li>Version 3 was when HMsg was refactored so it could carry optional
+ * messages (HBASE-504).</li>
+ * <li>HBASE-576 we moved this to 4.</li>
+ * </ul>
+ * <p>HRegionInterface version history:
+ * <ul>
+ * <li>Upped to 5 when we added scanner caching</li>
+ * <li>HBASE-576, we moved this to 6.</li>
+ * </ul>
+ * <p>TransactionalRegionInterface version history:
+ * <ul>
+ * <li>Moved to 2 for hbase-576.</li>
+ * </ul>
+ * <p>Unified RPC version number history:
+ * <ul>
+   * <li>Version 10: initial version (had to be > all other RPC versions)</li>
+ * <li>Version 11: Changed getClosestRowBefore signature.</li>
+ * <li>Version 12: HServerLoad extensions (HBASE-1018).</li>
+ * <li>Version 13: HBASE-847</li>
+ * <li>Version 14: HBASE-900</li>
+ * <li>Version 15: HRegionInterface.exists</li>
+ * <li>Version 16: Removed HMasterRegionInterface.getRootRegionLocation and
+ * HMasterInterface.findRootRegion. We use ZooKeeper to store root region
+ * location instead.</li>
+ * <li>Version 17: Added incrementColumnValue.</li>
+ * </ul>
+ */
+ public static final long versionID = 17L;
+}
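For context, a hedged sketch of how a protocol interface hangs off this shared version id; the interface name and method are invented for the example.

  // Hypothetical protocol interface; not part of the patch.
  import java.io.IOException;
  import org.apache.hadoop.hbase.ipc.HBaseRPCProtocolVersion;

  public interface ExampleProtocol extends HBaseRPCProtocolVersion {
    // Inherits the single shared versionID (currently 17L); per the javadoc above,
    // that constant must be bumped whenever any RPC interface changes.
    String echo(String message) throws IOException;
  }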
diff --git a/src/java/org/apache/hadoop/hbase/ipc/HBaseRpcMetrics.java b/src/java/org/apache/hadoop/hbase/ipc/HBaseRpcMetrics.java
new file mode 100644
index 0000000..2eb96af
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/ipc/HBaseRpcMetrics.java
@@ -0,0 +1,114 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.ipc;
+
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.metrics.MetricsContext;
+import org.apache.hadoop.metrics.MetricsRecord;
+import org.apache.hadoop.metrics.MetricsUtil;
+import org.apache.hadoop.metrics.Updater;
+import org.apache.hadoop.metrics.util.MetricsTimeVaryingRate;
+
+/**
+ *
+ * This class is for maintaining the various RPC statistics
+ * and publishing them through the metrics interfaces.
+ * This also registers the JMX MBean for RPC.
+ * <p>
+ * This class has a number of metrics variables that are publicly accessible;
+ * these variables (objects) have methods to update their values;
+ * for example:
+ * <p> {@link #rpcQueueTime}.inc(time)
+ *
+ */
+public class HBaseRpcMetrics implements Updater {
+ private MetricsRecord metricsRecord;
+ private static Log LOG = LogFactory.getLog(HBaseRpcMetrics.class);
+
+ private Map <String, MetricsTimeVaryingRate> registry =
+ Collections.synchronizedMap(new HashMap<String, MetricsTimeVaryingRate>());
+
+ public HBaseRpcMetrics(String hostName, String port) {
+ MetricsContext context = MetricsUtil.getContext("rpc");
+ metricsRecord = MetricsUtil.createRecord(context, "metrics");
+
+ metricsRecord.setTag("port", port);
+
+ LOG.info("Initializing RPC Metrics with hostName="
+ + hostName + ", port=" + port);
+
+ context.registerUpdater(this);
+ }
+
+ /**
+ * The metrics variables are public:
+ * - they can be set directly by calling their set/inc methods
+   * - they can also be read directly - e.g. JMX does this.
+ */
+ public MetricsTimeVaryingRate rpcQueueTime =
+ new MetricsTimeVaryingRate("RpcQueueTime");
+ public MetricsTimeVaryingRate rpcProcessingTime =
+ new MetricsTimeVaryingRate("RpcProcessingTime");
+
+ private MetricsTimeVaryingRate get(String key) {
+ return registry.get(key);
+ }
+
+ private MetricsTimeVaryingRate create(String key) {
+ MetricsTimeVaryingRate newMetric = new MetricsTimeVaryingRate(key);
+ registry.put(key, newMetric);
+ return newMetric;
+ }
+
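+ /**
+ * Increment (creating on first use) the rate metric registered under the
+ * given name, typically the name of the RPC method being timed.
+ * @param name metric name
+ * @param amt amount to add, e.g. the elapsed time in milliseconds
+ */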
+ public synchronized void inc(String name, int amt) {
+ MetricsTimeVaryingRate m = get(name);
+ if (m == null) {
+ m = create(name);
+ }
+ m.inc(amt);
+ }
+
+ /**
+ * Push the metrics to the monitoring subsystem on a doUpdates() call.
+ * @param context the metrics context
+ */
+ public void doUpdates(MetricsContext context) {
+ rpcQueueTime.pushMetric(metricsRecord);
+ rpcProcessingTime.pushMetric(metricsRecord);
+
+ synchronized (registry) {
+ // Iterate through the registry to propagate the different rpc metrics.
+ for (MetricsTimeVaryingRate value : registry.values()) {
+ value.pushMetric(metricsRecord);
+ }
+ }
+ metricsRecord.update();
+ }
+
+ public void shutdown() {
+ // Nothing to do
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/ipc/HBaseServer.java b/src/java/org/apache/hadoop/hbase/ipc/HBaseServer.java
new file mode 100644
index 0000000..f3e0bb9
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/ipc/HBaseServer.java
@@ -0,0 +1,1172 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.ipc;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.net.BindException;
+import java.net.InetAddress;
+import java.net.InetSocketAddress;
+import java.net.ServerSocket;
+import java.net.Socket;
+import java.net.SocketException;
+import java.net.UnknownHostException;
+import java.nio.ByteBuffer;
+import java.nio.channels.CancelledKeyException;
+import java.nio.channels.ClosedChannelException;
+import java.nio.channels.SelectionKey;
+import java.nio.channels.Selector;
+import java.nio.channels.ServerSocketChannel;
+import java.nio.channels.SocketChannel;
+import java.nio.channels.ReadableByteChannel;
+import java.nio.channels.WritableByteChannel;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Random;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.LinkedBlockingQueue;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.ObjectWritable;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableUtils;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.util.ReflectionUtils;
+import org.apache.hadoop.util.StringUtils;
+
+/** An abstract IPC service. IPC calls take a single {@link Writable} as a
+ * parameter, and return a {@link Writable} as their value. A service runs on
+ * a port and is defined by a parameter class and a value class.
+ *
+ *
+ * <p>Copied locally so we can fix HBASE-900.
+ *
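+ * <p>A minimal sketch of a concrete server (illustrative only; the class,
+ * port and variable names are hypothetical):
+ * <pre>
+ *   public class EchoServer extends HBaseServer {
+ *     public EchoServer(Configuration conf) throws IOException {
+ *       super("0.0.0.0", 60020, ObjectWritable.class, 10, conf);
+ *     }
+ *     public Writable call(Writable param, long receiveTime) {
+ *       return param; // echo the request back to the client
+ *     }
+ *   }
+ *
+ *   EchoServer server = new EchoServer(conf);
+ *   server.start(); // begin handling calls
+ *   server.join();  // block until stop() is called
+ * </pre>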
+ * @see HBaseClient
+ */
+public abstract class HBaseServer {
+
+ /**
+ * The first four bytes of Hadoop RPC connections
+ */
+ public static final ByteBuffer HEADER = ByteBuffer.wrap("hrpc".getBytes());
+
+ // 1 : Introduce ping and server does not throw away RPCs
+ // 3 : RPC was refactored in 0.19
+ public static final byte CURRENT_VERSION = 3;
+
+ /**
+ * How many calls per handler are allowed in the queue.
+ */
+ private static final int MAX_QUEUE_SIZE_PER_HANDLER = 100;
+
+ public static final Log LOG =
+ LogFactory.getLog("org.apache.hadoop.ipc.HBaseServer");
+
+ protected static final ThreadLocal<HBaseServer> SERVER = new ThreadLocal<HBaseServer>();
+
+ /** Returns the server instance the current call is executing under, or null.
+ * May be called under {@link #call(Writable, long)} implementations, and under {@link Writable}
+ * methods of parameters and return values. Permits applications to access
+ * the server context.
+ * @return HBaseServer
+ */
+ public static HBaseServer get() {
+ return SERVER.get();
+ }
+
+ /** This is set to Call object before Handler invokes an RPC and reset
+ * after the call returns.
+ */
+ protected static final ThreadLocal<Call> CurCall = new ThreadLocal<Call>();
+
+ /** Returns the remote side IP address when invoked inside an RPC.
+ * Returns null in case of an error.
+ * @return InetAddress
+ */
+ public static InetAddress getRemoteIp() {
+ Call call = CurCall.get();
+ if (call != null) {
+ return call.connection.socket.getInetAddress();
+ }
+ return null;
+ }
+ /** Returns remote address as a string when invoked inside an RPC.
+ * Returns null in case of an error.
+ * @return String
+ */
+ public static String getRemoteAddress() {
+ InetAddress addr = getRemoteIp();
+ return (addr == null) ? null : addr.getHostAddress();
+ }
+
+ protected String bindAddress;
+ protected int port; // port we listen on
+ private int handlerCount; // number of handler threads
+ protected Class<? extends Writable> paramClass; // class of call parameters
+ protected int maxIdleTime; // the maximum idle time after
+ // which a client may be disconnected
+ protected int thresholdIdleConnections; // the number of idle connections
+ // after which we will start
+ // cleaning up idle
+ // connections
+ int maxConnectionsToNuke; // the max number of
+ // connections to nuke
+ // during a cleanup
+
+ protected HBaseRpcMetrics rpcMetrics;
+
+ protected Configuration conf;
+
+ private int maxQueueSize;
+ protected int socketSendBufferSize;
+ protected final boolean tcpNoDelay; // if true, disable Nagle's algorithm
+
+ volatile protected boolean running = true; // true while server runs
+ protected BlockingQueue<Call> callQueue; // queued calls
+
+ protected List<Connection> connectionList =
+ Collections.synchronizedList(new LinkedList<Connection>());
+ //maintain a list
+ //of client connections
+ private Listener listener = null;
+ protected Responder responder = null;
+ protected int numConnections = 0;
+ private Handler[] handlers = null;
+ protected HBaseRPCErrorHandler errorHandler = null;
+
+ /**
+ * A convenience method to bind to a given address and report
+ * better exceptions if the address is not a valid host.
+ * @param socket the socket to bind
+ * @param address the address to bind to
+ * @param backlog the number of connections allowed in the queue
+ * @throws BindException if the address can't be bound
+ * @throws UnknownHostException if the address isn't a valid host name
+ * @throws IOException other random errors from bind
+ */
+ public static void bind(ServerSocket socket, InetSocketAddress address,
+ int backlog) throws IOException {
+ try {
+ socket.bind(address, backlog);
+ } catch (BindException e) {
+ BindException bindException = new BindException("Problem binding to " + address
+ + " : " + e.getMessage());
+ bindException.initCause(e);
+ throw bindException;
+ } catch (SocketException e) {
+ // If they try to bind to a different host's address, give a better
+ // error message.
+ if ("Unresolved address".equals(e.getMessage())) {
+ throw new UnknownHostException("Invalid hostname for server: " +
+ address.getHostName());
+ }
+ throw e;
+ }
+ }
+
+ /** A call queued for handling. */
+ private static class Call {
+ protected int id; // the client's call id
+ protected Writable param; // the parameter passed
+ protected Connection connection; // connection to client
+ protected long timestamp; // the time received when response is null
+ // the time served when response is not null
+ protected ByteBuffer response; // the response for this call
+
+ public Call(int id, Writable param, Connection connection) {
+ this.id = id;
+ this.param = param;
+ this.connection = connection;
+ this.timestamp = System.currentTimeMillis();
+ this.response = null;
+ }
+
+ @Override
+ public String toString() {
+ return param.toString() + " from " + connection.toString();
+ }
+
+ public void setResponse(ByteBuffer response) {
+ this.response = response;
+ }
+ }
+
+ /** Listens on the socket. Creates jobs for the handler threads*/
+ private class Listener extends Thread {
+
+ private ServerSocketChannel acceptChannel = null; //the accept channel
+ private Selector selector = null; //the selector that we use for the server
+ private InetSocketAddress address; //the address we bind at
+ private Random rand = new Random();
+ private long lastCleanupRunTime = 0; //the last time a cleanup of
+ //idle connections was run
+ private long cleanupInterval = 10000; //the minimum interval between
+ //two cleanup runs
+ private int backlogLength = conf.getInt("ipc.server.listen.queue.size", 128);
+
+ public Listener() throws IOException {
+ address = new InetSocketAddress(bindAddress, port);
+ // Create a new server socket and set to non blocking mode
+ acceptChannel = ServerSocketChannel.open();
+ acceptChannel.configureBlocking(false);
+
+ // Bind the server socket to the local host and port
+ bind(acceptChannel.socket(), address, backlogLength);
+ port = acceptChannel.socket().getLocalPort(); //Could be an ephemeral port
+ // create a selector;
+ selector= Selector.open();
+
+ // Register accepts on the server socket with the selector.
+ acceptChannel.register(selector, SelectionKey.OP_ACCEPT);
+ this.setName("IPC Server listener on " + port);
+ this.setDaemon(true);
+ }
+ /** cleanup connections from connectionList. Choose a random range
+ * to scan and also have a limit on the number of the connections
+ * that will be cleaned up per run. The criterion for cleanup is the time
+ * for which the connection was idle. If 'force' is true then all
+ * connections will be looked at for the cleanup.
+ */
+ private void cleanupConnections(boolean force) {
+ if (force || numConnections > thresholdIdleConnections) {
+ long currentTime = System.currentTimeMillis();
+ if (!force && (currentTime - lastCleanupRunTime) < cleanupInterval) {
+ return;
+ }
+ int start = 0;
+ int end = numConnections - 1;
+ if (!force) {
+ start = rand.nextInt() % numConnections;
+ end = rand.nextInt() % numConnections;
+ int temp;
+ if (end < start) {
+ temp = start;
+ start = end;
+ end = temp;
+ }
+ }
+ int i = start;
+ int numNuked = 0;
+ while (i <= end) {
+ Connection c;
+ synchronized (connectionList) {
+ try {
+ c = connectionList.get(i);
+ } catch (Exception e) {return;}
+ }
+ if (c.timedOut(currentTime)) {
+ if (LOG.isDebugEnabled())
+ LOG.debug(getName() + ": disconnecting client " + c.getHostAddress());
+ closeConnection(c);
+ numNuked++;
+ end--;
+ c = null;
+ if (!force && numNuked == maxConnectionsToNuke) break;
+ }
+ else i++;
+ }
+ lastCleanupRunTime = System.currentTimeMillis();
+ }
+ }
+
+ @Override
+ public void run() {
+ LOG.info(getName() + ": starting");
+ SERVER.set(HBaseServer.this);
+
+ while (running) {
+ SelectionKey key = null;
+ try {
+ selector.select();
+ Iterator<SelectionKey> iter = selector.selectedKeys().iterator();
+ while (iter.hasNext()) {
+ key = iter.next();
+ iter.remove();
+ try {
+ if (key.isValid()) {
+ if (key.isAcceptable())
+ doAccept(key);
+ else if (key.isReadable())
+ doRead(key);
+ }
+ } catch (IOException e) {
+ }
+ key = null;
+ }
+ } catch (OutOfMemoryError e) {
+ if (errorHandler != null) {
+ if (errorHandler.checkOOME(e)) {
+ LOG.info(getName() + ": exiting on OOME");
+ closeCurrentConnection(key);
+ cleanupConnections(true);
+ return;
+ }
+ } else {
+ // we can run out of memory if we have too many threads
+ // log the event and sleep for a minute and give
+ // some thread(s) a chance to finish
+ LOG.warn("Out of Memory in server select", e);
+ closeCurrentConnection(key);
+ cleanupConnections(true);
+ try { Thread.sleep(60000); } catch (Exception ie) {}
+ }
+ } catch (InterruptedException e) {
+ if (running) { // unexpected -- log it
+ LOG.info(getName() + " caught: " +
+ StringUtils.stringifyException(e));
+ }
+ } catch (Exception e) {
+ closeCurrentConnection(key);
+ }
+ cleanupConnections(false);
+ }
+ LOG.info("Stopping " + this.getName());
+
+ synchronized (this) {
+ try {
+ acceptChannel.close();
+ selector.close();
+ } catch (IOException e) { }
+
+ selector= null;
+ acceptChannel= null;
+
+ // clean up all connections
+ while (!connectionList.isEmpty()) {
+ closeConnection(connectionList.remove(0));
+ }
+ }
+ }
+
+ private void closeCurrentConnection(SelectionKey key) {
+ if (key != null) {
+ Connection c = (Connection)key.attachment();
+ if (c != null) {
+ if (LOG.isDebugEnabled())
+ LOG.debug(getName() + ": disconnecting client " + c.getHostAddress());
+ closeConnection(c);
+ c = null;
+ }
+ }
+ }
+
+ InetSocketAddress getAddress() {
+ return (InetSocketAddress)acceptChannel.socket().getLocalSocketAddress();
+ }
+
+ void doAccept(SelectionKey key) throws IOException, OutOfMemoryError {
+ Connection c = null;
+ ServerSocketChannel server = (ServerSocketChannel) key.channel();
+ // accept up to 10 connections
+ for (int i=0; i<10; i++) {
+ SocketChannel channel = server.accept();
+ if (channel==null) return;
+
+ channel.configureBlocking(false);
+ channel.socket().setTcpNoDelay(tcpNoDelay);
+ SelectionKey readKey = channel.register(selector, SelectionKey.OP_READ);
+ c = new Connection(channel, System.currentTimeMillis());
+ readKey.attach(c);
+ synchronized (connectionList) {
+ connectionList.add(numConnections, c);
+ numConnections++;
+ }
+ if (LOG.isDebugEnabled())
+ LOG.debug("Server connection from " + c.toString() +
+ "; # active connections: " + numConnections +
+ "; # queued calls: " + callQueue.size());
+ }
+ }
+
+ void doRead(SelectionKey key) throws InterruptedException {
+ int count = 0;
+ Connection c = (Connection)key.attachment();
+ if (c == null) {
+ return;
+ }
+ c.setLastContact(System.currentTimeMillis());
+
+ try {
+ count = c.readAndProcess();
+ } catch (InterruptedException ieo) {
+ throw ieo;
+ } catch (Exception e) {
+ LOG.debug(getName() + ": readAndProcess threw exception " + e + ". Count of bytes read: " + count, e);
+ count = -1; //so that the (count < 0) block is executed
+ }
+ if (count < 0) {
+ if (LOG.isDebugEnabled())
+ LOG.debug(getName() + ": disconnecting client " +
+ c.getHostAddress() + ". Number of active connections: "+
+ numConnections);
+ closeConnection(c);
+ c = null;
+ }
+ else {
+ c.setLastContact(System.currentTimeMillis());
+ }
+ }
+
+ synchronized void doStop() {
+ if (selector != null) {
+ selector.wakeup();
+ Thread.yield();
+ }
+ if (acceptChannel != null) {
+ try {
+ acceptChannel.socket().close();
+ } catch (IOException e) {
+ LOG.info(getName() + ":Exception in closing listener socket. " + e);
+ }
+ }
+ }
+ }
+
+ // Sends responses of RPC back to clients.
+ private class Responder extends Thread {
+ private Selector writeSelector;
+ private int pending; // connections waiting to register
+
+ final static int PURGE_INTERVAL = 900000; // 15mins
+
+ Responder() throws IOException {
+ this.setName("IPC Server Responder");
+ this.setDaemon(true);
+ writeSelector = Selector.open(); // create a selector
+ pending = 0;
+ }
+
+ @Override
+ public void run() {
+ LOG.info(getName() + ": starting");
+ SERVER.set(HBaseServer.this);
+ long lastPurgeTime = 0; // last check for old calls.
+
+ while (running) {
+ try {
+ waitPending(); // If a channel is being registered, wait.
+ writeSelector.select(PURGE_INTERVAL);
+ Iterator<SelectionKey> iter = writeSelector.selectedKeys().iterator();
+ while (iter.hasNext()) {
+ SelectionKey key = iter.next();
+ iter.remove();
+ try {
+ if (key.isValid() && key.isWritable()) {
+ doAsyncWrite(key);
+ }
+ } catch (IOException e) {
+ LOG.info(getName() + ": doAsyncWrite threw exception " + e);
+ }
+ }
+ long now = System.currentTimeMillis();
+ if (now < lastPurgeTime + PURGE_INTERVAL) {
+ continue;
+ }
+ lastPurgeTime = now;
+ //
+ // If there were some calls that have not been sent out for a
+ // long time, discard them.
+ //
+ LOG.debug("Checking for old call responses.");
+ ArrayList<Call> calls;
+
+ // get the list of channels from list of keys.
+ synchronized (writeSelector.keys()) {
+ calls = new ArrayList<Call>(writeSelector.keys().size());
+ iter = writeSelector.keys().iterator();
+ while (iter.hasNext()) {
+ SelectionKey key = iter.next();
+ Call call = (Call)key.attachment();
+ if (call != null && key.channel() == call.connection.channel) {
+ calls.add(call);
+ }
+ }
+ }
+
+ for(Call call : calls) {
+ doPurge(call, now);
+ }
+ } catch (OutOfMemoryError e) {
+ if (errorHandler != null) {
+ if (errorHandler.checkOOME(e)) {
+ LOG.info(getName() + ": exiting on OOME");
+ return;
+ }
+ } else {
+ //
+ // we can run out of memory if we have too many threads
+ // log the event and sleep for a minute and give
+ // some thread(s) a chance to finish
+ //
+ LOG.warn("Out of Memory in server select", e);
+ try { Thread.sleep(60000); } catch (Exception ie) {}
+ }
+ } catch (Exception e) {
+ LOG.warn("Exception in Responder " +
+ StringUtils.stringifyException(e));
+ }
+ }
+ LOG.info("Stopping " + this.getName());
+ }
+
+ private void doAsyncWrite(SelectionKey key) throws IOException {
+ Call call = (Call)key.attachment();
+ if (call == null) {
+ return;
+ }
+ if (key.channel() != call.connection.channel) {
+ throw new IOException("doAsyncWrite: bad channel");
+ }
+
+ synchronized(call.connection.responseQueue) {
+ if (processResponse(call.connection.responseQueue, false)) {
+ try {
+ key.interestOps(0);
+ } catch (CancelledKeyException e) {
+ /* The Listener/reader might have closed the socket.
+ * We don't explicitly cancel the key, so not sure if this will
+ * ever fire.
+ * This warning could be removed.
+ */
+ LOG.warn("Exception while changing ops : " + e);
+ }
+ }
+ }
+ }
+
+ //
+ // Remove calls that have been pending in the responseQueue
+ // for a long time.
+ //
+ private void doPurge(Call call, long now) {
+ LinkedList<Call> responseQueue = call.connection.responseQueue;
+ synchronized (responseQueue) {
+ Iterator<Call> iter = responseQueue.listIterator(0);
+ while (iter.hasNext()) {
+ Call nextCall = iter.next();
+ if (now > nextCall.timestamp + PURGE_INTERVAL) {
+ closeConnection(nextCall.connection);
+ break;
+ }
+ }
+ }
+ }
+
+ // Processes one response. Returns true if there is no more pending
+ // data for this channel.
+ //
+ private boolean processResponse(LinkedList<Call> responseQueue,
+ boolean inHandler) throws IOException {
+ boolean error = true;
+ boolean done = false; // there is more data for this channel.
+ int numElements = 0;
+ Call call = null;
+ try {
+ synchronized (responseQueue) {
+ //
+ // If there are no items for this channel, then we are done
+ //
+ numElements = responseQueue.size();
+ if (numElements == 0) {
+ error = false;
+ return true; // no more data for this channel.
+ }
+ //
+ // Extract the first call
+ //
+ call = responseQueue.removeFirst();
+ SocketChannel channel = call.connection.channel;
+ if (LOG.isDebugEnabled()) {
+ LOG.debug(getName() + ": responding to #" + call.id + " from " +
+ call.connection);
+ }
+ //
+ // Send as much data as we can in the non-blocking fashion
+ //
+ int numBytes = channelWrite(channel, call.response);
+ if (numBytes < 0) {
+ return true;
+ }
+ if (!call.response.hasRemaining()) {
+ call.connection.decRpcCount();
+ if (numElements == 1) { // last call fully processed.
+ done = true; // no more data for this channel.
+ } else {
+ done = false; // more calls pending to be sent.
+ }
+ if (LOG.isDebugEnabled()) {
+ LOG.debug(getName() + ": responding to #" + call.id + " from " +
+ call.connection + " Wrote " + numBytes + " bytes.");
+ }
+ } else {
+ //
+ // If we were unable to write the entire response out, then
+ // insert in Selector queue.
+ //
+ call.connection.responseQueue.addFirst(call);
+
+ if (inHandler) {
+ // set the serve time when the response has to be sent later
+ call.timestamp = System.currentTimeMillis();
+
+ incPending();
+ try {
+ // Wakeup the thread blocked on select, only then can the call
+ // to channel.register() complete.
+ writeSelector.wakeup();
+ channel.register(writeSelector, SelectionKey.OP_WRITE, call);
+ } catch (ClosedChannelException e) {
+ // It's OK; the channel might be closed elsewhere.
+ done = true;
+ } finally {
+ decPending();
+ }
+ }
+ if (LOG.isDebugEnabled()) {
+ LOG.debug(getName() + ": responding to #" + call.id + " from " +
+ call.connection + " Wrote partial " + numBytes +
+ " bytes.");
+ }
+ }
+ error = false; // everything went off well
+ }
+ } finally {
+ if (error && call != null) {
+ LOG.warn(getName()+", call " + call + ": output error");
+ done = true; // error. no more data for this channel.
+ closeConnection(call.connection);
+ }
+ }
+ return done;
+ }
+
+ //
+ // Enqueue a response from the application.
+ //
+ void doRespond(Call call) throws IOException {
+ synchronized (call.connection.responseQueue) {
+ call.connection.responseQueue.addLast(call);
+ if (call.connection.responseQueue.size() == 1) {
+ processResponse(call.connection.responseQueue, true);
+ }
+ }
+ }
+
+ private synchronized void incPending() { // call waiting to be enqueued.
+ pending++;
+ }
+
+ private synchronized void decPending() { // call done enqueueing.
+ pending--;
+ notify();
+ }
+
+ private synchronized void waitPending() throws InterruptedException {
+ while (pending > 0) {
+ wait();
+ }
+ }
+ }
+
+ /** Reads calls from a connection and queues them for handling. */
+ private class Connection {
+ private boolean versionRead = false; //if initial signature and
+ //version are read
+ private boolean headerRead = false; //if the connection header that
+ //follows version is read.
+ protected SocketChannel channel;
+ private ByteBuffer data;
+ private ByteBuffer dataLengthBuffer;
+ protected LinkedList<Call> responseQueue;
+ private volatile int rpcCount = 0; // number of outstanding rpcs
+ private long lastContact;
+ private int dataLength;
+ protected Socket socket;
+ // Cache the remote host & port info so that even if the socket is
+ // disconnected, we can say where it used to connect to.
+ private String hostAddress;
+ private int remotePort;
+ protected UserGroupInformation ticket = null;
+
+ public Connection(SocketChannel channel, long lastContact) {
+ this.channel = channel;
+ this.lastContact = lastContact;
+ this.data = null;
+ this.dataLengthBuffer = ByteBuffer.allocate(4);
+ this.socket = channel.socket();
+ InetAddress addr = socket.getInetAddress();
+ if (addr == null) {
+ this.hostAddress = "*Unknown*";
+ } else {
+ this.hostAddress = addr.getHostAddress();
+ }
+ this.remotePort = socket.getPort();
+ this.responseQueue = new LinkedList<Call>();
+ if (socketSendBufferSize != 0) {
+ try {
+ socket.setSendBufferSize(socketSendBufferSize);
+ } catch (IOException e) {
+ LOG.warn("Connection: unable to set socket send buffer size to " +
+ socketSendBufferSize);
+ }
+ }
+ }
+
+ @Override
+ public String toString() {
+ return getHostAddress() + ":" + remotePort;
+ }
+
+ public String getHostAddress() {
+ return hostAddress;
+ }
+
+ public void setLastContact(long lastContact) {
+ this.lastContact = lastContact;
+ }
+
+ public long getLastContact() {
+ return lastContact;
+ }
+
+ /* Return true if the connection has no outstanding rpc */
+ private boolean isIdle() {
+ return rpcCount == 0;
+ }
+
+ /* Decrement the outstanding RPC count */
+ protected void decRpcCount() {
+ rpcCount--;
+ }
+
+ /* Increment the outstanding RPC count */
+ private void incRpcCount() {
+ rpcCount++;
+ }
+
+ protected boolean timedOut(long currentTime) {
+ return isIdle() && currentTime - lastContact > maxIdleTime;
+ }
+
+ public int readAndProcess() throws IOException, InterruptedException {
+ while (true) {
+ /* Read at most one RPC. If the header is not read completely yet
+ * then iterate until we read first RPC or until there is no data left.
+ */
+ int count = -1;
+ if (dataLengthBuffer.remaining() > 0) {
+ count = channelRead(channel, dataLengthBuffer);
+ if (count < 0 || dataLengthBuffer.remaining() > 0)
+ return count;
+ }
+
+ if (!versionRead) {
+ //Every connection is expected to send the header.
+ ByteBuffer versionBuffer = ByteBuffer.allocate(1);
+ count = channelRead(channel, versionBuffer);
+ if (count <= 0) {
+ return count;
+ }
+ int version = versionBuffer.get(0);
+
+ dataLengthBuffer.flip();
+ if (!HEADER.equals(dataLengthBuffer) || version != CURRENT_VERSION) {
+ //Warning is ok since this is not supposed to happen.
+ LOG.warn("Incorrect header or version mismatch from " +
+ hostAddress + ":" + remotePort +
+ " got version " + version +
+ " expected version " + CURRENT_VERSION);
+ return -1;
+ }
+ dataLengthBuffer.clear();
+ versionRead = true;
+ continue;
+ }
+
+ if (data == null) {
+ dataLengthBuffer.flip();
+ dataLength = dataLengthBuffer.getInt();
+
+ if (dataLength == HBaseClient.PING_CALL_ID) {
+ dataLengthBuffer.clear();
+ return 0; //ping message
+ }
+ data = ByteBuffer.allocate(dataLength);
+ incRpcCount(); // Increment the rpc count
+ }
+
+ count = channelRead(channel, data);
+
+ if (data.remaining() == 0) {
+ dataLengthBuffer.clear();
+ data.flip();
+ if (headerRead) {
+ processData();
+ data = null;
+ return count;
+ }
+ processHeader();
+ headerRead = true;
+ data = null;
+ continue;
+ }
+ return count;
+ }
+ }
+
+ /// Reads the header following version
+ private void processHeader() throws IOException {
+ /* In the current version, it is just a ticket.
+ * Later we could introduce a "ConnectionHeader" class.
+ */
+ DataInputStream in =
+ new DataInputStream(new ByteArrayInputStream(data.array()));
+ ticket = (UserGroupInformation) ObjectWritable.readObject(in, conf);
+ }
+
+ private void processData() throws IOException, InterruptedException {
+ DataInputStream dis =
+ new DataInputStream(new ByteArrayInputStream(data.array()));
+ int id = dis.readInt(); // try to read an id
+
+ if (LOG.isDebugEnabled())
+ LOG.debug(" got #" + id);
+
+ Writable param = (Writable) ReflectionUtils.newInstance(paramClass, conf);
+ param.readFields(dis);
+
+ Call call = new Call(id, param, this);
+ callQueue.put(call); // queue the call; maybe blocked here
+ }
+
+ protected synchronized void close() {
+ data = null;
+ dataLengthBuffer = null;
+ if (!channel.isOpen())
+ return;
+ try {socket.shutdownOutput();} catch(Exception e) {}
+ if (channel.isOpen()) {
+ try {channel.close();} catch(Exception e) {}
+ }
+ try {socket.close();} catch(Exception e) {}
+ }
+ }
+
+ /** Handles queued calls. */
+ private class Handler extends Thread {
+ public Handler(int instanceNumber) {
+ this.setDaemon(true);
+ this.setName("IPC Server handler "+ instanceNumber + " on " + port);
+ }
+
+ @Override
+ public void run() {
+ LOG.info(getName() + ": starting");
+ SERVER.set(HBaseServer.this);
+ final int buffersize = 16 * 1024;
+ ByteArrayOutputStream buf = new ByteArrayOutputStream(buffersize);
+ while (running) {
+ try {
+ Call call = callQueue.take(); // pop the queue; maybe blocked here
+
+ if (LOG.isDebugEnabled())
+ LOG.debug(getName() + ": has #" + call.id + " from " +
+ call.connection);
+
+ String errorClass = null;
+ String error = null;
+ Writable value = null;
+
+ CurCall.set(call);
+ UserGroupInformation previous = UserGroupInformation.getCurrentUGI();
+ UserGroupInformation.setCurrentUGI(call.connection.ticket);
+ try {
+ value = call(call.param, call.timestamp); // make the call
+ } catch (Throwable e) {
+ LOG.info(getName()+", call "+call+": error: " + e, e);
+ errorClass = e.getClass().getName();
+ error = StringUtils.stringifyException(e);
+ }
+ UserGroupInformation.setCurrentUGI(previous);
+ CurCall.set(null);
+
+ if (buf.size() > buffersize) {
+ // Allocate a new BAOS as reset only moves size back to zero but
+ // keeps the buffer of whatever the largest write was -- see
+ // hbase-900.
+ buf = new ByteArrayOutputStream(buffersize);
+ } else {
+ buf.reset();
+ }
+ DataOutputStream out = new DataOutputStream(buf);
+ out.writeInt(call.id); // write call id
+ out.writeBoolean(error != null); // write error flag
+
+ if (error == null) {
+ value.write(out);
+ } else {
+ WritableUtils.writeString(out, errorClass);
+ WritableUtils.writeString(out, error);
+ }
+ call.setResponse(ByteBuffer.wrap(buf.toByteArray()));
+ responder.doRespond(call);
+ } catch (InterruptedException e) {
+ if (running) { // unexpected -- log it
+ LOG.info(getName() + " caught: " +
+ StringUtils.stringifyException(e));
+ }
+ } catch (OutOfMemoryError e) {
+ if (errorHandler != null) {
+ if (errorHandler.checkOOME(e)) {
+ LOG.info(getName() + ": exiting on OOME");
+ return;
+ }
+ } else {
+ // rethrow if no handler
+ throw e;
+ }
+ } catch (Exception e) {
+ LOG.info(getName() + " caught: " +
+ StringUtils.stringifyException(e));
+ }
+ }
+ LOG.info(getName() + ": exiting");
+ }
+
+ }
+
+ protected HBaseServer(String bindAddress, int port,
+ Class<? extends Writable> paramClass, int handlerCount,
+ Configuration conf)
+ throws IOException
+ {
+ this(bindAddress, port, paramClass, handlerCount, conf, Integer.toString(port));
+ }
+
+ /** Constructs a server listening on the named port and address. Parameters passed must
+ * be of the named class. The <code>handlerCount</code> determines
+ * the number of handler threads that will be used to process calls.
+ *
+ */
+ protected HBaseServer(String bindAddress, int port,
+ Class<? extends Writable> paramClass, int handlerCount,
+ Configuration conf, String serverName)
+ throws IOException {
+ this.bindAddress = bindAddress;
+ this.conf = conf;
+ this.port = port;
+ this.paramClass = paramClass;
+ this.handlerCount = handlerCount;
+ this.socketSendBufferSize = 0;
+ this.maxQueueSize = handlerCount * MAX_QUEUE_SIZE_PER_HANDLER;
+ this.callQueue = new LinkedBlockingQueue<Call>(maxQueueSize);
+ this.maxIdleTime = 2*conf.getInt("ipc.client.connection.maxidletime", 1000);
+ this.maxConnectionsToNuke = conf.getInt("ipc.client.kill.max", 10);
+ this.thresholdIdleConnections = conf.getInt("ipc.client.idlethreshold", 4000);
+
+ // Start the listener here and let it bind to the port
+ listener = new Listener();
+ this.port = listener.getAddress().getPort();
+ this.rpcMetrics = new HBaseRpcMetrics(serverName,
+ Integer.toString(this.port));
+ this.tcpNoDelay = conf.getBoolean("ipc.server.tcpnodelay", false);
+
+ // Create the responder here
+ responder = new Responder();
+ }
+
+ protected void closeConnection(Connection connection) {
+ synchronized (connectionList) {
+ if (connectionList.remove(connection))
+ numConnections--;
+ }
+ connection.close();
+ }
+
+ /** Sets the socket buffer size used for responding to RPCs.
+ * @param size
+ */
+ public void setSocketSendBufSize(int size) { this.socketSendBufferSize = size; }
+
+ /** Starts the service. Must be called before any calls will be handled. */
+ public synchronized void start() {
+ responder.start();
+ listener.start();
+ handlers = new Handler[handlerCount];
+
+ for (int i = 0; i < handlerCount; i++) {
+ handlers[i] = new Handler(i);
+ handlers[i].start();
+ }
+ }
+
+ /** Stops the service. No new calls will be handled after this is called. */
+ public synchronized void stop() {
+ LOG.info("Stopping server on " + port);
+ running = false;
+ if (handlers != null) {
+ for (int i = 0; i < handlerCount; i++) {
+ if (handlers[i] != null) {
+ handlers[i].interrupt();
+ }
+ }
+ }
+ listener.interrupt();
+ listener.doStop();
+ responder.interrupt();
+ notifyAll();
+ if (this.rpcMetrics != null) {
+ this.rpcMetrics.shutdown();
+ }
+ }
+
+ /** Wait for the server to be stopped.
+ * Does not wait for all subthreads to finish.
+ * See {@link #stop()}.
+ * @throws InterruptedException
+ */
+ public synchronized void join() throws InterruptedException {
+ while (running) {
+ wait();
+ }
+ }
+
+ /**
+ * Return the socket (ip+port) on which the RPC server is listening.
+ * @return the socket (ip+port) on which the RPC server is listening.
+ */
+ public synchronized InetSocketAddress getListenerAddress() {
+ return listener.getAddress();
+ }
+
+ /** Called for each call.
+ * @param param
+ * @param receiveTime
+ * @return Writable
+ * @throws IOException
+ */
+ public abstract Writable call(Writable param, long receiveTime)
+ throws IOException;
+
+ /**
+ * The number of open RPC connections
+ * @return the number of open rpc connections
+ */
+ public int getNumOpenConnections() {
+ return numConnections;
+ }
+
+ /**
+ * The number of rpc calls in the queue.
+ * @return The number of rpc calls in the queue.
+ */
+ public int getCallQueueLen() {
+ return callQueue.size();
+ }
+
+ /**
+ * Set the handler for calling out of RPC for error conditions.
+ * @param handler the handler implementation
+ */
+ public void setErrorHandler(HBaseRPCErrorHandler handler) {
+ this.errorHandler = handler;
+ }
+
+ /**
+ * When the read or write buffer size is larger than this limit, i/o will be
+ * done in chunks of this size. Most RPC requests and responses would be
+ * smaller.
+ */
+ private static int NIO_BUFFER_LIMIT = 8*1024; //should not be more than 64KB.
+
+ /**
+ * This is a wrapper around {@link WritableByteChannel#write(ByteBuffer)}.
+ * If the amount of data is large, it writes to channel in smaller chunks.
+ * This avoids the JDK creating many direct buffers as the size of the
+ * buffer increases. This also minimizes extra copies in the NIO layer
+ * as a result of multiple write operations required to write a large
+ * buffer.
+ *
+ * @see WritableByteChannel#write(ByteBuffer)
+ */
+ protected static int channelWrite(WritableByteChannel channel,
+ ByteBuffer buffer) throws IOException {
+ return (buffer.remaining() <= NIO_BUFFER_LIMIT) ?
+ channel.write(buffer) : channelIO(null, channel, buffer);
+ }
+
+ /**
+ * This is a wrapper around {@link ReadableByteChannel#read(ByteBuffer)}.
+ * If the amount of data is large, it reads from the channel in smaller chunks.
+ * This avoids the JDK creating many direct buffers as the size of the
+ * ByteBuffer increases. There should not be any performance degradation.
+ *
+ * @see ReadableByteChannel#read(ByteBuffer)
+ */
+ protected static int channelRead(ReadableByteChannel channel,
+ ByteBuffer buffer) throws IOException {
+ return (buffer.remaining() <= NIO_BUFFER_LIMIT) ?
+ channel.read(buffer) : channelIO(channel, null, buffer);
+ }
+
+ /**
+ * Helper for {@link #channelRead(ReadableByteChannel, ByteBuffer)}
+ * and {@link #channelWrite(WritableByteChannel, ByteBuffer)}. Only
+ * one of readCh or writeCh should be non-null.
+ *
+ * @see #channelRead(ReadableByteChannel, ByteBuffer)
+ * @see #channelWrite(WritableByteChannel, ByteBuffer)
+ */
+ private static int channelIO(ReadableByteChannel readCh,
+ WritableByteChannel writeCh,
+ ByteBuffer buf) throws IOException {
+
+ int originalLimit = buf.limit();
+ int initialRemaining = buf.remaining();
+ int ret = 0;
+
+ while (buf.remaining() > 0) {
+ try {
+ int ioSize = Math.min(buf.remaining(), NIO_BUFFER_LIMIT);
+ buf.limit(buf.position() + ioSize);
+
+ ret = (readCh == null) ? writeCh.write(buf) : readCh.read(buf);
+
+ if (ret < ioSize) {
+ break;
+ }
+
+ } finally {
+ buf.limit(originalLimit);
+ }
+ }
+
+ int nBytes = initialRemaining - buf.remaining();
+ return (nBytes > 0) ? nBytes : ret;
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/ipc/HMasterInterface.java b/src/java/org/apache/hadoop/hbase/ipc/HMasterInterface.java
new file mode 100644
index 0000000..46cf018
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/ipc/HMasterInterface.java
@@ -0,0 +1,119 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.ipc;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.io.Writable;
+
+/**
+ * Clients interact with the HMasterInterface to gain access to meta-level
+ * HBase functionality, like finding an HRegionServer and creating/destroying
+ * tables.
+ *
+ * <p>NOTE: if you change the interface, you must change the RPC version
+ * number in HBaseRPCProtocolVersion
+ *
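+ * <p>A sketch of typical administrative calls (illustrative only; assumes a
+ * proxy to the master named <code>master</code>, a prepared
+ * <code>HTableDescriptor</code> named <code>desc</code>, and a hypothetical
+ * table name):
+ * <pre>
+ *   master.createTable(desc);
+ *   master.disableTable(Bytes.toBytes("mytable"));
+ *   master.deleteTable(Bytes.toBytes("mytable"));
+ * </pre>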
+ */
+public interface HMasterInterface extends HBaseRPCProtocolVersion {
+
+ /** @return true if master is available */
+ public boolean isMasterRunning();
+
+ // Admin tools would use these cmds
+
+ /**
+ * Creates a new table
+ * @param desc table descriptor
+ * @throws IOException
+ */
+ public void createTable(HTableDescriptor desc) throws IOException;
+
+ /**
+ * Deletes a table
+ * @param tableName
+ * @throws IOException
+ */
+ public void deleteTable(final byte [] tableName) throws IOException;
+
+ /**
+ * Adds a column to the specified table
+ * @param tableName
+ * @param column column descriptor
+ * @throws IOException
+ */
+ public void addColumn(final byte [] tableName, HColumnDescriptor column)
+ throws IOException;
+
+ /**
+ * Modifies an existing column on the specified table
+ * @param tableName
+ * @param columnName name of the column to edit
+ * @param descriptor new column descriptor
+ * @throws IOException
+ */
+ public void modifyColumn(final byte [] tableName, final byte [] columnName,
+ HColumnDescriptor descriptor)
+ throws IOException;
+
+
+ /**
+ * Deletes a column from the specified table
+ * @param tableName
+ * @param columnName
+ * @throws IOException
+ */
+ public void deleteColumn(final byte [] tableName, final byte [] columnName)
+ throws IOException;
+
+ /**
+ * Puts the table on-line (only needed if table has been previously taken offline)
+ * @param tableName
+ * @throws IOException
+ */
+ public void enableTable(final byte [] tableName) throws IOException;
+
+ /**
+ * Take table offline
+ *
+ * @param tableName
+ * @throws IOException
+ */
+ public void disableTable(final byte [] tableName) throws IOException;
+
+ /**
+ * Modify a table's metadata
+ *
+ * @param tableName
+ * @param op
+ * @param args
+ * @throws IOException
+ */
+ public void modifyTable(byte[] tableName, int op, Writable[] args)
+ throws IOException;
+
+ /**
+ * Shutdown an HBase cluster.
+ * @throws IOException
+ */
+ public void shutdown() throws IOException;
+}
diff --git a/src/java/org/apache/hadoop/hbase/ipc/HMasterRegionInterface.java b/src/java/org/apache/hadoop/hbase/ipc/HMasterRegionInterface.java
new file mode 100644
index 0000000..b7e384c
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/ipc/HMasterRegionInterface.java
@@ -0,0 +1,64 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.ipc;
+
+import java.io.IOException;
+
+import org.apache.hadoop.io.MapWritable;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.HMsg;
+import org.apache.hadoop.hbase.HRegionInfo;
+
+/**
+ * HRegionServers interact with the HMasterRegionInterface to report on local
+ * goings-on and to obtain data-handling instructions from the HMaster.
+ * <p>Changes here need to be reflected in HbaseObjectWritable and HbaseRPC#Invoker.
+ *
+ * <p>NOTE: if you change the interface, you must change the RPC version
+ * number in HBaseRPCProtocolVersion
+ *
+ */
+public interface HMasterRegionInterface extends HBaseRPCProtocolVersion {
+
+ /**
+ * Called when a region server first starts
+ * @param info
+ * @throws IOException
+ * @return Configuration for the regionserver to use: e.g. filesystem,
+ * hbase rootdir, etc.
+ */
+ public MapWritable regionServerStartup(HServerInfo info) throws IOException;
+
+ /**
+ * Called to renew lease, tell master what the region server is doing and to
+ * receive new instructions from the master
+ *
+ * @param info server's address and start code
+ * @param msgs things the region server wants to tell the master
+ * @param mostLoadedRegions Array of HRegionInfos that should contain the
+ * reporting server's most loaded regions. These are candidates for being
+ * rebalanced.
+ * @return instructions from the master to the region server
+ * @throws IOException
+ */
+ public HMsg[] regionServerReport(HServerInfo info, HMsg msgs[],
+ HRegionInfo mostLoadedRegions[])
+ throws IOException;
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/ipc/HRegionInterface.java b/src/java/org/apache/hadoop/hbase/ipc/HRegionInterface.java
new file mode 100644
index 0000000..f26dee8
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/ipc/HRegionInterface.java
@@ -0,0 +1,309 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.ipc;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.filter.RowFilterInterface;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.io.HbaseMapWritable;
+
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.NotServingRegionException;
+
+/**
+ * Clients interact with HRegionServers using a handle to the HRegionInterface.
+ *
+ * <p>NOTE: if you change the interface, you must change the RPC version
+ * number in HBaseRPCProtocolVersion
+ *
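+ * <p>A sketch of a simple scan over this interface (illustrative only;
+ * assumes a proxy named <code>server</code>, a region name
+ * <code>regionName</code>, and a hypothetical column family):
+ * <pre>
+ *   long scannerId = server.openScanner(regionName,
+ *       new byte[][] { Bytes.toBytes("info:") },
+ *       HConstants.EMPTY_START_ROW, HConstants.LATEST_TIMESTAMP, null);
+ *   RowResult row = server.next(scannerId);        // one row at a time
+ *   RowResult[] rows = server.next(scannerId, 30); // or in batches
+ *   server.close(scannerId);
+ * </pre>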
+ */
+public interface HRegionInterface extends HBaseRPCProtocolVersion {
+ /**
+ * Get metainfo about an HRegion
+ *
+ * @param regionName name of the region
+ * @return HRegionInfo object for region
+ * @throws NotServingRegionException
+ */
+ public HRegionInfo getRegionInfo(final byte [] regionName)
+ throws NotServingRegionException;
+
+ /**
+ * Get the specified number of versions of the specified row and column with
+ * the specified timestamp.
+ *
+ * @param regionName region name
+ * @param row row key
+ * @param column column key
+ * @param timestamp timestamp
+ * @param numVersions number of versions to return
+ * @return array of values
+ * @throws IOException
+ */
+ public Cell[] get(final byte [] regionName, final byte [] row,
+ final byte [] column, final long timestamp, final int numVersions)
+ throws IOException;
+
+ /**
+ * Return all the data for the row that matches <i>row</i> exactly,
+ * or the one that immediately precedes it.
+ *
+ * @param regionName region name
+ * @param row row key
+ * @param columnFamily Column family to look for row in.
+ * @return map of values
+ * @throws IOException
+ */
+ public RowResult getClosestRowBefore(final byte [] regionName,
+ final byte [] row, final byte [] columnFamily)
+ throws IOException;
+
+ /**
+ * Get selected columns for the specified row at a given timestamp.
+ *
+ * @param regionName region name
+ * @param row row key
+ * @param columns columns to get
+ * @param ts time stamp
+ * @param numVersions number of versions
+ * @param lockId lock id
+ * @return map of values
+ * @throws IOException
+ */
+ public RowResult getRow(final byte [] regionName, final byte [] row,
+ final byte[][] columns, final long ts,
+ final int numVersions, final long lockId)
+ throws IOException;
+
+ /**
+ * Applies a batch of updates via one RPC
+ *
+ * @param regionName name of the region to update
+ * @param b BatchUpdate
+ * @param lockId lock id
+ * @throws IOException
+ */
+ public void batchUpdate(final byte [] regionName, final BatchUpdate b,
+ final long lockId)
+ throws IOException;
+
+ /**
+ * Applies a batch of updates via one RPC for many rows
+ *
+ * @param regionName name of the region to update
+ * @param b BatchUpdate[]
+ * @throws IOException
+ * @return number of updates applied
+ */
+ public int batchUpdates(final byte[] regionName, final BatchUpdate[] b)
+ throws IOException;
+
+ /**
+ * Applies a batch of updates to one row atomically via one RPC
+ * if the columns specified in expectedValues match
+ * the given values in expectedValues
+ *
+ * @param regionName name of the region to update
+ * @param b BatchUpdate
+ * @param expectedValues map of column names to expected data values.
+ * @return true if update was applied
+ * @throws IOException
+ */
+ public boolean checkAndSave(final byte [] regionName, final BatchUpdate b,
+ final HbaseMapWritable<byte[],byte[]> expectedValues)
+ throws IOException;
+
+
+ /**
+ * Delete all cells that match the passed row and column and whose timestamp
+ * is equal-to or older than the passed timestamp.
+ *
+ * @param regionName region name
+ * @param row row key
+ * @param column column key
+ * @param timestamp Delete all entries that have this timestamp or older
+ * @param lockId lock id
+ * @throws IOException
+ */
+ public void deleteAll(byte [] regionName, byte [] row, byte [] column,
+ long timestamp, long lockId)
+ throws IOException;
+
+ /**
+ * Delete all cells that match the passed row and whose
+ * timestamp is equal-to or older than the passed timestamp.
+ *
+ * @param regionName region name
+ * @param row row key
+ * @param timestamp Delete all entries that have this timestamp or older
+ * @param lockId lock id
+ * @throws IOException
+ */
+ public void deleteAll(byte [] regionName, byte [] row, long timestamp,
+ long lockId)
+ throws IOException;
+
+ /**
+ * Delete all cells that match the passed row and the column regex and whose
+ * timestamp is equal-to or older than the passed timestamp.
+ *
+ * @param regionName
+ * @param row
+ * @param colRegex
+ * @param timestamp
+ * @param lockId
+ * @throws IOException
+ */
+ public void deleteAllByRegex(byte [] regionName, byte [] row, String colRegex,
+ long timestamp, long lockId)
+ throws IOException;
+
+ /**
+ * Delete all cells for a row with matching column family with timestamps
+ * less than or equal to <i>timestamp</i>.
+ *
+ * @param regionName The name of the region to operate on
+ * @param row The row to operate on
+ * @param family The column family to match
+ * @param timestamp Timestamp to match
+ * @param lockId lock id
+ * @throws IOException
+ */
+ public void deleteFamily(byte [] regionName, byte [] row, byte [] family,
+ long timestamp, long lockId)
+ throws IOException;
+
+ /**
+ * Delete all cells for a row with matching column family regex with
+ * timestamps less than or equal to <i>timestamp</i>.
+ *
+ * @param regionName The name of the region to operate on
+ * @param row The row to operate on
+ * @param familyRegex column family regex
+ * @param timestamp Timestamp to match
+ * @param lockId lock id
+ * @throws IOException
+ */
+ public void deleteFamilyByRegex(byte [] regionName, byte [] row, String familyRegex,
+ long timestamp, long lockId)
+ throws IOException;
+
+ /**
+ * Returns true if any cells exist for the given coordinate.
+ *
+ * @param regionName The name of the region
+ * @param row The row
+ * @param column The column, or null for any
+ * @param timestamp The timestamp, or LATEST_TIMESTAMP for any
+ * @param lockID lock id
+ * @return true if the row exists, false otherwise
+ * @throws IOException
+ */
+ public boolean exists(byte [] regionName, byte [] row, byte [] column,
+ long timestamp, long lockID)
+ throws IOException;
+
+ //
+ // remote scanner interface
+ //
+
+ /**
+ * Opens a remote scanner with a RowFilter.
+ *
+ * @param regionName name of region to scan
+ * @param columns columns to scan. If column name is a column family, all
+ * columns of the specified column family are returned. It's also possible
+ * to pass a regex for column family name. A column name is judged to be
+ * regex if it contains at least one of the following characters:
+ * <code>\+|^&*$[]]}{)(</code>.
+ * @param startRow starting row to scan
+ * @param timestamp only return values whose timestamp is <= this value
+ * @param filter RowFilter for filtering results at the row-level.
+ *
+ * @return scannerId scanner identifier used in other calls
+ * @throws IOException
+ */
+ public long openScanner(final byte [] regionName, final byte [][] columns,
+ final byte [] startRow, long timestamp, RowFilterInterface filter)
+ throws IOException;
+
+ /**
+ * Get the next set of values
+ * @param scannerId clientId passed to openScanner
+ * @return map of values
+ * @throws IOException
+ */
+ public RowResult next(long scannerId) throws IOException;
+
+ /**
+ * Get the next set of values
+ * @param scannerId clientId passed to openScanner
+ * @param numberOfRows the number of rows to fetch
+ * @return map of values
+ * @throws IOException
+ */
+ public RowResult[] next(long scannerId, int numberOfRows) throws IOException;
+
+ /**
+ * Close a scanner
+ *
+ * @param scannerId the scanner id returned by openScanner
+ * @throws IOException
+ */
+ public void close(long scannerId) throws IOException;
+
+ /**
+ * Opens a remote row lock.
+ *
+ * @param regionName name of region
+ * @param row row to lock
+ * @return lockId lock identifier
+ * @throws IOException
+ */
+ public long lockRow(final byte [] regionName, final byte [] row)
+ throws IOException;
+
+ /**
+ * Releases a remote row lock.
+ *
+ * @param regionName
+ * @param lockId the lock id returned by lockRow
+ * @throws IOException
+ */
+ public void unlockRow(final byte [] regionName, final long lockId)
+ throws IOException;
+
+ /**
+ * Atomically increments a column value. If the column value isn't long-like, this could
+ * throw an exception.
+ *
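+ * <p>For example (illustrative; the column name is hypothetical):
+ * <pre>
+ *   long hits = server.incrementColumnValue(regionName, row,
+ *       Bytes.toBytes("stats:hits"), 1);
+ * </pre>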
+ * @param regionName
+ * @param row
+ * @param column
+ * @param amount
+ * @return new incremented column value
+ * @throws IOException
+ */
+ public long incrementColumnValue(byte [] regionName, byte [] row,
+ byte [] column, long amount) throws IOException;
+}
diff --git a/src/java/org/apache/hadoop/hbase/ipc/IndexedRegionInterface.java b/src/java/org/apache/hadoop/hbase/ipc/IndexedRegionInterface.java
new file mode 100644
index 0000000..6f5bb0b
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/ipc/IndexedRegionInterface.java
@@ -0,0 +1,11 @@
+/*
+ * $Id$
+ * Created on Sep 10, 2008
+ *
+ */
+package org.apache.hadoop.hbase.ipc;
+
+/** Interface for the indexed region server. */
+public interface IndexedRegionInterface extends TransactionalRegionInterface {
+ // No methods for now...
+}
diff --git a/src/java/org/apache/hadoop/hbase/ipc/TransactionalRegionInterface.java b/src/java/org/apache/hadoop/hbase/ipc/TransactionalRegionInterface.java
new file mode 100644
index 0000000..20e50c8
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/ipc/TransactionalRegionInterface.java
@@ -0,0 +1,208 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.ipc;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.filter.RowFilterInterface;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.RowResult;
+
+/**
+ * Interface for transactional region servers.
+ *
+ * <p>NOTE: if you change the interface, you must change the RPC version
+ * number in HBaseRPCProtocolVersion
+ *
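+ * <p>A sketch of the expected call sequence for a single-region transaction
+ * (illustrative only; assumes a proxy named <code>server</code>, a region
+ * name <code>regionName</code> and a client-chosen <code>transactionId</code>):
+ * <pre>
+ *   server.beginTransaction(transactionId, regionName);
+ *   server.batchUpdate(transactionId, regionName, batchUpdate);
+ *   if (server.commitRequest(regionName, transactionId)) {
+ *     server.commit(regionName, transactionId);
+ *   } else {
+ *     server.abort(regionName, transactionId);
+ *   }
+ * </pre>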
+ */
+public interface TransactionalRegionInterface extends HRegionInterface {
+
+ /**
+ * Sent to initiate a transaction.
+ *
+ * @param transactionId
+ * @param regionName name of region
+ * @throws IOException
+ */
+ public void beginTransaction(long transactionId, final byte[] regionName)
+ throws IOException;
+
+ /**
+ * Retrieve a single value from the specified region for the specified row and
+ * column keys
+ *
+ * @param transactionId
+ * @param regionName name of region
+ * @param row row key
+ * @param column column key
+ * @return value for that region/row/column
+ * @throws IOException
+ */
+ public Cell get(long transactionId, final byte[] regionName,
+ final byte[] row, final byte[] column) throws IOException;
+
+ /**
+ * Get the specified number of versions of the specified row and column
+ *
+ * @param transactionId
+ * @param regionName region name
+ * @param row row key
+ * @param column column key
+ * @param numVersions number of versions to return
+ * @return array of values
+ * @throws IOException
+ */
+ public Cell[] get(long transactionId, final byte[] regionName,
+ final byte[] row, final byte[] column, final int numVersions)
+ throws IOException;
+
+ /**
+ * Get the specified number of versions of the specified row and column with
+ * the specified timestamp.
+ *
+ * @param transactionId
+ * @param regionName region name
+ * @param row row key
+ * @param column column key
+ * @param timestamp timestamp
+ * @param numVersions number of versions to return
+ * @return array of values
+ * @throws IOException
+ */
+ public Cell[] get(long transactionId, final byte[] regionName,
+ final byte[] row, final byte[] column, final long timestamp,
+ final int numVersions) throws IOException;
+
+ /**
+ * Get all the data for the specified row at a given timestamp
+ *
+ * @param transactionId
+ * @param regionName region name
+ * @param row row key
+ * @param ts timestamp
+ * @return map of values
+ * @throws IOException
+ */
+ public RowResult getRow(long transactionId, final byte[] regionName,
+ final byte[] row, final long ts) throws IOException;
+
+ /**
+ * Get selected columns for the specified row at a given timestamp.
+ *
+ * @param transactionId
+ * @param regionName region name
+ * @param row row key
+ * @param columns columns to get
+ * @param ts timestamp
+ * @return map of values
+ * @throws IOException
+ */
+ public RowResult getRow(long transactionId, final byte[] regionName,
+ final byte[] row, final byte[][] columns, final long ts)
+ throws IOException;
+
+ /**
+ * Get selected columns for the specified row at the latest timestamp.
+ *
+ * @param transactionId
+ * @param regionName region name
+ * @param row row key
+ * @param columns columns to get
+ * @return map of values
+ * @throws IOException
+ */
+ public RowResult getRow(long transactionId, final byte[] regionName,
+ final byte[] row, final byte[][] columns) throws IOException;
+
+ /**
+ * Delete all cells that match the passed row and whose timestamp is equal-to
+ * or older than the passed timestamp.
+ *
+ * @param transactionId
+ * @param regionName region name
+ * @param row row key
+ * @param timestamp Delete all entries that have this timestamp or older
+ * @throws IOException
+ */
+ public void deleteAll(long transactionId, byte[] regionName, byte[] row,
+ long timestamp) throws IOException;
+
+ /**
+ * Opens a remote scanner with a RowFilter.
+ *
+ * @param transactionId
+ * @param regionName name of region to scan
+ * @param columns columns to scan. If column name is a column family, all
+ * columns of the specified column family are returned. It's also possible to
+ * pass a regex for the column family name. A column name is judged to be a regex if
+ * it contains at least one of the following characters:
+ * <code>\+|^&*$[]]}{)(</code>.
+ * @param startRow starting row to scan
+ * @param timestamp only return values whose timestamp is <= this value
+ * @param filter RowFilter for filtering results at the row-level.
+ *
+ * @return scannerId scanner identifier used in other calls
+ * @throws IOException
+ */
+ public long openScanner(final long transactionId, final byte[] regionName,
+ final byte[][] columns, final byte[] startRow, long timestamp,
+ RowFilterInterface filter) throws IOException;
+
+ /**
+ * Applies a batch of updates via one RPC
+ *
+ * @param transactionId
+ * @param regionName name of the region to update
+ * @param b BatchUpdate
+ * @throws IOException
+ */
+ public void batchUpdate(long transactionId, final byte[] regionName,
+ final BatchUpdate b) throws IOException;
+
+ /**
+ * Ask if we can commit the given transaction.
+ *
+ * @param regionName
+ * @param transactionId
+ * @return true if we can commit
+ * @throws IOException
+ */
+ public boolean commitRequest(final byte[] regionName, long transactionId)
+ throws IOException;
+
+ /**
+ * Commit the transaction.
+ *
+ * @param regionName
+ * @param transactionId
+ * @throws IOException
+ */
+ public void commit(final byte[] regionName, long transactionId)
+ throws IOException;
+
+ /**
+ * Abort the transaction.
+ *
+ * @param regionName
+ * @param transactionId
+ * @throws IOException
+ */
+ public void abort(final byte[] regionName, long transactionId)
+ throws IOException;
+}
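
A minimal sketch of the two-phase commit sequence this interface implies. Obtaining the RPC proxy and choosing a transaction id are assumed and not shown; the row and column names below are placeholders.

import java.io.IOException;

import org.apache.hadoop.hbase.io.BatchUpdate;
import org.apache.hadoop.hbase.ipc.TransactionalRegionInterface;
import org.apache.hadoop.hbase.util.Bytes;

public class TransactionSketch {
  // Drive one transaction through begin, buffered write, vote, and commit/abort.
  public static void runOnce(TransactionalRegionInterface region,
      byte[] regionName, long txId) throws IOException {
    region.beginTransaction(txId, regionName);
    BatchUpdate update = new BatchUpdate(Bytes.toBytes("row1"));
    update.put(Bytes.toBytes("info:qualifier"), Bytes.toBytes("value"));
    region.batchUpdate(txId, regionName, update);
    if (region.commitRequest(regionName, txId)) {
      region.commit(regionName, txId);   // region voted yes, make the writes durable
    } else {
      region.abort(regionName, txId);    // region voted no, roll back
    }
  }
}
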
diff --git a/src/java/org/apache/hadoop/hbase/ipc/package.html b/src/java/org/apache/hadoop/hbase/ipc/package.html
new file mode 100644
index 0000000..0e01bdc
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/ipc/package.html
@@ -0,0 +1,24 @@
+<html>
+
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<body>
+Tools to help define network clients and servers.
+This is Hadoop's IPC code copied local so we can fix bugs and make HBase-specific optimizations.
+</body>
+</html>
diff --git a/src/java/org/apache/hadoop/hbase/mapred/BuildTableIndex.java b/src/java/org/apache/hadoop/hbase/mapred/BuildTableIndex.java
new file mode 100644
index 0000000..9698883
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/mapred/BuildTableIndex.java
@@ -0,0 +1,205 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.IOException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.mapred.FileOutputFormat;
+import org.apache.hadoop.mapred.JobClient;
+import org.apache.hadoop.mapred.JobConf;
+
+/**
+ * Example table column indexing class. Runs a mapreduce job to index
+ * specified table columns.
+ * <ul><li>Each row is modeled as a Lucene document: row key is indexed in
+ * its untokenized form, column name-value pairs are Lucene field name-value
+ * pairs.</li>
+ * <li>A file passed on command line is used to populate an
+ * {@link IndexConfiguration} which is used to set various Lucene parameters,
+ * specify whether to optimize an index and which columns to index and/or
+ * store, in tokenized or untokenized form, etc. For an example, see the
+ * <code>createIndexConfContent</code> method in TestTableIndex
+ * </li>
+ * <li>The number of reduce tasks decides the number of indexes (partitions).
+ * The index(es) is stored in the output path of job configuration.</li>
+ * <li>The index build process is done in the reduce phase. Users can use
+ * the map phase to join rows from different tables or to pre-parse/analyze
+ * column content, etc.</li>
+ * </ul>
+ */
+public class BuildTableIndex {
+ private static final String USAGE = "Usage: BuildTableIndex " +
+ "-m <numMapTasks> -r <numReduceTasks>\n -indexConf <iconfFile> " +
+ "-indexDir <indexDir>\n -table <tableName> -columns <columnName1> " +
+ "[<columnName2> ...]";
+
+ private static void printUsage(String message) {
+ System.err.println(message);
+ System.err.println(USAGE);
+ System.exit(-1);
+ }
+
+ /** default constructor */
+ public BuildTableIndex() {
+ super();
+ }
+
+ /**
+ * @param args
+ * @throws IOException
+ */
+ public void run(String[] args) throws IOException {
+ if (args.length < 6) {
+ printUsage("Too few arguments");
+ }
+
+ int numMapTasks = 1;
+ int numReduceTasks = 1;
+ String iconfFile = null;
+ String indexDir = null;
+ String tableName = null;
+ StringBuffer columnNames = null;
+
+ // parse args
+ for (int i = 0; i < args.length - 1; i++) {
+ if ("-m".equals(args[i])) {
+ numMapTasks = Integer.parseInt(args[++i]);
+ } else if ("-r".equals(args[i])) {
+ numReduceTasks = Integer.parseInt(args[++i]);
+ } else if ("-indexConf".equals(args[i])) {
+ iconfFile = args[++i];
+ } else if ("-indexDir".equals(args[i])) {
+ indexDir = args[++i];
+ } else if ("-table".equals(args[i])) {
+ tableName = args[++i];
+ } else if ("-columns".equals(args[i])) {
+ columnNames = new StringBuffer(args[++i]);
+ while (i + 1 < args.length && !args[i + 1].startsWith("-")) {
+ columnNames.append(" ");
+ columnNames.append(args[++i]);
+ }
+ } else {
+ printUsage("Unsupported option " + args[i]);
+ }
+ }
+
+ if (indexDir == null || tableName == null || columnNames == null) {
+ printUsage("Index directory, table name and at least one column must " +
+ "be specified");
+ }
+
+ Configuration conf = new HBaseConfiguration();
+ if (iconfFile != null) {
+ // set index configuration content from a file
+ String content = readContent(iconfFile);
+ IndexConfiguration iconf = new IndexConfiguration();
+ // purely to validate, exception will be thrown if not valid
+ iconf.addFromXML(content);
+ conf.set("hbase.index.conf", content);
+ }
+
+ if (columnNames != null) {
+ JobConf jobConf = createJob(conf, numMapTasks, numReduceTasks, indexDir,
+ tableName, columnNames.toString());
+ JobClient.runJob(jobConf);
+ }
+ }
+
+ /**
+ * @param conf
+ * @param numMapTasks
+ * @param numReduceTasks
+ * @param indexDir
+ * @param tableName
+ * @param columnNames
+ * @return JobConf
+ */
+ public JobConf createJob(Configuration conf, int numMapTasks,
+ int numReduceTasks, String indexDir, String tableName,
+ String columnNames) {
+ JobConf jobConf = new JobConf(conf, BuildTableIndex.class);
+ jobConf.setJobName("build index for table " + tableName);
+ jobConf.setNumMapTasks(numMapTasks);
+ // number of indexes to partition into
+ jobConf.setNumReduceTasks(numReduceTasks);
+
+ // use identity map (a waste, but just as an example)
+ IdentityTableMap.initJob(tableName, columnNames, IdentityTableMap.class,
+ jobConf);
+
+ // use IndexTableReduce to build a Lucene index
+ jobConf.setReducerClass(IndexTableReduce.class);
+ FileOutputFormat.setOutputPath(jobConf, new Path(indexDir));
+ jobConf.setOutputFormat(IndexOutputFormat.class);
+
+ return jobConf;
+ }
+
+ /*
+ * Read xml file of indexing configurations. The xml format is similar to
+ * hbase-default.xml and hadoop-default.xml. For an example configuration,
+ * see the <code>createIndexConfContent</code> method in TestTableIndex
+ * @param fileName File to read.
+ * @return XML configuration read from file
+ * @throws IOException
+ */
+ private String readContent(String fileName) throws IOException {
+ File file = new File(fileName);
+ int length = (int) file.length();
+ if (length == 0) {
+ printUsage("Index configuration file " + fileName + " does not exist");
+ }
+
+ int bytesRead = 0;
+ byte[] bytes = new byte[length];
+ FileInputStream fis = new FileInputStream(file);
+
+ try {
+ // read entire file into content
+ while (bytesRead < length) {
+ int read = fis.read(bytes, bytesRead, length - bytesRead);
+ if (read > 0) {
+ bytesRead += read;
+ } else {
+ break;
+ }
+ }
+ } finally {
+ fis.close();
+ }
+
+ return new String(bytes, 0, bytesRead, HConstants.UTF8_ENCODING);
+ }
+
+ /**
+ * @param args
+ * @throws IOException
+ */
+ public static void main(String[] args) throws IOException {
+ BuildTableIndex build = new BuildTableIndex();
+ build.run(args);
+ }
+}
\ No newline at end of file
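
A rough sketch of driving the indexer above programmatically; the table name, column family, and output directory are placeholders, and a running cluster is assumed.

import java.io.IOException;

import org.apache.hadoop.hbase.mapred.BuildTableIndex;

public class BuildTableIndexSketch {
  public static void main(String[] args) throws IOException {
    // Two map tasks, one reducer (one index partition), no custom index configuration file.
    new BuildTableIndex().run(new String[] {
        "-m", "2", "-r", "1",
        "-indexDir", "/tmp/example-index",
        "-table", "example_table",
        "-columns", "contents:" });
  }
}
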
diff --git a/src/java/org/apache/hadoop/hbase/mapred/Driver.java b/src/java/org/apache/hadoop/hbase/mapred/Driver.java
new file mode 100644
index 0000000..393695d
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/mapred/Driver.java
@@ -0,0 +1,39 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import org.apache.hadoop.util.ProgramDriver;
+
+/**
+ * Driver for hbase mapreduce jobs. Select which to run by passing
+ * name of job to this main.
+ */
+public class Driver {
+ /**
+ * @param args
+ * @throws Throwable
+ */
+ public static void main(String[] args) throws Throwable {
+ ProgramDriver pgd = new ProgramDriver();
+ pgd.addClass(RowCounter.NAME, RowCounter.class,
+ "Count rows in HBase table");
+ pgd.driver(args);
+ }
+}
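
For reference, a sketch of selecting a registered job by name through the driver above; the output directory, table, and column are placeholders.

public class DriverSketch {
  public static void main(String[] args) throws Throwable {
    // Equivalent to: hadoop jar <hbase jar> rowcounter <outputdir> <tablename> <column>
    org.apache.hadoop.hbase.mapred.Driver.main(new String[] {
        "rowcounter", "/tmp/rowcounter-out", "example_table", "info:" });
  }
}
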
diff --git a/src/java/org/apache/hadoop/hbase/mapred/GroupingTableMap.java b/src/java/org/apache/hadoop/hbase/mapred/GroupingTableMap.java
new file mode 100644
index 0000000..4eb9cfa
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/mapred/GroupingTableMap.java
@@ -0,0 +1,160 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.Map;
+
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.MapReduceBase;
+import org.apache.hadoop.mapred.OutputCollector;
+import org.apache.hadoop.mapred.Reporter;
+
+
+/**
+ * Extract grouping columns from input record
+ */
+public class GroupingTableMap
+extends MapReduceBase
+implements TableMap<ImmutableBytesWritable,RowResult> {
+
+ /**
+ * JobConf parameter to specify the columns used to produce the key passed to
+ * collect from the map phase
+ */
+ public static final String GROUP_COLUMNS =
+ "hbase.mapred.groupingtablemap.columns";
+
+ protected byte [][] m_columns;
+
+ /**
+ * Use this before submitting a TableMap job. It will appropriately set up the
+ * JobConf.
+ *
+ * @param table table to be processed
+ * @param columns space separated list of columns to fetch
+ * @param groupColumns space separated list of columns used to form the key
+ * used in collect
+ * @param mapper map class
+ * @param job job configuration object
+ */
+ @SuppressWarnings("unchecked")
+ public static void initJob(String table, String columns, String groupColumns,
+ Class<? extends TableMap> mapper, JobConf job) {
+
+ TableMapReduceUtil.initTableMapJob(table, columns, mapper,
+ ImmutableBytesWritable.class, RowResult.class, job);
+ job.set(GROUP_COLUMNS, groupColumns);
+ }
+
+ @Override
+ public void configure(JobConf job) {
+ super.configure(job);
+ String[] cols = job.get(GROUP_COLUMNS, "").split(" ");
+ m_columns = new byte[cols.length][];
+ for(int i = 0; i < cols.length; i++) {
+ m_columns[i] = Bytes.toBytes(cols[i]);
+ }
+ }
+
+ /**
+ * Extract the grouping columns from value to construct a new key.
+ *
+ * Pass the new key and value to reduce.
+ * If any of the grouping columns are not found in the value, the record is skipped.
+ * @param key
+ * @param value
+ * @param output
+ * @param reporter
+ * @throws IOException
+ */
+ public void map(ImmutableBytesWritable key, RowResult value,
+ OutputCollector<ImmutableBytesWritable,RowResult> output,
+ Reporter reporter) throws IOException {
+
+ byte[][] keyVals = extractKeyValues(value);
+ if(keyVals != null) {
+ ImmutableBytesWritable tKey = createGroupKey(keyVals);
+ output.collect(tKey, value);
+ }
+ }
+
+ /**
+ * Extract column values from the current record. This method returns
+ * null if any of the columns are not found.
+ *
+ * Override this method if you want to deal with nulls differently.
+ *
+ * @param r
+ * @return array of byte values
+ */
+ protected byte[][] extractKeyValues(RowResult r) {
+ byte[][] keyVals = null;
+ ArrayList<byte[]> foundList = new ArrayList<byte[]>();
+ int numCols = m_columns.length;
+ if(numCols > 0) {
+ for (Map.Entry<byte [], Cell> e: r.entrySet()) {
+ byte [] column = e.getKey();
+ for (int i = 0; i < numCols; i++) {
+ if (Bytes.equals(column, m_columns[i])) {
+ foundList.add(e.getValue().getValue());
+ break;
+ }
+ }
+ }
+ if(foundList.size() == numCols) {
+ keyVals = foundList.toArray(new byte[numCols][]);
+ }
+ }
+ return keyVals;
+ }
+
+ /**
+ * Create a key by concatenating multiple column values.
+ * Override this function in order to produce different types of keys.
+ *
+ * @param vals
+ * @return key generated by concatenating multiple column values
+ */
+ protected ImmutableBytesWritable createGroupKey(byte[][] vals) {
+ if(vals == null) {
+ return null;
+ }
+ StringBuilder sb = new StringBuilder();
+ for(int i = 0; i < vals.length; i++) {
+ if(i > 0) {
+ sb.append(" ");
+ }
+ try {
+ sb.append(new String(vals[i], HConstants.UTF8_ENCODING));
+ } catch (UnsupportedEncodingException e) {
+ throw new RuntimeException(e);
+ }
+ }
+ return new ImmutableBytesWritable(Bytes.toBytes(sb.toString()));
+ }
+}
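
A sketch of wiring the mapper above into a job; the table and column names are placeholders and the reduce side is omitted.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapred.GroupingTableMap;
import org.apache.hadoop.mapred.JobConf;

public class GroupingJobSketch {
  public static JobConf createJob() {
    JobConf job = new JobConf(new HBaseConfiguration(), GroupingJobSketch.class);
    // Scan two columns; map output keys are the two column values concatenated.
    GroupingTableMap.initJob("example_table", "info:a info:b", "info:a info:b",
        GroupingTableMap.class, job);
    return job;
  }
}
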
diff --git a/src/java/org/apache/hadoop/hbase/mapred/HRegionPartitioner.java b/src/java/org/apache/hadoop/hbase/mapred/HRegionPartitioner.java
new file mode 100644
index 0000000..c5a56fd
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/mapred/HRegionPartitioner.java
@@ -0,0 +1,90 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.Partitioner;
+
+
+/**
+ * This is used to partition the output keys into groups of keys.
+ * Keys are grouped according to the regions that currently exist
+ * so that each reducer fills a single region and load is evenly distributed.
+ *
+ * @param <K2>
+ * @param <V2>
+ */
+public class HRegionPartitioner<K2,V2>
+implements Partitioner<ImmutableBytesWritable, V2> {
+ private final Log LOG = LogFactory.getLog(HRegionPartitioner.class);
+ private HTable table;
+ private byte[][] startKeys;
+
+ public void configure(JobConf job) {
+ try {
+ this.table = new HTable(new HBaseConfiguration(job),
+ job.get(TableOutputFormat.OUTPUT_TABLE));
+ } catch (IOException e) {
+ LOG.error(e);
+ }
+
+ try {
+ this.startKeys = this.table.getStartKeys();
+ } catch (IOException e) {
+ LOG.error(e);
+ }
+ }
+
+ public int getPartition(ImmutableBytesWritable key,
+ V2 value, int numPartitions) {
+ byte[] region = null;
+ // Only one region: everything goes to partition 0
+ if (this.startKeys.length == 1){
+ return 0;
+ }
+ try {
+ // Not sure if this is cached after a split so we could have problems
+ // here if a region splits while mapping
+ region = table.getRegionLocation(key.get()).getRegionInfo().getStartKey();
+ } catch (IOException e) {
+ LOG.error(e);
+ }
+ for (int i = 0; i < this.startKeys.length; i++){
+ if (Bytes.compareTo(region, this.startKeys[i]) == 0 ){
+ if (i >= numPartitions-1){
+ // cover the case where there are fewer reducers than regions.
+ return (Integer.toString(i).hashCode()
+ & Integer.MAX_VALUE) % numPartitions;
+ }
+ return i;
+ }
+ }
+ // if the above fails to find a matching start key, we still need to return something
+ return 0;
+ }
+}
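
A sketch of enabling the partitioner above for a reduce-side load into a table; the table name is a placeholder and TableOutputFormat is assumed as the job's output format.

import org.apache.hadoop.hbase.io.BatchUpdate;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapred.HRegionPartitioner;
import org.apache.hadoop.hbase.mapred.TableOutputFormat;
import org.apache.hadoop.mapred.JobConf;

public class PartitionedLoadSketch {
  public static void configureOutput(JobConf job) {
    job.set(TableOutputFormat.OUTPUT_TABLE, "example_table");
    job.setOutputFormat(TableOutputFormat.class);
    job.setOutputKeyClass(ImmutableBytesWritable.class);
    job.setOutputValueClass(BatchUpdate.class);
    // Route each key to the reducer that owns the corresponding region.
    job.setPartitionerClass(HRegionPartitioner.class);
  }
}
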
diff --git a/src/java/org/apache/hadoop/hbase/mapred/IdentityTableMap.java b/src/java/org/apache/hadoop/hbase/mapred/IdentityTableMap.java
new file mode 100644
index 0000000..f3f0fc0
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/mapred/IdentityTableMap.java
@@ -0,0 +1,75 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.MapReduceBase;
+import org.apache.hadoop.mapred.OutputCollector;
+import org.apache.hadoop.mapred.Reporter;
+
+/**
+ * Pass the given key and record as-is to reduce
+ */
+public class IdentityTableMap
+extends MapReduceBase
+implements TableMap<ImmutableBytesWritable, RowResult> {
+
+ /** constructor */
+ public IdentityTableMap() {
+ super();
+ }
+
+ /**
+ * Use this before submitting a TableMap job. It will
+ * appropriately set up the JobConf.
+ *
+ * @param table table name
+ * @param columns columns to scan
+ * @param mapper mapper class
+ * @param job job configuration
+ */
+ @SuppressWarnings("unchecked")
+ public static void initJob(String table, String columns,
+ Class<? extends TableMap> mapper, JobConf job) {
+ TableMapReduceUtil.initTableMapJob(table, columns, mapper,
+ ImmutableBytesWritable.class,
+ RowResult.class, job);
+ }
+
+ /**
+ * Pass the key, value to reduce
+ * @param key
+ * @param value
+ * @param output
+ * @param reporter
+ * @throws IOException
+ */
+ public void map(ImmutableBytesWritable key, RowResult value,
+ OutputCollector<ImmutableBytesWritable,RowResult> output,
+ Reporter reporter) throws IOException {
+
+ // pass the row through unchanged
+ output.collect(key, value);
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/mapred/IdentityTableReduce.java b/src/java/org/apache/hadoop/hbase/mapred/IdentityTableReduce.java
new file mode 100644
index 0000000..08b54d6
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/mapred/IdentityTableReduce.java
@@ -0,0 +1,60 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.IOException;
+import java.util.Iterator;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.mapred.MapReduceBase;
+import org.apache.hadoop.mapred.OutputCollector;
+import org.apache.hadoop.mapred.Reporter;
+
+/**
+ * Write to table each key, record pair
+ */
+public class IdentityTableReduce
+extends MapReduceBase
+implements TableReduce<ImmutableBytesWritable, BatchUpdate> {
+ @SuppressWarnings("unused")
+ private static final Log LOG =
+ LogFactory.getLog(IdentityTableReduce.class.getName());
+
+ /**
+ * No aggregation, output pairs of (key, record)
+ * @param key
+ * @param values
+ * @param output
+ * @param reporter
+ * @throws IOException
+ */
+ public void reduce(ImmutableBytesWritable key, Iterator<BatchUpdate> values,
+ OutputCollector<ImmutableBytesWritable, BatchUpdate> output,
+ Reporter reporter)
+ throws IOException {
+
+ while(values.hasNext()) {
+ output.collect(key, values.next());
+ }
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/mapred/IndexConfiguration.java b/src/java/org/apache/hadoop/hbase/mapred/IndexConfiguration.java
new file mode 100644
index 0000000..3d988812
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/mapred/IndexConfiguration.java
@@ -0,0 +1,422 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.ByteArrayInputStream;
+import java.io.OutputStream;
+import java.io.StringWriter;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.Properties;
+
+import javax.xml.parsers.DocumentBuilder;
+import javax.xml.parsers.DocumentBuilderFactory;
+import javax.xml.transform.Transformer;
+import javax.xml.transform.TransformerFactory;
+import javax.xml.transform.dom.DOMSource;
+import javax.xml.transform.stream.StreamResult;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.w3c.dom.Document;
+import org.w3c.dom.Element;
+import org.w3c.dom.Node;
+import org.w3c.dom.NodeList;
+import org.w3c.dom.Text;
+
+/**
+ * Configuration parameters for building a Lucene index
+ */
+public class IndexConfiguration extends Configuration {
+ private static final Log LOG = LogFactory.getLog(IndexConfiguration.class);
+
+ static final String HBASE_COLUMN_NAME = "hbase.column.name";
+ static final String HBASE_COLUMN_STORE = "hbase.column.store";
+ static final String HBASE_COLUMN_INDEX = "hbase.column.index";
+ static final String HBASE_COLUMN_TOKENIZE = "hbase.column.tokenize";
+ static final String HBASE_COLUMN_BOOST = "hbase.column.boost";
+ static final String HBASE_COLUMN_OMIT_NORMS = "hbase.column.omit.norms";
+ static final String HBASE_INDEX_ROWKEY_NAME = "hbase.index.rowkey.name";
+ static final String HBASE_INDEX_ANALYZER_NAME = "hbase.index.analyzer.name";
+ static final String HBASE_INDEX_MAX_BUFFERED_DOCS =
+ "hbase.index.max.buffered.docs";
+ static final String HBASE_INDEX_MAX_BUFFERED_DELS =
+ "hbase.index.max.buffered.dels";
+ static final String HBASE_INDEX_MAX_FIELD_LENGTH =
+ "hbase.index.max.field.length";
+ static final String HBASE_INDEX_MAX_MERGE_DOCS =
+ "hbase.index.max.merge.docs";
+ static final String HBASE_INDEX_MERGE_FACTOR = "hbase.index.merge.factor";
+ // double ramBufferSizeMB;
+ static final String HBASE_INDEX_SIMILARITY_NAME =
+ "hbase.index.similarity.name";
+ static final String HBASE_INDEX_USE_COMPOUND_FILE =
+ "hbase.index.use.compound.file";
+ static final String HBASE_INDEX_OPTIMIZE = "hbase.index.optimize";
+
+ public static class ColumnConf extends Properties {
+
+ private static final long serialVersionUID = 7419012290580607821L;
+
+ boolean getBoolean(String name, boolean defaultValue) {
+ String valueString = getProperty(name);
+ if ("true".equals(valueString))
+ return true;
+ else if ("false".equals(valueString))
+ return false;
+ else
+ return defaultValue;
+ }
+
+ void setBoolean(String name, boolean value) {
+ setProperty(name, Boolean.toString(value));
+ }
+
+ float getFloat(String name, float defaultValue) {
+ String valueString = getProperty(name);
+ if (valueString == null)
+ return defaultValue;
+ try {
+ return Float.parseFloat(valueString);
+ } catch (NumberFormatException e) {
+ return defaultValue;
+ }
+ }
+
+ void setFloat(String name, float value) {
+ setProperty(name, Float.toString(value));
+ }
+ }
+
+ private Map<String, ColumnConf> columnMap =
+ new ConcurrentHashMap<String, ColumnConf>();
+
+ public Iterator<String> columnNameIterator() {
+ return columnMap.keySet().iterator();
+ }
+
+ public boolean isIndex(String columnName) {
+ return getColumn(columnName).getBoolean(HBASE_COLUMN_INDEX, true);
+ }
+
+ public void setIndex(String columnName, boolean index) {
+ getColumn(columnName).setBoolean(HBASE_COLUMN_INDEX, index);
+ }
+
+ public boolean isStore(String columnName) {
+ return getColumn(columnName).getBoolean(HBASE_COLUMN_STORE, false);
+ }
+
+ public void setStore(String columnName, boolean store) {
+ getColumn(columnName).setBoolean(HBASE_COLUMN_STORE, store);
+ }
+
+ public boolean isTokenize(String columnName) {
+ return getColumn(columnName).getBoolean(HBASE_COLUMN_TOKENIZE, true);
+ }
+
+ public void setTokenize(String columnName, boolean tokenize) {
+ getColumn(columnName).setBoolean(HBASE_COLUMN_TOKENIZE, tokenize);
+ }
+
+ public float getBoost(String columnName) {
+ return getColumn(columnName).getFloat(HBASE_COLUMN_BOOST, 1.0f);
+ }
+
+ public void setBoost(String columnName, float boost) {
+ getColumn(columnName).setFloat(HBASE_COLUMN_BOOST, boost);
+ }
+
+ public boolean isOmitNorms(String columnName) {
+ return getColumn(columnName).getBoolean(HBASE_COLUMN_OMIT_NORMS, true);
+ }
+
+ public void setOmitNorms(String columnName, boolean omitNorms) {
+ getColumn(columnName).setBoolean(HBASE_COLUMN_OMIT_NORMS, omitNorms);
+ }
+
+ private ColumnConf getColumn(String columnName) {
+ ColumnConf column = columnMap.get(columnName);
+ if (column == null) {
+ column = new ColumnConf();
+ columnMap.put(columnName, column);
+ }
+ return column;
+ }
+
+ public String getAnalyzerName() {
+ return get(HBASE_INDEX_ANALYZER_NAME,
+ "org.apache.lucene.analysis.standard.StandardAnalyzer");
+ }
+
+ public void setAnalyzerName(String analyzerName) {
+ set(HBASE_INDEX_ANALYZER_NAME, analyzerName);
+ }
+
+ public int getMaxBufferedDeleteTerms() {
+ return getInt(HBASE_INDEX_MAX_BUFFERED_DELS, 1000);
+ }
+
+ public void setMaxBufferedDeleteTerms(int maxBufferedDeleteTerms) {
+ setInt(HBASE_INDEX_MAX_BUFFERED_DELS, maxBufferedDeleteTerms);
+ }
+
+ public int getMaxBufferedDocs() {
+ return getInt(HBASE_INDEX_MAX_BUFFERED_DOCS, 10);
+ }
+
+ public void setMaxBufferedDocs(int maxBufferedDocs) {
+ setInt(HBASE_INDEX_MAX_BUFFERED_DOCS, maxBufferedDocs);
+ }
+
+ public int getMaxFieldLength() {
+ return getInt(HBASE_INDEX_MAX_FIELD_LENGTH, Integer.MAX_VALUE);
+ }
+
+ public void setMaxFieldLength(int maxFieldLength) {
+ setInt(HBASE_INDEX_MAX_FIELD_LENGTH, maxFieldLength);
+ }
+
+ public int getMaxMergeDocs() {
+ return getInt(HBASE_INDEX_MAX_MERGE_DOCS, Integer.MAX_VALUE);
+ }
+
+ public void setMaxMergeDocs(int maxMergeDocs) {
+ setInt(HBASE_INDEX_MAX_MERGE_DOCS, maxMergeDocs);
+ }
+
+ public int getMergeFactor() {
+ return getInt(HBASE_INDEX_MERGE_FACTOR, 10);
+ }
+
+ public void setMergeFactor(int mergeFactor) {
+ setInt(HBASE_INDEX_MERGE_FACTOR, mergeFactor);
+ }
+
+ public String getRowkeyName() {
+ return get(HBASE_INDEX_ROWKEY_NAME, "ROWKEY");
+ }
+
+ public void setRowkeyName(String rowkeyName) {
+ set(HBASE_INDEX_ROWKEY_NAME, rowkeyName);
+ }
+
+ public String getSimilarityName() {
+ return get(HBASE_INDEX_SIMILARITY_NAME, null);
+ }
+
+ public void setSimilarityName(String similarityName) {
+ set(HBASE_INDEX_SIMILARITY_NAME, similarityName);
+ }
+
+ public boolean isUseCompoundFile() {
+ return getBoolean(HBASE_INDEX_USE_COMPOUND_FILE, false);
+ }
+
+ public void setUseCompoundFile(boolean useCompoundFile) {
+ setBoolean(HBASE_INDEX_USE_COMPOUND_FILE, useCompoundFile);
+ }
+
+ public boolean doOptimize() {
+ return getBoolean(HBASE_INDEX_OPTIMIZE, true);
+ }
+
+ public void setDoOptimize(boolean doOptimize) {
+ setBoolean(HBASE_INDEX_OPTIMIZE, doOptimize);
+ }
+
+ public void addFromXML(String content) {
+ try {
+ DocumentBuilder builder = DocumentBuilderFactory.newInstance()
+ .newDocumentBuilder();
+
+ Document doc = builder
+ .parse(new ByteArrayInputStream(content.getBytes()));
+
+ Element root = doc.getDocumentElement();
+ if (!"configuration".equals(root.getTagName())) {
+ LOG.fatal("bad conf file: top-level element not <configuration>");
+ }
+
+ NodeList props = root.getChildNodes();
+ for (int i = 0; i < props.getLength(); i++) {
+ Node propNode = props.item(i);
+ if (!(propNode instanceof Element)) {
+ continue;
+ }
+
+ Element prop = (Element) propNode;
+ if ("property".equals(prop.getTagName())) {
+ propertyFromXML(prop, null);
+ } else if ("column".equals(prop.getTagName())) {
+ columnConfFromXML(prop);
+ } else {
+ LOG.warn("bad conf content: element neither <property> nor <column>");
+ }
+ }
+ } catch (Exception e) {
+ LOG.fatal("error parsing conf content: " + e);
+ throw new RuntimeException(e);
+ }
+ }
+
+ private void propertyFromXML(Element prop, Properties properties) {
+ NodeList fields = prop.getChildNodes();
+ String attr = null;
+ String value = null;
+
+ for (int j = 0; j < fields.getLength(); j++) {
+ Node fieldNode = fields.item(j);
+ if (!(fieldNode instanceof Element)) {
+ continue;
+ }
+
+ Element field = (Element) fieldNode;
+ if ("name".equals(field.getTagName())) {
+ attr = ((Text) field.getFirstChild()).getData();
+ }
+ if ("value".equals(field.getTagName()) && field.hasChildNodes()) {
+ value = ((Text) field.getFirstChild()).getData();
+ }
+ }
+
+ if (attr != null && value != null) {
+ if (properties == null) {
+ set(attr, value);
+ } else {
+ properties.setProperty(attr, value);
+ }
+ }
+ }
+
+ private void columnConfFromXML(Element column) {
+ ColumnConf columnConf = new ColumnConf();
+ NodeList props = column.getChildNodes();
+ for (int i = 0; i < props.getLength(); i++) {
+ Node propNode = props.item(i);
+ if (!(propNode instanceof Element)) {
+ continue;
+ }
+
+ Element prop = (Element) propNode;
+ if ("property".equals(prop.getTagName())) {
+ propertyFromXML(prop, columnConf);
+ } else {
+ LOG.warn("bad conf content: element not <property>");
+ }
+ }
+
+ if (columnConf.getProperty(HBASE_COLUMN_NAME) != null) {
+ columnMap.put(columnConf.getProperty(HBASE_COLUMN_NAME), columnConf);
+ } else {
+ LOG.warn("bad column conf: name not specified");
+ }
+ }
+
+ public void write(OutputStream out) {
+ try {
+ Document doc = writeDocument();
+ DOMSource source = new DOMSource(doc);
+ StreamResult result = new StreamResult(out);
+ TransformerFactory transFactory = TransformerFactory.newInstance();
+ Transformer transformer = transFactory.newTransformer();
+ transformer.transform(source, result);
+ } catch (Exception e) {
+ throw new RuntimeException(e);
+ }
+ }
+
+ private Document writeDocument() {
+ Iterator<Map.Entry<String, String>> iter = iterator();
+ try {
+ Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
+ .newDocument();
+ Element conf = doc.createElement("configuration");
+ doc.appendChild(conf);
+ conf.appendChild(doc.createTextNode("\n"));
+
+ Map.Entry<String, String> entry;
+ while (iter.hasNext()) {
+ entry = iter.next();
+ String name = entry.getKey();
+ String value = entry.getValue();
+ writeProperty(doc, conf, name, value);
+ }
+
+ Iterator<String> columnIter = columnNameIterator();
+ while (columnIter.hasNext()) {
+ writeColumn(doc, conf, columnIter.next());
+ }
+
+ return doc;
+ } catch (Exception e) {
+ throw new RuntimeException(e);
+ }
+ }
+
+ private void writeProperty(Document doc, Element parent, String name,
+ String value) {
+ Element propNode = doc.createElement("property");
+ parent.appendChild(propNode);
+
+ Element nameNode = doc.createElement("name");
+ nameNode.appendChild(doc.createTextNode(name));
+ propNode.appendChild(nameNode);
+
+ Element valueNode = doc.createElement("value");
+ valueNode.appendChild(doc.createTextNode(value));
+ propNode.appendChild(valueNode);
+
+ parent.appendChild(doc.createTextNode("\n"));
+ }
+
+ private void writeColumn(Document doc, Element parent, String columnName) {
+ Element column = doc.createElement("column");
+ parent.appendChild(column);
+ column.appendChild(doc.createTextNode("\n"));
+
+ ColumnConf columnConf = getColumn(columnName);
+ for (Map.Entry<Object, Object> entry : columnConf.entrySet()) {
+ if (entry.getKey() instanceof String
+ && entry.getValue() instanceof String) {
+ writeProperty(doc, column, (String) entry.getKey(), (String) entry
+ .getValue());
+ }
+ }
+ }
+
+ @Override
+ public String toString() {
+ StringWriter writer = new StringWriter();
+ try {
+ Document doc = writeDocument();
+ DOMSource source = new DOMSource(doc);
+ StreamResult result = new StreamResult(writer);
+ TransformerFactory transFactory = TransformerFactory.newInstance();
+ Transformer transformer = transFactory.newTransformer();
+ transformer.transform(source, result);
+ } catch (Exception e) {
+ throw new RuntimeException(e);
+ }
+ return writer.toString();
+ }
+}
\ No newline at end of file
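
A sketch of the XML shape addFromXML accepts, mirroring the property and column handling above; the column family name is a placeholder.

import org.apache.hadoop.hbase.mapred.IndexConfiguration;

public class IndexConfSketch {
  public static void main(String[] args) {
    String xml =
        "<configuration>" +
        "<column>" +
        "<property><name>hbase.column.name</name><value>contents:</value></property>" +
        "<property><name>hbase.column.store</name><value>true</value></property>" +
        "</column>" +
        "<property><name>hbase.index.optimize</name><value>true</value></property>" +
        "</configuration>";
    IndexConfiguration conf = new IndexConfiguration();
    conf.addFromXML(xml);
    System.out.println(conf.isStore("contents:"));  // true
    System.out.println(conf.doOptimize());          // true
  }
}
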
diff --git a/src/java/org/apache/hadoop/hbase/mapred/IndexOutputFormat.java b/src/java/org/apache/hadoop/hbase/mapred/IndexOutputFormat.java
new file mode 100644
index 0000000..a5e7c70
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/mapred/IndexOutputFormat.java
@@ -0,0 +1,163 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.IOException;
+import java.util.Random;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.FileOutputFormat;
+import org.apache.hadoop.mapred.RecordWriter;
+import org.apache.hadoop.mapred.Reporter;
+import org.apache.hadoop.util.Progressable;
+import org.apache.lucene.analysis.Analyzer;
+import org.apache.lucene.document.Document;
+import org.apache.lucene.index.IndexWriter;
+import org.apache.lucene.search.Similarity;
+
+/**
+ * Create a local index, unwrap Lucene documents created by reduce, add them to
+ * the index, and copy the index to the destination.
+ */
+public class IndexOutputFormat extends
+ FileOutputFormat<ImmutableBytesWritable, LuceneDocumentWrapper> {
+ static final Log LOG = LogFactory.getLog(IndexOutputFormat.class);
+
+ private Random random = new Random();
+
+ @Override
+ public RecordWriter<ImmutableBytesWritable, LuceneDocumentWrapper>
+ getRecordWriter(final FileSystem fs, JobConf job, String name,
+ final Progressable progress)
+ throws IOException {
+
+ final Path perm = new Path(FileOutputFormat.getOutputPath(job), name);
+ final Path temp = job.getLocalPath("index/_"
+ + Integer.toString(random.nextInt()));
+
+ LOG.info("To index into " + perm);
+
+ // delete old, if any
+ fs.delete(perm, true);
+
+ final IndexConfiguration indexConf = new IndexConfiguration();
+ String content = job.get("hbase.index.conf");
+ if (content != null) {
+ indexConf.addFromXML(content);
+ }
+
+ String analyzerName = indexConf.getAnalyzerName();
+ Analyzer analyzer;
+ try {
+ Class<?> analyzerClass = Class.forName(analyzerName);
+ analyzer = (Analyzer) analyzerClass.newInstance();
+ } catch (Exception e) {
+ throw new IOException("Error in creating an analyzer object "
+ + analyzerName);
+ }
+
+ // build locally first
+ final IndexWriter writer = new IndexWriter(fs.startLocalOutput(perm, temp)
+ .toString(), analyzer, true);
+
+ // no delete, so no need for maxBufferedDeleteTerms
+ writer.setMaxBufferedDocs(indexConf.getMaxBufferedDocs());
+ writer.setMaxFieldLength(indexConf.getMaxFieldLength());
+ writer.setMaxMergeDocs(indexConf.getMaxMergeDocs());
+ writer.setMergeFactor(indexConf.getMergeFactor());
+ String similarityName = indexConf.getSimilarityName();
+ if (similarityName != null) {
+ try {
+ Class<?> similarityClass = Class.forName(similarityName);
+ Similarity similarity = (Similarity) similarityClass.newInstance();
+ writer.setSimilarity(similarity);
+ } catch (Exception e) {
+ throw new IOException("Error in creating a similarty object "
+ + similarityName);
+ }
+ }
+ writer.setUseCompoundFile(indexConf.isUseCompoundFile());
+
+ return new RecordWriter<ImmutableBytesWritable, LuceneDocumentWrapper>() {
+ boolean closed;
+ private long docCount = 0;
+
+ public void write(ImmutableBytesWritable key,
+ LuceneDocumentWrapper value)
+ throws IOException {
+ // unwrap and index doc
+ Document doc = value.get();
+ writer.addDocument(doc);
+ docCount++;
+ progress.progress();
+ }
+
+ public void close(final Reporter reporter) throws IOException {
+ // spawn a thread to give progress heartbeats
+ Thread prog = new Thread() {
+ @Override
+ public void run() {
+ while (!closed) {
+ try {
+ reporter.setStatus("closing");
+ Thread.sleep(1000);
+ } catch (InterruptedException e) {
+ continue;
+ } catch (Throwable e) {
+ return;
+ }
+ }
+ }
+ };
+
+ try {
+ prog.start();
+
+ // optimize index
+ if (indexConf.doOptimize()) {
+ if (LOG.isInfoEnabled()) {
+ LOG.info("Optimizing index.");
+ }
+ writer.optimize();
+ }
+
+ // close index
+ writer.close();
+ if (LOG.isInfoEnabled()) {
+ LOG.info("Done indexing " + docCount + " docs.");
+ }
+
+ // copy to perm destination in dfs
+ fs.completeLocalOutput(perm, temp);
+ if (LOG.isInfoEnabled()) {
+ LOG.info("Copy done.");
+ }
+ } finally {
+ closed = true;
+ }
+ }
+ };
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/mapred/IndexTableReduce.java b/src/java/org/apache/hadoop/hbase/mapred/IndexTableReduce.java
new file mode 100644
index 0000000..ae489b2
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/mapred/IndexTableReduce.java
@@ -0,0 +1,110 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Map;
+
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.MapReduceBase;
+import org.apache.hadoop.mapred.OutputCollector;
+import org.apache.hadoop.mapred.Reducer;
+import org.apache.hadoop.mapred.Reporter;
+import org.apache.lucene.document.Document;
+import org.apache.lucene.document.Field;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+/**
+ * Construct a Lucene document per row, which is consumed by IndexOutputFormat
+ * to build a Lucene index
+ */
+public class IndexTableReduce extends MapReduceBase implements
+ Reducer<ImmutableBytesWritable, RowResult, ImmutableBytesWritable, LuceneDocumentWrapper> {
+ private static final Log LOG = LogFactory.getLog(IndexTableReduce.class);
+ private IndexConfiguration indexConf;
+
+ @Override
+ public void configure(JobConf job) {
+ super.configure(job);
+ indexConf = new IndexConfiguration();
+ String content = job.get("hbase.index.conf");
+ if (content != null) {
+ indexConf.addFromXML(content);
+ }
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Index conf: " + indexConf);
+ }
+ }
+
+ @Override
+ public void close() throws IOException {
+ super.close();
+ }
+
+ public void reduce(ImmutableBytesWritable key, Iterator<RowResult> values,
+ OutputCollector<ImmutableBytesWritable, LuceneDocumentWrapper> output,
+ Reporter reporter)
+ throws IOException {
+ if (!values.hasNext()) {
+ return;
+ }
+
+ Document doc = new Document();
+
+ // index and store row key, row key already UTF-8 encoded
+ Field keyField = new Field(indexConf.getRowkeyName(),
+ Bytes.toString(key.get(), key.getOffset(), key.getLength()),
+ Field.Store.YES, Field.Index.UN_TOKENIZED);
+ keyField.setOmitNorms(true);
+ doc.add(keyField);
+
+ while (values.hasNext()) {
+ RowResult value = values.next();
+
+ // each column (name-value pair) is a field (name-value pair)
+ for (Map.Entry<byte [], Cell> entry : value.entrySet()) {
+ // name is already UTF-8 encoded
+ String column = Bytes.toString(entry.getKey());
+ byte[] columnValue = entry.getValue().getValue();
+ Field.Store store = indexConf.isStore(column)?
+ Field.Store.YES: Field.Store.NO;
+ Field.Index index = indexConf.isIndex(column)?
+ (indexConf.isTokenize(column)?
+ Field.Index.TOKENIZED: Field.Index.UN_TOKENIZED):
+ Field.Index.NO;
+
+ // UTF-8 encode value
+ Field field = new Field(column, Bytes.toString(columnValue),
+ store, index);
+ field.setBoost(indexConf.getBoost(column));
+ field.setOmitNorms(indexConf.isOmitNorms(column));
+
+ doc.add(field);
+ }
+ }
+ output.collect(key, new LuceneDocumentWrapper(doc));
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/mapred/LuceneDocumentWrapper.java b/src/java/org/apache/hadoop/hbase/mapred/LuceneDocumentWrapper.java
new file mode 100644
index 0000000..07fe30c
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/mapred/LuceneDocumentWrapper.java
@@ -0,0 +1,55 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import org.apache.hadoop.io.Writable;
+import org.apache.lucene.document.Document;
+
+/**
+ * A utility class used to pass a Lucene document from reduce to OutputFormat.
+ * It doesn't really serialize/deserialize a Lucene document.
+ */
+class LuceneDocumentWrapper implements Writable {
+ private Document doc;
+
+ /**
+ * @param doc
+ */
+ public LuceneDocumentWrapper(Document doc) {
+ this.doc = doc;
+ }
+
+ /**
+ * @return the document
+ */
+ public Document get() {
+ return doc;
+ }
+
+ public void readFields(DataInput in) {
+ // intentionally left blank
+ }
+
+ public void write(DataOutput out) {
+ // intentionally left blank
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/mapred/RowCounter.java b/src/java/org/apache/hadoop/hbase/mapred/RowCounter.java
new file mode 100644
index 0000000..3ddfad4
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/mapred/RowCounter.java
@@ -0,0 +1,136 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.IOException;
+import java.util.Map;
+
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.mapred.FileOutputFormat;
+import org.apache.hadoop.mapred.JobClient;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.OutputCollector;
+import org.apache.hadoop.mapred.Reporter;
+import org.apache.hadoop.mapred.lib.IdentityReducer;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+/**
+ * A job with a map to count rows.
+ * Map outputs table rows IF the input row has columns that have content.
+ * Uses an {@link IdentityReducer}
+ */
+public class RowCounter extends Configured implements Tool {
+ // Name of this 'program'
+ static final String NAME = "rowcounter";
+
+ /**
+ * Mapper that runs the count.
+ */
+ static class RowCounterMapper
+ implements TableMap<ImmutableBytesWritable, RowResult> {
+ private static enum Counters {ROWS}
+
+ public void map(ImmutableBytesWritable row, RowResult value,
+ OutputCollector<ImmutableBytesWritable, RowResult> output,
+ Reporter reporter)
+ throws IOException {
+ boolean content = false;
+ for (Map.Entry<byte [], Cell> e: value.entrySet()) {
+ Cell cell = e.getValue();
+ if (cell != null && cell.getValue().length > 0) {
+ content = true;
+ break;
+ }
+ }
+ if (!content) {
+ // Don't count rows that are all empty values.
+ return;
+ }
+ // Give out same value every time. We're only interested in the row/key
+ reporter.incrCounter(Counters.ROWS, 1);
+ }
+
+ public void configure(JobConf jc) {
+ // Nothing to do.
+ }
+
+ public void close() throws IOException {
+ // Nothing to do.
+ }
+ }
+
+ /**
+ * @param args
+ * @return the JobConf
+ * @throws IOException
+ */
+ public JobConf createSubmittableJob(String[] args) throws IOException {
+ JobConf c = new JobConf(getConf(), getClass());
+ c.setJobName(NAME);
+ // Columns are space delimited
+ StringBuilder sb = new StringBuilder();
+ final int columnoffset = 2;
+ for (int i = columnoffset; i < args.length; i++) {
+ if (i > columnoffset) {
+ sb.append(" ");
+ }
+ sb.append(args[i]);
+ }
+ // Second argument is the table name.
+ TableMapReduceUtil.initTableMapJob(args[1], sb.toString(),
+ RowCounterMapper.class, ImmutableBytesWritable.class, RowResult.class, c);
+ c.setNumReduceTasks(0);
+ // First arg is the output directory.
+ FileOutputFormat.setOutputPath(c, new Path(args[0]));
+ return c;
+ }
+
+ static int printUsage() {
+ System.out.println(NAME +
+ " <outputdir> <tablename> <column1> [<column2>...]");
+ return -1;
+ }
+
+ public int run(final String[] args) throws Exception {
+ // Make sure there are at least 3 parameters
+ if (args.length < 3) {
+ System.err.println("ERROR: Wrong number of parameters: " + args.length);
+ return printUsage();
+ }
+ JobClient.runJob(createSubmittableJob(args));
+ return 0;
+ }
+
+ /**
+ * @param args
+ * @throws Exception
+ */
+ public static void main(String[] args) throws Exception {
+ HBaseConfiguration c = new HBaseConfiguration();
+ int errCode = ToolRunner.run(c, new RowCounter(), args);
+ System.exit(errCode);
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/mapred/RowCounter_Counters.properties b/src/java/org/apache/hadoop/hbase/mapred/RowCounter_Counters.properties
new file mode 100644
index 0000000..5f4e2c5
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/mapred/RowCounter_Counters.properties
@@ -0,0 +1,6 @@
+
+# ResourceBundle properties file for RowCounter MR job
+
+CounterGroupName= RowCounter
+
+ROWS.name= Rows
diff --git a/src/java/org/apache/hadoop/hbase/mapred/TableInputFormat.java b/src/java/org/apache/hadoop/hbase/mapred/TableInputFormat.java
new file mode 100644
index 0000000..38d56b8
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/mapred/TableInputFormat.java
@@ -0,0 +1,85 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapred.FileInputFormat;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.JobConfigurable;
+import org.apache.hadoop.util.StringUtils;
+
+/**
+ * Convert HBase tabular data into a format that is consumable by Map/Reduce.
+ */
+public class TableInputFormat extends TableInputFormatBase implements
+ JobConfigurable {
+ private final Log LOG = LogFactory.getLog(TableInputFormat.class);
+
+ /**
+ * space delimited list of columns
+ *
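+ * <p>For example (the family names here are hypothetical):
+ * <code>job.set(TableInputFormat.COLUMN_LIST, "info: anchor:");</code>
+ *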
+ * @see org.apache.hadoop.hbase.regionserver.HAbstractScanner for column name
+ * wildcards
+ */
+ public static final String COLUMN_LIST = "hbase.mapred.tablecolumns";
+
+ public void configure(JobConf job) {
+ Path[] tableNames = FileInputFormat.getInputPaths(job);
+ String colArg = job.get(COLUMN_LIST);
+ String[] colNames = colArg.split(" ");
+ byte [][] m_cols = new byte[colNames.length][];
+ for (int i = 0; i < m_cols.length; i++) {
+ m_cols[i] = Bytes.toBytes(colNames[i]);
+ }
+ setInputColumns(m_cols);
+ try {
+ setHTable(new HTable(new HBaseConfiguration(job), tableNames[0].getName()));
+ } catch (Exception e) {
+ LOG.error(StringUtils.stringifyException(e));
+ }
+ }
+
+ public void validateInput(JobConf job) throws IOException {
+ // expecting exactly one path
+ Path [] tableNames = FileInputFormat.getInputPaths(job);
+ if (tableNames == null || tableNames.length != 1) {
+ throw new IOException("expecting one table name");
+ }
+
+ // connected to table?
+ if (getHTable() == null) {
+ throw new IOException("could not connect to table '" +
+ tableNames[0].getName() + "'");
+ }
+
+ // expecting at least one column
+ String colArg = job.get(COLUMN_LIST);
+ if (colArg == null || colArg.length() == 0) {
+ throw new IOException("expecting at least one column");
+ }
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/mapred/TableInputFormatBase.java b/src/java/org/apache/hadoop/hbase/mapred/TableInputFormatBase.java
new file mode 100644
index 0000000..ba82b7a
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/mapred/TableInputFormatBase.java
@@ -0,0 +1,341 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.UnknownScannerException;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Scanner;
+import org.apache.hadoop.hbase.filter.RowFilterInterface;
+import org.apache.hadoop.hbase.filter.RowFilterSet;
+import org.apache.hadoop.hbase.filter.StopRowFilter;
+import org.apache.hadoop.hbase.filter.WhileMatchRowFilter;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.util.Writables;
+import org.apache.hadoop.mapred.InputFormat;
+import org.apache.hadoop.mapred.InputSplit;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.RecordReader;
+import org.apache.hadoop.mapred.Reporter;
+import org.apache.hadoop.util.StringUtils;
+
+/**
+ * A Base for {@link TableInputFormat}s. Receives a {@link HTable}, a
+ * byte[] of input columns and optionally a {@link RowFilterInterface}.
+ * Subclasses may use other TableRecordReader implementations.
+ * <p>
+ * An example of a subclass:
+ * <pre>
+ * class ExampleTIF extends TableInputFormatBase implements JobConfigurable {
+ *
+ * public void configure(JobConf job) {
+ * HTable exampleTable = new HTable(new HBaseConfiguration(job),
+ * Bytes.toBytes("exampleTable"));
+ * // mandatory
+ * setHTable(exampleTable);
+ * byte [][] inputColumns = new byte [][] { Bytes.toBytes("columnA"),
+ * Bytes.toBytes("columnB") };
+ * // mandatory
+ * setInputColumns(inputColumns);
+ * RowFilterInterface exampleFilter = new RegExpRowFilter("keyPrefix.*");
+ * // optional
+ * setRowFilter(exampleFilter);
+ * }
+ *
+ * public void validateInput(JobConf job) throws IOException {
+ * }
+ * }
+ * </pre>
+ */
+public abstract class TableInputFormatBase
+implements InputFormat<ImmutableBytesWritable, RowResult> {
+ final Log LOG = LogFactory.getLog(TableInputFormatBase.class);
+ private byte [][] inputColumns;
+ private HTable table;
+ private TableRecordReader tableRecordReader;
+ private RowFilterInterface rowFilter;
+
+ /**
+ * Iterate over HBase table data, returning (ImmutableBytesWritable, RowResult) pairs
+ */
+ protected class TableRecordReader
+ implements RecordReader<ImmutableBytesWritable, RowResult> {
+ private byte [] startRow;
+ private byte [] endRow;
+ private byte [] lastRow;
+ private RowFilterInterface trrRowFilter;
+ private Scanner scanner;
+ private HTable htable;
+ private byte [][] trrInputColumns;
+
+ /**
+ * Restart from survivable exceptions by creating a new scanner.
+ *
+ * @param firstRow
+ * @throws IOException
+ */
+ public void restart(byte[] firstRow) throws IOException {
+ if ((endRow != null) && (endRow.length > 0)) {
+ if (trrRowFilter != null) {
+ final Set<RowFilterInterface> rowFiltersSet =
+ new HashSet<RowFilterInterface>();
+ rowFiltersSet.add(new WhileMatchRowFilter(new StopRowFilter(endRow)));
+ rowFiltersSet.add(trrRowFilter);
+ this.scanner = this.htable.getScanner(trrInputColumns, firstRow,
+ new RowFilterSet(RowFilterSet.Operator.MUST_PASS_ALL,
+ rowFiltersSet));
+ } else {
+ this.scanner =
+ this.htable.getScanner(trrInputColumns, firstRow, endRow);
+ }
+ } else {
+ this.scanner =
+ this.htable.getScanner(trrInputColumns, firstRow, trrRowFilter);
+ }
+ }
+
+ /**
+ * Build the scanner. Not done in constructor to allow for extension.
+ *
+ * @throws IOException
+ */
+ public void init() throws IOException {
+ restart(startRow);
+ }
+
+ /**
+ * @param htable the {@link HTable} to scan.
+ */
+ public void setHTable(HTable htable) {
+ this.htable = htable;
+ }
+
+ /**
+ * @param inputColumns the columns to be placed in {@link RowResult}.
+ */
+ public void setInputColumns(final byte [][] inputColumns) {
+ this.trrInputColumns = inputColumns;
+ }
+
+ /**
+ * @param startRow the first row in the split
+ */
+ public void setStartRow(final byte [] startRow) {
+ this.startRow = startRow;
+ }
+
+ /**
+ *
+ * @param endRow the last row in the split
+ */
+ public void setEndRow(final byte [] endRow) {
+ this.endRow = endRow;
+ }
+
+ /**
+ * @param rowFilter the {@link RowFilterInterface} to be used.
+ */
+ public void setRowFilter(RowFilterInterface rowFilter) {
+ this.trrRowFilter = rowFilter;
+ }
+
+ public void close() {
+ this.scanner.close();
+ }
+
+ /**
+ * @return ImmutableBytesWritable
+ *
+ * @see org.apache.hadoop.mapred.RecordReader#createKey()
+ */
+ public ImmutableBytesWritable createKey() {
+ return new ImmutableBytesWritable();
+ }
+
+ /**
+ * @return RowResult
+ *
+ * @see org.apache.hadoop.mapred.RecordReader#createValue()
+ */
+ public RowResult createValue() {
+ return new RowResult();
+ }
+
+ public long getPos() {
+ // This should be the ordinal tuple in the range;
+ // not clear how to calculate...
+ return 0;
+ }
+
+ public float getProgress() {
+ // Depends on the total number of tuples and getPos
+ return 0;
+ }
+
+ /**
+ * @param key ImmutableBytesWritable as input key.
+ * @param value RowResult as input value
+ * @return true if there was more data
+ * @throws IOException
+ */
+ public boolean next(ImmutableBytesWritable key, RowResult value)
+ throws IOException {
+ RowResult result;
+ try {
+ result = this.scanner.next();
+ } catch (UnknownScannerException e) {
+ LOG.debug("recovered from " + StringUtils.stringifyException(e));
+ restart(lastRow);
+ this.scanner.next(); // skip presumed already mapped row
+ result = this.scanner.next();
+ }
+
+ if (result != null && result.size() > 0) {
+ key.set(result.getRow());
+ lastRow = key.get();
+ Writables.copyWritable(result, value);
+ return true;
+ }
+ return false;
+ }
+ }
+
+ /**
+ * Builds a TableRecordReader. If no TableRecordReader was provided, uses
+ * the default.
+ *
+ * @see org.apache.hadoop.mapred.InputFormat#getRecordReader(InputSplit,
+ * JobConf, Reporter)
+ */
+ public RecordReader<ImmutableBytesWritable, RowResult> getRecordReader(
+ InputSplit split, JobConf job, Reporter reporter)
+ throws IOException {
+ TableSplit tSplit = (TableSplit) split;
+ TableRecordReader trr = this.tableRecordReader;
+ // if no table record reader was provided use default
+ if (trr == null) {
+ trr = new TableRecordReader();
+ }
+ trr.setStartRow(tSplit.getStartRow());
+ trr.setEndRow(tSplit.getEndRow());
+ trr.setHTable(this.table);
+ trr.setInputColumns(this.inputColumns);
+ trr.setRowFilter(this.rowFilter);
+ trr.init();
+ return trr;
+ }
+
+ /**
+ * Calculates the splits that will serve as input for the map tasks.
+ * <p>
+ * The number of splits created is the smaller of <code>numSplits</code> and
+ * the number of {@link HRegion}s in the table. If the number of splits is
+ * smaller than the number of {@link HRegion}s, then each split spans multiple
+ * {@link HRegion}s and regions are grouped as evenly as possible. When the
+ * splits are uneven, the larger splits are placed first in the
+ * {@link InputSplit} array.
+ * </p>
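+ * <p>
+ * For example (illustrative numbers only): with 10 regions and a
+ * <code>numSplits</code> of 4, the splits cover 3, 3, 2 and 2 regions.
+ * </p>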
+ *
+ * @param job the map task {@link JobConf}
+ * @param numSplits a hint to calculate the number of splits (mapred.map.tasks).
+ *
+ * @return the input splits
+ *
+ * @see org.apache.hadoop.mapred.InputFormat#getSplits(org.apache.hadoop.mapred.JobConf, int)
+ */
+ public InputSplit[] getSplits(JobConf job, int numSplits) throws IOException {
+ if (this.table == null) {
+ throw new IOException("No table was provided");
+ }
+ byte [][] startKeys = this.table.getStartKeys();
+ if (startKeys == null || startKeys.length == 0) {
+ throw new IOException("Expecting at least one region");
+ }
+ if (this.inputColumns == null || this.inputColumns.length == 0) {
+ throw new IOException("Expecting at least one column");
+ }
+ int realNumSplits = numSplits > startKeys.length? startKeys.length:
+ numSplits;
+ InputSplit[] splits = new InputSplit[realNumSplits];
+ int middle = startKeys.length / realNumSplits;
+ int startPos = 0;
+ for (int i = 0; i < realNumSplits; i++) {
+ int lastPos = startPos + middle;
+ lastPos = startKeys.length % realNumSplits > i ? lastPos + 1 : lastPos;
+ String regionLocation = table.getRegionLocation(startKeys[startPos]).
+ getServerAddress().getHostname();
+ splits[i] = new TableSplit(this.table.getTableName(),
+ startKeys[startPos], ((i + 1) < realNumSplits) ? startKeys[lastPos]:
+ HConstants.EMPTY_START_ROW, regionLocation);
+ LOG.info("split: " + i + "->" + splits[i]);
+ startPos = lastPos;
+ }
+ return splits;
+ }
+
+ /**
+ * @param inputColumns to be passed in {@link RowResult} to the map task.
+ */
+ protected void setInputColumns(byte [][] inputColumns) {
+ this.inputColumns = inputColumns;
+ }
+
+ /**
+ * Allows subclasses to get the {@link HTable}.
+ */
+ protected HTable getHTable() {
+ return this.table;
+ }
+
+ /**
+ * Allows subclasses to set the {@link HTable}.
+ *
+ * @param table to get the data from
+ */
+ protected void setHTable(HTable table) {
+ this.table = table;
+ }
+
+ /**
+ * Allows subclasses to set the {@link TableRecordReader}.
+ *
+ * @param tableRecordReader
+ * to provide other {@link TableRecordReader} implementations.
+ */
+ protected void setTableRecordReader(TableRecordReader tableRecordReader) {
+ this.tableRecordReader = tableRecordReader;
+ }
+
+ /**
+ * Allows subclasses to set the {@link RowFilterInterface} to be used.
+ *
+ * @param rowFilter
+ */
+ protected void setRowFilter(RowFilterInterface rowFilter) {
+ this.rowFilter = rowFilter;
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/mapred/TableMap.java b/src/java/org/apache/hadoop/hbase/mapred/TableMap.java
new file mode 100644
index 0000000..0090708
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/mapred/TableMap.java
@@ -0,0 +1,38 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapred.Mapper;
+
+/**
+ * Scan an HBase table and feed each row to a Map/Reduce job as
+ * (ImmutableBytesWritable row key, RowResult row) pairs.
+ *
+ * @param <K> WritableComparable key class
+ * @param <V> Writable value class
+ */
+public interface TableMap<K extends WritableComparable<K>, V extends Writable>
+extends Mapper<ImmutableBytesWritable, RowResult, K, V> {
+
+}
diff --git a/src/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java b/src/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java
new file mode 100644
index 0000000..390c651
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java
@@ -0,0 +1,171 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapred.FileInputFormat;
+import org.apache.hadoop.mapred.JobConf;
+
+/**
+ * Utility for {@link TableMap} and {@link TableReduce}
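+ *
+ * <p>A minimal usage sketch (the table name "mytable", the column family
+ * "info:", MyMap and MyReduce are all hypothetical):
+ * <pre>
+ * JobConf job = new JobConf(new HBaseConfiguration(), MyMap.class);
+ * TableMapReduceUtil.initTableMapJob("mytable", "info:",
+ *   MyMap.class, ImmutableBytesWritable.class, RowResult.class, job);
+ * TableMapReduceUtil.initTableReduceJob("mytable", MyReduce.class, job);
+ * </pre>
+ * where MyMap implements {@link TableMap} and MyReduce implements
+ * {@link TableReduce}.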
+ */
+@SuppressWarnings("unchecked")
+public class TableMapReduceUtil {
+
+ /**
+ * Use this before submitting a TableMap job. It will
+ * appropriately set up the JobConf.
+ *
+ * @param table The table name to read from.
+ * @param columns The columns to scan.
+ * @param mapper The mapper class to use.
+ * @param outputKeyClass The class of the output key.
+ * @param outputValueClass The class of the output value.
+ * @param job The current job configuration to adjust.
+ */
+ public static void initTableMapJob(String table, String columns,
+ Class<? extends TableMap> mapper,
+ Class<? extends WritableComparable> outputKeyClass,
+ Class<? extends Writable> outputValueClass, JobConf job) {
+
+ job.setInputFormat(TableInputFormat.class);
+ job.setMapOutputValueClass(outputValueClass);
+ job.setMapOutputKeyClass(outputKeyClass);
+ job.setMapperClass(mapper);
+ FileInputFormat.addInputPaths(job, table);
+ job.set(TableInputFormat.COLUMN_LIST, columns);
+ }
+
+ /**
+ * Use this before submitting a TableReduce job. It will
+ * appropriately set up the JobConf.
+ *
+ * @param table The output table.
+ * @param reducer The reducer class to use.
+ * @param job The current job configuration to adjust.
+ * @throws IOException When determining the region count fails.
+ */
+ public static void initTableReduceJob(String table,
+ Class<? extends TableReduce> reducer, JobConf job)
+ throws IOException {
+ initTableReduceJob(table, reducer, job, null);
+ }
+
+ /**
+ * Use this before submitting a TableReduce job. It will
+ * appropriately set up the JobConf.
+ *
+ * @param table The output table.
+ * @param reducer The reducer class to use.
+ * @param job The current job configuration to adjust.
+ * @param partitioner Partitioner to use. Pass <code>null</code> to use
+ * default partitioner.
+ * @throws IOException When determining the region count fails.
+ */
+ public static void initTableReduceJob(String table,
+ Class<? extends TableReduce> reducer, JobConf job, Class partitioner)
+ throws IOException {
+ job.setOutputFormat(TableOutputFormat.class);
+ job.setReducerClass(reducer);
+ job.set(TableOutputFormat.OUTPUT_TABLE, table);
+ job.setOutputKeyClass(ImmutableBytesWritable.class);
+ job.setOutputValueClass(BatchUpdate.class);
+ if (partitioner == HRegionPartitioner.class) {
+ job.setPartitionerClass(HRegionPartitioner.class);
+ HTable outputTable = new HTable(new HBaseConfiguration(job), table);
+ int regions = outputTable.getRegionsInfo().size();
+ if (job.getNumReduceTasks() > regions) {
+ job.setNumReduceTasks(outputTable.getRegionsInfo().size());
+ }
+ } else if (partitioner != null) {
+ job.setPartitionerClass(partitioner);
+ }
+ }
+
+ /**
+ * Ensures that the given number of reduce tasks for the given job
+ * configuration does not exceed the number of regions for the given table.
+ *
+ * @param table The table to get the region count for.
+ * @param job The current job configuration to adjust.
+ * @throws IOException When retrieving the table details fails.
+ */
+ public void limitNumReduceTasks(String table, JobConf job)
+ throws IOException {
+ HTable outputTable = new HTable(new HBaseConfiguration(job), table);
+ int regions = outputTable.getRegionsInfo().size();
+ if (job.getNumReduceTasks() > regions)
+ job.setNumReduceTasks(regions);
+ }
+
+ /**
+ * Ensures that the given number of map tasks for the given job
+ * configuration does not exceed the number of regions for the given table.
+ *
+ * @param table The table to get the region count for.
+ * @param job The current job configuration to adjust.
+ * @throws IOException When retrieving the table details fails.
+ */
+ public void limitNumMapTasks(String table, JobConf job)
+ throws IOException {
+ HTable outputTable = new HTable(new HBaseConfiguration(job), table);
+ int regions = outputTable.getRegionsInfo().size();
+ if (job.getNumMapTasks() > regions)
+ job.setNumMapTasks(regions);
+ }
+
+ /**
+ * Sets the number of reduce tasks for the given job configuration to the
+ * number of regions the given table has.
+ *
+ * @param table The table to get the region count for.
+ * @param job The current job configuration to adjust.
+ * @throws IOException When retrieving the table details fails.
+ */
+ public void setNumReduceTasks(String table, JobConf job)
+ throws IOException {
+ HTable outputTable = new HTable(new HBaseConfiguration(job), table);
+ int regions = outputTable.getRegionsInfo().size();
+ job.setNumReduceTasks(regions);
+ }
+
+ /**
+ * Sets the number of map tasks for the given job configuration to the
+ * number of regions the given table has.
+ *
+ * @param table The table to get the region count for.
+ * @param job The current job configuration to adjust.
+ * @throws IOException When retrieving the table details fails.
+ */
+ public void setNumMapTasks(String table, JobConf job)
+ throws IOException {
+ HTable outputTable = new HTable(new HBaseConfiguration(job), table);
+ int regions = outputTable.getRegionsInfo().size();
+ job.setNumMapTasks(regions);
+ }
+
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/mapred/TableOutputFormat.java b/src/java/org/apache/hadoop/hbase/mapred/TableOutputFormat.java
new file mode 100644
index 0000000..9d6adb4
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/mapred/TableOutputFormat.java
@@ -0,0 +1,105 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.mapred.FileAlreadyExistsException;
+import org.apache.hadoop.mapred.InvalidJobConfException;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.FileOutputFormat;
+import org.apache.hadoop.mapred.RecordWriter;
+import org.apache.hadoop.mapred.Reporter;
+import org.apache.hadoop.util.Progressable;
+
+/**
+ * Convert Map/Reduce output and write it to an HBase table
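+ *
+ * <p>A typical configuration sketch (the table name is hypothetical); the
+ * same settings are applied by the {@link TableMapReduceUtil} helpers:
+ * <pre>
+ * job.setOutputFormat(TableOutputFormat.class);
+ * job.set(TableOutputFormat.OUTPUT_TABLE, "mytable");
+ * job.setOutputKeyClass(ImmutableBytesWritable.class);
+ * job.setOutputValueClass(BatchUpdate.class);
+ * </pre>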
+ */
+public class TableOutputFormat extends
+FileOutputFormat<ImmutableBytesWritable, BatchUpdate> {
+
+ /** JobConf parameter that specifies the output table */
+ public static final String OUTPUT_TABLE = "hbase.mapred.outputtable";
+ private final Log LOG = LogFactory.getLog(TableOutputFormat.class);
+
+ /**
+ * Convert Reduce output (ImmutableBytesWritable, BatchUpdate) pairs
+ * and write them to an HBase table
+ */
+ protected static class TableRecordWriter
+ implements RecordWriter<ImmutableBytesWritable, BatchUpdate> {
+ private HTable m_table;
+
+ /**
+ * Instantiate a TableRecordWriter with the HBase HClient for writing.
+ *
+ * @param table
+ */
+ public TableRecordWriter(HTable table) {
+ m_table = table;
+ }
+
+ public void close(Reporter reporter)
+ throws IOException {
+ m_table.flushCommits();
+ }
+
+ public void write(ImmutableBytesWritable key,
+ BatchUpdate value) throws IOException {
+ m_table.commit(new BatchUpdate(value));
+ }
+ }
+
+ @Override
+ @SuppressWarnings("unchecked")
+ public RecordWriter getRecordWriter(FileSystem ignored,
+ JobConf job, String name, Progressable progress) throws IOException {
+
+ // Look up the target table named in the job configuration.
+
+ String tableName = job.get(OUTPUT_TABLE);
+ HTable table = null;
+ try {
+ table = new HTable(new HBaseConfiguration(job), tableName);
+ } catch(IOException e) {
+ LOG.error(e);
+ throw e;
+ }
+ table.setAutoFlush(false);
+ return new TableRecordWriter(table);
+ }
+
+ @Override
+ public void checkOutputSpecs(FileSystem ignored, JobConf job)
+ throws FileAlreadyExistsException, InvalidJobConfException, IOException {
+
+ String tableName = job.get(OUTPUT_TABLE);
+ if(tableName == null) {
+ throw new IOException("Must specify table name");
+ }
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/mapred/TableReduce.java b/src/java/org/apache/hadoop/hbase/mapred/TableReduce.java
new file mode 100644
index 0000000..7584770
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/mapred/TableReduce.java
@@ -0,0 +1,38 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapred.Reducer;
+
+/**
+ * Write a table, sorting by the input key
+ *
+ * @param <K> key class
+ * @param <V> value class
+ */
+@SuppressWarnings("unchecked")
+public interface TableReduce<K extends WritableComparable, V extends Writable>
+extends Reducer<K, V, ImmutableBytesWritable, BatchUpdate> {
+
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/mapred/TableSplit.java b/src/java/org/apache/hadoop/hbase/mapred/TableSplit.java
new file mode 100644
index 0000000..435e2a7
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/mapred/TableSplit.java
@@ -0,0 +1,112 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapred.InputSplit;
+
+/**
+ * A table split corresponds to a key range [low, high)
+ */
+public class TableSplit implements InputSplit, Comparable<TableSplit> {
+ private byte [] m_tableName;
+ private byte [] m_startRow;
+ private byte [] m_endRow;
+ private String m_regionLocation;
+
+ /** default constructor */
+ public TableSplit() {
+ this(HConstants.EMPTY_BYTE_ARRAY, HConstants.EMPTY_BYTE_ARRAY,
+ HConstants.EMPTY_BYTE_ARRAY, "");
+ }
+
+ /**
+ * Constructor
+ * @param tableName
+ * @param startRow
+ * @param endRow
+ * @param location
+ */
+ public TableSplit(byte [] tableName, byte [] startRow, byte [] endRow,
+ final String location) {
+ this.m_tableName = tableName;
+ this.m_startRow = startRow;
+ this.m_endRow = endRow;
+ this.m_regionLocation = location;
+ }
+
+ /** @return table name */
+ public byte [] getTableName() {
+ return this.m_tableName;
+ }
+
+ /** @return starting row key */
+ public byte [] getStartRow() {
+ return this.m_startRow;
+ }
+
+ /** @return end row key */
+ public byte [] getEndRow() {
+ return this.m_endRow;
+ }
+
+ /** @return the region's hostname */
+ public String getRegionLocation() {
+ return this.m_regionLocation;
+ }
+
+ public String[] getLocations() {
+ return new String[] {this.m_regionLocation};
+ }
+
+ public long getLength() {
+ // Not clear how to obtain this... seems to be used only for sorting splits
+ return 0;
+ }
+
+ public void readFields(DataInput in) throws IOException {
+ this.m_tableName = Bytes.readByteArray(in);
+ this.m_startRow = Bytes.readByteArray(in);
+ this.m_endRow = Bytes.readByteArray(in);
+ this.m_regionLocation = Bytes.toString(Bytes.readByteArray(in));
+ }
+
+ public void write(DataOutput out) throws IOException {
+ Bytes.writeByteArray(out, this.m_tableName);
+ Bytes.writeByteArray(out, this.m_startRow);
+ Bytes.writeByteArray(out, this.m_endRow);
+ Bytes.writeByteArray(out, Bytes.toBytes(this.m_regionLocation));
+ }
+
+ @Override
+ public String toString() {
+ return m_regionLocation + ":" +
+ Bytes.toString(m_startRow) + "," + Bytes.toString(m_endRow);
+ }
+
+ public int compareTo(TableSplit o) {
+ return Bytes.compareTo(getStartRow(), o.getStartRow());
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/mapred/package-info.java b/src/java/org/apache/hadoop/hbase/mapred/package-info.java
new file mode 100644
index 0000000..e1d3719
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/mapred/package-info.java
@@ -0,0 +1,267 @@
+/*
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+/**
+Provides HBase <a href="http://wiki.apache.org/hadoop/HadoopMapReduce">MapReduce</a>
+Input/OutputFormats, a table indexing MapReduce job, and utility
+
+<h2>Table of Contents</h2>
+<ul>
+<li><a href="#classpath">HBase, MapReduce and the CLASSPATH</a></li>
+<li><a href="#sink">HBase as MapReduce job data source and sink</a></li>
+<li><a href="#examples">Example Code</a></li>
+</ul>
+
+<h2><a name="classpath">HBase, MapReduce and the CLASSPATH</a></h2>
+
+<p>MapReduce jobs deployed to a MapReduce cluster do not by default have access
+to the HBase configuration under <code>$HBASE_CONF_DIR</code> nor to HBase classes.
+You could add <code>hbase-site.xml</code> to $HADOOP_HOME/conf and add
+<code>hbase-X.X.X.jar</code> to the <code>$HADOOP_HOME/lib</code> and copy these
+changes across your cluster but the cleanest means of adding hbase configuration
+and classes to the cluster <code>CLASSPATH</code> is by uncommenting
+<code>HADOOP_CLASSPATH</code> in <code>$HADOOP_HOME/conf/hadoop-env.sh</code>
+and adding the path to the hbase jar and <code>$HBASE_CONF_DIR</code> directory.
+Then copy the amended configuration around the cluster.
+You'll probably need to restart the MapReduce cluster if you want it to notice
+the new configuration.
+</p>
+
+<p>For example, here is how you would amend <code>hadoop-env.sh</code> adding the
+built hbase jar, hbase conf, and the <code>PerformanceEvaluation</code> class from
+the built hbase test jar to the hadoop <code>CLASSPATH</code>:
+
+<blockquote><pre># Extra Java CLASSPATH elements. Optional.
+# export HADOOP_CLASSPATH=
+export HADOOP_CLASSPATH=$HBASE_HOME/build/test:$HBASE_HOME/build/hbase-X.X.X.jar:$HBASE_HOME/build/hbase-X.X.X-test.jar:$HBASE_HOME/conf</pre></blockquote>
+
+<p>Expand <code>$HBASE_HOME</code> in the above appropriately to suit your
+local environment.</p>
+
+<p>After copying the above change around your cluster, this is how you would run
+the PerformanceEvaluation MR job to put up 4 clients (Presumes a ready mapreduce
+cluster):
+
+<blockquote><pre>$HADOOP_HOME/bin/hadoop org.apache.hadoop.hbase.PerformanceEvaluation sequentialWrite 4</pre></blockquote>
+
+The PerformanceEvaluation class will be found on the CLASSPATH because you
+added <code>$HBASE_HOME/build/test</code> to HADOOP_CLASSPATH
+</p>
+
+<p>Another possibility, if for example you do not have access to hadoop-env.sh or
+are unable to restart the hadoop cluster, is to bundle HBase into your mapreduce
+job jar: add the hbase jar and its dependencies under the job jar's
+<code>lib/</code> directory and the hbase configuration under a job jar
+<code>conf/</code> directory.
+</p>
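+
+<p>One way to lay out such a job jar, as a rough sketch (the paths and jar
+name below are illustrative only):
+
+<blockquote><pre>mkdir -p jobjar/lib jobjar/conf
+cp $HBASE_HOME/build/hbase-X.X.X.jar jobjar/lib/
+cp $HBASE_HOME/conf/hbase-site.xml jobjar/conf/
+# copy your compiled job classes into jobjar/, then package it up:
+(cd jobjar &amp;&amp; jar cf ../myjob.jar .)</pre></blockquote>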
+
+<h2><a name="sink">HBase as MapReduce job data source and sink</a></h2>
+
+<p>HBase can be used as a data source, {@link org.apache.hadoop.hbase.mapred.TableInputFormat TableInputFormat},
+and data sink, {@link org.apache.hadoop.hbase.mapred.TableOutputFormat TableOutputFormat}, for MapReduce jobs.
+Writing MapReduce jobs that read or write HBase, you'll probably want to subclass
+{@link org.apache.hadoop.hbase.mapred.TableMap TableMap} and/or
+{@link org.apache.hadoop.hbase.mapred.TableReduce TableReduce}. See the do-nothing
+pass-through classes {@link org.apache.hadoop.hbase.mapred.IdentityTableMap IdentityTableMap} and
+{@link org.apache.hadoop.hbase.mapred.IdentityTableReduce IdentityTableReduce} for basic usage. For a more
+involved example, see {@link org.apache.hadoop.hbase.mapred.BuildTableIndex BuildTableIndex}
+or review the <code>org.apache.hadoop.hbase.mapred.TestTableMapReduce</code> unit test.
+</p>
+
+<p>Running mapreduce jobs that have hbase as source or sink, you'll need to
+specify source/sink table and column names in your configuration.</p>
+
+<p>Reading from hbase, the TableInputFormat asks hbase for the list of
+regions and makes a map per region or <code>mapred.map.tasks</code> maps,
+whichever is smaller (if your job only has two maps, raise
+<code>mapred.map.tasks</code> to a number greater than the number of regions).
+Maps will run on the adjacent TaskTracker if you are running a TaskTracker
+and RegionServer per node.
+When writing, it may make sense to avoid the reduce step and write back into
+hbase from inside your map. You'd do this when your job does not need the sort
+and collation that mapreduce does on the map emitted data; on insert,
+hbase 'sorts' so there is no point double-sorting (and shuffling data around
+your mapreduce cluster) unless you need to. If you do not need the reduce,
+you might just have your map emit counts of records processed so that the
+framework's report at the end of your job has meaning, or set the number of
+reduces to zero and use TableOutputFormat. See the example code
+below. If running the reduce step makes sense in your case, it's usually better
+to have many reducers so load is spread across the hbase cluster.</p>
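+
+<p>As a rough sketch of the map-only, reduce-free variant described above
+(the table names "sourcetable" and "targettable", the column family "info:"
+and the marker column "mark:seen" are all made up for illustration):
+
+<blockquote><pre>
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.mapred.TableMap;
+import org.apache.hadoop.hbase.mapred.TableMapReduceUtil;
+import org.apache.hadoop.hbase.mapred.TableOutputFormat;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapred.JobClient;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.MapReduceBase;
+import org.apache.hadoop.mapred.OutputCollector;
+import org.apache.hadoop.mapred.Reporter;
+
+public class MarkRows {
+  public static class MarkMap extends MapReduceBase
+  implements TableMap<ImmutableBytesWritable, BatchUpdate> {
+    public void map(ImmutableBytesWritable key, RowResult value,
+        OutputCollector<ImmutableBytesWritable, BatchUpdate> output,
+        Reporter reporter)
+    throws IOException {
+      // Emit one marker cell per row scanned; no reduce step is needed.
+      BatchUpdate bu = new BatchUpdate(key.get());
+      bu.put("mark:seen", Bytes.toBytes("true"));
+      output.collect(key, bu);
+    }
+  }
+
+  public static void main(String[] args) throws Exception {
+    JobConf job = new JobConf(new HBaseConfiguration(), MarkRows.class);
+    job.setJobName("markrows");
+    // Scan the "info:" family of "sourcetable"; map output is the final output.
+    TableMapReduceUtil.initTableMapJob("sourcetable", "info:",
+      MarkMap.class, ImmutableBytesWritable.class, BatchUpdate.class, job);
+    job.setNumReduceTasks(0);
+    job.setOutputFormat(TableOutputFormat.class);
+    job.set(TableOutputFormat.OUTPUT_TABLE, "targettable");
+    job.setOutputKeyClass(ImmutableBytesWritable.class);
+    job.setOutputValueClass(BatchUpdate.class);
+    JobClient.runJob(job);
+  }
+}
+</pre></blockquote>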
+
+<p>There is also a new hbase partitioner that will run as many reducers as
+there are currently existing regions. The
+{@link org.apache.hadoop.hbase.mapred.HRegionPartitioner} is suitable
+when your table is large and your upload will not greatly alter the number
+of existing regions when done; otherwise use the default partitioner.
+</p>
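+
+<p>For example, the reduce side of a (hypothetical) upload into a table named
+"targettable" could be wired up as follows; initTableReduceJob will also keep
+the number of reduces from exceeding the table's current region count:
+
+<blockquote><pre>
+JobConf job = new JobConf(new HBaseConfiguration(), MyTableReduce.class);
+// MyTableReduce is a made-up class implementing TableReduce.
+TableMapReduceUtil.initTableReduceJob("targettable", MyTableReduce.class, job,
+  HRegionPartitioner.class);
+</pre></blockquote>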
+
+<h2><a name="examples">Example Code</a></h2>
+<h3>Sample Row Counter</h3>
+<p>See {@link org.apache.hadoop.hbase.mapred.RowCounter}. You should be able to run
+it by doing: <code>% ./bin/hadoop jar hbase-X.X.X.jar</code>. This will invoke
+the hbase MapReduce Driver class. Select 'rowcounter' from the choice of jobs
+offered. You may need to add the hbase conf directory to <code>$HADOOP_HOME/conf/hadoop-env.sh#HADOOP_CLASSPATH</code>
+so the rowcounter gets pointed at the right hbase cluster (or, build a new jar
+with an appropriate hbase-site.xml built into your job jar).
+</p>
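+<p>With the hbase configuration on the classpath, an invocation might look
+roughly like this (the output directory, table name and column are
+placeholders):
+
+<blockquote><pre>$HADOOP_HOME/bin/hadoop jar $HBASE_HOME/build/hbase-X.X.X.jar rowcounter /tmp/rowcounter-output mytable info:</pre></blockquote>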
+<h3>PerformanceEvaluation</h3>
+<p>See org.apache.hadoop.hbase.PerformanceEvaluation from hbase src/test. It runs
+a mapreduce job to run concurrent clients reading and writing hbase.
+</p>
+
+<h3>Sample MR Bulk Uploader</h3>
+<p>A students/classes example based on a contribution by Naama Kraus with lots of
+documentation can be found over in src/examples/mapred.
+It's the <code>org.apache.hadoop.hbase.mapred.SampleUploader</code> class.
+Just copy it under src/java/org/apache/hadoop/hbase/mapred to compile and try it
+(until we start generating an hbase examples jar). The class reads a data file
+from HDFS and per line, does an upload to HBase using TableReduce.
+Read the class comment for specification of inputs, prerequisites, etc.
+</p>
+
+<h3>Example to bulk import/load a text file into an HTable
+</h3>
+
+<p>Here's a sample program from
+<a href="http://www.spicylogic.com/allenday/blog/category/computing/distributed-systems/hadoop/hbase/">Allen Day</a>
+that takes an HDFS text file path and an HBase table name as inputs, and loads the contents of the text file to the table
+all up in the map phase.
+</p>
+
+<blockquote><pre>
+package com.spicylogic.hbase;
+import java.io.IOException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapred.FileInputFormat;
+import org.apache.hadoop.mapred.JobClient;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.MapReduceBase;
+import org.apache.hadoop.mapred.Mapper;
+import org.apache.hadoop.mapred.OutputCollector;
+import org.apache.hadoop.mapred.Reporter;
+import org.apache.hadoop.mapred.lib.NullOutputFormat;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+/**
+ * Class that adds the parsed line from the input to hbase
+ * in the map function. Map has no emissions and job
+ * has no reduce.
+ */
+public class BulkImport implements Tool {
+ private static final String NAME = "BulkImport";
+ private Configuration conf;
+
+ public static class InnerMap extends MapReduceBase implements Mapper<LongWritable, Text, Text, Text> {
+ private HTable table;
+ private HBaseConfiguration HBconf;
+
+ public void map(LongWritable key, Text value,
+ OutputCollector<Text, Text> output, Reporter reporter)
+ throws IOException {
+ if ( table == null )
+ throw new IOException("table is null");
+
+ // Split input line on tab character
+ String [] splits = value.toString().split("\t");
+ if ( splits.length != 4 )
+ return;
+
+ String rowID = splits[0];
+ int timestamp = Integer.parseInt( splits[1] );
+ String colID = splits[2];
+ String cellValue = splits[3];
+
+ reporter.setStatus("Map emitting cell for row='" + rowID +
+ "', column='" + colID + "', time='" + timestamp + "'");
+
+ BatchUpdate bu = new BatchUpdate( rowID );
+ if ( timestamp > 0 )
+ bu.setTimestamp( timestamp );
+
+ bu.put(colID, cellValue.getBytes());
+ table.commit( bu );
+ }
+
+ public void configure(JobConf job) {
+ HBconf = new HBaseConfiguration(job);
+ try {
+ table = new HTable( HBconf, job.get("input.table") );
+ } catch (IOException e) {
+ // Could not open the table; map() will fail with "table is null".
+ e.printStackTrace();
+ }
+ }
+ }
+
+ public JobConf createSubmittableJob(String[] args) {
+ JobConf c = new JobConf(getConf(), BulkImport.class);
+ c.setJobName(NAME);
+ FileInputFormat.setInputPaths(c, new Path(args[0]));
+
+ c.set("input.table", args[1]);
+ c.setMapperClass(InnerMap.class);
+ c.setNumReduceTasks(0);
+ c.setOutputFormat(NullOutputFormat.class);
+ return c;
+ }
+
+ static int printUsage() {
+ System.err.println("Usage: " + NAME + " <input> <table_name>");
+ System.err.println("\twhere <input> is a tab-delimited text file with 4 columns.");
+ System.err.println("\t\tcolumn 1 = row ID");
+ System.err.println("\t\tcolumn 2 = timestamp (use a negative value for current time)");
+ System.err.println("\t\tcolumn 3 = column ID");
+ System.err.println("\t\tcolumn 4 = cell value");
+ return -1;
+ }
+
+ public int run(String[] args) throws Exception {
+ // Make sure there are exactly 2 parameters.
+ if (args.length != 2) {
+ return printUsage();
+ }
+ JobClient.runJob(createSubmittableJob(args));
+ return 0;
+ }
+
+ public Configuration getConf() {
+ return this.conf;
+ }
+
+ public void setConf(final Configuration c) {
+ this.conf = c;
+ }
+
+ public static void main(String[] args) throws Exception {
+ int errCode = ToolRunner.run(new Configuration(), new BulkImport(), args);
+ System.exit(errCode);
+ }
+}
+</pre></blockquote>
+
+*/
+package org.apache.hadoop.hbase.mapred;
diff --git a/src/java/org/apache/hadoop/hbase/master/AddColumn.java b/src/java/org/apache/hadoop/hbase/master/AddColumn.java
new file mode 100644
index 0000000..c46aa41
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/master/AddColumn.java
@@ -0,0 +1,50 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+
+/** Instantiated to add a column family to a table */
+class AddColumn extends ColumnOperation {
+ private final HColumnDescriptor newColumn;
+
+ AddColumn(final HMaster master, final byte [] tableName,
+ final HColumnDescriptor newColumn)
+ throws IOException {
+ super(master, tableName);
+ this.newColumn = newColumn;
+ }
+
+ @Override
+ protected void postProcessMeta(MetaRegion m, HRegionInterface server)
+ throws IOException {
+ for (HRegionInfo i: unservedRegions) {
+ // All we need to do to add a column is add it to the table descriptor.
+ // When the region is brought on-line, it will find the column missing
+ // and create it.
+ i.getTableDesc().addFamily(newColumn);
+ updateRegionInfo(server, m.getRegionName(), i);
+ }
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/master/BaseScanner.java b/src/java/org/apache/hadoop/hbase/master/BaseScanner.java
new file mode 100644
index 0000000..a2c5e8e
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/master/BaseScanner.java
@@ -0,0 +1,406 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathFilter;
+import org.apache.hadoop.hbase.Chore;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.RemoteExceptionHandler;
+import org.apache.hadoop.hbase.UnknownScannerException;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.regionserver.HLog;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.Store;
+import org.apache.hadoop.hbase.regionserver.StoreFile;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Writables;
+import org.apache.hadoop.ipc.RemoteException;
+
+
+/**
+ * Base HRegion scanner class. Holds utility code common to <code>ROOT</code> and
+ * <code>META</code> HRegion scanners.
+ *
+ * <p>How do we know if all regions are assigned? After the initial scan of
+ * the <code>ROOT</code> and <code>META</code> regions, all regions known at
+ * that time will have been or are in the process of being assigned.</p>
+ *
+ * <p>When a region is split the region server notifies the master of the
+ * split and the new regions are assigned. But suppose the master loses the
+ * split message? We need to periodically rescan the <code>ROOT</code> and
+ * <code>META</code> regions.
+ * <ul>
+ * <li>If we rescan, any regions that are new but not assigned will have
+ * no server info. Any regions that are not being served by the same
+ * server will get re-assigned.</li>
+ *
+ * <li>Thus a periodic rescan of the root region will find any new
+ * <code>META</code> regions where we missed the <code>META</code> split
+ * message or we failed to detect a server death and consequently need to
+ * assign the region to a new server.</li>
+ *
+ * <li>if we keep track of all the known <code>META</code> regions, then
+ * we can rescan them periodically. If we do this then we can detect any
+ * regions for which we missed a region split message.</li>
+ * </ul>
+ *
+ * <p>Thus just keeping track of all the <code>META</code> regions permits
+ * periodic rescanning which will detect unassigned regions (new or
+ * otherwise) without the need to keep track of every region.</p>
+ *
+ * <p>So the <code>ROOT</code> region scanner needs to wake up:
+ * <ol>
+ * <li>when the master receives notification that the <code>ROOT</code>
+ * region has been opened.</li>
+ * <li>periodically after the first scan</li>
+ * </ol>
+ *
+ * The <code>META</code> scanner needs to wake up:
+ * <ol>
+ * <li>when a <code>META</code> region comes on line</li>
+ * <li>periodically to rescan the online <code>META</code> regions</li>
+ * </ol>
+ *
+ * <p>A <code>META</code> region is not 'online' until it has been scanned
+ * once.
+ */
+abstract class BaseScanner extends Chore implements HConstants {
+ static final Log LOG = LogFactory.getLog(BaseScanner.class.getName());
+
+ private final boolean rootRegion;
+ protected final HMaster master;
+
+ protected boolean initialScanComplete;
+
+ protected abstract boolean initialScan();
+ protected abstract void maintenanceScan();
+
+ // will use this variable to synchronize and make sure we aren't interrupted
+ // mid-scan
+ final Object scannerLock = new Object();
+
+ BaseScanner(final HMaster master, final boolean rootRegion, final int period,
+ final AtomicBoolean stop) {
+ super(period, stop);
+ this.rootRegion = rootRegion;
+ this.master = master;
+ this.initialScanComplete = false;
+ }
+
+ /** @return true if initial scan completed successfully */
+ public boolean isInitialScanComplete() {
+ return initialScanComplete;
+ }
+
+ @Override
+ protected boolean initialChore() {
+ return initialScan();
+ }
+
+ @Override
+ protected void chore() {
+ maintenanceScan();
+ }
+
+ /**
+ * @param region Region to scan
+ * @throws IOException
+ */
+ protected void scanRegion(final MetaRegion region) throws IOException {
+ HRegionInterface regionServer = null;
+ long scannerId = -1L;
+ LOG.info(Thread.currentThread().getName() + " scanning meta region " +
+ region.toString());
+
+ // Array to hold list of split parents found. Scan adds to list. After
+ // scan we go check if parents can be removed.
+ Map<HRegionInfo, RowResult> splitParents =
+ new HashMap<HRegionInfo, RowResult>();
+ List<byte []> emptyRows = new ArrayList<byte []>();
+ int rows = 0;
+ try {
+ regionServer = master.connection.getHRegionConnection(region.getServer());
+ scannerId = regionServer.openScanner(region.getRegionName(),
+ COLUMN_FAMILY_ARRAY, EMPTY_START_ROW, HConstants.LATEST_TIMESTAMP, null);
+ while (true) {
+ RowResult values = regionServer.next(scannerId);
+ if (values == null || values.size() == 0) {
+ break;
+ }
+ HRegionInfo info = master.getHRegionInfo(values.getRow(), values);
+ if (info == null) {
+ emptyRows.add(values.getRow());
+ continue;
+ }
+ String serverName = Writables.cellToString(values.get(COL_SERVER));
+ long startCode = Writables.cellToLong(values.get(COL_STARTCODE));
+
+ // Note Region has been assigned.
+ checkAssigned(info, serverName, startCode);
+ if (isSplitParent(info)) {
+ splitParents.put(info, values);
+ }
+ rows += 1;
+ }
+ if (rootRegion) {
+ this.master.regionManager.setNumMetaRegions(rows);
+ }
+ } catch (IOException e) {
+ if (e instanceof RemoteException) {
+ e = RemoteExceptionHandler.decodeRemoteException((RemoteException) e);
+ if (e instanceof UnknownScannerException) {
+ // Reset scannerId so we do not try closing a scanner the other side
+ // has lost account of: prevents duplicated stack trace out of the
+ // below close in the finally.
+ scannerId = -1L;
+ }
+ }
+ throw e;
+ } finally {
+ try {
+ if (scannerId != -1L && regionServer != null) {
+ regionServer.close(scannerId);
+ }
+ } catch (IOException e) {
+ LOG.error("Closing scanner",
+ RemoteExceptionHandler.checkIOException(e));
+ }
+ }
+
+ // Scan is finished.
+
+ // First clean up any meta region rows which had null HRegionInfos
+ if (emptyRows.size() > 0) {
+ LOG.warn("Found " + emptyRows.size() + " rows with empty HRegionInfo " +
+ "while scanning meta region " + Bytes.toString(region.getRegionName()));
+ this.master.deleteEmptyMetaRows(regionServer, region.getRegionName(),
+ emptyRows);
+ }
+
+ // Take a look at split parents to see if any we can clean up.
+
+ if (splitParents.size() > 0) {
+ for (Map.Entry<HRegionInfo, RowResult> e : splitParents.entrySet()) {
+ HRegionInfo hri = e.getKey();
+ cleanupSplits(region.getRegionName(), regionServer, hri, e.getValue());
+ }
+ }
+ LOG.info(Thread.currentThread().getName() + " scan of " + rows +
+ " row(s) of meta region " + region.toString() + " complete");
+ }
+
+ /*
+ * @param info Region to check.
+ * @return True if this is a split parent.
+ */
+ private boolean isSplitParent(final HRegionInfo info) {
+ if (!info.isSplit()) {
+ return false;
+ }
+ if (!info.isOffline()) {
+ LOG.warn("Region is split but not offline: " +
+ info.getRegionNameAsString());
+ }
+ return true;
+ }
+
+ /*
+ * If daughters no longer hold reference to the parents, delete the parent.
+ * @param metaRegionName Meta region name.
+ * @param server HRegionInterface of meta server to talk to
+ * @param parent HRegionInfo of split parent
+ * @param rowContent Content of <code>parent</code> row in
+ * <code>metaRegionName</code>
+ * @return True if we removed <code>parent</code> from meta table and from
+ * the filesystem.
+ * @throws IOException
+ */
+ private boolean cleanupSplits(final byte [] metaRegionName,
+ final HRegionInterface srvr, final HRegionInfo parent,
+ RowResult rowContent)
+ throws IOException {
+ boolean result = false;
+ boolean hasReferencesA = hasReferences(metaRegionName, srvr,
+ parent.getRegionName(), rowContent, COL_SPLITA);
+ boolean hasReferencesB = hasReferences(metaRegionName, srvr,
+ parent.getRegionName(), rowContent, COL_SPLITB);
+ if (!hasReferencesA && !hasReferencesB) {
+ LOG.info("Deleting region " + parent.getRegionNameAsString() +
+ " (encoded=" + parent.getEncodedName() +
+ ") because daughter splits no longer hold references");
+ HRegion.deleteRegion(this.master.fs, this.master.rootdir, parent);
+ HRegion.removeRegionFromMETA(srvr, metaRegionName,
+ parent.getRegionName());
+ result = true;
+ }
+ return result;
+ }
+
+ /*
+ * Checks if a daughter region -- either splitA or splitB -- still holds
+ * references to parent. If not, removes reference to the split from
+ * the parent meta region row.
+ * @param metaRegionName Name of meta region to look in.
+ * @param srvr Where region resides.
+ * @param parent Parent region name.
+ * @param rowContent Keyed content of the parent row in meta region.
+ * @param splitColumn Column name of daughter split to examine
+ * @return True if still has references to parent.
+ * @throws IOException
+ */
+ private boolean hasReferences(final byte [] metaRegionName,
+ final HRegionInterface srvr, final byte [] parent,
+ RowResult rowContent, final byte [] splitColumn)
+ throws IOException {
+ boolean result = false;
+ HRegionInfo split =
+ Writables.getHRegionInfo(rowContent.get(splitColumn));
+ if (split == null) {
+ return result;
+ }
+ Path tabledir = new Path(this.master.rootdir, split.getTableDesc().getNameAsString());
+ for (HColumnDescriptor family: split.getTableDesc().getFamilies()) {
+ Path p = Store.getStoreHomedir(tabledir, split.getEncodedName(),
+ family.getName());
+ // Look for reference files. Call listStatus with an anonymous
+ // instance of PathFilter.
+ FileStatus [] ps = this.master.fs.listStatus(p,
+ new PathFilter () {
+ public boolean accept(Path path) {
+ return StoreFile.isReference(path);
+ }
+ }
+ );
+
+ if (ps != null && ps.length > 0) {
+ result = true;
+ break;
+ }
+ }
+
+ if (result) {
+ return result;
+ }
+
+ if (LOG.isDebugEnabled()) {
+ LOG.debug(split.getRegionNameAsString() +
+ " no longer has references to " + Bytes.toString(parent));
+ }
+
+ BatchUpdate b = new BatchUpdate(parent);
+ b.delete(splitColumn);
+ srvr.batchUpdate(metaRegionName, b, -1L);
+
+ return result;
+ }
+
+ protected void checkAssigned(final HRegionInfo info,
+ final String serverAddress, final long startCode)
+ throws IOException {
+ String serverName = null;
+ if (serverAddress != null && serverAddress.length() > 0) {
+ serverName = HServerInfo.getServerName(serverAddress, startCode);
+ }
+ HServerInfo storedInfo = null;
+ synchronized (this.master.regionManager) {
+ /*
+ * We don't assign regions that are offline, in transition or were on
+ * a dead server. Regions that were on a dead server will get reassigned
+ * by ProcessServerShutdown
+ */
+ if (info.isOffline() ||
+ this.master.regionManager.
+ regionIsInTransition(info.getRegionNameAsString()) ||
+ (serverName != null && this.master.serverManager.isDead(serverName))) {
+ return;
+ }
+ if (serverName != null) {
+ storedInfo = this.master.serverManager.getServerInfo(serverName);
+ }
+
+ // If we can't find the HServerInfo, then add it to the list of
+ // unassigned regions.
+ if (storedInfo == null) {
+ // The current assignment is invalid
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Current assignment of " + info.getRegionNameAsString() +
+ " is not valid; " + " Server '" + serverAddress + "' startCode: " +
+ startCode + " unknown.");
+ }
+
+ // Recover the region server's log if there is one.
+ // This is only done from here if we are restarting and there is stale
+ // data in the meta region. Once we are on-line, dead server log
+ // recovery is handled by lease expiration and ProcessServerShutdown
+ if (!this.master.regionManager.isInitialMetaScanComplete() &&
+ serverName != null) {
+ Path logDir =
+ new Path(this.master.rootdir, HLog.getHLogDirectoryName(serverName));
+ try {
+ if (master.fs.exists(logDir)) {
+ this.master.regionManager.splitLogLock.lock();
+ try {
+ HLog.splitLog(master.rootdir, logDir, master.fs,
+ master.getConfiguration());
+ } finally {
+ this.master.regionManager.splitLogLock.unlock();
+ }
+ }
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Split " + logDir.toString());
+ }
+ } catch (IOException e) {
+ LOG.warn("unable to split region server log because: ", e);
+ throw e;
+ }
+ }
+ // Now get the region assigned
+ this.master.regionManager.setUnassigned(info, true);
+ }
+ }
+ }
+
+ /**
+ * Notify the thread to die at the end of its next run
+ */
+ public void interruptIfAlive() {
+ synchronized(scannerLock){
+ if (isAlive()) {
+ super.interrupt();
+ }
+ }
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/master/ChangeTableState.java b/src/java/org/apache/hadoop/hbase/master/ChangeTableState.java
new file mode 100644
index 0000000..b7c09f7
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/master/ChangeTableState.java
@@ -0,0 +1,133 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.util.Writables;
+
+/** Instantiated to enable or disable a table */
+class ChangeTableState extends TableOperation {
+ private final Log LOG = LogFactory.getLog(this.getClass());
+ private boolean online;
+
+ protected final Map<String, HashSet<HRegionInfo>> servedRegions =
+ new HashMap<String, HashSet<HRegionInfo>>();
+
+ protected long lockid;
+
+ ChangeTableState(final HMaster master, final byte [] tableName,
+ final boolean onLine)
+ throws IOException {
+ super(master, tableName);
+ this.online = onLine;
+ }
+
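+ /*
+ * Remember, per region server, which of this table's regions are currently
+ * being served; postProcessMeta uses this map to close them when the table
+ * is being taken offline.
+ */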
+ @Override
+ protected void processScanItem(String serverName, HRegionInfo info) {
+
+ if (isBeingServed(serverName)) {
+ HashSet<HRegionInfo> regions = servedRegions.get(serverName);
+ if (regions == null) {
+ regions = new HashSet<HRegionInfo>();
+ }
+ regions.add(info);
+ servedRegions.put(serverName, regions);
+ }
+ }
+
+ @Override
+ protected void postProcessMeta(MetaRegion m, HRegionInterface server)
+ throws IOException {
+ // Process regions not being served
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("processing unserved regions");
+ }
+ for (HRegionInfo i: unservedRegions) {
+ if (i.isOffline() && i.isSplit()) {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Skipping region " + i.toString() +
+ " because it is offline because it has been split");
+ }
+ continue;
+ }
+
+ // Update meta table
+ BatchUpdate b = new BatchUpdate(i.getRegionName());
+ updateRegionInfo(b, i);
+ b.delete(COL_SERVER);
+ b.delete(COL_STARTCODE);
+ server.batchUpdate(m.getRegionName(), b, -1L);
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Updated columns in row: " + i.getRegionNameAsString());
+ }
+
+ synchronized (master.regionManager) {
+ if (online) {
+ // Bring offline regions on-line
+ if (!master.regionManager.regionIsOpening(i.getRegionNameAsString())) {
+ master.regionManager.setUnassigned(i, false);
+ }
+ } else {
+ // Prevent region from getting assigned.
+ master.regionManager.removeRegion(i);
+ }
+ }
+ }
+
+ // Process regions currently being served
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("processing regions currently being served");
+ }
+ synchronized (master.regionManager) {
+ for (Map.Entry<String, HashSet<HRegionInfo>> e: servedRegions.entrySet()) {
+ String serverName = e.getKey();
+ if (online) {
+ LOG.debug("Already online");
+ continue; // Already being served
+ }
+
+ // Cause regions being served to be taken off-line and disabled
+ for (HRegionInfo i: e.getValue()) {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("adding region " + i.getRegionNameAsString() + " to kill list");
+ }
+ // this marks the regions to be closed
+ master.regionManager.setClosing(serverName, i, true);
+ }
+ }
+ }
+ servedRegions.clear();
+ }
+
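+ /*
+ * Flip the offline flag to match the requested table state and stage the
+ * updated HRegionInfo in the passed BatchUpdate (committed by the caller).
+ */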
+ protected void updateRegionInfo(final BatchUpdate b, final HRegionInfo i)
+ throws IOException {
+ i.setOffline(!online);
+ b.put(COL_REGIONINFO, Writables.getBytes(i));
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/master/ColumnOperation.java b/src/java/org/apache/hadoop/hbase/master/ColumnOperation.java
new file mode 100644
index 0000000..bf5718e
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/master/ColumnOperation.java
@@ -0,0 +1,57 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.TableNotDisabledException;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.util.Writables;
+
+abstract class ColumnOperation extends TableOperation {
+ private final Log LOG = LogFactory.getLog(this.getClass());
+
+ protected ColumnOperation(final HMaster master, final byte [] tableName)
+ throws IOException {
+ super(master, tableName);
+ }
+
+ @Override
+ protected void processScanItem(String serverName, final HRegionInfo info)
+ throws IOException {
+ if (isEnabled(info)) {
+ throw new TableNotDisabledException(tableName);
+ }
+ }
+
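+ /*
+ * Write the modified HRegionInfo back into its row in the .META. region
+ * hosted by the passed server.
+ */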
+ protected void updateRegionInfo(HRegionInterface server, byte [] regionName,
+ HRegionInfo i) throws IOException {
+ BatchUpdate b = new BatchUpdate(i.getRegionName());
+ b.put(COL_REGIONINFO, Writables.getBytes(i));
+ server.batchUpdate(regionName, b, -1L);
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("updated columns in row: " + i.getRegionNameAsString());
+ }
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/master/DeleteColumn.java b/src/java/org/apache/hadoop/hbase/master/DeleteColumn.java
new file mode 100644
index 0000000..75b8cad
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/master/DeleteColumn.java
@@ -0,0 +1,53 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.regionserver.Store;
+
+/** Instantiated to remove a column family from a table */
+class DeleteColumn extends ColumnOperation {
+ private final byte [] columnName;
+
+ DeleteColumn(final HMaster master, final byte [] tableName,
+ final byte [] columnName)
+ throws IOException {
+ super(master, tableName);
+ this.columnName = columnName;
+ }
+
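+ /*
+ * For each unserved region of the table, drop the family from the region's
+ * table descriptor in .META. and delete the family's store directory.
+ */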
+ @Override
+ protected void postProcessMeta(MetaRegion m, HRegionInterface server)
+ throws IOException {
+ for (HRegionInfo i: unservedRegions) {
+ i.getTableDesc().removeFamily(columnName);
+ updateRegionInfo(server, m.getRegionName(), i);
+ // Delete the directories used by the column
+ Path tabledir =
+ new Path(this.master.rootdir, i.getTableDesc().getNameAsString());
+ this.master.fs.delete(Store.getStoreHomedir(tabledir, i.getEncodedName(),
+ this.columnName), true);
+ }
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/master/HMaster.java b/src/java/org/apache/hadoop/hbase/master/HMaster.java
new file mode 100644
index 0000000..15069db
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/master/HMaster.java
@@ -0,0 +1,1074 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+import java.lang.management.ManagementFactory;
+import java.lang.management.RuntimeMXBean;
+import java.lang.reflect.Constructor;
+import java.net.InetAddress;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.Random;
+import java.util.Set;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.DelayQueue;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.dfs.DistributedFileSystem;
+import org.apache.hadoop.dfs.FSConstants;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HMsg;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.HServerLoad;
+import org.apache.hadoop.hbase.HStoreKey;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.LocalHBaseCluster;
+import org.apache.hadoop.hbase.MasterNotRunningException;
+import org.apache.hadoop.hbase.RegionHistorian;
+import org.apache.hadoop.hbase.RemoteExceptionHandler;
+import org.apache.hadoop.hbase.TableExistsException;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.ServerConnection;
+import org.apache.hadoop.hbase.client.ServerConnectionManager;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.ipc.HBaseRPC;
+import org.apache.hadoop.hbase.ipc.HBaseRPCProtocolVersion;
+import org.apache.hadoop.hbase.ipc.HBaseServer;
+import org.apache.hadoop.hbase.ipc.HMasterInterface;
+import org.apache.hadoop.hbase.ipc.HMasterRegionInterface;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.master.metrics.MasterMetrics;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.hbase.util.InfoServer;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.hadoop.hbase.util.Sleeper;
+import org.apache.hadoop.hbase.util.Writables;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper;
+import org.apache.hadoop.io.MapWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.ipc.RemoteException;
+import org.apache.hadoop.util.StringUtils;
+
+/**
+ * HMaster is the "master server" for a HBase.
+ * There is only one HMaster for a single HBase deployment.
+ *
+ * NOTE: This class extends Thread rather than Chore because its sleep time
+ * can be interrupted when there is something to do, whereas a Chore's sleep
+ * time is invariant.
+ */
+public class HMaster extends Thread implements HConstants, HMasterInterface,
+ HMasterRegionInterface {
+
+ static final Log LOG = LogFactory.getLog(HMaster.class.getName());
+
+ public long getProtocolVersion(String protocol, long clientVersion) {
+ return HBaseRPCProtocolVersion.versionID;
+ }
+
+ // We start out with the closed flag on. Using AtomicBoolean rather than a
+ // plain boolean because we want to pass a reference to supporting threads
+ // started here in HMaster rather than have them know about the hosting
+ // class.
+ volatile AtomicBoolean closed = new AtomicBoolean(true);
+ volatile AtomicBoolean shutdownRequested = new AtomicBoolean(false);
+ volatile boolean fsOk = true;
+ final Path rootdir;
+ private final HBaseConfiguration conf;
+ final FileSystem fs;
+ final Random rand;
+ final int threadWakeFrequency;
+ final int numRetries;
+ final long maxRegionOpenTime;
+ final int leaseTimeout;
+ private final ZooKeeperWrapper zooKeeperWrapper;
+ private final ZKMasterAddressWatcher zkMasterAddressWatcher;
+
+ volatile DelayQueue<RegionServerOperation> delayedToDoQueue =
+ new DelayQueue<RegionServerOperation>();
+ volatile BlockingQueue<RegionServerOperation> toDoQueue =
+ new LinkedBlockingQueue<RegionServerOperation>();
+
+ private final HBaseServer server;
+ private final HServerAddress address;
+
+ final ServerConnection connection;
+
+ final int metaRescanInterval;
+
+ // A Sleeper that sleeps for threadWakeFrequency
+ private final Sleeper sleeper;
+
+ // Default access so accessible from unit tests. MASTER is the name of the
+ // webapp and the attribute name used to stuff this instance into web context.
+ InfoServer infoServer;
+
+ /** Name of master server */
+ public static final String MASTER = "master";
+
+ /** @return InfoServer object */
+ public InfoServer getInfoServer() {
+ return infoServer;
+ }
+
+ ServerManager serverManager;
+ RegionManager regionManager;
+
+ private MasterMetrics metrics;
+
+ /**
+ * Build the HMaster out of a raw configuration item.
+ * @param conf configuration
+ *
+ * @throws IOException
+ */
+ public HMaster(HBaseConfiguration conf) throws IOException {
+ // find out our address. If it's set in config, use that, otherwise look it
+ // up in DNS.
+ String addressStr = conf.get(MASTER_ADDRESS);
+ if (addressStr == null) {
+ addressStr = conf.get(MASTER_HOST_NAME);
+ if (addressStr == null) {
+ addressStr = InetAddress.getLocalHost().getCanonicalHostName();
+ }
+ addressStr += ":";
+ addressStr += conf.get("hbase.master.port", Integer.toString(DEFAULT_MASTER_PORT));
+ }
+ HServerAddress address = new HServerAddress(addressStr);
+ LOG.info("My address is " + address);
+
+ this.conf = conf;
+ this.rootdir = new Path(conf.get(HBASE_DIR));
+ try {
+ FSUtils.validateRootPath(this.rootdir);
+ } catch (IOException e) {
+ LOG.fatal("Not starting HMaster because the root directory path '" +
+ this.rootdir + "' is not valid. Check the setting of the" +
+ " configuration parameter '" + HBASE_DIR + "'", e);
+ throw e;
+ }
+ this.threadWakeFrequency = conf.getInt(THREAD_WAKE_FREQUENCY, 10 * 1000);
+ // The filesystem hbase wants to use is probably not what is set into
+ // fs.default.name; its value is probably the default.
+ this.conf.set("fs.default.name", this.rootdir.toString());
+ this.fs = FileSystem.get(conf);
+ if (this.fs instanceof DistributedFileSystem) {
+ // Make sure dfs is not in safe mode
+ String message = "Waiting for dfs to exit safe mode...";
+ while (((DistributedFileSystem) fs).setSafeMode(
+ FSConstants.SafeModeAction.SAFEMODE_GET)) {
+ LOG.info(message);
+ try {
+ Thread.sleep(this.threadWakeFrequency);
+ } catch (InterruptedException e) {
+ //continue
+ }
+ }
+ }
+ this.conf.set(HConstants.HBASE_DIR, this.rootdir.toString());
+ this.rand = new Random();
+
+ try {
+ // Make sure the hbase root directory exists!
+ if (!fs.exists(rootdir)) {
+ fs.mkdirs(rootdir);
+ FSUtils.setVersion(fs, rootdir);
+ } else {
+ FSUtils.checkVersion(fs, rootdir, true);
+ }
+
+ // Make sure the root region directory exists!
+ if (!FSUtils.rootRegionExists(fs, rootdir)) {
+ bootstrap();
+ }
+ } catch (IOException e) {
+ LOG.fatal("Not starting HMaster because:", e);
+ throw e;
+ }
+
+ this.numRetries = conf.getInt("hbase.client.retries.number", 2);
+ this.maxRegionOpenTime =
+ conf.getLong("hbase.hbasemaster.maxregionopen", 120 * 1000);
+ this.leaseTimeout = conf.getInt("hbase.master.lease.period", 120 * 1000);
+
+ this.server = HBaseRPC.getServer(this, address.getBindAddress(),
+ address.getPort(), conf.getInt("hbase.regionserver.handler.count", 10),
+ false, conf);
+
+ // The rpc-server port can be ephemeral... ensure we have the correct info
+ this.address = new HServerAddress(server.getListenerAddress());
+
+ this.connection = ServerConnectionManager.getConnection(conf);
+
+ this.metaRescanInterval =
+ conf.getInt("hbase.master.meta.thread.rescanfrequency", 60 * 1000);
+
+ this.sleeper = new Sleeper(this.threadWakeFrequency, this.closed);
+
+ zooKeeperWrapper = new ZooKeeperWrapper(conf);
+ zkMasterAddressWatcher = new ZKMasterAddressWatcher(zooKeeperWrapper);
+ serverManager = new ServerManager(this);
+ regionManager = new RegionManager(this);
+
+ writeAddressToZooKeeper();
+
+ // We're almost open for business
+ this.closed.set(false);
+ LOG.info("HMaster initialized on " + this.address.toString());
+ }
+
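+ /*
+ * Loop until we successfully write our address into the master znode:
+ * wait for any existing master address to go away, then try to write ours.
+ */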
+ private void writeAddressToZooKeeper() {
+ while (true) {
+ zkMasterAddressWatcher.waitForMasterAddressAvailability();
+ if (zooKeeperWrapper.writeMasterAddress(address)) {
+ return;
+ }
+ }
+ }
+
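+ /*
+ * Create the initial -ROOT- and first .META. regions on the filesystem and
+ * add the .META. region to -ROOT-. Called only when no root region exists.
+ */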
+ private void bootstrap() throws IOException {
+ LOG.info("BOOTSTRAP: creating ROOT and first META regions");
+ try {
+ HRegion root = HRegion.createHRegion(HRegionInfo.ROOT_REGIONINFO,
+ this.rootdir, this.conf);
+ HRegion meta = HRegion.createHRegion(HRegionInfo.FIRST_META_REGIONINFO,
+ this.rootdir, this.conf);
+
+ // Add first region from the META table to the ROOT region.
+ HRegion.addRegionToMETA(root, meta);
+ root.close();
+ root.getLog().closeAndDelete();
+ meta.close();
+ meta.getLog().closeAndDelete();
+ } catch (IOException e) {
+ e = RemoteExceptionHandler.checkIOException(e);
+ LOG.error("bootstrap", e);
+ throw e;
+ }
+ }
+
+ /**
+ * Checks to see if the file system is still accessible.
+ * If not, sets closed
+ * @return false if file system is not available
+ */
+ protected boolean checkFileSystem() {
+ if (fsOk) {
+ try {
+ FSUtils.checkFileSystemAvailable(fs);
+ } catch (IOException e) {
+ LOG.fatal("Shutting down HBase cluster: file system not available", e);
+ closed.set(true);
+ fsOk = false;
+ }
+ }
+ return fsOk;
+ }
+
+ /** @return HServerAddress of the master server */
+ public HServerAddress getMasterAddress() {
+ return address;
+ }
+
+ /**
+ * @return HBase root directory.
+ */
+ public Path getRootDir() {
+ return this.rootdir;
+ }
+
+ /**
+ * @return Read-only map of servers to serverinfo.
+ */
+ public Map<String, HServerInfo> getServersToServerInfo() {
+ return serverManager.getServersToServerInfo();
+ }
+
+ public Map<HServerAddress, HServerInfo> getServerAddressToServerInfo() {
+ return serverManager.getServerAddressToServerInfo();
+ }
+
+ /**
+ * @return Read-only map of servers to load.
+ */
+ public Map<String, HServerLoad> getServersToLoad() {
+ return serverManager.getServersToLoad();
+ }
+
+ /** @return The average load */
+ public double getAverageLoad() {
+ return serverManager.getAverageLoad();
+ }
+
+ /** @return the number of regions on filesystem */
+ public int countRegionsOnFS() {
+ try {
+ return regionManager.countRegionsOnFS();
+ } catch (IOException e) {
+ LOG.warn("Get count of Regions on FileSystem error : " +
+ StringUtils.stringifyException(e));
+ }
+ return -1;
+ }
+
+ /**
+ * @return Location of the <code>-ROOT-</code> region.
+ */
+ public HServerAddress getRootRegionLocation() {
+ HServerAddress rootServer = null;
+ if (!shutdownRequested.get() && !closed.get()) {
+ rootServer = regionManager.getRootRegionLocation();
+ }
+ return rootServer;
+ }
+
+ /**
+ * Wait until root region is available
+ */
+ public void waitForRootRegionLocation() {
+ regionManager.waitForRootRegionLocation();
+ }
+
+ /**
+ * @return Read-only map of online regions.
+ */
+ public Map<byte [], MetaRegion> getOnlineMetaRegions() {
+ return regionManager.getOnlineMetaRegions();
+ }
+
+ /** Main processing loop */
+ @Override
+ public void run() {
+ final String threadName = "HMaster";
+ Thread.currentThread().setName(threadName);
+ startServiceThreads();
+ /* Main processing loop */
+ try {
+ while (!closed.get()) {
+ // check if we should be shutting down
+ if (shutdownRequested.get()) {
+ // The region servers won't all exit until we stop scanning the
+ // meta regions
+ regionManager.stopScanners();
+ if (serverManager.numServers() == 0) {
+ startShutdown();
+ break;
+ }
+ }
+ // work on the TodoQueue. If that fails, we should shut down.
+ if (!processToDoQueue()) {
+ break;
+ }
+ }
+ } catch (Throwable t) {
+ LOG.fatal("Unhandled exception. Starting shutdown.", t);
+ closed.set(true);
+ }
+
+ // Wait for all the remaining region servers to report in.
+ serverManager.letRegionServersShutdown();
+
+ /*
+ * Clean up and close up shop
+ */
+ RegionHistorian.getInstance().offline();
+ if (this.infoServer != null) {
+ LOG.info("Stopping infoServer");
+ try {
+ this.infoServer.stop();
+ } catch (Exception ex) {
+ ex.printStackTrace();
+ }
+ }
+ server.stop(); // Stop server
+ regionManager.stop();
+
+ // Join up with all threads
+ LOG.info("HMaster main thread exiting");
+ }
+
+ /**
+ * Try to get an operation off of the todo queue and perform it.
+ * @return False if the filesystem has become unavailable and the master
+ * should shut down; true otherwise.
+ */
+ private boolean processToDoQueue() {
+ RegionServerOperation op = null;
+
+ // Only look at the delayed queue once the root region is online.
+ if (regionManager.getRootRegionLocation() != null) {
+ // We can't process server shutdowns unless the root region is online
+ op = delayedToDoQueue.poll();
+ }
+
+ // if there aren't any todo items in the queue, sleep for a bit.
+ if (op == null ) {
+ try {
+ op = toDoQueue.poll(threadWakeFrequency, TimeUnit.MILLISECONDS);
+ } catch (InterruptedException e) {
+ // continue
+ }
+ }
+
+ // at this point, if there's still no todo operation, or we're supposed to
+ // be closed, return.
+ if (op == null || closed.get()) {
+ return true;
+ }
+
+ try {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Processing todo: " + op.toString());
+ }
+
+ // perform the operation.
+ if (!op.process()) {
+ // Operation would have blocked because not all meta regions are
+ // online. This could cause a deadlock, because this thread is waiting
+ // for the missing meta region(s) to come back online, but since it
+ // is waiting, it cannot process the meta region online operation it
+ // is waiting for. So put this operation back on the queue for now.
+ if (toDoQueue.size() == 0) {
+ // The queue is currently empty so wait for a while to see if what
+ // we need comes in first
+ sleeper.sleep();
+ }
+ try {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Put " + op.toString() + " back on queue");
+ }
+ toDoQueue.put(op);
+ } catch (InterruptedException e) {
+ throw new RuntimeException(
+ "Putting into toDoQueue was interrupted.", e);
+ }
+ }
+ } catch (Exception ex) {
+ // There was an exception performing the operation.
+ if (ex instanceof RemoteException) {
+ try {
+ ex = RemoteExceptionHandler.decodeRemoteException(
+ (RemoteException)ex);
+ } catch (IOException e) {
+ ex = e;
+ LOG.warn("main processing loop: " + op.toString(), e);
+ }
+ }
+ // make sure the filesystem is still ok. otherwise, we're toast.
+ if (!checkFileSystem()) {
+ return false;
+ }
+ LOG.warn("Processing pending operations: " + op.toString(), ex);
+ try {
+ // put the operation back on the queue... maybe it'll work next time.
+ toDoQueue.put(op);
+ } catch (InterruptedException e) {
+ throw new RuntimeException(
+ "Putting into toDoQueue was interrupted.", e);
+ } catch (Exception e) {
+ LOG.error("main processing loop: " + op.toString(), e);
+ }
+ }
+ return true;
+ }
+
+ /*
+ * Start up all services. If any of these threads gets an unhandled exception
+ * then they just die with a logged message. This should be fine because
+ * in general, we do not expect the master to get such unhandled exceptions
+ * as OOMEs; it should be lightly loaded. See what HRegionServer does if we
+ * need to install an unexpected-exception handler.
+ */
+ private void startServiceThreads() {
+ // Do after main thread name has been set
+ this.metrics = new MasterMetrics();
+ try {
+ regionManager.start();
+ // Put up info server.
+ int port = this.conf.getInt("hbase.master.info.port", 60010);
+ if (port >= 0) {
+ String a = this.conf.get("hbase.master.info.bindAddress", "0.0.0.0");
+ this.infoServer = new InfoServer(MASTER, a, port, false);
+ this.infoServer.setAttribute(MASTER, this);
+ this.infoServer.start();
+ }
+ // Start the server so everything else is running before we start
+ // receiving requests.
+ this.server.start();
+ } catch (IOException e) {
+ if (e instanceof RemoteException) {
+ try {
+ e = RemoteExceptionHandler.decodeRemoteException((RemoteException) e);
+ } catch (IOException ex) {
+ LOG.warn("thread start", ex);
+ }
+ }
+ // Something happened during startup. Shut things down.
+ this.closed.set(true);
+ LOG.error("Failed startup", e);
+ }
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Started service threads");
+ }
+ }
+
+ /*
+ * Start shutting down the master
+ */
+ void startShutdown() {
+ closed.set(true);
+ regionManager.stopScanners();
+ synchronized(toDoQueue) {
+ toDoQueue.clear(); // Empty the queue
+ delayedToDoQueue.clear(); // Empty shut down queue
+ toDoQueue.notifyAll(); // Wake main thread
+ }
+ serverManager.notifyServers();
+ }
+
+ /*
+ * HMasterRegionInterface
+ */
+ public MapWritable regionServerStartup(final HServerInfo serverInfo)
+ throws IOException {
+ // Set the address for now even though it will not be persisted on the HRS
+ // side. If the region server reported the default bind address, substitute
+ // the remote address we actually see on this connection.
+ if (serverInfo.getServerAddress().getBindAddress().equals(
+ DEFAULT_HOST)) {
+ String rsAddress = HBaseServer.getRemoteAddress();
+ serverInfo.setServerAddress(new HServerAddress(rsAddress,
+ serverInfo.getServerAddress().getPort()));
+ }
+ // Register with server manager
+ this.serverManager.regionServerStartup(serverInfo);
+ // Send back some config info
+ return createConfigurationSubset();
+ }
+
+ /**
+ * @return Subset of configuration to pass to initializing regionservers:
+ * e.g. the filesystem and root directory to use.
+ */
+ protected MapWritable createConfigurationSubset() {
+ MapWritable mw = addConfig(new MapWritable(), HConstants.HBASE_DIR);
+ // Get the real address of the HRS.
+ String rsAddress = HBaseServer.getRemoteAddress();
+ if (rsAddress != null) {
+ mw.put(new Text("hbase.regionserver.address"), new Text(rsAddress));
+ }
+
+ return addConfig(mw, "fs.default.name");
+ }
+
+ private MapWritable addConfig(final MapWritable mw, final String key) {
+ mw.put(new Text(key), new Text(this.conf.get(key)));
+ return mw;
+ }
+
+ public HMsg[] regionServerReport(HServerInfo serverInfo, HMsg msgs[],
+ HRegionInfo[] mostLoadedRegions)
+ throws IOException {
+ return serverManager.regionServerReport(serverInfo, msgs,
+ mostLoadedRegions);
+ }
+
+ /*
+ * HMasterInterface
+ */
+
+ public boolean isMasterRunning() {
+ return !closed.get();
+ }
+
+ public void shutdown() {
+ LOG.info("Cluster shutdown requested. Starting to quiesce servers");
+ this.shutdownRequested.set(true);
+ }
+
+ public void createTable(HTableDescriptor desc)
+ throws IOException {
+ if (!isMasterRunning()) {
+ throw new MasterNotRunningException();
+ }
+ HRegionInfo newRegion = new HRegionInfo(desc, null, null);
+
+ for (int tries = 0; tries < numRetries; tries++) {
+ try {
+ // We can not create a table unless meta regions have already been
+ // assigned and scanned.
+ if (!regionManager.areAllMetaRegionsOnline()) {
+ throw new NotAllMetaRegionsOnlineException();
+ }
+ createTable(newRegion);
+ LOG.info("created table " + desc.getNameAsString());
+ break;
+ } catch (TableExistsException e) {
+ throw e;
+ } catch (IOException e) {
+ if (tries == numRetries - 1) {
+ throw RemoteExceptionHandler.checkIOException(e);
+ }
+ sleeper.sleep();
+ }
+ }
+ }
+
+ private synchronized void createTable(final HRegionInfo newRegion)
+ throws IOException {
+ String tableName = newRegion.getTableDesc().getNameAsString();
+ // 1. Check to see if table already exists. Get meta region where
+ // table would sit should it exist. Open scanner on it. If a region
+ // for the table we want to create already exists, then table already
+ // created. Throw already-exists exception.
+ MetaRegion m = regionManager.getFirstMetaRegionForRegion(newRegion);
+
+ byte [] metaRegionName = m.getRegionName();
+ HRegionInterface srvr = connection.getHRegionConnection(m.getServer());
+ byte[] firstRowInTable = Bytes.toBytes(tableName + ",,");
+ long scannerid = srvr.openScanner(metaRegionName, COL_REGIONINFO_ARRAY,
+ firstRowInTable, LATEST_TIMESTAMP, null);
+ try {
+ RowResult data = srvr.next(scannerid);
+ if (data != null && data.size() > 0) {
+ HRegionInfo info = Writables.getHRegionInfo(data.get(COL_REGIONINFO));
+ if (info.getTableDesc().getNameAsString().equals(tableName)) {
+ // A region for this table already exists. Ergo table exists.
+ throw new TableExistsException(tableName);
+ }
+ }
+ } finally {
+ srvr.close(scannerid);
+ }
+ regionManager.createRegion(newRegion, srvr, metaRegionName);
+ }
+
+ public void deleteTable(final byte [] tableName) throws IOException {
+ if (Bytes.equals(tableName, ROOT_TABLE_NAME)) {
+ throw new IOException("Can't delete root table");
+ }
+ new TableDelete(this, tableName).process();
+ LOG.info("deleted table: " + Bytes.toString(tableName));
+ }
+
+ public void addColumn(byte [] tableName, HColumnDescriptor column)
+ throws IOException {
+ new AddColumn(this, tableName, column).process();
+ }
+
+ public void modifyColumn(byte [] tableName, byte [] columnName,
+ HColumnDescriptor descriptor)
+ throws IOException {
+ new ModifyColumn(this, tableName, columnName, descriptor).process();
+ }
+
+ public void deleteColumn(final byte [] tableName, final byte [] c)
+ throws IOException {
+ new DeleteColumn(this, tableName, HStoreKey.getFamily(c)).process();
+ }
+
+ public void enableTable(final byte [] tableName) throws IOException {
+ if (Bytes.equals(tableName, ROOT_TABLE_NAME)) {
+ throw new IOException("Can't enable root table");
+ }
+ new ChangeTableState(this, tableName, true).process();
+ }
+
+ public void disableTable(final byte [] tableName) throws IOException {
+ if (Bytes.equals(tableName, ROOT_TABLE_NAME)) {
+ throw new IOException("Can't disable root table");
+ }
+ new ChangeTableState(this, tableName, false).process();
+ }
+
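+ /*
+ * Scan the meta regions for the passed table and return the HRegionInfo and
+ * server address of every region currently assigned to a server.
+ */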
+ private List<Pair<HRegionInfo,HServerAddress>>
+ getTableRegions(final byte [] tableName) throws IOException {
+ List<Pair<HRegionInfo,HServerAddress>> result =
+ new ArrayList<Pair<HRegionInfo,HServerAddress>>();
+ Set<MetaRegion> regions = regionManager.getMetaRegionsForTable(tableName);
+ byte [] firstRowInTable = Bytes.toBytes(Bytes.toString(tableName) + ",,");
+ for (MetaRegion m: regions) {
+ byte [] metaRegionName = m.getRegionName();
+ HRegionInterface srvr = connection.getHRegionConnection(m.getServer());
+ long scannerid =
+ srvr.openScanner(metaRegionName,
+ new byte[][] {COL_REGIONINFO, COL_SERVER},
+ firstRowInTable,
+ LATEST_TIMESTAMP,
+ null);
+ try {
+ while (true) {
+ RowResult data = srvr.next(scannerid);
+ if (data == null || data.size() <= 0)
+ break;
+ HRegionInfo info = Writables.getHRegionInfo(data.get(COL_REGIONINFO));
+ if (Bytes.compareTo(info.getTableDesc().getName(), tableName) == 0) {
+ Cell cell = data.get(COL_SERVER);
+ if (cell != null) {
+ HServerAddress server =
+ new HServerAddress(Bytes.toString(cell.getValue()));
+ result.add(new Pair<HRegionInfo,HServerAddress>(info, server));
+ }
+ } else {
+ break;
+ }
+ }
+ } finally {
+ srvr.close(scannerid);
+ }
+ }
+ return result;
+ }
+
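+ /*
+ * Scan the meta regions for the passed table looking for the region closest
+ * to the passed row key; returns that region and its server, else null.
+ */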
+ private Pair<HRegionInfo,HServerAddress>
+ getTableRegionClosest(final byte [] tableName, final byte [] rowKey)
+ throws IOException {
+ Set<MetaRegion> regions = regionManager.getMetaRegionsForTable(tableName);
+ for (MetaRegion m: regions) {
+ byte [] firstRowInTable = Bytes.toBytes(Bytes.toString(tableName) + ",,");
+ byte [] metaRegionName = m.getRegionName();
+ HRegionInterface srvr = connection.getHRegionConnection(m.getServer());
+ long scannerid =
+ srvr.openScanner(metaRegionName,
+ new byte[][] {COL_REGIONINFO, COL_SERVER},
+ firstRowInTable,
+ LATEST_TIMESTAMP,
+ null);
+ try {
+ while (true) {
+ RowResult data = srvr.next(scannerid);
+ if (data == null || data.size() <= 0)
+ break;
+ HRegionInfo info = Writables.getHRegionInfo(data.get(COL_REGIONINFO));
+ if (Bytes.compareTo(info.getTableDesc().getName(), tableName) == 0) {
+ if (Bytes.compareTo(info.getStartKey(), rowKey) <= 0 &&
+ (info.getEndKey().length == 0 ||
+ Bytes.compareTo(info.getEndKey(), rowKey) > 0)) {
+ Cell cell = data.get(COL_SERVER);
+ if (cell != null) {
+ HServerAddress server =
+ new HServerAddress(Bytes.toString(cell.getValue()));
+ return new Pair<HRegionInfo,HServerAddress>(info, server);
+ }
+ }
+ } else {
+ break;
+ }
+ }
+ } finally {
+ srvr.close(scannerid);
+ }
+ }
+ return null;
+ }
+
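+ /*
+ * Look up a single region by its full region name: parse out the table
+ * name, then fetch the region's HRegionInfo and server from its meta region.
+ */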
+ private Pair<HRegionInfo,HServerAddress>
+ getTableRegionFromName(final byte [] regionName)
+ throws IOException {
+ byte [] tableName = HRegionInfo.parseRegionName(regionName)[0];
+ Set<MetaRegion> regions = regionManager.getMetaRegionsForTable(tableName);
+ for (MetaRegion m: regions) {
+ byte [] metaRegionName = m.getRegionName();
+ HRegionInterface srvr = connection.getHRegionConnection(m.getServer());
+ RowResult data = srvr.getRow(metaRegionName, regionName,
+ new byte[][] {COL_REGIONINFO, COL_SERVER},
+ HConstants.LATEST_TIMESTAMP, 1, -1L);
+ if(data == null || data.size() <= 0) continue;
+ HRegionInfo info = Writables.getHRegionInfo(data.get(COL_REGIONINFO));
+ Cell cell = data.get(COL_SERVER);
+ if(cell != null) {
+ HServerAddress server =
+ new HServerAddress(Bytes.toString(cell.getValue()));
+ return new Pair<HRegionInfo,HServerAddress>(info, server);
+ }
+ }
+ return null;
+ }
+
+ /**
+ * Get row from meta table.
+ * @param row
+ * @param columns
+ * @return RowResult
+ * @throws IOException
+ */
+ protected RowResult getFromMETA(final byte [] row, final byte [][] columns)
+ throws IOException {
+ MetaRegion meta = this.regionManager.getMetaRegionForRow(row);
+ HRegionInterface srvr = getMETAServer(meta);
+ return srvr.getRow(meta.getRegionName(), row, columns,
+ HConstants.LATEST_TIMESTAMP, 1, -1);
+ }
+
+ /*
+ * @param meta
+ * @return Server connection to the .META. region described by <code>meta</code>.
+ * @throws IOException
+ */
+ private HRegionInterface getMETAServer(final MetaRegion meta)
+ throws IOException {
+ return this.connection.getHRegionConnection(meta.getServer());
+ }
+
+ public void modifyTable(final byte[] tableName, int op, Writable[] args)
+ throws IOException {
+ switch (op) {
+ case MODIFY_TABLE_SET_HTD:
+ if (args == null || args.length < 1 ||
+ !(args[0] instanceof HTableDescriptor))
+ throw new IOException("SET_HTD request requires an HTableDescriptor");
+ HTableDescriptor htd = (HTableDescriptor) args[0];
+ LOG.info("modifyTable(SET_HTD): " + htd);
+ new ModifyTableMeta(this, tableName, htd).process();
+ break;
+
+ case MODIFY_TABLE_SPLIT:
+ case MODIFY_TABLE_COMPACT:
+ case MODIFY_TABLE_MAJOR_COMPACT:
+ case MODIFY_TABLE_FLUSH:
+ if (args != null && args.length > 0) {
+ if (!(args[0] instanceof ImmutableBytesWritable))
+ throw new IOException(
+ "request argument must be ImmutableBytesWritable");
+ Pair<HRegionInfo,HServerAddress> pair = null;
+ if(tableName == null) {
+ byte [] regionName = ((ImmutableBytesWritable)args[0]).get();
+ pair = getTableRegionFromName(regionName);
+ } else {
+ byte [] rowKey = ((ImmutableBytesWritable)args[0]).get();
+ pair = getTableRegionClosest(tableName, rowKey);
+ }
+ if (pair != null) {
+ this.regionManager.startAction(pair.getFirst().getRegionName(),
+ pair.getFirst(), pair.getSecond(), op);
+ }
+ } else {
+ for (Pair<HRegionInfo,HServerAddress> pair: getTableRegions(tableName))
+ this.regionManager.startAction(pair.getFirst().getRegionName(),
+ pair.getFirst(), pair.getSecond(), op);
+ }
+ break;
+
+ case MODIFY_CLOSE_REGION:
+ if (args == null || args.length < 1 || args.length > 2) {
+ throw new IOException("Requires at least a region name; " +
+ "or cannot have more than region name and servername");
+ }
+ // Arguments are regionname and an optional server name.
+ byte [] regionname = ((ImmutableBytesWritable)args[0]).get();
+ String servername = null;
+ if (args.length == 2) {
+ servername = Bytes.toString(((ImmutableBytesWritable)args[1]).get());
+ }
+ // Need hri
+ RowResult rr = getFromMETA(regionname, HConstants.COLUMN_FAMILY_ARRAY);
+ HRegionInfo hri = getHRegionInfo(rr.getRow(), rr);
+ if (servername == null) {
+ // Get server from the .META. if it wasn't passed as argument
+ servername = Writables.cellToString(rr.get(COL_SERVER));
+ }
+ LOG.info("Marking " + hri.getRegionNameAsString() +
+ " as closed on " + servername + "; cleaning SERVER + STARTCODE; " +
+ "master will tell regionserver to close region on next heartbeat");
+ this.regionManager.setClosing(servername, hri, hri.isOffline());
+ MetaRegion meta = this.regionManager.getMetaRegionForRow(regionname);
+ HRegionInterface srvr = getMETAServer(meta);
+ HRegion.cleanRegionInMETA(srvr, meta.getRegionName(), hri);
+ break;
+
+ default:
+ throw new IOException("unsupported modifyTable op " + op);
+ }
+ }
+
+ /**
+ * @return Server metrics
+ */
+ public MasterMetrics getMetrics() {
+ return this.metrics;
+ }
+
+ /*
+ * Managing leases
+ */
+
+ /**
+ * @return Return configuration being used by this server.
+ */
+ public HBaseConfiguration getConfiguration() {
+ return this.conf;
+ }
+
+ /*
+ * Get HRegionInfo from passed META map of row values.
+ * Returns null if none found (and logs the fact that the expected COL_REGIONINFO
+ * was missing). Utility method used by scanners of META tables.
+ * @param row name of the row
+ * @param map Map to do lookup in.
+ * @return Null or found HRegionInfo.
+ * @throws IOException
+ */
+ HRegionInfo getHRegionInfo(final byte [] row, final Map<byte [], Cell> map)
+ throws IOException {
+ Cell regioninfo = map.get(COL_REGIONINFO);
+ if (regioninfo == null) {
+ StringBuilder sb = new StringBuilder();
+ for (byte [] e: map.keySet()) {
+ if (sb.length() > 0) {
+ sb.append(", ");
+ }
+ sb.append(Bytes.toString(e));
+ }
+ LOG.warn(Bytes.toString(COL_REGIONINFO) + " is empty for row: " +
+ Bytes.toString(row) + "; has keys: " + sb.toString());
+ return null;
+ }
+ return Writables.getHRegionInfo(regioninfo.getValue());
+ }
+
+ /*
+ * When we find rows in a meta region that have an empty HRegionInfo, we
+ * clean them up here.
+ *
+ * @param s connection to server serving meta region
+ * @param metaRegionName name of the meta region we scanned
+ * @param emptyRows the row keys that had empty HRegionInfos
+ */
+ protected void deleteEmptyMetaRows(HRegionInterface s,
+ byte [] metaRegionName,
+ List<byte []> emptyRows) {
+ for (byte [] regionName: emptyRows) {
+ try {
+ HRegion.removeRegionFromMETA(s, metaRegionName, regionName);
+ LOG.warn("Removed region: " + Bytes.toString(regionName) +
+ " from meta region: " +
+ Bytes.toString(metaRegionName) + " because HRegionInfo was empty");
+ } catch (IOException e) {
+ LOG.error("deleting region: " + Bytes.toString(regionName) +
+ " from meta region: " + Bytes.toString(metaRegionName), e);
+ }
+ }
+ }
+
+ /**
+ * Get the ZK wrapper object
+ * @return the zookeeper wrapper
+ */
+ public ZooKeeperWrapper getZooKeeperWrapper() {
+ return zooKeeperWrapper;
+ }
+
+ /*
+ * Main program
+ */
+
+ private static void printUsageAndExit() {
+ System.err.println("Usage: java org.apache.hbase.HMaster " +
+ "[--bind=hostname:port] start|stop");
+ System.exit(0);
+ }
+
+ protected static void doMain(String [] args,
+ Class<? extends HMaster> masterClass) {
+
+ if (args.length < 1) {
+ printUsageAndExit();
+ }
+
+ HBaseConfiguration conf = new HBaseConfiguration();
+
+ // Process command-line args. TODO: Better cmd-line processing
+ // (but hopefully something not as painful as cli options).
+
+ final String addressArgKey = "--bind=";
+ for (String cmd: args) {
+ if (cmd.startsWith(addressArgKey)) {
+ conf.set(MASTER_ADDRESS, cmd.substring(addressArgKey.length()));
+ continue;
+ }
+
+ if (cmd.equals("start")) {
+ try {
+ RuntimeMXBean runtime = ManagementFactory.getRuntimeMXBean();
+ if (runtime != null) {
+ LOG.info("vmName=" + runtime.getVmName() + ", vmVendor=" +
+ runtime.getVmVendor() + ", vmVersion=" + runtime.getVmVersion());
+ LOG.info("vmInputArguments=" + runtime.getInputArguments());
+ }
+ // If 'local', defer to LocalHBaseCluster instance.
+ if (LocalHBaseCluster.isLocal(conf)) {
+ (new LocalHBaseCluster(conf)).startup();
+ } else {
+ Constructor<? extends HMaster> c =
+ masterClass.getConstructor(HBaseConfiguration.class);
+ HMaster master = c.newInstance(conf);
+ master.start();
+ }
+ } catch (Throwable t) {
+ LOG.error( "Can not start master", t);
+ System.exit(-1);
+ }
+ break;
+ }
+
+ if (cmd.equals("stop")) {
+ HBaseAdmin adm = null;
+ try {
+ adm = new HBaseAdmin(conf);
+ } catch (MasterNotRunningException e) {
+ LOG.error("master is not running");
+ System.exit(0);
+ }
+ try {
+ adm.shutdown();
+ } catch (Throwable t) {
+ LOG.error( "Can not stop master", t);
+ System.exit(-1);
+ }
+ break;
+ }
+
+ // Print out usage if we get to here.
+ printUsageAndExit();
+ }
+ }
+
+ /**
+ * Main program
+ * @param args
+ */
+ public static void main(String [] args) {
+ doMain(args, HMaster.class);
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/master/InvalidColumnNameException.java b/src/java/org/apache/hadoop/hbase/master/InvalidColumnNameException.java
new file mode 100644
index 0000000..7794205
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/master/InvalidColumnNameException.java
@@ -0,0 +1,42 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import org.apache.hadoop.hbase.DoNotRetryIOException;
+
+
+/**
+ * Thrown when an invalid column name is encountered
+ */
+public class InvalidColumnNameException extends DoNotRetryIOException {
+ private static final long serialVersionUID = 1L << 29 - 1L;
+ /** default constructor */
+ public InvalidColumnNameException() {
+ super();
+ }
+
+ /**
+ * Constructor
+ * @param s message
+ */
+ public InvalidColumnNameException(String s) {
+ super(s);
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/master/MetaRegion.java b/src/java/org/apache/hadoop/hbase/master/MetaRegion.java
new file mode 100644
index 0000000..24f57b0
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/master/MetaRegion.java
@@ -0,0 +1,99 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.Arrays;
+
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.util.Bytes;
+
+
+/** Describes a meta region and its server */
+public class MetaRegion implements Comparable<MetaRegion> {
+ private final HServerAddress server;
+ private final byte [] regionName;
+ private final byte [] startKey;
+
+ MetaRegion(final HServerAddress server, final byte [] regionName) {
+ this (server, regionName, HConstants.EMPTY_START_ROW);
+ }
+
+ MetaRegion(final HServerAddress server, final byte [] regionName,
+ final byte [] startKey) {
+ if (server == null) {
+ throw new IllegalArgumentException("server cannot be null");
+ }
+ this.server = server;
+ if (regionName == null) {
+ throw new IllegalArgumentException("regionName cannot be null");
+ }
+ this.regionName = regionName;
+ this.startKey = startKey;
+ }
+
+ @Override
+ public String toString() {
+ return "{regionname: " + Bytes.toString(this.regionName) +
+ ", startKey: <" + Bytes.toString(this.startKey) +
+ ">, server: " + this.server.toString() + "}";
+ }
+
+ /** @return the regionName */
+ public byte [] getRegionName() {
+ return regionName;
+ }
+
+ /** @return the server */
+ public HServerAddress getServer() {
+ return server;
+ }
+
+ /** @return the startKey */
+ public byte [] getStartKey() {
+ return startKey;
+ }
+
+ @Override
+ public boolean equals(Object o) {
+ return o instanceof MetaRegion && this.compareTo((MetaRegion)o) == 0;
+ }
+
+ @Override
+ public int hashCode() {
+ int result = Arrays.hashCode(this.regionName);
+ result ^= Arrays.hashCode(this.startKey);
+ return result;
+ }
+
+ // Comparable
+
+ public int compareTo(MetaRegion other) {
+ int result = Bytes.compareTo(this.regionName, other.getRegionName());
+ if(result == 0) {
+ result = Bytes.compareTo(this.startKey, other.getStartKey());
+ if (result == 0) {
+ // Might be on different host?
+ result = this.server.compareTo(other.server);
+ }
+ }
+ return result;
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/master/MetaScanner.java b/src/java/org/apache/hadoop/hbase/master/MetaScanner.java
new file mode 100644
index 0000000..8c619c7
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/master/MetaScanner.java
@@ -0,0 +1,181 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.ArrayList;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.LinkedBlockingQueue;
+
+import org.apache.hadoop.hbase.RemoteExceptionHandler;
+
+/**
+ * MetaScanner scans the <code>META</code> table.
+ *
+ * When a <code>META</code> server comes on line, a MetaRegion object is
+ * queued up by regionServerReport() and this thread wakes up.
+ *
+ * It's important to do this work in a separate thread, or else the blocking
+ * action would prevent other work from getting done.
+ */
+class MetaScanner extends BaseScanner {
+ /** Initial work for the meta scanner is queued up here */
+ private volatile BlockingQueue<MetaRegion> metaRegionsToScan =
+ new LinkedBlockingQueue<MetaRegion>();
+
+ private final List<MetaRegion> metaRegionsToRescan =
+ new ArrayList<MetaRegion>();
+
+ /**
+ * Constructor
+ *
+ * @param master
+ */
+ public MetaScanner(HMaster master) {
+ super(master, false, master.metaRescanInterval, master.shutdownRequested);
+ }
+
+ // Don't retry if we get an error while scanning. Errors are most often
+ // caused by the server going away. Wait until next rescan interval when
+ // things should be back to normal.
+ private boolean scanOneMetaRegion(MetaRegion region) {
+ while (!this.master.closed.get() &&
+ !this.master.regionManager.isInitialRootScanComplete() &&
+ this.master.regionManager.getRootRegionLocation() == null) {
+ sleep();
+ }
+ if (this.master.closed.get()) {
+ return false;
+ }
+
+ try {
+ // Don't interrupt us while we're working
+ synchronized (scannerLock) {
+ scanRegion(region);
+ this.master.regionManager.putMetaRegionOnline(region);
+ }
+ } catch (IOException e) {
+ e = RemoteExceptionHandler.checkIOException(e);
+ LOG.warn("Scan one META region: " + region.toString(), e);
+ // The region may have moved (TestRegionServerAbort, etc.). If
+ // so, either it won't be in the onlineMetaRegions list or its host
+ // address has changed and the containsValue will fail. If not
+ // found, best thing to do here is probably return.
+ if (!this.master.regionManager.isMetaRegionOnline(region.getStartKey())) {
+ LOG.debug("Scanned region is no longer in map of online " +
+ "regions or its value has changed");
+ return false;
+ }
+ // Make sure the file system is still available
+ this.master.checkFileSystem();
+ } catch (Exception e) {
+ // If for some reason we get some other kind of exception,
+ // at least log it rather than go out silently.
+ LOG.error("Unexpected exception", e);
+ }
+ return true;
+ }
+
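+ /*
+ * Scan each meta region queued up by regionServerReport, retrying failed
+ * regions from the rescan list, until the initial scan is complete.
+ */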
+ @Override
+ protected boolean initialScan() {
+ MetaRegion region = null;
+ while (!this.master.closed.get() &&
+ (region == null && metaRegionsToScan.size() > 0) &&
+ !metaRegionsScanned()) {
+ try {
+ region = metaRegionsToScan.poll(this.master.threadWakeFrequency,
+ TimeUnit.MILLISECONDS);
+ } catch (InterruptedException e) {
+ // continue
+ }
+ if (region == null && metaRegionsToRescan.size() != 0) {
+ region = metaRegionsToRescan.remove(0);
+ }
+ if (region != null) {
+ if (!scanOneMetaRegion(region)) {
+ metaRegionsToRescan.add(region);
+ }
+ }
+ }
+ initialScanComplete = true;
+ return true;
+ }
+
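+ /*
+ * Periodic rescan: walk every currently online .META. region and rescan it.
+ */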
+ @Override
+ protected void maintenanceScan() {
+ List<MetaRegion> regions =
+ this.master.regionManager.getListOfOnlineMetaRegions();
+ int regionCount = 0;
+ for (MetaRegion r: regions) {
+ scanOneMetaRegion(r);
+ regionCount++;
+ }
+ LOG.info("All " + regionCount + " .META. region(s) scanned");
+ metaRegionsScanned();
+ }
+
+ /*
+ * Called by the meta scanner when it has completed scanning all meta
+ * regions. This wakes up any threads that were waiting for this to happen.
+ *
+ * @return True if the initial root scan is complete and all known meta
+ * regions are online (waiting threads are then notified); false otherwise.
+ */
+ private synchronized boolean metaRegionsScanned() {
+ if (!this.master.regionManager.isInitialRootScanComplete() ||
+ this.master.regionManager.numMetaRegions() !=
+ this.master.regionManager.numOnlineMetaRegions()) {
+ return false;
+ }
+ notifyAll();
+ return true;
+ }
+
+ /**
+ * Other threads call this method to wait until all meta regions have been
+ * scanned or the master has closed; returns true if the master closed.
+ */
+ synchronized boolean waitForMetaRegionsOrClose() {
+ while (!this.master.closed.get()) {
+ synchronized (master.regionManager) {
+ if (this.master.regionManager.isInitialRootScanComplete() &&
+ this.master.regionManager.numMetaRegions() ==
+ this.master.regionManager.numOnlineMetaRegions()) {
+ break;
+ }
+ }
+ try {
+ wait(this.master.threadWakeFrequency);
+ } catch (InterruptedException e) {
+ // continue
+ }
+ }
+ return this.master.closed.get();
+ }
+
+ /**
+ * Add another meta region to scan to the queue.
+ */
+ void addMetaRegionToScan(MetaRegion m) {
+ metaRegionsToScan.add(m);
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/master/ModifyColumn.java b/src/java/org/apache/hadoop/hbase/master/ModifyColumn.java
new file mode 100644
index 0000000..c50ca5d
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/master/ModifyColumn.java
@@ -0,0 +1,55 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.HRegionInfo;
+
+/** Instantiated to modify an existing column family on a table */
+class ModifyColumn extends ColumnOperation {
+ private final HColumnDescriptor descriptor;
+ private final byte [] columnName;
+
+ ModifyColumn(final HMaster master, final byte [] tableName,
+ final byte [] columnName, HColumnDescriptor descriptor)
+ throws IOException {
+ super(master, tableName);
+ this.descriptor = descriptor;
+ this.columnName = columnName;
+ }
+
+ @Override
+ protected void postProcessMeta(MetaRegion m, HRegionInterface server)
+ throws IOException {
+ for (HRegionInfo i: unservedRegions) {
+ if (i.getTableDesc().hasFamily(columnName)) {
+ i.getTableDesc().addFamily(descriptor);
+ updateRegionInfo(server, m.getRegionName(), i);
+ } else { // otherwise, we have an error.
+ throw new InvalidColumnNameException("Column family '" +
+ Bytes.toString(columnName) +
+ "' doesn't exist, so cannot be modified.");
+ }
+ }
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/master/ModifyTableMeta.java b/src/java/org/apache/hadoop/hbase/master/ModifyTableMeta.java
new file mode 100644
index 0000000..7caa4bd
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/master/ModifyTableMeta.java
@@ -0,0 +1,77 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.TableNotDisabledException;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Writables;
+
+/** Instantiated to modify table descriptor metadata */
+class ModifyTableMeta extends TableOperation {
+
+ private static Log LOG = LogFactory.getLog(ModifyTableMeta.class);
+
+ private HTableDescriptor desc;
+
+ ModifyTableMeta(final HMaster master, final byte [] tableName,
+ HTableDescriptor desc)
+ throws IOException {
+ super(master, tableName);
+ this.desc = desc;
+ LOG.debug("modifying " + Bytes.toString(tableName) + ": " +
+ desc.toString());
+ }
+
+ protected void updateRegionInfo(HRegionInterface server, byte [] regionName,
+ HRegionInfo i)
+ throws IOException {
+ BatchUpdate b = new BatchUpdate(i.getRegionName());
+ b.put(COL_REGIONINFO, Writables.getBytes(i));
+ server.batchUpdate(regionName, b, -1L);
+ LOG.debug("updated HTableDescriptor for region " + i.getRegionNameAsString());
+ }
+
+ @Override
+ protected void processScanItem(String serverName,
+ final HRegionInfo info) throws IOException {
+ if (isEnabled(info)) {
+ throw new TableNotDisabledException(Bytes.toString(tableName));
+ }
+ }
+
+ @Override
+ protected void postProcessMeta(MetaRegion m, HRegionInterface server)
+ throws IOException {
+ for (HRegionInfo i: unservedRegions) {
+ i.setTableDesc(desc);
+ updateRegionInfo(server, m.getRegionName(), i);
+ }
+ // kick off a meta scan right away
+ master.regionManager.metaScannerThread.interrupt();
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/master/NotAllMetaRegionsOnlineException.java b/src/java/org/apache/hadoop/hbase/master/NotAllMetaRegionsOnlineException.java
new file mode 100644
index 0000000..52d8ab7
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/master/NotAllMetaRegionsOnlineException.java
@@ -0,0 +1,45 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.master;
+
+import org.apache.hadoop.hbase.DoNotRetryIOException;
+
+/**
+ * Thrown when an operation requires the root and all meta regions to be online
+ */
+public class NotAllMetaRegionsOnlineException extends DoNotRetryIOException {
+ private static final long serialVersionUID = 6439786157874827523L;
+
+ /**
+ * default constructor
+ */
+ public NotAllMetaRegionsOnlineException() {
+ super();
+ }
+
+ /**
+ * @param message
+ */
+ public NotAllMetaRegionsOnlineException(String message) {
+ super(message);
+ }
+
+}
diff --git a/src/java/org/apache/hadoop/hbase/master/ProcessRegionClose.java b/src/java/org/apache/hadoop/hbase/master/ProcessRegionClose.java
new file mode 100644
index 0000000..5537360
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/master/ProcessRegionClose.java
@@ -0,0 +1,92 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.HRegionInfo;
+
+/**
+ * ProcessRegionClose is the way we do post-processing on a closed region. We
+ * only spawn one of these asynchronous tasks when the region needs to be
+ * either offlined or deleted. We used to create one of these tasks whenever
+ * a region was closed, but since closing a region that isn't being offlined
+ * or deleted doesn't actually require post processing, it's no longer
+ * necessary.
+ */
+class ProcessRegionClose extends ProcessRegionStatusChange {
+ protected final boolean offlineRegion;
+ protected final boolean reassignRegion;
+
+ /**
+ * @param master
+ * @param regionInfo Region to operate on
+ * @param offlineRegion if true, set the region to offline in meta
+ * @param reassignRegion if true, region is to be reassigned
+ */
+ public ProcessRegionClose(HMaster master, HRegionInfo regionInfo,
+ boolean offlineRegion, boolean reassignRegion) {
+
+ super(master, regionInfo);
+ this.offlineRegion = offlineRegion;
+ this.reassignRegion = reassignRegion;
+ }
+
+ @Override
+ public String toString() {
+ return "ProcessRegionClose of " + this.regionInfo.getRegionNameAsString() +
+ ", " + this.offlineRegion;
+ }
+
+ @Override
+ protected boolean process() throws IOException {
+ Boolean result = null;
+ if (offlineRegion) {
+ result =
+ new RetryableMetaOperation<Boolean>(getMetaRegion(), this.master) {
+ public Boolean call() throws IOException {
+ LOG.info("region closed: " + regionInfo.getRegionNameAsString());
+
+ // We can't proceed unless the meta region we are going to update
+ // is online. metaRegionAvailable() will put this operation on the
+ // delayedToDoQueue, so return true so the operation is not put
+ // back on the toDoQueue
+
+ if (metaRegionAvailable()) {
+ // offline the region in meta and then remove it from the
+ // set of regions in transition
+ HRegion.offlineRegionInMETA(server, metaRegionName,
+ regionInfo);
+ master.regionManager.removeRegion(regionInfo);
+ }
+ return true;
+ }
+ }.doWithRetries();
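+ // A null result means the master is closing, so give up and treat the
+ // operation as done.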
+ result = result == null ? true : result;
+
+ } else if (reassignRegion) {
+ // we are reassigning the region eventually, so set it unassigned
+ master.regionManager.setUnassigned(regionInfo, false);
+ }
+
+ return result == null ? true : result;
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/master/ProcessRegionOpen.java b/src/java/org/apache/hadoop/hbase/master/ProcessRegionOpen.java
new file mode 100644
index 0000000..7ab2b7b
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/master/ProcessRegionOpen.java
@@ -0,0 +1,126 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.RegionHistorian;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * ProcessRegionOpen is instantiated when a region server reports that it is
+ * serving a region. This applies to all meta and user regions except the
+ * root region which is handled specially.
+ */
+class ProcessRegionOpen extends ProcessRegionStatusChange {
+ protected final HServerInfo serverInfo;
+
+ /**
+ * @param master
+ * @param info
+ * @param regionInfo
+ * @throws IOException
+ */
+ public ProcessRegionOpen(HMaster master, HServerInfo info,
+ HRegionInfo regionInfo)
+ throws IOException {
+ super(master, regionInfo);
+ if (info == null) {
+ throw new NullPointerException("HServerInfo cannot be null; " +
+ "hbase-958 debugging");
+ }
+ this.serverInfo = info;
+ }
+
+ @Override
+ public String toString() {
+ return "PendingOpenOperation from " + HServerInfo.getServerName(serverInfo);
+ }
+
+ @Override
+ protected boolean process() throws IOException {
+ Boolean result =
+ new RetryableMetaOperation<Boolean>(getMetaRegion(), this.master) {
+ private final RegionHistorian historian = RegionHistorian.getInstance();
+
+ public Boolean call() throws IOException {
+ LOG.info(regionInfo.getRegionNameAsString() + " open on " +
+ serverInfo.getServerAddress().toString());
+ if (!metaRegionAvailable()) {
+ // We can't proceed unless the meta region we are going to update
+ // is online. metaRegionAvailable() has put this operation on the
+ // delayedToDoQueue, so return true so the operation is not put
+ // back on the toDoQueue
+ return true;
+ }
+
+ // Register the newly-available Region's location.
+ LOG.info("updating row " + regionInfo.getRegionNameAsString() +
+ " in region " + Bytes.toString(metaRegionName) + " with " +
+ " with startcode " + serverInfo.getStartCode() + " and server " +
+ serverInfo.getServerAddress());
+ BatchUpdate b = new BatchUpdate(regionInfo.getRegionName());
+ b.put(COL_SERVER,
+ Bytes.toBytes(serverInfo.getServerAddress().toString()));
+ b.put(COL_STARTCODE, Bytes.toBytes(serverInfo.getStartCode()));
+ server.batchUpdate(metaRegionName, b, -1L);
+ if (!this.historian.isOnline()) {
+ // This is safest place to do the onlining of the historian in
+ // the master. When we get to here, we know there is a .META.
+ // for the historian to go against.
+ this.historian.online(this.master.getConfiguration());
+ }
+ this.historian.addRegionOpen(regionInfo, serverInfo.getServerAddress());
+ synchronized (master.regionManager) {
+ if (isMetaTable) {
+ // It's a meta region.
+ MetaRegion m =
+ new MetaRegion(new HServerAddress(serverInfo.getServerAddress()),
+ regionInfo.getRegionName(), regionInfo.getStartKey());
+ if (!master.regionManager.isInitialMetaScanComplete()) {
+ // Put it on the queue to be scanned for the first time.
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Adding " + m.toString() + " to regions to scan");
+ }
+ master.regionManager.addMetaRegionToScan(m);
+ } else {
+ // Add it to the online meta regions
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Adding to onlineMetaRegions: " + m.toString());
+ }
+ master.regionManager.putMetaRegionOnline(m);
+ // Interrupting the Meta Scanner sleep so that it can
+ // process regions right away
+ master.regionManager.metaScannerThread.interrupt();
+ }
+ }
+ // If updated successfully, remove from pending list.
+ master.regionManager.removeRegion(regionInfo);
+ return true;
+ }
+ }
+ }.doWithRetries();
+ return result == null ? true : result;
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/master/ProcessRegionStatusChange.java b/src/java/org/apache/hadoop/hbase/master/ProcessRegionStatusChange.java
new file mode 100644
index 0000000..e0ef0e2
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/master/ProcessRegionStatusChange.java
@@ -0,0 +1,78 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+
+/**
+ * Abstract base class for the common operations of
+ * {@link ProcessRegionClose} and {@link ProcessRegionOpen}.
+ */
+abstract class ProcessRegionStatusChange extends RegionServerOperation {
+ protected final boolean isMetaTable;
+ protected final HRegionInfo regionInfo;
+ private volatile MetaRegion metaRegion = null;
+ protected volatile byte[] metaRegionName = null;
+
+ /**
+ * @param master
+ * @param regionInfo
+ */
+ public ProcessRegionStatusChange(HMaster master, HRegionInfo regionInfo) {
+ super(master);
+ this.regionInfo = regionInfo;
+ this.isMetaTable = regionInfo.isMetaTable();
+ }
+
+ protected boolean metaRegionAvailable() {
+ boolean available = true;
+ if (isMetaTable) {
+ // This operation is for the meta table
+ if (!rootAvailable()) {
+ // But we can't proceed unless the root region is available
+ available = false;
+ }
+ } else {
+ if (!master.regionManager.isInitialRootScanComplete() ||
+ !metaTableAvailable()) {
+ // The root region has not been scanned or the meta table is not
+ // available so we can't proceed.
+ // Put the operation on the delayedToDoQueue
+ requeue();
+ available = false;
+ }
+ }
+ return available;
+ }
+
+ protected MetaRegion getMetaRegion() {
+ if (isMetaTable) {
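+ // A .META. region's catalog row lives in -ROOT-; a user region's row
+ // lives in one of the .META. regions.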
+ this.metaRegionName = HRegionInfo.ROOT_REGIONINFO.getRegionName();
+ this.metaRegion = new MetaRegion(master.getRootRegionLocation(),
+ this.metaRegionName, HConstants.EMPTY_START_ROW);
+ } else {
+ this.metaRegion =
+ master.regionManager.getFirstMetaRegionForRegion(regionInfo);
+ this.metaRegionName = this.metaRegion.getRegionName();
+ }
+ return this.metaRegion;
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/master/ProcessServerShutdown.java b/src/java/org/apache/hadoop/hbase/master/ProcessServerShutdown.java
new file mode 100644
index 0000000..8c5d793
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/master/ProcessServerShutdown.java
@@ -0,0 +1,318 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.RemoteExceptionHandler;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.regionserver.HLog;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.util.Writables;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.io.RowResult;
+
+/**
+ * Instantiated when a server's lease has expired, meaning it has crashed.
+ * The region server's log file needs to be split up for each region it was
+ * serving, and the regions need to get reassigned.
+ */
+class ProcessServerShutdown extends RegionServerOperation {
+ private final String deadServer;
+ private final boolean rootRegionServer;
+ private boolean rootRegionReassigned = false;
+ private Path oldLogDir;
+ private boolean logSplit;
+ private boolean rootRescanned;
+
+
+ private static class ToDoEntry {
+ boolean regionOffline;
+ final byte [] row;
+ final HRegionInfo info;
+
+ ToDoEntry(final byte [] row, final HRegionInfo info) {
+ this.regionOffline = false;
+ this.row = row;
+ this.info = info;
+ }
+ }
+
+ /**
+ * @param master
+ * @param serverInfo
+ * @param rootRegionServer
+ */
+ public ProcessServerShutdown(HMaster master, HServerInfo serverInfo,
+ boolean rootRegionServer) {
+ super(master);
+ this.deadServer = HServerInfo.getServerName(serverInfo);
+ this.rootRegionServer = rootRegionServer;
+ this.logSplit = false;
+ this.rootRescanned = false;
+ this.oldLogDir =
+ new Path(master.rootdir, HLog.getHLogDirectoryName(serverInfo));
+ }
+
+ @Override
+ public String toString() {
+ return "ProcessServerShutdown of " + this.deadServer;
+ }
+
+ /** Finds regions that the dead region server was serving
+ */
+ protected void scanMetaRegion(HRegionInterface server, long scannerId,
+ byte [] regionName)
+ throws IOException {
+ List<ToDoEntry> toDoList = new ArrayList<ToDoEntry>();
+ Set<HRegionInfo> regions = new HashSet<HRegionInfo>();
+ List<byte []> emptyRows = new ArrayList<byte []>();
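+ // Rows whose HRegionInfo cell is empty are remembered here and deleted
+ // once the scan finishes.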
+ try {
+ while (true) {
+ RowResult values = null;
+ try {
+ values = server.next(scannerId);
+ } catch (IOException e) {
+ LOG.error("Shutdown scanning of meta region",
+ RemoteExceptionHandler.checkIOException(e));
+ break;
+ }
+ if (values == null || values.size() == 0) {
+ break;
+ }
+ byte [] row = values.getRow();
+ // Check server name. If null, skip. (We used to consider the region was on
+ // the shutdown server, but that would mean we'd reassign regions that were
+ // already out being assigned, ones that were the product of a split that
+ // happened while the shutdown was being processed.)
+ String serverAddress = Writables.cellToString(values.get(COL_SERVER));
+ long startCode = Writables.cellToLong(values.get(COL_STARTCODE));
+ String serverName = null;
+ if (serverAddress != null && serverAddress.length() > 0) {
+ serverName = HServerInfo.getServerName(serverAddress, startCode);
+ }
+ if (serverName == null || !deadServer.equals(serverName)) {
+ // This isn't the server you're looking for - move along
+ continue;
+ }
+
+ if (LOG.isDebugEnabled() && row != null) {
+ LOG.debug("Shutdown scanner for " + serverName + " processing " +
+ Bytes.toString(row));
+ }
+
+ HRegionInfo info = master.getHRegionInfo(row, values);
+ if (info == null) {
+ emptyRows.add(row);
+ continue;
+ }
+
+ synchronized (master.regionManager) {
+ if (info.isMetaTable()) {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("removing meta region " +
+ Bytes.toString(info.getRegionName()) +
+ " from online meta regions");
+ }
+ master.regionManager.offlineMetaRegion(info.getStartKey());
+ }
+
+ ToDoEntry todo = new ToDoEntry(row, info);
+ toDoList.add(todo);
+
+ if (master.regionManager.isOfflined(info.getRegionNameAsString()) ||
+ info.isOffline()) {
+ master.regionManager.removeRegion(info);
+ // Mark region offline
+ if (!info.isOffline()) {
+ todo.regionOffline = true;
+ }
+ } else {
+ if (!info.isOffline() && !info.isSplit()) {
+ // Get region reassigned
+ regions.add(info);
+ }
+ }
+ }
+ }
+ } finally {
+ if (scannerId != -1L) {
+ try {
+ server.close(scannerId);
+ } catch (IOException e) {
+ LOG.error("Closing scanner",
+ RemoteExceptionHandler.checkIOException(e));
+ }
+ }
+ }
+
+ // Scan complete. Remove any rows which had empty HRegionInfos
+
+ if (emptyRows.size() > 0) {
+ LOG.warn("Found " + emptyRows.size() +
+ " rows with empty HRegionInfo while scanning meta region " +
+ Bytes.toString(regionName));
+ master.deleteEmptyMetaRows(server, regionName, emptyRows);
+ }
+ // Update server in root/meta entries
+ for (ToDoEntry e: toDoList) {
+ if (e.regionOffline) {
+ HRegion.offlineRegionInMETA(server, regionName, e.info);
+ }
+ }
+
+ // Get regions reassigned
+ for (HRegionInfo info: regions) {
+ master.regionManager.setUnassigned(info, true);
+ }
+ }
+
+ private class ScanRootRegion extends RetryableMetaOperation<Boolean> {
+ ScanRootRegion(MetaRegion m, HMaster master) {
+ super(m, master);
+ }
+
+ public Boolean call() throws IOException {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("process server shutdown scanning root region on " +
+ master.getRootRegionLocation().getBindAddress());
+ }
+ long scannerId = server.openScanner(
+ HRegionInfo.ROOT_REGIONINFO.getRegionName(), COLUMN_FAMILY_ARRAY,
+ EMPTY_START_ROW, HConstants.LATEST_TIMESTAMP, null);
+ scanMetaRegion(server, scannerId,
+ HRegionInfo.ROOT_REGIONINFO.getRegionName());
+ return true;
+ }
+ }
+
+ private class ScanMetaRegions extends RetryableMetaOperation<Boolean> {
+ ScanMetaRegions(MetaRegion m, HMaster master) {
+ super(m, master);
+ }
+
+ public Boolean call() throws IOException {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("process server shutdown scanning " +
+ Bytes.toString(m.getRegionName()) + " on " + m.getServer());
+ }
+ long scannerId =
+ server.openScanner(m.getRegionName(), COLUMN_FAMILY_ARRAY,
+ EMPTY_START_ROW, HConstants.LATEST_TIMESTAMP, null);
+ scanMetaRegion(server, scannerId, m.getRegionName());
+ return true;
+ }
+ }
+
+ @Override
+ protected boolean process() throws IOException {
+ LOG.info("process shutdown of server " + this.deadServer +
+ ": logSplit: " +
+ logSplit + ", rootRescanned: " + rootRescanned +
+ ", numberOfMetaRegions: " +
+ master.regionManager.numMetaRegions() +
+ ", onlineMetaRegions.size(): " +
+ master.regionManager.numOnlineMetaRegions());
+ if (!logSplit) {
+ // Process the old log file
+ if (master.fs.exists(oldLogDir)) {
+ if (!master.regionManager.splitLogLock.tryLock()) {
+ return false;
+ }
+ try {
+ HLog.splitLog(master.rootdir, oldLogDir, master.fs,
+ master.getConfiguration());
+ } finally {
+ master.regionManager.splitLogLock.unlock();
+ }
+ }
+ logSplit = true;
+ }
+
+ if (this.rootRegionServer && !this.rootRegionReassigned) {
+ // avoid multiple root region reassignment
+ this.rootRegionReassigned = true;
+ // The server that died was serving the root region. Now that the log
+ // has been split, get it reassigned.
+ master.regionManager.reassignRootRegion();
+ // When we call rootAvailable below, it will put us on the delayed
+ // to do queue to allow some time to pass during which the root
+ // region will hopefully get reassigned.
+ }
+
+ if (!rootAvailable()) {
+ // Return true so that worker does not put this request back on the
+ // toDoQueue.
+ // rootAvailable() has already put it on the delayedToDoQueue
+ return true;
+ }
+
+ if (!rootRescanned) {
+ // Scan the ROOT region
+ Boolean result = new ScanRootRegion(
+ new MetaRegion(master.getRootRegionLocation(),
+ HRegionInfo.ROOT_REGIONINFO.getRegionName(),
+ HConstants.EMPTY_START_ROW), this.master).doWithRetries();
+ if (result == null) {
+ // Master is closing - give up
+ return true;
+ }
+
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("process server shutdown scanning root region on " +
+ master.getRootRegionLocation().getBindAddress() +
+ " finished " + Thread.currentThread().getName());
+ }
+ rootRescanned = true;
+ }
+ if (!metaTableAvailable()) {
+ // We can't proceed because not all meta regions are online.
+ // metaAvailable() has put this request on the delayedToDoQueue
+ // Return true so that worker does not put this on the toDoQueue
+ return true;
+ }
+
+ List<MetaRegion> regions = master.regionManager.getListOfOnlineMetaRegions();
+ for (MetaRegion r: regions) {
+ Boolean result = new ScanMetaRegions(r, this.master).doWithRetries();
+ if (result == null) {
+ break;
+ }
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("process server shutdown finished scanning " +
+ Bytes.toString(r.getRegionName()) + " on " + r.getServer());
+ }
+ }
+ // Remove this server from dead servers list. Finished splitting logs.
+ this.master.serverManager.removeDeadServer(deadServer);
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Removed " + deadServer + " from deadservers Map");
+ }
+ return true;
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/master/RegionManager.java b/src/java/org/apache/hadoop/hbase/master/RegionManager.java
new file mode 100644
index 0000000..e7f1511
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/master/RegionManager.java
@@ -0,0 +1,1356 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+import java.util.concurrent.atomic.AtomicReference;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.locks.Lock;
+import java.util.concurrent.locks.ReentrantLock;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.NavigableMap;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+import java.util.Collections;
+import java.util.concurrent.ConcurrentSkipListMap;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathFilter;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.HServerLoad;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.RegionHistorian;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.HMsg;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.hadoop.hbase.util.Threads;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.util.Writables;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper;
+
+/**
+ * Class to manage assigning regions to servers, state of root and meta, etc.
+ */
+class RegionManager implements HConstants {
+ protected static final Log LOG = LogFactory.getLog(RegionManager.class);
+
+ private AtomicReference<HServerAddress> rootRegionLocation =
+ new AtomicReference<HServerAddress>(null);
+
+ private volatile boolean safeMode = true;
+
+ final Lock splitLogLock = new ReentrantLock();
+
+ private final RootScanner rootScannerThread;
+ final MetaScanner metaScannerThread;
+
+ /** Set by root scanner to indicate the number of meta regions */
+ private final AtomicInteger numberOfMetaRegions = new AtomicInteger();
+
+ /** These are the online meta regions */
+ private final NavigableMap<byte [], MetaRegion> onlineMetaRegions =
+ new ConcurrentSkipListMap<byte [], MetaRegion>(Bytes.BYTES_COMPARATOR);
+
+ private static final byte[] OVERLOADED = Bytes.toBytes("Overloaded");
+
+ /**
+ * Map of region name to RegionState for regions that are in transition such as
+ *
+ * unassigned -> pendingOpen -> open
+ * closing -> pendingClose -> closed; if (closed && !offline) -> unassigned
+ *
+ * At the end of a transition, removeRegion is used to remove the region from
+ * the map (since it is no longer in transition)
+ *
+ * Note: Needs to be SortedMap so we can specify a comparator
+ *
+ * @see RegionState inner-class below
+ */
+ private final SortedMap<String, RegionState> regionsInTransition =
+ Collections.synchronizedSortedMap(new TreeMap<String, RegionState>());
+
+ // How many regions to assign a server at a time.
+ private final int maxAssignInOneGo;
+
+ private final HMaster master;
+ private final RegionHistorian historian;
+ private final float slop;
+
+ /** Set of regions to split. */
+ private final SortedMap<byte[], Pair<HRegionInfo,HServerAddress>>
+ regionsToSplit = Collections.synchronizedSortedMap(
+ new TreeMap<byte[],Pair<HRegionInfo,HServerAddress>>
+ (Bytes.BYTES_COMPARATOR));
+ /** Set of regions to compact. */
+ private final SortedMap<byte[], Pair<HRegionInfo,HServerAddress>>
+ regionsToCompact = Collections.synchronizedSortedMap(
+ new TreeMap<byte[],Pair<HRegionInfo,HServerAddress>>
+ (Bytes.BYTES_COMPARATOR));
+ /** Set of regions to major compact. */
+ private final SortedMap<byte[], Pair<HRegionInfo,HServerAddress>>
+ regionsToMajorCompact = Collections.synchronizedSortedMap(
+ new TreeMap<byte[],Pair<HRegionInfo,HServerAddress>>
+ (Bytes.BYTES_COMPARATOR));
+ /** Set of regions to flush. */
+ private final SortedMap<byte[], Pair<HRegionInfo,HServerAddress>>
+ regionsToFlush = Collections.synchronizedSortedMap(
+ new TreeMap<byte[],Pair<HRegionInfo,HServerAddress>>
+ (Bytes.BYTES_COMPARATOR));
+
+ private final ZooKeeperWrapper zooKeeperWrapper;
+ private final int zooKeeperNumRetries;
+ private final int zooKeeperPause;
+
+ RegionManager(HMaster master) {
+ HBaseConfiguration conf = master.getConfiguration();
+
+ this.master = master;
+ this.historian = RegionHistorian.getInstance();
+ this.maxAssignInOneGo = conf.getInt("hbase.regions.percheckin", 10);
+ this.slop = conf.getFloat("hbase.regions.slop", (float)0.1);
+
+ // The root region
+ rootScannerThread = new RootScanner(master);
+
+ // Scans the meta table
+ metaScannerThread = new MetaScanner(master);
+
+ zooKeeperWrapper = master.getZooKeeperWrapper();
+ zooKeeperNumRetries = conf.getInt(ZOOKEEPER_RETRIES, DEFAULT_ZOOKEEPER_RETRIES);
+ zooKeeperPause = conf.getInt(ZOOKEEPER_PAUSE, DEFAULT_ZOOKEEPER_PAUSE);
+
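+ // Queue the root region for assignment; it will be handed out once
+ // region servers start checking in.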
+ reassignRootRegion();
+ }
+
+ void start() {
+ Threads.setDaemonThreadRunning(rootScannerThread,
+ "RegionManager.rootScanner");
+ Threads.setDaemonThreadRunning(metaScannerThread,
+ "RegionManager.metaScanner");
+ }
+
+ void unsetRootRegion() {
+ synchronized (regionsInTransition) {
+ rootRegionLocation.set(null);
+ regionsInTransition.remove(
+ HRegionInfo.ROOT_REGIONINFO.getRegionNameAsString());
+ }
+ }
+
+ void reassignRootRegion() {
+ unsetRootRegion();
+ if (!master.shutdownRequested.get()) {
+ synchronized (regionsInTransition) {
+ RegionState s = new RegionState(HRegionInfo.ROOT_REGIONINFO);
+ s.setUnassigned();
+ regionsInTransition.put(
+ HRegionInfo.ROOT_REGIONINFO.getRegionNameAsString(), s);
+ }
+ }
+ }
+
+ /*
+ * Assigns regions to region servers attempting to balance the load across
+ * all region servers. Note that no synchronization is necessary as the caller
+ * (ServerManager.processMsgs) already owns the monitor for the RegionManager.
+ *
+ * @param info
+ * @param mostLoadedRegions
+ * @param returnMsgs
+ */
+ void assignRegions(HServerInfo info, HRegionInfo[] mostLoadedRegions,
+ ArrayList<HMsg> returnMsgs) {
+ HServerLoad thisServersLoad = info.getLoad();
+ // figure out what regions need to be assigned and aren't currently being
+ // worked on elsewhere.
+ Set<RegionState> regionsToAssign = regionsAwaitingAssignment();
+ if (regionsToAssign.size() == 0) {
+ // There are no regions waiting to be assigned.
+ if (!inSafeMode()) {
+ // We only do load balancing once all regions are assigned.
+ // This prevents churn while the cluster is starting up.
+ double avgLoad = master.serverManager.getAverageLoad();
+ double avgLoadWithSlop = avgLoad +
+ ((this.slop != 0)? avgLoad * this.slop: avgLoad);
+ if (avgLoad > 2.0 &&
+ thisServersLoad.getNumberOfRegions() > avgLoadWithSlop) {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Server " + info.getServerName() +
+ " is overloaded. Server load: " +
+ thisServersLoad.getNumberOfRegions() + " avg: " + avgLoad +
+ ", slop: " + this.slop);
+ }
+ unassignSomeRegions(info, thisServersLoad,
+ avgLoad, mostLoadedRegions, returnMsgs);
+ }
+ }
+ } else {
+ // if there's only one server, just give it all the regions
+ if (master.serverManager.numServers() == 1) {
+ assignRegionsToOneServer(regionsToAssign, info, returnMsgs);
+ } else {
+ // otherwise, give this server a few regions taking into account the
+ // load of all the other servers.
+ assignRegionsToMultipleServers(thisServersLoad, regionsToAssign,
+ info, returnMsgs);
+ }
+ }
+ }
+
+ /*
+ * Make region assignments taking into account multiple servers' loads.
+ *
+ * Note that no synchronization is needed while we iterate over
+ * regionsInTransition because this method is only called by assignRegions
+ * whose caller owns the monitor for RegionManager
+ */
+ private void assignRegionsToMultipleServers(final HServerLoad thisServersLoad,
+ final Set<RegionState> regionsToAssign, final HServerInfo info,
+ final ArrayList<HMsg> returnMsgs) {
+
+ int nRegionsToAssign = regionsToAssign.size();
+ int nregions = regionsPerServer(nRegionsToAssign, thisServersLoad);
+ nRegionsToAssign -= nregions;
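+ // Whatever the lighter-loaded servers cannot absorb is what this server
+ // may be asked to take.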
+ if (nRegionsToAssign > 0) {
+ // We still have more regions to assign. See how many we can assign
+ // before this server becomes more heavily loaded than the next
+ // most heavily loaded server.
+ HServerLoad heavierLoad = new HServerLoad();
+ int nservers = computeNextHeaviestLoad(thisServersLoad, heavierLoad);
+
+ nregions = 0;
+
+ // Advance past any less-loaded servers
+ for (HServerLoad load = new HServerLoad(thisServersLoad);
+ load.compareTo(heavierLoad) <= 0 && nregions < nRegionsToAssign;
+ load.setNumberOfRegions(load.getNumberOfRegions() + 1), nregions++) {
+ // continue;
+ }
+
+ if (nregions < nRegionsToAssign) {
+ // There are some more heavily loaded servers
+ // but we can't assign all the regions to this server.
+ if (nservers > 0) {
+ // There are other servers that can share the load.
+ // Split regions that need assignment across the servers.
+ nregions = (int) Math.ceil((1.0 * nRegionsToAssign)
+ / (1.0 * nservers));
+ } else {
+ // No other servers with same load.
+ // Split regions over all available servers
+ nregions = (int) Math.ceil((1.0 * nRegionsToAssign)
+ / (1.0 * master.serverManager.numServers()));
+ }
+ } else {
+ // Assign all regions to this server
+ nregions = nRegionsToAssign;
+ }
+
+ if (nregions > this.maxAssignInOneGo) {
+ nregions = this.maxAssignInOneGo;
+ }
+
+ for (RegionState s: regionsToAssign) {
+ doRegionAssignment(s, info, returnMsgs);
+ if (--nregions <= 0) {
+ break;
+ }
+ }
+ }
+ }
+
+ /*
+ * Assign all to the only server. An unlikely case but still possible.
+ *
+ * Note that no synchronization is needed on regionsInTransition while
+ * iterating on it because the only caller is assignRegions whose caller owns
+ * the monitor for RegionManager
+ *
+ * @param regionsToAssign
+ * @param info
+ * @param returnMsgs
+ */
+ private void assignRegionsToOneServer(final Set<RegionState> regionsToAssign,
+ final HServerInfo info, final ArrayList<HMsg> returnMsgs) {
+ for (RegionState s: regionsToAssign) {
+ doRegionAssignment(s, info, returnMsgs);
+ }
+ }
+
+ /*
+ * Do single region assignment.
+ * @param rs
+ * @param sinfo
+ * @param returnMsgs
+ */
+ private void doRegionAssignment(final RegionState rs,
+ final HServerInfo sinfo, final ArrayList<HMsg> returnMsgs) {
+ String regionName = rs.getRegionInfo().getRegionNameAsString();
+ LOG.info("Assigning region " + regionName + " to " + sinfo.getServerName());
+ rs.setPendingOpen(sinfo.getServerName());
+ this.regionsInTransition.put(regionName, rs);
+ this.historian.addRegionAssignment(rs.getRegionInfo(),
+ sinfo.getServerName());
+ returnMsgs.add(new HMsg(HMsg.Type.MSG_REGION_OPEN, rs.getRegionInfo()));
+ }
+
+ /*
+ * @param numUnassignedRegions
+ * @param thisServersLoad
+ * @return How many regions we can assign to more lightly loaded servers
+ */
+ private int regionsPerServer(final int numUnassignedRegions,
+ final HServerLoad thisServersLoad) {
+
+ SortedMap<HServerLoad, Set<String>> lightServers =
+ new TreeMap<HServerLoad, Set<String>>();
+
+ // Get all the servers who are more lightly loaded than this one.
+ synchronized (master.serverManager.loadToServers) {
+ lightServers.putAll(master.serverManager.loadToServers.headMap(thisServersLoad));
+ }
+
+ // Examine the list of servers that are more lightly loaded than this one.
+ // Pretend that we will assign regions to these more lightly loaded servers
+ // until they reach a load equal to ours. The number of regions handed out
+ // that way is returned; the caller subtracts it to get this server's share.
+ int nRegions = 0;
+ for (Map.Entry<HServerLoad, Set<String>> e : lightServers.entrySet()) {
+ HServerLoad lightLoad = new HServerLoad(e.getKey());
+ do {
+ lightLoad.setNumberOfRegions(lightLoad.getNumberOfRegions() + 1);
+ nRegions += 1;
+ } while (lightLoad.compareTo(thisServersLoad) <= 0
+ && nRegions < numUnassignedRegions);
+
+ nRegions *= e.getValue().size();
+ if (nRegions >= numUnassignedRegions) {
+ break;
+ }
+ }
+ return nRegions;
+ }
+
+ /*
+ * Get the set of regions that should be assignable in this pass.
+ *
+ * Note that no synchronization on regionsInTransition is needed because the
+ * only caller (assignRegions, whose caller is ServerManager.processMsgs) owns
+ * the monitor for RegionManager
+ */
+ private Set<RegionState> regionsAwaitingAssignment() {
+ // set of regions we want to assign to this server
+ Set<RegionState> regionsToAssign = new HashSet<RegionState>();
+
+ // Look over the set of regions that aren't currently assigned to
+ // determine which we should assign to this server.
+ for (RegionState s: regionsInTransition.values()) {
+ HRegionInfo i = s.getRegionInfo();
+ if (i == null) {
+ continue;
+ }
+ if (numberOfMetaRegions.get() != onlineMetaRegions.size() &&
+ !i.isMetaRegion()) {
+ // Can't assign user regions until all meta regions have been assigned
+ // and are on-line
+ continue;
+ }
+ if (s.isUnassigned()) {
+ regionsToAssign.add(s);
+ }
+ }
+ return regionsToAssign;
+ }
+
+ /*
+ * Figure out the load that is next highest amongst all regionservers. Also,
+ * return how many servers exist at that load.
+ */
+ private int computeNextHeaviestLoad(HServerLoad referenceLoad,
+ HServerLoad heavierLoad) {
+
+ SortedMap<HServerLoad, Set<String>> heavyServers =
+ new TreeMap<HServerLoad, Set<String>>();
+ synchronized (master.serverManager.loadToServers) {
+ heavyServers.putAll(
+ master.serverManager.loadToServers.tailMap(referenceLoad));
+ }
+ int nservers = 0;
+ for (Map.Entry<HServerLoad, Set<String>> e : heavyServers.entrySet()) {
+ Set<String> servers = e.getValue();
+ nservers += servers.size();
+ if (e.getKey().compareTo(referenceLoad) == 0) {
+ // This is the load factor of the server we are considering
+ nservers -= 1;
+ continue;
+ }
+
+ // If we get here, we are at the first load entry that is a
+ // heavier load than the server we are considering
+ heavierLoad.setNumberOfRequests(e.getKey().getNumberOfRequests());
+ heavierLoad.setNumberOfRegions(e.getKey().getNumberOfRegions());
+ break;
+ }
+ return nservers;
+ }
+
+ /*
+ * The server checking in right now is overloaded. We will tell it to close
+ * some or all of its most loaded regions, allowing it to reduce its load.
+ * The closed regions will then get picked up by other underloaded machines.
+ *
+ * Note that no synchronization is needed because the only caller is
+ * assignRegions, whose caller owns the monitor for RegionManager.
+ */
+ private void unassignSomeRegions(final HServerInfo info,
+ final HServerLoad load, final double avgLoad,
+ final HRegionInfo[] mostLoadedRegions, ArrayList<HMsg> returnMsgs) {
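+ // Shed enough regions to bring this server back down to roughly the
+ // cluster average load.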
+ int numRegionsToClose = load.getNumberOfRegions() - (int)Math.ceil(avgLoad);
+ LOG.debug("Choosing to reassign " + numRegionsToClose
+ + " regions. mostLoadedRegions has " + mostLoadedRegions.length
+ + " regions in it.");
+ int regionIdx = 0;
+ int regionsClosed = 0;
+ int skipped = 0;
+ while (regionsClosed < numRegionsToClose &&
+ regionIdx < mostLoadedRegions.length) {
+ HRegionInfo currentRegion = mostLoadedRegions[regionIdx];
+ regionIdx++;
+ // skip the region if it's meta or root
+ if (currentRegion.isRootRegion() || currentRegion.isMetaTable()) {
+ continue;
+ }
+ String regionName = currentRegion.getRegionNameAsString();
+ if (regionIsInTransition(regionName)) {
+ skipped++;
+ continue;
+ }
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Going to close region " + regionName);
+ }
+ // make a message to close the region
+ returnMsgs.add(new HMsg(HMsg.Type.MSG_REGION_CLOSE, currentRegion,
+ OVERLOADED));
+ // mark the region as closing
+ setClosing(info.getServerName(), currentRegion, false);
+ setPendingClose(regionName);
+ // increment the count of regions we've marked
+ regionsClosed++;
+ }
+ LOG.info("Skipped " + skipped + " region(s) that are in transition states");
+ }
+
+ static class TableDirFilter implements PathFilter {
+
+ public boolean accept(Path path) {
+ // skip the region servers' log dirs && version file
+ // HBASE-1112: we want to separate the log dirs from the tables' data dirs
+ // with a special character.
+ String pathname = path.getName();
+ return !pathname.startsWith("log_") && !pathname.equals(VERSION_FILE_NAME);
+ }
+
+ }
+
+ static class RegionDirFilter implements PathFilter {
+
+ public boolean accept(Path path) {
+ return !path.getName().equals(HREGION_COMPACTIONDIR_NAME);
+ }
+
+ }
+
+ /**
+ * @return a rough count of the regions on the filesystem
+ * Note: this method simply counts the directories under each table dir
+ * (${HBASE_ROOT}/$TABLE), skipping log files and compaction dirs.
+ * @throws IOException
+ */
+ public int countRegionsOnFS() throws IOException {
+ int regions = 0;
+
+ FileStatus[] tableDirs =
+ master.fs.listStatus(master.rootdir, new TableDirFilter());
+
+ FileStatus[] regionDirs;
+ RegionDirFilter rdf = new RegionDirFilter();
+ for(FileStatus tabledir : tableDirs) {
+ if(tabledir.isDir()) {
+ regionDirs = master.fs.listStatus(tabledir.getPath(), rdf);
+ regions += regionDirs.length;
+ }
+ }
+
+ return regions;
+ }
+
+ /**
+ * @return Read-only map of online meta regions.
+ */
+ public Map<byte [], MetaRegion> getOnlineMetaRegions() {
+ synchronized (onlineMetaRegions) {
+ return Collections.unmodifiableMap(onlineMetaRegions);
+ }
+ }
+
+ public boolean metaRegionsInTransition() {
+ synchronized (onlineMetaRegions) {
+ for (MetaRegion metaRegion : onlineMetaRegions.values()) {
+ String regionName = Bytes.toString(metaRegion.getRegionName());
+ if (regionIsInTransition(regionName)) {
+ return true;
+ }
+ }
+ }
+ return false;
+ }
+
+ /**
+ * Stop the root and meta scanners so that the region servers serving meta
+ * regions can shut down.
+ */
+ public void stopScanners() {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("telling root scanner to stop");
+ }
+ rootScannerThread.interruptIfAlive();
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("telling meta scanner to stop");
+ }
+ metaScannerThread.interruptIfAlive();
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("meta and root scanners notified");
+ }
+ }
+
+ /** Stop the region manager: wait for the scanners to finish and close ZooKeeper */
+ public void stop() {
+ try {
+ if (rootScannerThread.isAlive()) {
+ rootScannerThread.join(); // Wait for the root scanner to finish.
+ }
+ } catch (Exception iex) {
+ LOG.warn("root scanner", iex);
+ }
+ try {
+ if (metaScannerThread.isAlive()) {
+ metaScannerThread.join(); // Wait for meta scanner to finish.
+ }
+ } catch(Exception iex) {
+ LOG.warn("meta scanner", iex);
+ }
+ zooKeeperWrapper.close();
+ }
+
+ /**
+ * Check whether the root region location is known and all meta regions are
+ * online.
+ * @return true if the root region is located and every meta region is
+ * online, false otherwise.
+ */
+ public boolean areAllMetaRegionsOnline() {
+ synchronized (onlineMetaRegions) {
+ return (rootRegionLocation.get() != null &&
+ numberOfMetaRegions.get() == onlineMetaRegions.size());
+ }
+ }
+
+ /**
+ * Search our map of online meta regions to find the first meta region that
+ * should contain a pointer to <i>newRegion</i>.
+ * @param newRegion
+ * @return MetaRegion where the newRegion should live
+ */
+ public MetaRegion getFirstMetaRegionForRegion(HRegionInfo newRegion) {
+ synchronized (onlineMetaRegions) {
+ if (onlineMetaRegions.size() == 0) {
+ return null;
+ } else if (onlineMetaRegions.size() == 1) {
+ return onlineMetaRegions.get(onlineMetaRegions.firstKey());
+ } else {
+ if (onlineMetaRegions.containsKey(newRegion.getRegionName())) {
+ return onlineMetaRegions.get(newRegion.getRegionName());
+ }
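+ // Otherwise take the meta region with the greatest start key that is
+ // still strictly less than the new region's table name (headMap is
+ // exclusive of its argument).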
+ return onlineMetaRegions.get(onlineMetaRegions.headMap(
+ newRegion.getTableDesc().getName()).lastKey());
+ }
+ }
+ }
+
+ /**
+ * Get a set of all the meta regions that contain info about a given table.
+ * @param tableName Table you need to know all the meta regions for
+ * @return set of MetaRegion objects that contain the table
+ * @throws NotAllMetaRegionsOnlineException
+ */
+ public Set<MetaRegion> getMetaRegionsForTable(byte [] tableName)
+ throws NotAllMetaRegionsOnlineException {
+ byte [] firstMetaRegion = null;
+ Set<MetaRegion> metaRegions = new HashSet<MetaRegion>();
+ if (Bytes.equals(tableName, HConstants.META_TABLE_NAME)) {
+ if (rootRegionLocation.get() == null) {
+ throw new NotAllMetaRegionsOnlineException(
+ Bytes.toString(HConstants.ROOT_TABLE_NAME));
+ }
+ metaRegions.add(new MetaRegion(rootRegionLocation.get(),
+ HRegionInfo.ROOT_REGIONINFO.getRegionName()));
+ } else {
+ if (!areAllMetaRegionsOnline()) {
+ throw new NotAllMetaRegionsOnlineException();
+ }
+ synchronized (onlineMetaRegions) {
+ if (onlineMetaRegions.size() == 1) {
+ firstMetaRegion = onlineMetaRegions.firstKey();
+ } else if (onlineMetaRegions.containsKey(tableName)) {
+ firstMetaRegion = tableName;
+ } else {
+ firstMetaRegion = onlineMetaRegions.headMap(tableName).lastKey();
+ }
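+ // Rows for the table cannot live in any meta region that precedes the
+ // first candidate, so take every meta region from it onward.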
+ metaRegions.addAll(onlineMetaRegions.tailMap(firstMetaRegion).values());
+ }
+ }
+ return metaRegions;
+ }
+
+ /**
+ * Get the meta region that would host the passed-in row.
+ * @param row row whose hosting meta region is wanted
+ * @return the MetaRegion whose key range covers the row
+ * @throws NotAllMetaRegionsOnlineException
+ */
+ public MetaRegion getMetaRegionForRow(final byte [] row)
+ throws NotAllMetaRegionsOnlineException {
+ if (!areAllMetaRegionsOnline()) {
+ throw new NotAllMetaRegionsOnlineException();
+ }
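+ // floorEntry returns the meta region with the greatest start key <= row,
+ // i.e. the meta region whose key range covers the row.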
+ return this.onlineMetaRegions.floorEntry(row).getValue();
+ }
+
+ /**
+ * Create a new HRegion, put a row for it into META (or ROOT), and mark the
+ * new region unassigned so that it will get assigned to a region server.
+ * @param newRegion HRegionInfo for the region to create
+ * @param server server hosting the META (or ROOT) region where the new
+ * region needs to be noted
+ * @param metaRegionName name of the meta region where new region is to be
+ * written
+ * @throws IOException
+ */
+ public void createRegion(HRegionInfo newRegion, HRegionInterface server,
+ byte [] metaRegionName)
+ throws IOException {
+ // 2. Create the HRegion
+ HRegion region = HRegion.createHRegion(newRegion, master.rootdir,
+ master.getConfiguration());
+
+ // 3. Insert into meta
+ HRegionInfo info = region.getRegionInfo();
+ byte [] regionName = region.getRegionName();
+ BatchUpdate b = new BatchUpdate(regionName);
+ b.put(COL_REGIONINFO, Writables.getBytes(info));
+ server.batchUpdate(metaRegionName, b, -1L);
+
+ // 4. Close the new region to flush it to disk. Close its log file too.
+ region.close();
+ region.getLog().closeAndDelete();
+
+ // 5. Get it assigned to a server
+ setUnassigned(info, true);
+ }
+
+ /**
+ * Set a MetaRegion as online.
+ * @param metaRegion
+ */
+ public void putMetaRegionOnline(MetaRegion metaRegion) {
+ onlineMetaRegions.put(metaRegion.getStartKey(), metaRegion);
+ }
+
+ /**
+ * Get a list of online MetaRegions
+ * @return list of MetaRegion objects
+ */
+ public List<MetaRegion> getListOfOnlineMetaRegions() {
+ List<MetaRegion> regions = null;
+ synchronized(onlineMetaRegions) {
+ regions = new ArrayList<MetaRegion>(onlineMetaRegions.values());
+ }
+ return regions;
+ }
+
+ /**
+ * Count of online meta regions
+ * @return count of online meta regions
+ */
+ public int numOnlineMetaRegions() {
+ return onlineMetaRegions.size();
+ }
+
+ /**
+ * Check if a meta region is online, given its start key.
+ * @param startKey start key of the meta region to check
+ * @return true if the region is online, false otherwise
+ */
+ public boolean isMetaRegionOnline(byte [] startKey) {
+ return onlineMetaRegions.containsKey(startKey);
+ }
+
+ /**
+ * Set an online MetaRegion offline - remove it from the map.
+ * @param startKey start key of the meta region to take offline
+ */
+ public void offlineMetaRegion(byte [] startKey) {
+ onlineMetaRegions.remove(startKey);
+ }
+
+ /**
+ * Remove a region from the region state map.
+ *
+ * @param info
+ */
+ public void removeRegion(HRegionInfo info) {
+ this.regionsInTransition.remove(info.getRegionNameAsString());
+ }
+
+ /**
+ * @param regionName
+ * @return true if the named region is in a transition state
+ */
+ public boolean regionIsInTransition(String regionName) {
+ return regionsInTransition.containsKey(regionName);
+ }
+
+ /**
+ * @param regionName
+ * @return true if the region is unassigned, pendingOpen or open
+ */
+ public boolean regionIsOpening(String regionName) {
+ RegionState state = regionsInTransition.get(regionName);
+ if (state != null) {
+ return state.isOpening();
+ }
+ return false;
+ }
+
+ /**
+ * Set a region to unassigned
+ * @param info Region to set unassigned
+ * @param force if true mark region unassigned whatever its current state
+ */
+ public void setUnassigned(HRegionInfo info, boolean force) {
+ synchronized(this.regionsInTransition) {
+ RegionState s = regionsInTransition.get(info.getRegionNameAsString());
+ if (s == null) {
+ s = new RegionState(info);
+ regionsInTransition.put(info.getRegionNameAsString(), s);
+ }
+ if (force || (!s.isPendingOpen() && !s.isOpen())) {
+ s.setUnassigned();
+ }
+ }
+ }
+
+ /**
+ * Check if a region is on the unassigned list
+ * @param info HRegionInfo to check for
+ * @return true if on the unassigned list, false if it isn't. Note that a
+ * region can be neither on the unassigned list nor assigned if it happens
+ * to be between states.
+ */
+ public boolean isUnassigned(HRegionInfo info) {
+ synchronized (regionsInTransition) {
+ RegionState s = regionsInTransition.get(info.getRegionNameAsString());
+ if (s != null) {
+ return s.isUnassigned();
+ }
+ }
+ return false;
+ }
+
+ /**
+ * Check if a region has been assigned and we're waiting for a response from
+ * the region server.
+ *
+ * @param regionName name of the region
+ * @return true if the region is pending open, false otherwise
+ */
+ public boolean isPendingOpen(String regionName) {
+ synchronized (regionsInTransition) {
+ RegionState s = regionsInTransition.get(regionName);
+ if (s != null) {
+ return s.isPendingOpen();
+ }
+ }
+ return false;
+ }
+
+ /**
+ * Region has been assigned to a server and the server has told us it is open
+ * @param regionName
+ */
+ public void setOpen(String regionName) {
+ synchronized (regionsInTransition) {
+ RegionState s = regionsInTransition.get(regionName);
+ if (s != null) {
+ s.setOpen();
+ }
+ }
+ }
+
+ /**
+ * @param regionName
+ * @return true if region is marked to be offlined.
+ */
+ public boolean isOfflined(String regionName) {
+ synchronized (regionsInTransition) {
+ RegionState s = regionsInTransition.get(regionName);
+ if (s != null) {
+ return s.isOfflined();
+ }
+ }
+ return false;
+ }
+
+ /**
+ * Mark a region as closing
+ * @param serverName
+ * @param regionInfo
+ * @param setOffline
+ */
+ public void setClosing(final String serverName, final HRegionInfo regionInfo,
+ final boolean setOffline) {
+ synchronized (this.regionsInTransition) {
+ RegionState s =
+ this.regionsInTransition.get(regionInfo.getRegionNameAsString());
+ if (s == null) {
+ s = new RegionState(regionInfo);
+ }
+ s.setClosing(serverName, setOffline);
+ this.regionsInTransition.put(regionInfo.getRegionNameAsString(), s);
+ }
+ }
+
+ /**
+ * Get the regions on a given server that have been marked closing but are
+ * not yet pending close or closed.
+ *
+ * @param serverName
+ * @return set of infos to close
+ */
+ public Set<HRegionInfo> getMarkedToClose(String serverName) {
+ Set<HRegionInfo> result = new HashSet<HRegionInfo>();
+ synchronized (regionsInTransition) {
+ for (RegionState s: regionsInTransition.values()) {
+ if (s.isClosing() && !s.isPendingClose() && !s.isClosed() &&
+ s.getServerName().compareTo(serverName) == 0) {
+ result.add(s.getRegionInfo());
+ }
+ }
+ }
+ return result;
+ }
+
+ /**
+ * Called when we have told a region server to close the region
+ *
+ * @param regionName
+ */
+ public void setPendingClose(String regionName) {
+ synchronized (regionsInTransition) {
+ RegionState s = regionsInTransition.get(regionName);
+ if (s != null) {
+ s.setPendingClose();
+ }
+ }
+ }
+
+ /**
+ * Mark a region as closed.
+ * @param regionName
+ */
+ public void setClosed(String regionName) {
+ synchronized (regionsInTransition) {
+ RegionState s = regionsInTransition.get(regionName);
+ if (s != null) {
+ s.setClosed();
+ }
+ }
+ }
+
+ /**
+ * Add a meta region to the scan queue
+ * @param m MetaRegion that needs to get scanned
+ */
+ public void addMetaRegionToScan(MetaRegion m) {
+ metaScannerThread.addMetaRegionToScan(m);
+ }
+
+ /**
+ * Check if the initial root scan has been completed.
+ * @return true if scan completed, false otherwise
+ */
+ public boolean isInitialRootScanComplete() {
+ return rootScannerThread.isInitialScanComplete();
+ }
+
+ /**
+ * Check if the initial meta scan has been completed.
+ * @return true if the initial meta scan has completed, false otherwise
+ */
+ public boolean isInitialMetaScanComplete() {
+ return metaScannerThread.isInitialScanComplete();
+ }
+
+ private boolean tellZooKeeperOutOfSafeMode() {
+ for (int attempt = 0; attempt < zooKeeperNumRetries; ++attempt) {
+ if (zooKeeperWrapper.writeOutOfSafeMode()) {
+ return true;
+ }
+
+ sleep(attempt);
+ }
+
+ LOG.error("Failed to tell ZooKeeper we're out of safe mode after " +
+ zooKeeperNumRetries + " retries");
+
+ return false;
+ }
+
+ /**
+ * Check whether the master is still in safe mode. Safe mode is exited once
+ * the initial meta scan is complete, there are no regions in transition,
+ * and ZooKeeper has been told that we are out of safe mode.
+ * @return true if the master is still in safe mode
+ */
+ public boolean inSafeMode() {
+ if (safeMode) {
+ if(isInitialMetaScanComplete() && regionsInTransition.size() == 0 &&
+ tellZooKeeperOutOfSafeMode()) {
+ master.connection.unsetRootRegionLocation();
+ safeMode = false;
+ LOG.info("exiting safe mode");
+ } else {
+ LOG.info("in safe mode");
+ }
+ }
+ return safeMode;
+ }
+
+ /**
+ * Get the root region location.
+ * @return HServerAddress describing root region server.
+ */
+ public HServerAddress getRootRegionLocation() {
+ return rootRegionLocation.get();
+ }
+
+ /**
+ * Block until either the root region location is available or we're shutting
+ * down.
+ */
+ public void waitForRootRegionLocation() {
+ synchronized (rootRegionLocation) {
+ while (!master.closed.get() && rootRegionLocation.get() == null) {
+ // rootRegionLocation will be filled in when we get an 'open region'
+ // regionServerReport message from the HRegionServer that has been
+ // allocated the ROOT region below.
+ try {
+ rootRegionLocation.wait();
+ } catch (InterruptedException e) {
+ // continue
+ }
+ }
+ }
+ }
+
+ /**
+ * Return the number of meta regions.
+ * @return number of meta regions
+ */
+ public int numMetaRegions() {
+ return numberOfMetaRegions.get();
+ }
+
+ /**
+ * Bump the count of meta regions up one
+ */
+ public void incrementNumMetaRegions() {
+ numberOfMetaRegions.incrementAndGet();
+ }
+
+ private long getPauseTime(int tries) {
+ int attempt = tries;
+ if (attempt >= RETRY_BACKOFF.length) {
+ attempt = RETRY_BACKOFF.length - 1;
+ }
+ return this.zooKeeperPause * RETRY_BACKOFF[attempt];
+ }
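+ // Backoff illustration for getPauseTime()/sleep() above (hypothetical values;
+ // the actual multipliers come from the RETRY_BACKOFF table): with
+ // zooKeeperPause = 2000ms and RETRY_BACKOFF = {1, 1, 1, 2, 2, 4, ...},
+ // attempts 0..5 would pause 2s, 2s, 2s, 4s, 4s, 8s, and later attempts stay
+ // at the last multiplier once 'tries' runs past the end of the table.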
+
+ private void sleep(int attempt) {
+ try {
+ Thread.sleep(getPauseTime(attempt));
+ } catch (InterruptedException e) {
+ // continue
+ }
+ }
+
+ private void writeRootRegionLocationToZooKeeper(HServerAddress address) {
+ for (int attempt = 0; attempt < zooKeeperNumRetries; ++attempt) {
+ if (zooKeeperWrapper.writeRootRegionLocation(address)) {
+ return;
+ }
+
+ sleep(attempt);
+ }
+
+ LOG.error("Failed to write root region location to ZooKeeper after " +
+ zooKeeperNumRetries + " retries, shutting down");
+
+ this.master.shutdown();
+ }
+
+ /**
+ * Set the root region location.
+ * @param address Address of the region server where the root lives
+ */
+ public void setRootRegionLocation(HServerAddress address) {
+ writeRootRegionLocationToZooKeeper(address);
+
+ synchronized (rootRegionLocation) {
+ rootRegionLocation.set(new HServerAddress(address));
+ rootRegionLocation.notifyAll();
+ }
+ }
+
+ /**
+ * Set the number of meta regions.
+ * @param num Number of meta regions
+ */
+ public void setNumMetaRegions(int num) {
+ numberOfMetaRegions.set(num);
+ }
+
+ /**
+ * @param regionName
+ * @param info
+ * @param server
+ * @param op
+ */
+ public void startAction(byte[] regionName, HRegionInfo info,
+ HServerAddress server, int op) {
+ switch (op) {
+ case HConstants.MODIFY_TABLE_SPLIT:
+ startAction(regionName, info, server, this.regionsToSplit);
+ break;
+ case HConstants.MODIFY_TABLE_COMPACT:
+ startAction(regionName, info, server, this.regionsToCompact);
+ break;
+ case HConstants.MODIFY_TABLE_MAJOR_COMPACT:
+ startAction(regionName, info, server, this.regionsToMajorCompact);
+ break;
+ case HConstants.MODIFY_TABLE_FLUSH:
+ startAction(regionName, info, server, this.regionsToFlush);
+ break;
+ default:
+ throw new IllegalArgumentException("illegal table action " + op);
+ }
+ }
+
+ private void startAction(final byte[] regionName, final HRegionInfo info,
+ final HServerAddress server,
+ final SortedMap<byte[], Pair<HRegionInfo,HServerAddress>> map) {
+ map.put(regionName, new Pair<HRegionInfo,HServerAddress>(info, server));
+ }
+
+ /**
+ * @param regionName
+ * @param op
+ */
+ public void endAction(byte[] regionName, int op) {
+ switch (op) {
+ case HConstants.MODIFY_TABLE_SPLIT:
+ this.regionsToSplit.remove(regionName);
+ break;
+ case HConstants.MODIFY_TABLE_COMPACT:
+ this.regionsToCompact.remove(regionName);
+ break;
+ case HConstants.MODIFY_TABLE_MAJOR_COMPACT:
+ this.regionsToMajorCompact.remove(regionName);
+ break;
+ case HConstants.MODIFY_TABLE_FLUSH:
+ this.regionsToFlush.remove(regionName);
+ break;
+ default:
+ throw new IllegalArgumentException("illegal table action " + op);
+ }
+ }
+
+ /**
+ * Clear any pending split or compact actions for the given region
+ * @param regionName
+ */
+ public void endActions(byte[] regionName) {
+ regionsToSplit.remove(regionName);
+ regionsToCompact.remove(regionName);
+ }
+
+ /**
+ * Send messages to the given region server asking it to split any
+ * regions in 'regionsToSplit', etc.
+ * @param serverInfo
+ * @param returnMsgs
+ */
+ public void applyActions(HServerInfo serverInfo, ArrayList<HMsg> returnMsgs) {
+ applyActions(serverInfo, returnMsgs, this.regionsToCompact,
+ HMsg.Type.MSG_REGION_COMPACT);
+ applyActions(serverInfo, returnMsgs, this.regionsToSplit,
+ HMsg.Type.MSG_REGION_SPLIT);
+ applyActions(serverInfo, returnMsgs, this.regionsToFlush,
+ HMsg.Type.MSG_REGION_FLUSH);
+ applyActions(serverInfo, returnMsgs, this.regionsToMajorCompact,
+ HMsg.Type.MSG_REGION_MAJOR_COMPACT);
+ }
+
+ private void applyActions(final HServerInfo serverInfo,
+ final ArrayList<HMsg> returnMsgs,
+ SortedMap<byte[], Pair<HRegionInfo,HServerAddress>> map,
+ final HMsg.Type msg) {
+ HServerAddress addr = serverInfo.getServerAddress();
+ synchronized (map) {
+ // Create the iterator while holding the map's lock so concurrent updates
+ // cannot invalidate it mid-iteration.
+ Iterator<Pair<HRegionInfo, HServerAddress>> i = map.values().iterator();
+ while (i.hasNext()) {
+ Pair<HRegionInfo,HServerAddress> pair = i.next();
+ if (addr.equals(pair.getSecond())) {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Sending " + msg + " " + pair.getFirst() + " to " + addr);
+ }
+ returnMsgs.add(new HMsg(msg, pair.getFirst()));
+ i.remove();
+ }
+ }
+ }
+ }
+
+ /*
+ * State of a Region as it transitions from closed to open, etc. See
+ * note on regionsInTransition data member above for listing of state
+ * transitions.
+ */
+ private static class RegionState implements Comparable<RegionState> {
+ private final HRegionInfo regionInfo;
+ private volatile boolean unassigned = false;
+ private volatile boolean pendingOpen = false;
+ private volatile boolean open = false;
+ private volatile boolean closing = false;
+ private volatile boolean pendingClose = false;
+ private volatile boolean closed = false;
+ private volatile boolean offlined = false;
+
+ /* Set when region is assigned or closing */
+ private volatile String serverName = null;
+
+ /* Constructor */
+ RegionState(HRegionInfo info) {
+ this.regionInfo = info;
+ }
+
+ synchronized HRegionInfo getRegionInfo() {
+ return this.regionInfo;
+ }
+
+ synchronized byte [] getRegionName() {
+ return this.regionInfo.getRegionName();
+ }
+
+ /*
+ * @return Server this region was assigned to
+ */
+ synchronized String getServerName() {
+ return this.serverName;
+ }
+
+ /*
+ * @return true if the region is being opened
+ */
+ synchronized boolean isOpening() {
+ return this.unassigned || this.pendingOpen || this.open;
+ }
+
+ /*
+ * @return true if region is unassigned
+ */
+ synchronized boolean isUnassigned() {
+ return unassigned;
+ }
+
+ /*
+ * Note: callers of this method (reassignRootRegion,
+ * regionsAwaitingAssignment, setUnassigned) ensure that this method is not
+ * called unless it is safe to do so.
+ */
+ synchronized void setUnassigned() {
+ this.unassigned = true;
+ this.pendingOpen = false;
+ this.open = false;
+ this.closing = false;
+ this.pendingClose = false;
+ this.closed = false;
+ this.offlined = false;
+ this.serverName = null;
+ }
+
+ synchronized boolean isPendingOpen() {
+ return pendingOpen;
+ }
+
+ /*
+ * @param serverName Server region was assigned to.
+ */
+ synchronized void setPendingOpen(final String serverName) {
+ if (!this.unassigned) {
+ throw new IllegalStateException(
+ "Cannot assign a region that is not currently unassigned. State: " +
+ toString());
+ }
+ this.unassigned = false;
+ this.pendingOpen = true;
+ this.open = false;
+ this.closing = false;
+ this.pendingClose = false;
+ this.closed = false;
+ this.offlined = false;
+ this.serverName = serverName;
+ }
+
+ synchronized boolean isOpen() {
+ return open;
+ }
+
+ synchronized void setOpen() {
+ if (!pendingOpen) {
+ throw new IllegalStateException(
+ "Cannot set a region as open if it has not been pending. State: " +
+ toString());
+ }
+ this.unassigned = false;
+ this.pendingOpen = false;
+ this.open = true;
+ this.closing = false;
+ this.pendingClose = false;
+ this.closed = false;
+ this.offlined = false;
+ }
+
+ synchronized boolean isClosing() {
+ return closing;
+ }
+
+ synchronized void setClosing(String serverName, boolean setOffline) {
+ this.unassigned = false;
+ this.pendingOpen = false;
+ this.open = false;
+ this.closing = true;
+ this.pendingClose = false;
+ this.closed = false;
+ this.offlined = setOffline;
+ this.serverName = serverName;
+ }
+
+ synchronized boolean isPendingClose() {
+ return this.pendingClose;
+ }
+
+ synchronized void setPendingClose() {
+ if (!closing) {
+ throw new IllegalStateException(
+ "Cannot set a region as pending close if it has not been closing. " +
+ "State: " + toString());
+ }
+ this.unassigned = false;
+ this.pendingOpen = false;
+ this.open = false;
+ this.closing = false;
+ this.pendingClose = true;
+ this.closed = false;
+ }
+
+ synchronized boolean isClosed() {
+ return this.closed;
+ }
+
+ synchronized void setClosed() {
+ if (!pendingClose && !pendingOpen) {
+ throw new IllegalStateException(
+ "Cannot set a region to be closed if it was not already marked as" +
+ " pending close or pending open. State: " + toString());
+ }
+ this.unassigned = false;
+ this.pendingOpen = false;
+ this.open = false;
+ this.closing = false;
+ this.pendingClose = false;
+ this.closed = true;
+ }
+
+ synchronized boolean isOfflined() {
+ return this.offlined;
+ }
+
+ @Override
+ public synchronized String toString() {
+ return ("name=" + Bytes.toString(getRegionName()) +
+ ", unassigned=" + this.unassigned +
+ ", pendingOpen=" + this.pendingOpen +
+ ", open=" + this.open +
+ ", closing=" + this.closing +
+ ", pendingClose=" + this.pendingClose +
+ ", closed=" + this.closed +
+ ", offlined=" + this.offlined);
+ }
+
+ @Override
+ public boolean equals(Object o) {
+ if (this == o) {
+ return true;
+ }
+ if (o == null || getClass() != o.getClass()) {
+ return false;
+ }
+ return this.compareTo((RegionState) o) == 0;
+ }
+
+ @Override
+ public int hashCode() {
+ return Bytes.toString(getRegionName()).hashCode();
+ }
+
+ public int compareTo(RegionState o) {
+ if (o == null) {
+ return 1;
+ }
+ return Bytes.compareTo(getRegionName(), o.getRegionName());
+ }
+ }
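+
+ /*
+ * Illustrative lifecycle of a RegionState, as driven by the methods above
+ * (a sketch only; the server name is hypothetical):
+ *
+ *   RegionState s = new RegionState(info);
+ *   s.setUnassigned();              // waiting to be assigned to a server
+ *   s.setPendingOpen(serverName);   // assigned; waiting for MSG_REPORT_OPEN
+ *   s.setOpen();                    // server reported the region open
+ *
+ *   s.setClosing(serverName, true); // marked to close and be offlined
+ *   s.setPendingClose();            // MSG_REGION_CLOSE sent to the server
+ *   s.setClosed();                  // server reported the region closed
+ */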
+}
diff --git a/src/java/org/apache/hadoop/hbase/master/RegionServerOperation.java b/src/java/org/apache/hadoop/hbase/master/RegionServerOperation.java
new file mode 100644
index 0000000..e11da11
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/master/RegionServerOperation.java
@@ -0,0 +1,94 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.concurrent.Delayed;
+import java.util.concurrent.TimeUnit;
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HConstants;
+
+abstract class RegionServerOperation implements Delayed, HConstants {
+ protected static final Log LOG =
+ LogFactory.getLog(RegionServerOperation.class.getName());
+
+ private long expire;
+ protected final HMaster master;
+ protected final int numRetries;
+
+ protected RegionServerOperation(HMaster master) {
+ this.master = master;
+ this.numRetries = master.numRetries;
+ // Set the future time at which we expect to be released from the
+ // DelayQueue we're inserted in on lease expiration.
+ this.expire = System.currentTimeMillis() + this.master.leaseTimeout / 2;
+ }
+
+ public long getDelay(TimeUnit unit) {
+ return unit.convert(this.expire - System.currentTimeMillis(),
+ TimeUnit.MILLISECONDS);
+ }
+
+ public int compareTo(Delayed o) {
+ return Long.valueOf(getDelay(TimeUnit.MILLISECONDS)
+ - o.getDelay(TimeUnit.MILLISECONDS)).intValue();
+ }
+
+ protected void requeue() {
+ this.expire = System.currentTimeMillis() + this.master.leaseTimeout / 2;
+ master.delayedToDoQueue.put(this);
+ }
+
+ protected boolean rootAvailable() {
+ boolean available = true;
+ if (master.getRootRegionLocation() == null) {
+ available = false;
+ requeue();
+ }
+ return available;
+ }
+
+ protected boolean metaTableAvailable() {
+ boolean available = true;
+ if ((master.regionManager.numMetaRegions() !=
+ master.regionManager.numOnlineMetaRegions()) ||
+ master.regionManager.metaRegionsInTransition()) {
+ // We can't proceed because not all of the meta regions are online.
+ // We can't block either because that would prevent the meta region
+ // online message from being processed. In order to prevent spinning
+ // in the run queue, put this request on the delay queue to give
+ // other threads the opportunity to get the meta regions on-line.
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("numberOfMetaRegions: " +
+ master.regionManager.numMetaRegions() +
+ ", onlineMetaRegions.size(): " +
+ master.regionManager.numOnlineMetaRegions());
+ LOG.debug("Requeuing because not all meta regions are online");
+ }
+ available = false;
+ requeue();
+ }
+ return available;
+ }
+
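+ // Sketch of a typical subclass implementation of process() (illustrative
+ // only; concrete operations such as ProcessRegionOpen differ in detail):
+ //
+ //   protected boolean process() throws IOException {
+ //     if (!metaTableAvailable()) {
+ //       // metaTableAvailable() has already requeued us on the delay queue,
+ //       // so return true so we are not also put back on the toDoQueue.
+ //       return true;
+ //     }
+ //     // ... do the actual work against the now-available meta regions ...
+ //     return true;
+ //   }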
+ protected abstract boolean process() throws IOException;
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/master/RetryableMetaOperation.java b/src/java/org/apache/hadoop/hbase/master/RetryableMetaOperation.java
new file mode 100644
index 0000000..7de89c9
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/master/RetryableMetaOperation.java
@@ -0,0 +1,100 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.Callable;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.ipc.RemoteException;
+import org.apache.hadoop.hbase.RemoteExceptionHandler;
+import org.apache.hadoop.hbase.TableNotFoundException;
+import org.apache.hadoop.hbase.TableNotDisabledException;
+import org.apache.hadoop.hbase.InvalidColumnNameException;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Sleeper;
+
+/**
+ * Uses Callable pattern so that operations against meta regions do not need
+ * to duplicate retry logic.
+ */
+abstract class RetryableMetaOperation<T> implements Callable<T> {
+ protected final Log LOG = LogFactory.getLog(this.getClass());
+ protected final Sleeper sleeper;
+ protected final MetaRegion m;
+ protected final HMaster master;
+
+ protected HRegionInterface server;
+
+ protected RetryableMetaOperation(MetaRegion m, HMaster master) {
+ this.m = m;
+ this.master = master;
+ this.sleeper = new Sleeper(master.threadWakeFrequency, master.closed);
+ }
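+
+ // Illustrative subclass sketch (hypothetical class; for a real subclass see
+ // ProcessTableOperation in TableOperation):
+ //
+ //   class ExampleMetaRead extends RetryableMetaOperation<Boolean> {
+ //     ExampleMetaRead(MetaRegion m, HMaster master) { super(m, master); }
+ //     public Boolean call() throws IOException {
+ //       // 'server' has already been connected by doWithRetries() when
+ //       // call() runs; issue reads/writes against the meta region here.
+ //       return Boolean.TRUE;
+ //     }
+ //   }
+ //   // Usage: new ExampleMetaRead(m, master).doWithRetries();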
+
+ protected T doWithRetries()
+ throws IOException, RuntimeException {
+ List<IOException> exceptions = new ArrayList<IOException>();
+ for(int tries = 0; tries < master.numRetries; tries++) {
+ if (master.closed.get()) {
+ return null;
+ }
+ try {
+ this.server = master.connection.getHRegionConnection(m.getServer());
+ return this.call();
+ } catch (IOException e) {
+ if (e instanceof TableNotFoundException ||
+ e instanceof TableNotDisabledException ||
+ e instanceof InvalidColumnNameException) {
+ throw e;
+ }
+ if (e instanceof RemoteException) {
+ e = RemoteExceptionHandler.decodeRemoteException((RemoteException) e);
+ }
+ if (tries == master.numRetries - 1) {
+ if (LOG.isDebugEnabled()) {
+ StringBuilder message = new StringBuilder(
+ "Trying to contact region server for regionName '" +
+ Bytes.toString(m.getRegionName()) + "', but failed after " +
+ (tries + 1) + " attempts.\n");
+ int i = 1;
+ for (IOException e2 : exceptions) {
+ message.append("Exception " + (i++) + ":\n" + e2 + "\n");
+ }
+ LOG.debug(message);
+ }
+ this.master.checkFileSystem();
+ throw e;
+ }
+ if (LOG.isDebugEnabled()) {
+ exceptions.add(e);
+ }
+ } catch (Exception e) {
+ throw new RuntimeException(e);
+ }
+ sleeper.sleep();
+ }
+ return null;
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/master/RootScanner.java b/src/java/org/apache/hadoop/hbase/master/RootScanner.java
new file mode 100644
index 0000000..2bdeefa
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/master/RootScanner.java
@@ -0,0 +1,81 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.RemoteExceptionHandler;
+
+/** Scanner for the <code>ROOT</code> HRegion. */
+class RootScanner extends BaseScanner {
+ /**
+ * Constructor
+ * @param master
+ */
+ public RootScanner(HMaster master) {
+ super(master, true, master.metaRescanInterval, master.shutdownRequested);
+ }
+
+ /**
+ * Don't retry if we get an error while scanning. Errors are most often
+ * caused by the server going away. Wait until next rescan interval when
+ * things should be back to normal.
+ * @return True if successfully scanned.
+ */
+ private boolean scanRoot() {
+ master.waitForRootRegionLocation();
+ if (master.closed.get()) {
+ return false;
+ }
+
+ try {
+ // Don't interrupt us while we're working
+ synchronized(scannerLock) {
+ if (master.getRootRegionLocation() != null) {
+ scanRegion(new MetaRegion(master.getRootRegionLocation(),
+ HRegionInfo.ROOT_REGIONINFO.getRegionName()));
+ }
+ }
+ } catch (IOException e) {
+ e = RemoteExceptionHandler.checkIOException(e);
+ LOG.warn("Scan ROOT region", e);
+ // Make sure the file system is still available
+ master.checkFileSystem();
+ } catch (Exception e) {
+ // If for some reason we get some other kind of exception,
+ // at least log it rather than go out silently.
+ LOG.error("Unexpected exception", e);
+ }
+ return true;
+ }
+
+ @Override
+ protected boolean initialScan() {
+ this.initialScanComplete = scanRoot();
+ return initialScanComplete;
+ }
+
+ @Override
+ protected void maintenanceScan() {
+ scanRoot();
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/master/ServerManager.java b/src/java/org/apache/hadoop/hbase/master/ServerManager.java
new file mode 100644
index 0000000..b9f3cff
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/master/ServerManager.java
@@ -0,0 +1,791 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+import java.util.Map;
+import java.util.SortedMap;
+import java.util.TreeMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.ArrayList;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.Collections;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import org.apache.hadoop.hbase.Chore;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.HServerLoad;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HMsg;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.Leases;
+import org.apache.hadoop.hbase.HMsg.Type;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper;
+import org.apache.zookeeper.WatchedEvent;
+import org.apache.zookeeper.Watcher;
+import org.apache.zookeeper.Watcher.Event.EventType;
+
+/**
+ * The ServerManager class manages info about region servers - HServerInfo,
+ * load numbers, dying servers, etc.
+ */
+class ServerManager implements HConstants {
+ static final Log LOG =
+ LogFactory.getLog(ServerManager.class.getName());
+ private static final HMsg REGIONSERVER_QUIESCE =
+ new HMsg(Type.MSG_REGIONSERVER_QUIESCE);
+ private static final HMsg REGIONSERVER_STOP =
+ new HMsg(Type.MSG_REGIONSERVER_STOP);
+ private static final HMsg CALL_SERVER_STARTUP =
+ new HMsg(Type.MSG_CALL_SERVER_STARTUP);
+ private static final HMsg [] EMPTY_HMSG_ARRAY = new HMsg[0];
+
+ private final AtomicInteger quiescedServers = new AtomicInteger(0);
+ private final ZooKeeperWrapper zooKeeperWrapper;
+
+ /** The map of known server names to server info */
+ final Map<String, HServerInfo> serversToServerInfo =
+ new ConcurrentHashMap<String, HServerInfo>();
+
+ final Map<HServerAddress, HServerInfo> serverAddressToServerInfo =
+ new ConcurrentHashMap<HServerAddress, HServerInfo>();
+
+ /**
+ * Set of known dead servers. On znode expiration, servers are added here.
+ * This is needed in case of a network partitioning where the server's lease
+ * expires, but the server is still running. After the network is healed,
+ * and its server logs are recovered, it will be told to call server startup
+ * because by then, its regions have probably been reassigned.
+ */
+ protected final Set<String> deadServers =
+ Collections.synchronizedSet(new HashSet<String>());
+
+ /** SortedMap server load -> Set of server names */
+ final SortedMap<HServerLoad, Set<String>> loadToServers =
+ Collections.synchronizedSortedMap(new TreeMap<HServerLoad, Set<String>>());
+
+ /** Map of server names -> server load */
+ final Map<String, HServerLoad> serversToLoad =
+ new ConcurrentHashMap<String, HServerLoad>();
+
+ protected HMaster master;
+
+ /* The regionserver will not be assigned or asked to close regions if it
+ * is currently opening >= this many regions.
+ */
+ private final int nobalancingCount;
+
+ class ServerMonitor extends Chore {
+
+ ServerMonitor(final int period, final AtomicBoolean stop) {
+ super(period, stop);
+ }
+
+ protected void chore() {
+ int numServers = serverAddressToServerInfo.size();
+ int numDeadServers = deadServers.size();
+ double averageLoad = getAverageLoad();
+ LOG.info(numServers + " region servers, " + numDeadServers +
+ " dead, average load " + averageLoad);
+ if (numDeadServers > 0) {
+ LOG.info("DEAD [");
+ for (String server: deadServers) {
+ LOG.info(" " + server);
+ }
+ LOG.info("]");
+ }
+ }
+
+ }
+
+ ServerMonitor serverMonitorThread;
+
+ /**
+ * @param master
+ */
+ public ServerManager(HMaster master) {
+ this.master = master;
+ zooKeeperWrapper = master.getZooKeeperWrapper();
+ this.nobalancingCount = master.getConfiguration().
+ getInt("hbase.regions.nobalancing.count", 4);
+ serverMonitorThread = new ServerMonitor(master.metaRescanInterval,
+ master.shutdownRequested);
+ serverMonitorThread.start();
+ }
+
+ /**
+ * Let the server manager know a new regionserver has come online
+ * @param serverInfo
+ * @throws Leases.LeaseStillHeldException
+ */
+ public void regionServerStartup(final HServerInfo serverInfo)
+ throws Leases.LeaseStillHeldException {
+ HServerInfo info = new HServerInfo(serverInfo);
+ String serverName = HServerInfo.getServerName(info);
+ if (serversToServerInfo.containsKey(serverName) ||
+ deadServers.contains(serverName)) {
+ LOG.debug("Server start was rejected: " + serverInfo);
+ LOG.debug("serversToServerInfo.containsKey: " + serversToServerInfo.containsKey(serverName));
+ LOG.debug("deadServers.contains: " + deadServers.contains(serverName));
+ throw new Leases.LeaseStillHeldException(serverName);
+ }
+ Watcher watcher = new ServerExpirer(serverName, info.getServerAddress());
+ zooKeeperWrapper.updateRSLocationGetWatch(info, watcher);
+
+ LOG.info("Received start message from: " + serverName);
+ // Go on to process the regionserver registration.
+ HServerLoad load = serversToLoad.remove(serverName);
+ if (load != null) {
+ // The startup message was from a known server.
+ // Remove stale information about the server's load.
+ synchronized (loadToServers) {
+ Set<String> servers = loadToServers.get(load);
+ if (servers != null) {
+ servers.remove(serverName);
+ loadToServers.put(load, servers);
+ }
+ }
+ }
+ HServerInfo storedInfo = serversToServerInfo.remove(serverName);
+ if (storedInfo != null && !master.closed.get()) {
+ // The startup message was from a known server with the same name.
+ // Timeout the old one right away.
+ HServerAddress root = master.getRootRegionLocation();
+ boolean rootServer = false;
+ if (root != null && root.equals(storedInfo.getServerAddress())) {
+ master.regionManager.unsetRootRegion();
+ rootServer = true;
+ }
+ try {
+ master.toDoQueue.put(
+ new ProcessServerShutdown(master, storedInfo, rootServer));
+ } catch (InterruptedException e) {
+ LOG.error("Insertion into toDoQueue was interrupted", e);
+ }
+ }
+ // record new server
+ load = new HServerLoad();
+ info.setLoad(load);
+ serversToServerInfo.put(serverName, info);
+ serverAddressToServerInfo.put(info.getServerAddress(), info);
+ serversToLoad.put(serverName, load);
+ synchronized (loadToServers) {
+ Set<String> servers = loadToServers.get(load);
+ if (servers == null) {
+ servers = new HashSet<String>();
+ }
+ servers.add(serverName);
+ loadToServers.put(load, servers);
+ }
+ }
+
+ /**
+ * Called to process the messages sent from the region server to the master
+ * along with the heart beat.
+ *
+ * @param serverInfo
+ * @param msgs
+ * @param mostLoadedRegions Array of regions the region server is submitting
+ * as candidates to be rebalanced, should it be overloaded
+ * @return messages from master to region server indicating what region
+ * server should do.
+ *
+ * @throws IOException
+ */
+ public HMsg [] regionServerReport(final HServerInfo serverInfo,
+ final HMsg msgs[], final HRegionInfo[] mostLoadedRegions)
+ throws IOException {
+ HServerInfo info = new HServerInfo(serverInfo);
+ if (isDead(info.getServerName())) {
+ throw new Leases.LeaseStillHeldException(info.getServerName());
+ }
+ if (msgs.length > 0) {
+ if (msgs[0].isType(HMsg.Type.MSG_REPORT_EXITING)) {
+ processRegionServerExit(info, msgs);
+ return EMPTY_HMSG_ARRAY;
+ } else if (msgs[0].isType(HMsg.Type.MSG_REPORT_QUIESCED)) {
+ LOG.info("Region server " + info.getServerName() + " quiesced");
+ quiescedServers.incrementAndGet();
+ }
+ }
+
+ if (master.shutdownRequested.get()) {
+ if(quiescedServers.get() >= serversToServerInfo.size()) {
+ // If every server we know about has quiesced (is serving only
+ // catalog regions), then we can proceed with shutdown
+ LOG.info("All user tables quiesced. Proceeding with shutdown");
+ master.startShutdown();
+ }
+
+ if (!master.closed.get()) {
+ if (msgs.length > 0 &&
+ msgs[0].isType(HMsg.Type.MSG_REPORT_QUIESCED)) {
+ // Server is already quiesced, but we aren't ready to shut down
+ // return empty response
+ return EMPTY_HMSG_ARRAY;
+ }
+ // Tell the server to stop serving any user regions
+ return new HMsg [] {REGIONSERVER_QUIESCE};
+ }
+ }
+
+ if (master.closed.get()) {
+ // Tell server to shut down if we are shutting down. This should
+ // happen after check of MSG_REPORT_EXITING above, since region server
+ // will send us one of these messages after it gets MSG_REGIONSERVER_STOP
+ return new HMsg [] {REGIONSERVER_STOP};
+ }
+
+ HServerInfo storedInfo = serversToServerInfo.get(info.getServerName());
+ if (storedInfo == null) {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("received server report from unknown server: " +
+ info.getServerName());
+ }
+
+ // The HBaseMaster may have been restarted.
+ // Tell the RegionServer to start over and call regionServerStartup()
+ return new HMsg[]{CALL_SERVER_STARTUP};
+ } else if (storedInfo.getStartCode() != info.getStartCode()) {
+ // This state is reachable if:
+ //
+ // 1) RegionServer A started
+ // 2) RegionServer B started on the same machine, then
+ // clobbered A in regionServerStartup.
+ // 3) RegionServer A returns, expecting to work as usual.
+ //
+ // The answer is to ask A to shut down for good.
+
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("region server race condition detected: " +
+ info.getServerName());
+ }
+
+ synchronized (serversToServerInfo) {
+ removeServerInfo(info.getServerName(), info.getServerAddress());
+ serversToServerInfo.notifyAll();
+ }
+
+ return new HMsg[]{REGIONSERVER_STOP};
+ } else {
+ return processRegionServerAllsWell(info, mostLoadedRegions, msgs);
+ }
+ }
+
+ /** Region server is exiting */
+ private void processRegionServerExit(HServerInfo serverInfo, HMsg[] msgs) {
+ synchronized (serversToServerInfo) {
+ try {
+ // HRegionServer is shutting down.
+ if (removeServerInfo(serverInfo.getServerName(),
+ serverInfo.getServerAddress())) {
+ // Only process the exit message if the server still has registered info.
+ // Otherwise we could end up processing the server exit twice.
+ LOG.info("Region server " + serverInfo.getServerName() +
+ ": MSG_REPORT_EXITING");
+ // Get all the regions the server was serving reassigned
+ // (if we are not shutting down).
+ if (!master.closed.get()) {
+ for (int i = 1; i < msgs.length; i++) {
+ LOG.info("Processing " + msgs[i] + " from " +
+ serverInfo.getServerName());
+ HRegionInfo info = msgs[i].getRegionInfo();
+ synchronized (master.regionManager) {
+ if (info.isRootRegion()) {
+ master.regionManager.reassignRootRegion();
+ } else {
+ if (info.isMetaTable()) {
+ master.regionManager.offlineMetaRegion(info.getStartKey());
+ }
+ if (!master.regionManager.isOfflined(
+ info.getRegionNameAsString())) {
+ master.regionManager.setUnassigned(info, true);
+ } else {
+ master.regionManager.removeRegion(info);
+ }
+ }
+ }
+ }
+ }
+ }
+ // We don't need to return anything to the server because it isn't
+ // going to do any more work.
+ } finally {
+ serversToServerInfo.notifyAll();
+ }
+ }
+ }
+
+ /**
+ * RegionServer is checking in, no exceptional circumstances
+ * @param serverInfo
+ * @param mostLoadedRegions
+ * @param msgs
+ * @return messages to send back to the region server
+ * @throws IOException
+ */
+ private HMsg[] processRegionServerAllsWell(HServerInfo serverInfo, HRegionInfo[] mostLoadedRegions, HMsg[] msgs)
+ throws IOException {
+
+ // Refresh the info object and the load information
+ serverAddressToServerInfo.put(serverInfo.getServerAddress(), serverInfo);
+ serversToServerInfo.put(serverInfo.getServerName(), serverInfo);
+
+ HServerLoad load = serversToLoad.get(serverInfo.getServerName());
+ if (load != null) {
+ this.master.getMetrics().incrementRequests(load.getNumberOfRequests());
+ if (!load.equals(serverInfo.getLoad())) {
+ // We have previous information about the load on this server
+ // and the load on this server has changed
+ synchronized (loadToServers) {
+ Set<String> servers = loadToServers.get(load);
+ // Note that servers should never be null because loadToServers
+ // and serversToLoad are manipulated in pairs
+ servers.remove(serverInfo.getServerName());
+ loadToServers.put(load, servers);
+ }
+ }
+ }
+
+ // Set the current load information
+ load = serverInfo.getLoad();
+ serversToLoad.put(serverInfo.getServerName(), load);
+ synchronized (loadToServers) {
+ Set<String> servers = loadToServers.get(load);
+ if (servers == null) {
+ servers = new HashSet<String>();
+ }
+ servers.add(serverInfo.getServerName());
+ loadToServers.put(load, servers);
+ }
+
+ // Next, process messages for this server
+ return processMsgs(serverInfo, mostLoadedRegions, msgs);
+ }
+
+ /**
+ * Process all the incoming messages from a server that's contacted us.
+ *
+ * Note that we never need to update the server's load information because
+ * that has already been done in regionServerReport.
+ */
+ private HMsg[] processMsgs(HServerInfo serverInfo,
+ HRegionInfo[] mostLoadedRegions, HMsg incomingMsgs[])
+ throws IOException {
+ ArrayList<HMsg> returnMsgs = new ArrayList<HMsg>();
+ if (serverInfo.getServerAddress() == null) {
+ throw new NullPointerException("Server address cannot be null; " +
+ "hbase-958 debugging");
+ }
+ // Get reports on what the RegionServer did.
+ int openingCount = 0;
+ for (int i = 0; i < incomingMsgs.length; i++) {
+ HRegionInfo region = incomingMsgs[i].getRegionInfo();
+ LOG.info("Received " + incomingMsgs[i] + " from " +
+ serverInfo.getServerName());
+ switch (incomingMsgs[i].getType()) {
+ case MSG_REPORT_PROCESS_OPEN:
+ openingCount++;
+ break;
+
+ case MSG_REPORT_OPEN:
+ processRegionOpen(serverInfo, region, returnMsgs);
+ break;
+
+ case MSG_REPORT_CLOSE:
+ processRegionClose(region);
+ break;
+
+ case MSG_REPORT_SPLIT:
+ processSplitRegion(region, incomingMsgs[++i], incomingMsgs[++i],
+ returnMsgs);
+ break;
+
+ default:
+ throw new IOException(
+ "Impossible state during message processing. Instruction: " +
+ incomingMsgs[i].getType());
+ }
+ }
+
+ synchronized (master.regionManager) {
+ // Tell the region server to close regions that we have marked for closing.
+ for (HRegionInfo i:
+ master.regionManager.getMarkedToClose(serverInfo.getServerName())) {
+ returnMsgs.add(new HMsg(HMsg.Type.MSG_REGION_CLOSE, i));
+ // Transition the region from toClose to closing state
+ master.regionManager.setPendingClose(i.getRegionNameAsString());
+ }
+
+ // Figure out what the RegionServer ought to do, and write back.
+
+ // Should we tell it to close regions because it's overloaded? If it's
+ // currently opening regions, leave it alone till all are open.
+ if (openingCount < this.nobalancingCount) {
+ this.master.regionManager.assignRegions(serverInfo, mostLoadedRegions,
+ returnMsgs);
+ }
+ // Send any pending table actions.
+ this.master.regionManager.applyActions(serverInfo, returnMsgs);
+ }
+ return returnMsgs.toArray(new HMsg[returnMsgs.size()]);
+ }
+
+ /**
+ * A region has split.
+ *
+ * @param region
+ * @param splitA
+ * @param splitB
+ * @param returnMsgs
+ */
+ private void processSplitRegion(HRegionInfo region, HMsg splitA, HMsg splitB,
+ ArrayList<HMsg> returnMsgs) {
+
+ synchronized (master.regionManager) {
+ // Cancel any actions pending for the affected region.
+ // This prevents the master from sending a SPLIT message if the table
+ // has already been split by the region server.
+ master.regionManager.endActions(region.getRegionName());
+
+ HRegionInfo newRegionA = splitA.getRegionInfo();
+ master.regionManager.setUnassigned(newRegionA, false);
+
+ HRegionInfo newRegionB = splitB.getRegionInfo();
+ master.regionManager.setUnassigned(newRegionB, false);
+
+ if (region.isMetaTable()) {
+ // A meta region has split.
+ master.regionManager.offlineMetaRegion(region.getStartKey());
+ master.regionManager.incrementNumMetaRegions();
+ }
+ }
+ }
+
+ /** Region server is reporting that a region is now opened */
+ private void processRegionOpen(HServerInfo serverInfo,
+ HRegionInfo region, ArrayList<HMsg> returnMsgs)
+ throws IOException {
+ boolean duplicateAssignment = false;
+ synchronized (master.regionManager) {
+ if (!master.regionManager.isUnassigned(region) &&
+ !master.regionManager.isPendingOpen(region.getRegionNameAsString())) {
+ if (region.isRootRegion()) {
+ // Root region
+ HServerAddress rootServer = master.getRootRegionLocation();
+ if (rootServer != null) {
+ if (rootServer.compareTo(serverInfo.getServerAddress()) == 0) {
+ // A duplicate open report from the correct server
+ return;
+ }
+ // We received an open report on the root region, but it is
+ // assigned to a different server
+ duplicateAssignment = true;
+ }
+ } else {
+ // Not root region. If it is not a pending region, then we are
+ // going to treat it as a duplicate assignment, although we can't
+ // tell for certain that's the case.
+ if (master.regionManager.isPendingOpen(
+ region.getRegionNameAsString())) {
+ // A duplicate report from the correct server
+ return;
+ }
+ duplicateAssignment = true;
+ }
+ }
+
+ if (duplicateAssignment) {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("region server " + serverInfo.getServerAddress().toString()
+ + " should not have opened region " + Bytes.toString(region.getRegionName()));
+ }
+
+ // This Region should not have been opened.
+ // Ask the server to shut it down, but don't report it as closed.
+ // Otherwise the HMaster will think the Region was closed on purpose,
+ // and then try to reopen it elsewhere; that's not what we want.
+ returnMsgs.add(new HMsg(HMsg.Type.MSG_REGION_CLOSE_WITHOUT_REPORT,
+ region, "Duplicate assignment".getBytes()));
+ } else {
+ if (region.isRootRegion()) {
+ // it was assigned, and it's not a duplicate assignment, so take it out
+ // of the unassigned list.
+ master.regionManager.removeRegion(region);
+
+ // Store the Root Region location (in memory)
+ HServerAddress rootServer = serverInfo.getServerAddress();
+ if (master.regionManager.inSafeMode()) {
+ master.connection.setRootRegionLocation(
+ new HRegionLocation(region, rootServer));
+ }
+ master.regionManager.setRootRegionLocation(rootServer);
+ } else {
+ // Note that the table has been assigned and is waiting for the
+ // meta table to be updated.
+ master.regionManager.setOpen(region.getRegionNameAsString());
+ // Queue up an update to note the region location.
+ try {
+ master.toDoQueue.put(
+ new ProcessRegionOpen(master, serverInfo, region));
+ } catch (InterruptedException e) {
+ throw new RuntimeException(
+ "Putting into toDoQueue was interrupted.", e);
+ }
+ }
+ }
+ }
+ }
+
+ private void processRegionClose(HRegionInfo region) {
+ synchronized (master.regionManager) {
+ if (region.isRootRegion()) {
+ // Root region
+ master.regionManager.unsetRootRegion();
+ if (region.isOffline()) {
+ // Can't proceed without root region. Shutdown.
+ LOG.fatal("root region is marked offline");
+ master.shutdown();
+ return;
+ }
+
+ } else if (region.isMetaTable()) {
+ // Region is part of the meta table. Remove it from onlineMetaRegions
+ master.regionManager.offlineMetaRegion(region.getStartKey());
+ }
+
+ boolean offlineRegion =
+ master.regionManager.isOfflined(region.getRegionNameAsString());
+ boolean reassignRegion = !region.isOffline() && !offlineRegion;
+
+ // NOTE: If the region was just being closed and not offlined, we cannot
+ // mark the region as unassigned, as that changes the ordering of
+ // the messages we've received. In this case, a close could be
+ // processed before an open resulting in the master not agreeing on
+ // the region's state.
+ master.regionManager.setClosed(region.getRegionNameAsString());
+ try {
+ master.toDoQueue.put(new ProcessRegionClose(master, region,
+ offlineRegion, reassignRegion));
+ } catch (InterruptedException e) {
+ throw new RuntimeException("Putting into toDoQueue was interrupted.", e);
+ }
+ }
+ }
+
+ /** Update a server's load information because it is shutting down */
+ private boolean removeServerInfo(final String serverName,
+ final HServerAddress serverAddress) {
+ boolean infoUpdated = false;
+ serverAddressToServerInfo.remove(serverAddress);
+ HServerInfo info = serversToServerInfo.remove(serverName);
+ // Only update load information once.
+ // This method can be called a couple of times during shutdown.
+ if (info != null) {
+ LOG.info("Removing server's info " + serverName);
+ if (master.getRootRegionLocation() != null &&
+ info.getServerAddress().equals(master.getRootRegionLocation())) {
+ master.regionManager.unsetRootRegion();
+ }
+ infoUpdated = true;
+
+ // update load information
+ HServerLoad load = serversToLoad.remove(serverName);
+ if (load != null) {
+ synchronized (loadToServers) {
+ Set<String> servers = loadToServers.get(load);
+ if (servers != null) {
+ servers.remove(serverName);
+ loadToServers.put(load, servers);
+ }
+ }
+ }
+ }
+ return infoUpdated;
+ }
+
+ /**
+ * Compute the average load across all region servers.
+ * Currently, this uses a very naive computation: it just counts the number
+ * of regions being served, ignoring stats about the number of requests.
+ * @return the average load
+ */
+ public double getAverageLoad() {
+ int totalLoad = 0;
+ int numServers = 0;
+ double averageLoad = 0.0;
+ synchronized (serversToLoad) {
+ numServers = serversToLoad.size();
+ for (HServerLoad load : serversToLoad.values()) {
+ totalLoad += load.getNumberOfRegions();
+ }
+ averageLoad = Math.ceil((double)totalLoad / (double)numServers);
+ }
+ return averageLoad;
+ }
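+ // Worked example of the naive average above: three servers carrying 10, 12
+ // and 15 regions yield ceil((10 + 12 + 15) / 3.0) = ceil(12.33) = 13.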
+
+ /** @return the number of active servers */
+ public int numServers() {
+ return serversToServerInfo.size();
+ }
+
+ /**
+ * @param name server name
+ * @return HServerInfo for the given server name
+ */
+ public HServerInfo getServerInfo(String name) {
+ return serversToServerInfo.get(name);
+ }
+
+ /**
+ * @return Read-only map of servers to serverinfo.
+ */
+ public Map<String, HServerInfo> getServersToServerInfo() {
+ synchronized (serversToServerInfo) {
+ return Collections.unmodifiableMap(serversToServerInfo);
+ }
+ }
+
+ public Map<HServerAddress, HServerInfo> getServerAddressToServerInfo() {
+ // Synchronize on serversToServerInfo: all puts to serverAddressToServerInfo
+ // are made in step with puts to serversToServerInfo.
+ synchronized (serversToServerInfo) {
+ return Collections.unmodifiableMap(serverAddressToServerInfo);
+ }
+ }
+
+ /**
+ * @return Read-only map of servers to load.
+ */
+ public Map<String, HServerLoad> getServersToLoad() {
+ synchronized (serversToLoad) {
+ return Collections.unmodifiableMap(serversToLoad);
+ }
+ }
+
+ /**
+ * Wakes up threads waiting on serversToServerInfo
+ */
+ public void notifyServers() {
+ synchronized (serversToServerInfo) {
+ serversToServerInfo.notifyAll();
+ }
+ }
+
+ /*
+ * Wait on regionservers to report in with
+ * {@link #regionServerReport(HServerInfo, HMsg[], HRegionInfo[])} so they get
+ * notified that the master is going down. Waits until all known region
+ * servers have gone down.
+ */
+ void letRegionServersShutdown() {
+ if (!master.fsOk) {
+ // Forget waiting for the region servers if the file system has gone
+ // away. Just exit as quickly as possible.
+ return;
+ }
+ synchronized (serversToServerInfo) {
+ while (serversToServerInfo.size() > 0) {
+ LOG.info("Waiting on following regionserver(s) to go down " +
+ serversToServerInfo.values());
+ try {
+ serversToServerInfo.wait(master.threadWakeFrequency);
+ } catch (InterruptedException e) {
+ // continue
+ }
+ }
+ }
+ }
+
+ /** Watcher triggered when a RS znode is deleted */
+ private class ServerExpirer implements Watcher {
+ private String server;
+ private HServerAddress serverAddress;
+
+ ServerExpirer(String server, HServerAddress serverAddress) {
+ this.server = server;
+ this.serverAddress = serverAddress;
+ }
+
+ public void process(WatchedEvent event) {
+ if(event.getType().equals(EventType.NodeDeleted)) {
+ LOG.info(server + " znode expired");
+ // Remove the server from the known servers list and update load info
+ serverAddressToServerInfo.remove(serverAddress);
+ HServerInfo info = serversToServerInfo.remove(server);
+ boolean rootServer = false;
+ if (info != null) {
+ HServerAddress root = master.getRootRegionLocation();
+ if (root != null && root.equals(info.getServerAddress())) {
+ // NOTE: If the server was serving the root region, we cannot reassign it
+ // here because the new server will start serving the root region before
+ // ProcessServerShutdown has a chance to split the log file.
+ master.regionManager.unsetRootRegion();
+ rootServer = true;
+ }
+ String serverName = HServerInfo.getServerName(info);
+ HServerLoad load = serversToLoad.remove(serverName);
+ if (load != null) {
+ synchronized (loadToServers) {
+ Set<String> servers = loadToServers.get(load);
+ if (servers != null) {
+ servers.remove(serverName);
+ loadToServers.put(load, servers);
+ }
+ }
+ }
+ deadServers.add(server);
+ try {
+ master.toDoQueue.put(new ProcessServerShutdown(master, info,
+ rootServer));
+ } catch (InterruptedException e) {
+ LOG.error("insert into toDoQueue was interrupted", e);
+ }
+ }
+ synchronized (serversToServerInfo) {
+ serversToServerInfo.notifyAll();
+ }
+ }
+ }
+ }
+
+ /**
+ * @param serverName
+ */
+ public void removeDeadServer(String serverName) {
+ deadServers.remove(serverName);
+ }
+
+ /**
+ * @param serverName
+ * @return true if server is dead
+ */
+ public boolean isDead(String serverName) {
+ return deadServers.contains(serverName);
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/master/TableDelete.java b/src/java/org/apache/hadoop/hbase/master/TableDelete.java
new file mode 100644
index 0000000..526fe32
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/master/TableDelete.java
@@ -0,0 +1,74 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.RemoteExceptionHandler;
+import org.apache.hadoop.hbase.TableNotDisabledException;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Instantiated to delete a table. Table must be offline.
+ */
+class TableDelete extends TableOperation {
+ private final Log LOG = LogFactory.getLog(this.getClass());
+
+ TableDelete(final HMaster master, final byte [] tableName) throws IOException {
+ super(master, tableName);
+ }
+
+ @Override
+ protected void processScanItem(String serverName,
+ final HRegionInfo info) throws IOException {
+ if (isEnabled(info)) {
+ throw new TableNotDisabledException(tableName);
+ }
+ }
+
+ @Override
+ protected void postProcessMeta(MetaRegion m, HRegionInterface server)
+ throws IOException {
+ for (HRegionInfo i: unservedRegions) {
+ if (!Bytes.equals(this.tableName, i.getTableDesc().getName())) {
+ // Don't delete regions that are not from our table.
+ continue;
+ }
+ // Delete the region
+ try {
+ HRegion.removeRegionFromMETA(server, m.getRegionName(), i.getRegionName());
+ HRegion.deleteRegion(this.master.fs, this.master.rootdir, i);
+
+ } catch (IOException e) {
+ LOG.error("failed to delete region " + Bytes.toString(i.getRegionName()),
+ RemoteExceptionHandler.checkIOException(e));
+ }
+ }
+
+ // delete the table's folder from fs.
+ master.fs.delete(new Path(master.rootdir, Bytes.toString(tableName)), true);
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/master/TableOperation.java b/src/java/org/apache/hadoop/hbase/master/TableOperation.java
new file mode 100644
index 0000000..2127bd0
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/master/TableOperation.java
@@ -0,0 +1,176 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.MasterNotRunningException;
+import org.apache.hadoop.hbase.RemoteExceptionHandler;
+import org.apache.hadoop.hbase.TableNotFoundException;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Writables;
+
+/**
+ * Abstract base class for operations that need to examine all HRegionInfo
+ * objects in a table. (For a table, operate on each of its rows
+ * in .META.).
+ */
+abstract class TableOperation implements HConstants {
+ private final Set<MetaRegion> metaRegions;
+ protected final byte [] tableName;
+ protected final Set<HRegionInfo> unservedRegions = new HashSet<HRegionInfo>();
+ protected HMaster master;
+
+ protected TableOperation(final HMaster master, final byte [] tableName)
+ throws IOException {
+ this.master = master;
+ if (!this.master.isMasterRunning()) {
+ throw new MasterNotRunningException();
+ }
+ // add the delimiters.
+ // TODO maybe check if this is necessary?
+ this.tableName = tableName;
+
+ // Don't wait for the META table to come online if we're enabling it
+ if (!Bytes.equals(HConstants.META_TABLE_NAME, this.tableName)) {
+ // We can not access any meta region if they have not already been
+ // assigned and scanned.
+ if (master.regionManager.metaScannerThread.waitForMetaRegionsOrClose()) {
+ // We're shutting down. Forget it.
+ throw new MasterNotRunningException();
+ }
+ }
+ this.metaRegions = master.regionManager.getMetaRegionsForTable(tableName);
+ }
+
+ private class ProcessTableOperation extends RetryableMetaOperation<Boolean> {
+ ProcessTableOperation(MetaRegion m, HMaster master) {
+ super(m, master);
+ }
+
+ public Boolean call() throws IOException {
+ boolean tableExists = false;
+
+ // Open a scanner on the meta region
+ byte [] tableNameMetaStart =
+ Bytes.toBytes(Bytes.toString(tableName) + ",,");
+
+ long scannerId = server.openScanner(m.getRegionName(),
+ COLUMN_FAMILY_ARRAY, tableNameMetaStart, HConstants.LATEST_TIMESTAMP, null);
+
+ List<byte []> emptyRows = new ArrayList<byte []>();
+ try {
+ while (true) {
+ RowResult values = server.next(scannerId);
+ if(values == null || values.size() == 0) {
+ break;
+ }
+ HRegionInfo info = this.master.getHRegionInfo(values.getRow(), values);
+ if (info == null) {
+ emptyRows.add(values.getRow());
+ LOG.error(Bytes.toString(COL_REGIONINFO) + " not found on " +
+ Bytes.toString(values.getRow()));
+ continue;
+ }
+ String serverAddress = Writables.cellToString(values.get(COL_SERVER));
+ long startCode = Writables.cellToLong(values.get(COL_STARTCODE));
+ String serverName = null;
+ if (serverAddress != null && serverAddress.length() > 0) {
+ serverName = HServerInfo.getServerName(serverAddress, startCode);
+ }
+ if (Bytes.compareTo(info.getTableDesc().getName(), tableName) > 0) {
+ break; // Beyond any more entries for this table
+ }
+
+ tableExists = true;
+ if (!isBeingServed(serverName) || !isEnabled(info)) {
+ unservedRegions.add(info);
+ }
+ processScanItem(serverName, info);
+ }
+ } finally {
+ if (scannerId != -1L) {
+ try {
+ server.close(scannerId);
+ } catch (IOException e) {
+ e = RemoteExceptionHandler.checkIOException(e);
+ LOG.error("closing scanner", e);
+ }
+ }
+ scannerId = -1L;
+ }
+
+ // Get rid of any rows that have a null HRegionInfo
+
+ if (emptyRows.size() > 0) {
+ LOG.warn("Found " + emptyRows.size() +
+ " rows with empty HRegionInfo while scanning meta region " +
+ Bytes.toString(m.getRegionName()));
+ master.deleteEmptyMetaRows(server, m.getRegionName(), emptyRows);
+ }
+
+ if (!tableExists) {
+ throw new TableNotFoundException(Bytes.toString(tableName));
+ }
+
+ postProcessMeta(m, server);
+ unservedRegions.clear();
+
+ return Boolean.TRUE;
+ }
+ }
+
+ void process() throws IOException {
+ // Prevent meta scanner from running
+ synchronized(master.regionManager.metaScannerThread.scannerLock) {
+ for (MetaRegion m: metaRegions) {
+ new ProcessTableOperation(m, master).doWithRetries();
+ }
+ }
+ }
+
+ protected boolean isBeingServed(String serverName) {
+ boolean result = false;
+ if (serverName != null && serverName.length() > 0) {
+ HServerInfo s = master.serverManager.getServerInfo(serverName);
+ result = s != null;
+ }
+ return result;
+ }
+
+ protected boolean isEnabled(HRegionInfo info) {
+ return !info.isOffline();
+ }
+
+ protected abstract void processScanItem(String serverName, HRegionInfo info)
+ throws IOException;
+
+ protected abstract void postProcessMeta(MetaRegion m,
+ HRegionInterface server) throws IOException;
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/master/ZKMasterAddressWatcher.java b/src/java/org/apache/hadoop/hbase/master/ZKMasterAddressWatcher.java
new file mode 100644
index 0000000..35f033c
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/master/ZKMasterAddressWatcher.java
@@ -0,0 +1,75 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper;
+import org.apache.zookeeper.WatchedEvent;
+import org.apache.zookeeper.Watcher;
+import org.apache.zookeeper.Watcher.Event.EventType;
+
+/**
+ * ZooKeeper watcher for the master address. Used by the HMaster to wait for
+ * the master address ZNode to be deleted. When multiple masters are brought
+ * up, they race to become master by writing their address to ZooKeeper.
+ * Whoever wins becomes the master, and the rest wait for that
+ * ephemeral node in ZooKeeper to get deleted (meaning the master went down), at
+ * which point they try to write to it again.
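+ *
+ * <p>A minimal usage sketch; <code>zooKeeper</code> stands for an existing
+ * ZooKeeperWrapper and the surrounding election logic is elided:
+ * <pre>
+ *   ZKMasterAddressWatcher watcher = new ZKMasterAddressWatcher(zooKeeper);
+ *   // Blocks until the current master's ephemeral ZNode disappears.
+ *   watcher.waitForMasterAddressAvailability();
+ *   // Now try to write our own address and take over as master.
+ * </pre>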
+ */
+public class ZKMasterAddressWatcher implements Watcher {
+ private static final Log LOG = LogFactory.getLog(ZKMasterAddressWatcher.class);
+
+ private final ZooKeeperWrapper zooKeeper;
+
+ /**
+ * Create a watcher with a ZooKeeperWrapper instance.
+ * @param zooKeeper ZooKeeperWrapper to use to talk to ZooKeeper.
+ */
+ public ZKMasterAddressWatcher(ZooKeeperWrapper zooKeeper) {
+ this.zooKeeper = zooKeeper;
+ }
+
+ /**
+ * @see org.apache.zookeeper.Watcher#process(org.apache.zookeeper.WatchedEvent)
+ */
+ @Override
+ public synchronized void process(WatchedEvent event) {
+ EventType type = event.getType();
+ if (type.equals(EventType.NodeDeleted)) {
+ LOG.debug("Master address ZNode deleted, notifying waiting masters");
+ notifyAll();
+ }
+ }
+
+ /**
+ * Wait for master address to be available. This sets a watch in ZooKeeper and
+ * blocks until the master address ZNode gets deleted.
+ */
+ public synchronized void waitForMasterAddressAvailability() {
+ while (zooKeeper.readMasterAddress(this) != null) {
+ try {
+ LOG.debug("Waiting for master address ZNode to be deleted");
+ wait();
+ } catch (InterruptedException e) {
+ // Interrupted while waiting; loop and re-check the master address ZNode.
+ }
+ }
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/master/metrics/MasterMetrics.java b/src/java/org/apache/hadoop/hbase/master/metrics/MasterMetrics.java
new file mode 100644
index 0000000..249deeb
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/master/metrics/MasterMetrics.java
@@ -0,0 +1,95 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master.metrics;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.metrics.MetricsContext;
+import org.apache.hadoop.metrics.MetricsRecord;
+import org.apache.hadoop.metrics.MetricsUtil;
+import org.apache.hadoop.metrics.Updater;
+import org.apache.hadoop.metrics.jvm.JvmMetrics;
+import org.apache.hadoop.metrics.util.MetricsIntValue;
+
+/**
+ * This class is for maintaining the various master statistics
+ * and publishing them through the metrics interfaces.
+ * <p>
+ * This class has a number of metrics variables that are publicly accessible;
+ * these variables (objects) have methods to update their values.
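+ * <p>
+ * A rough usage sketch; the caller and timing are illustrative only:
+ * <pre>
+ *   MasterMetrics metrics = new MasterMetrics();
+ *   metrics.incrementRequests(1);  // count one client request
+ *   // doUpdates(...) is invoked periodically by the metrics framework,
+ *   // which publishes the counter and resets it to zero.
+ * </pre>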
+ */
+public class MasterMetrics implements Updater {
+ private final Log LOG = LogFactory.getLog(this.getClass());
+ private final MetricsRecord metricsRecord;
+
+ /*
+ * Count of requests to the cluster since last call to metrics update
+ */
+ private final MetricsIntValue cluster_requests =
+ new MetricsIntValue("cluster_requests");
+
+ public MasterMetrics() {
+ MetricsContext context = MetricsUtil.getContext("hbase");
+ metricsRecord = MetricsUtil.createRecord(context, "master");
+ String name = Thread.currentThread().getName();
+ metricsRecord.setTag("Master", name);
+ context.registerUpdater(this);
+ JvmMetrics.init("Master", name);
+ LOG.info("Initialized");
+ }
+
+ public void shutdown() {
+ // nought to do.
+ }
+
+ /**
+ * Since this object is a registered updater, this method will be called
+ * periodically, e.g. every 5 seconds.
+ * @param unused
+ */
+ public void doUpdates(MetricsContext unused) {
+ synchronized (this) {
+ synchronized(this.cluster_requests) {
+ this.cluster_requests.pushMetric(metricsRecord);
+ // Set requests down to zero again.
+ this.cluster_requests.set(0);
+ }
+ }
+ this.metricsRecord.update();
+ }
+
+ public void resetAllMinMax() {
+ // Nothing to do
+ }
+
+ /**
+ * @return Count of requests.
+ */
+ public int getRequests() {
+ return this.cluster_requests.get();
+ }
+
+ /**
+ * @param inc How much to add to requests.
+ */
+ public void incrementRequests(final int inc) {
+ synchronized(this.cluster_requests) {
+ this.cluster_requests.set(this.cluster_requests.get() + inc);
+ }
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/metrics/MetricsRate.java b/src/java/org/apache/hadoop/hbase/metrics/MetricsRate.java
new file mode 100644
index 0000000..a649b91
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/metrics/MetricsRate.java
@@ -0,0 +1,73 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.metrics;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.metrics.MetricsRecord;
+import org.apache.hadoop.util.StringUtils;
+
+/**
+ * Publishes a rate based on a counter - you increment the counter each
+ * time an event occurs (eg: an RPC call) and this publishes a rate.
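+ * <p>
+ * A small sketch of the intended use; <code>metricsRecord</code> is assumed
+ * to be a MetricsRecord owned by the caller:
+ * <pre>
+ *   MetricsRate requests = new MetricsRate("requests");
+ *   requests.inc();  // one event, e.g. one RPC call
+ *   // later, from the metrics updater:
+ *   requests.pushMetric(metricsRecord);  // publishes events/sec since last push
+ * </pre>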
+ */
+public class MetricsRate {
+ private static final Log LOG = LogFactory.getLog("org.apache.hadoop.hbase.metrics");
+
+ private String name;
+ private int value;
+ private float prevRate;
+ private long ts;
+
+ public MetricsRate(final String name) {
+ this.name = name;
+ this.value = 0;
+ this.prevRate = 0;
+ this.ts = System.currentTimeMillis();
+ }
+
+ public synchronized void inc(final int incr) {
+ value += incr;
+ }
+
+ public synchronized void inc() {
+ value++;
+ }
+
+ private synchronized void intervalHeartBeat() {
+ long now = System.currentTimeMillis();
+ long diff = (now-ts)/1000;
+ if (diff == 0) diff = 1; // avoid divide-by-zero when pushes are under a second apart
+ this.prevRate = (float)value / diff;
+ this.value = 0;
+ this.ts = now;
+ }
+
+ public synchronized void pushMetric(final MetricsRecord mr) {
+ intervalHeartBeat();
+ try {
+ mr.setMetric(name, getPreviousIntervalValue());
+ } catch (Exception e) {
+ LOG.info("pushMetric failed for " + name + "\n" +
+ StringUtils.stringifyException(e));
+ }
+ }
+
+ public synchronized float getPreviousIntervalValue() {
+ return this.prevRate;
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/metrics/file/TimeStampingFileContext.java b/src/java/org/apache/hadoop/hbase/metrics/file/TimeStampingFileContext.java
new file mode 100644
index 0000000..a5ffc6e
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/metrics/file/TimeStampingFileContext.java
@@ -0,0 +1,111 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.metrics.file;
+
+import java.io.BufferedOutputStream;
+import java.io.File;
+import java.io.FileWriter;
+import java.io.IOException;
+import java.io.PrintWriter;
+import java.text.SimpleDateFormat;
+import java.util.Date;
+
+import org.apache.hadoop.metrics.ContextFactory;
+import org.apache.hadoop.metrics.file.FileContext;
+import org.apache.hadoop.metrics.spi.OutputRecord;
+
+/**
+ * Add timestamp to {@link org.apache.hadoop.metrics.file.FileContext#emitRecord(String, String, OutputRecord)}.
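+ * <p>
+ * To use it, point the hbase metrics context at this class in
+ * hadoop-metrics.properties; the property names follow the usual FileContext
+ * conventions and the file path below is only an example:
+ * <pre>
+ *   hbase.class=org.apache.hadoop.hbase.metrics.file.TimeStampingFileContext
+ *   hbase.period=10
+ *   hbase.fileName=/tmp/hbase_metrics.log
+ * </pre>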
+ */
+public class TimeStampingFileContext extends FileContext {
+ // Duplicates a chunk of FileContext here because 'writer' and 'file' are
+ // private in the superclass.
+ private File file = null;
+ private PrintWriter writer = null;
+ private final SimpleDateFormat sdf;
+
+ public TimeStampingFileContext() {
+ super();
+ this.sdf = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss");
+ }
+
+ @Override
+ public void init(String contextName, ContextFactory factory) {
+ super.init(contextName, factory);
+ String fileName = getAttribute(FILE_NAME_PROPERTY);
+ if (fileName != null) {
+ file = new File(fileName);
+ }
+ }
+
+ @Override
+ public void startMonitoring() throws IOException {
+ if (file == null) {
+ writer = new PrintWriter(new BufferedOutputStream(System.out));
+ } else {
+ writer = new PrintWriter(new FileWriter(file, true));
+ }
+ super.startMonitoring();
+ }
+
+ @Override
+ public void stopMonitoring() {
+ super.stopMonitoring();
+ if (writer != null) {
+ writer.close();
+ writer = null;
+ }
+ }
+
+ private synchronized String iso8601() {
+ return this.sdf.format(new Date());
+ }
+
+ @Override
+ public void emitRecord(String contextName, String recordName,
+ OutputRecord outRec) {
+ writer.print(iso8601());
+ writer.print(" ");
+ writer.print(contextName);
+ writer.print(".");
+ writer.print(recordName);
+ String separator = ": ";
+ for (String tagName : outRec.getTagNames()) {
+ writer.print(separator);
+ separator = ", ";
+ writer.print(tagName);
+ writer.print("=");
+ writer.print(outRec.getTag(tagName));
+ }
+ for (String metricName : outRec.getMetricNames()) {
+ writer.print(separator);
+ separator = ", ";
+ writer.print(metricName);
+ writer.print("=");
+ writer.print(outRec.getMetric(metricName));
+ }
+ writer.println();
+ }
+
+ @Override
+ public void flush() {
+ writer.flush();
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/ChangedReadersObserver.java b/src/java/org/apache/hadoop/hbase/regionserver/ChangedReadersObserver.java
new file mode 100644
index 0000000..a8369ca
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/regionserver/ChangedReadersObserver.java
@@ -0,0 +1,35 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+
+
+/**
+ * If the set of MapFile.Readers in a Store changes, implementors are notified.
+ */
+public interface ChangedReadersObserver {
+ /**
+ * Notify observers.
+ * @throws IOException
+ */
+ void updateReaders() throws IOException;
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java b/src/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java
new file mode 100644
index 0000000..6e48bb5
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java
@@ -0,0 +1,251 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.locks.ReentrantLock;
+
+import org.apache.hadoop.util.StringUtils;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.RemoteExceptionHandler;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Writables;
+
+/**
+ * Compact region on request and then run split if appropriate
+ *
+ * NOTE: This class extends Thread rather than Chore because its sleep can be
+ * interrupted when there is something to do, whereas a Chore always sleeps for
+ * a fixed interval.
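+ *
+ * <p>Other region server components queue work here through
+ * <code>compactionRequested</code>; a rough sketch, with illustrative names:
+ * <pre>
+ *   // e.g. after a memcache flush adds a new store file to 'region':
+ *   compactSplitThread.compactionRequested(region, "memcache flush");
+ * </pre>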
+ */
+class CompactSplitThread extends Thread implements HConstants {
+ static final Log LOG = LogFactory.getLog(CompactSplitThread.class);
+
+ private HTable root = null;
+ private HTable meta = null;
+ private final long frequency;
+ private final ReentrantLock lock = new ReentrantLock();
+
+ private final HRegionServer server;
+ private final HBaseConfiguration conf;
+
+ private final BlockingQueue<HRegion> compactionQueue =
+ new LinkedBlockingQueue<HRegion>();
+
+ private final HashSet<HRegion> regionsInQueue = new HashSet<HRegion>();
+
+ private volatile int limit = 1;
+
+ /** @param server */
+ public CompactSplitThread(HRegionServer server) {
+ super();
+ this.server = server;
+ this.conf = server.conf;
+ this.frequency =
+ conf.getLong("hbase.regionserver.thread.splitcompactcheckfrequency",
+ 20 * 1000);
+ }
+
+ @Override
+ public void run() {
+ while (!this.server.isStopRequested() && this.server.isInSafeMode()) {
+ try {
+ Thread.sleep(this.frequency);
+ } catch (InterruptedException ex) {
+ continue;
+ }
+ }
+ int count = 0;
+ while (!this.server.isStopRequested()) {
+ HRegion r = null;
+ try {
+ if ((limit > 0) && (++count > limit)) {
+ try {
+ Thread.sleep(this.frequency);
+ } catch (InterruptedException ex) {
+ continue;
+ }
+ count = 0;
+ }
+ r = compactionQueue.poll(this.frequency, TimeUnit.MILLISECONDS);
+ if (r != null && !this.server.isStopRequested()) {
+ synchronized (regionsInQueue) {
+ regionsInQueue.remove(r);
+ }
+ lock.lock();
+ try {
+ // Don't interrupt us while we are working
+ byte [] midKey = r.compactStores();
+ if (midKey != null && !this.server.isStopRequested()) {
+ split(r, midKey);
+ }
+ } finally {
+ lock.unlock();
+ }
+ }
+ } catch (InterruptedException ex) {
+ continue;
+ } catch (IOException ex) {
+ LOG.error("Compaction/Split failed" +
+ (r != null ? (" for region " + Bytes.toString(r.getRegionName())) : ""),
+ RemoteExceptionHandler.checkIOException(ex));
+ if (!server.checkFileSystem()) {
+ break;
+ }
+ } catch (Exception ex) {
+ LOG.error("Compaction failed" +
+ (r != null ? (" for region " + Bytes.toString(r.getRegionName())) : ""),
+ ex);
+ if (!server.checkFileSystem()) {
+ break;
+ }
+ }
+ }
+ regionsInQueue.clear();
+ compactionQueue.clear();
+ LOG.info(getName() + " exiting");
+ }
+
+ /**
+ * @param r HRegion store belongs to
+ * @param why Why compaction requested -- used in debug messages
+ */
+ public synchronized void compactionRequested(final HRegion r,
+ final String why) {
+ compactionRequested(r, false, why);
+ }
+
+ /**
+ * @param r HRegion store belongs to
+ * @param force Whether next compaction should be major
+ * @param why Why compaction requested -- used in debug messages
+ */
+ public synchronized void compactionRequested(final HRegion r,
+ final boolean force, final String why) {
+ if (this.server.stopRequested.get()) {
+ return;
+ }
+ r.setForceMajorCompaction(force);
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Compaction " + (force? "(major) ": "") +
+ "requested for region " + Bytes.toString(r.getRegionName()) +
+ "/" + r.getRegionInfo().getEncodedName() +
+ (why != null && !why.isEmpty()? " because: " + why: ""));
+ }
+ synchronized (regionsInQueue) {
+ if (!regionsInQueue.contains(r)) {
+ compactionQueue.add(r);
+ regionsInQueue.add(r);
+ }
+ }
+ }
+
+ private void split(final HRegion region, final byte [] midKey)
+ throws IOException {
+ final HRegionInfo oldRegionInfo = region.getRegionInfo();
+ final long startTime = System.currentTimeMillis();
+ final HRegion[] newRegions = region.splitRegion(midKey);
+ if (newRegions == null) {
+ // Didn't need to be split
+ return;
+ }
+
+ // When a region is split, the META table needs to be updated if we're
+ // splitting a 'normal' region, and the ROOT table needs to be
+ // updated if we are splitting a META region.
+ HTable t = null;
+ if (region.getRegionInfo().isMetaTable()) {
+ // We need to update the root region
+ if (this.root == null) {
+ this.root = new HTable(conf, ROOT_TABLE_NAME);
+ }
+ t = root;
+ } else {
+ // For normal regions we need to update the meta region
+ if (meta == null) {
+ meta = new HTable(conf, META_TABLE_NAME);
+ }
+ t = meta;
+ }
+
+ // Mark old region as offline and split in META.
+ // NOTE: there is no need for retry logic here. HTable does it for us.
+ oldRegionInfo.setOffline(true);
+ oldRegionInfo.setSplit(true);
+ // Inform the HRegionServer that the parent HRegion is no-longer online.
+ this.server.removeFromOnlineRegions(oldRegionInfo);
+
+ BatchUpdate update = new BatchUpdate(oldRegionInfo.getRegionName());
+ update.put(COL_REGIONINFO, Writables.getBytes(oldRegionInfo));
+ update.put(COL_SPLITA, Writables.getBytes(newRegions[0].getRegionInfo()));
+ update.put(COL_SPLITB, Writables.getBytes(newRegions[1].getRegionInfo()));
+ t.commit(update);
+
+ // Add new regions to META
+ for (int i = 0; i < newRegions.length; i++) {
+ update = new BatchUpdate(newRegions[i].getRegionName());
+ update.put(COL_REGIONINFO, Writables.getBytes(
+ newRegions[i].getRegionInfo()));
+ t.commit(update);
+ }
+
+ // Now tell the master about the new regions
+ server.reportSplit(oldRegionInfo, newRegions[0].getRegionInfo(),
+ newRegions[1].getRegionInfo());
+ LOG.info("region split, META updated, and report to master all" +
+ " successful. Old region=" + oldRegionInfo.toString() +
+ ", new regions: " + newRegions[0].toString() + ", " +
+ newRegions[1].toString() + ". Split took " +
+ StringUtils.formatTimeDiff(System.currentTimeMillis(), startTime));
+
+ // Do not serve the new regions. Let the Master assign them.
+ }
+
+ /**
+ * Sets the number of compactions allowed per cycle.
+ * @param limit the number of compactions allowed per cycle, or -1 for no limit
+ */
+ void setLimit(int limit) {
+ this.limit = limit;
+ }
+
+ int getLimit() {
+ return this.limit;
+ }
+
+ /**
+ * Only interrupt once it's done with a run through the work loop.
+ */
+ void interruptIfNecessary() {
+ if (lock.tryLock()) {
+ this.interrupt();
+ }
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/FailedLogCloseException.java b/src/java/org/apache/hadoop/hbase/regionserver/FailedLogCloseException.java
new file mode 100644
index 0000000..cd2dfaf
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/regionserver/FailedLogCloseException.java
@@ -0,0 +1,38 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+
+/**
+ * Thrown when we fail close of the write-ahead-log file.
+ * Package private. Only used inside this package.
+ */
+class FailedLogCloseException extends IOException {
+ private static final long serialVersionUID = 1759152841462990925L;
+
+ public FailedLogCloseException() {
+ super();
+ }
+
+ public FailedLogCloseException(String arg0) {
+ super(arg0);
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/FlushRequester.java b/src/java/org/apache/hadoop/hbase/regionserver/FlushRequester.java
new file mode 100644
index 0000000..8e962a5
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/regionserver/FlushRequester.java
@@ -0,0 +1,36 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver;
+
+/**
+ * Implementors of this interface want to be notified when an HRegion
+ * determines that a cache flush is needed. A FlushRequester (or null)
+ * must be passed to the HRegion constructor so it knows who to call when it
+ * has a filled memcache.
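+ *
+ * <p>A trivial implementor, for illustration only:
+ * <pre>
+ *   FlushRequester requester = new FlushRequester() {
+ *     public void request(HRegion region) {
+ *       // queue 'region' for an asynchronous memcache flush
+ *     }
+ *   };
+ * </pre>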
+ */
+public interface FlushRequester {
+ /**
+ * Tell the listener the cache needs to be flushed.
+ *
+ * @param region the HRegion requesting the cache flush
+ */
+ void request(HRegion region);
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/HAbstractScanner.java b/src/java/org/apache/hadoop/hbase/regionserver/HAbstractScanner.java
new file mode 100644
index 0000000..84f42a0
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/regionserver/HAbstractScanner.java
@@ -0,0 +1,214 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.NavigableSet;
+import java.util.regex.Pattern;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.ColumnNameParseException;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Abstract base class that implements the InternalScanner.
+ */
+public abstract class HAbstractScanner implements InternalScanner {
+ final Log LOG = LogFactory.getLog(this.getClass().getName());
+
+ // Pattern to determine if a column key is a regex
+ static final Pattern isRegexPattern =
+ Pattern.compile("^.*[\\\\+|^&*$\\[\\]\\}{)(]+.*$");
+
+ /** The kind of match we are doing on a column: */
+ private static enum MATCH_TYPE {
+ /** Just check the column family name */
+ FAMILY_ONLY,
+ /** Column family + column key matched against a regex */
+ REGEX,
+ /** Literal matching */
+ SIMPLE
+ }
+
+ private final List<ColumnMatcher> matchers = new ArrayList<ColumnMatcher>();
+
+ // True when scanning is done
+ protected volatile boolean scannerClosed = false;
+
+ // The timestamp to match entries against
+ protected final long timestamp;
+
+ private boolean wildcardMatch = false;
+ private boolean multipleMatchers = false;
+
+ /** Constructor for abstract base class */
+ protected HAbstractScanner(final long timestamp,
+ final NavigableSet<byte []> columns)
+ throws IOException {
+ this.timestamp = timestamp;
+ for (byte [] column: columns) {
+ ColumnMatcher matcher = new ColumnMatcher(column);
+ this.wildcardMatch = matcher.isWildCardMatch();
+ matchers.add(matcher);
+ this.multipleMatchers = !matchers.isEmpty();
+ }
+ }
+
+ /**
+ * For a particular column, find all the matchers defined for the column.
+ * Compare the column family and column key using the matchers. The first one
+ * that matches returns true. If no matchers are successful, return false.
+ *
+ * @param kv KeyValue to test
+ * @return true if any of the matchers for the column match the column family
+ * and the column key.
+ *
+ * @throws IOException
+ */
+ protected boolean columnMatch(final KeyValue kv)
+ throws IOException {
+ if (matchers == null) {
+ return false;
+ }
+ for(int m = 0; m < this.matchers.size(); m++) {
+ if (this.matchers.get(m).matches(kv)) {
+ return true;
+ }
+ }
+ return false;
+ }
+
+ public boolean isWildcardScanner() {
+ return this.wildcardMatch;
+ }
+
+ public boolean isMultipleMatchScanner() {
+ return this.multipleMatchers;
+ }
+
+ public abstract boolean next(List<KeyValue> results)
+ throws IOException;
+
+ /**
+ * This class provides column matching functions that are more sophisticated
+ * than a simple string compare. There are three types of matching:
+ * <ol>
+ * <li>Match on the column family name only</li>
+ * <li>Match on the column family + column key regex</li>
+ * <li>Simple match: compare column family + column key literally</li>
+ * </ol>
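+ * <p>
+ * For example (the 'info' family is hypothetical):
+ * <pre>
+ *   "info:"      - FAMILY_ONLY: matches every column in family 'info'
+ *   "info:name"  - SIMPLE: literal compare of family plus qualifier
+ *   "info:na.*"  - REGEX: the qualifier contains regex metacharacters
+ * </pre>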
+ */
+ private static class ColumnMatcher {
+ private boolean wildCardmatch;
+ private MATCH_TYPE matchType;
+ private byte [] family;
+ private Pattern columnMatcher;
+ // Column without delimiter so easy compare to KeyValue column
+ private byte [] col;
+ private int familylength = 0;
+
+ ColumnMatcher(final byte [] col) throws IOException {
+ byte [][] parse = parseColumn(col);
+ // Make up column without delimiter
+ byte [] columnWithoutDelimiter =
+ new byte [parse[0].length + parse[1].length];
+ System.arraycopy(parse[0], 0, columnWithoutDelimiter, 0, parse[0].length);
+ System.arraycopy(parse[1], 0, columnWithoutDelimiter, parse[0].length,
+ parse[1].length);
+ // First position has family. Second has qualifier.
+ byte [] qualifier = parse[1];
+ try {
+ if (qualifier == null || qualifier.length == 0) {
+ this.matchType = MATCH_TYPE.FAMILY_ONLY;
+ this.family = parse[0];
+ this.wildCardmatch = true;
+ } else if (isRegexPattern.matcher(Bytes.toString(qualifier)).matches()) {
+ this.matchType = MATCH_TYPE.REGEX;
+ this.columnMatcher =
+ Pattern.compile(Bytes.toString(columnWithoutDelimiter));
+ this.wildCardmatch = true;
+ } else {
+ this.matchType = MATCH_TYPE.SIMPLE;
+ this.col = columnWithoutDelimiter;
+ this.familylength = parse[0].length;
+ this.wildCardmatch = false;
+ }
+ } catch(Exception e) {
+ throw new IOException("Column: " + Bytes.toString(col) + ": " +
+ e.getMessage());
+ }
+ }
+
+ /**
+ * @param kv KeyValue to test against this matcher
+ * @return true if <code>kv</code> matches this matcher's column specification
+ * @throws IOException
+ */
+ boolean matches(final KeyValue kv) throws IOException {
+ if (this.matchType == MATCH_TYPE.SIMPLE) {
+ return kv.matchingColumnNoDelimiter(this.col, this.familylength);
+ } else if(this.matchType == MATCH_TYPE.FAMILY_ONLY) {
+ return kv.matchingFamily(this.family);
+ } else if (this.matchType == MATCH_TYPE.REGEX) {
+ // Pass a column without the delimiter since that's what we're
+ // expected to match.
+ int o = kv.getColumnOffset();
+ int l = kv.getColumnLength(o);
+ String columnMinusQualifier = Bytes.toString(kv.getBuffer(), o, l);
+ return this.columnMatcher.matcher(columnMinusQualifier).matches();
+ } else {
+ throw new IOException("Invalid match type: " + this.matchType);
+ }
+ }
+
+ boolean isWildCardMatch() {
+ return this.wildCardmatch;
+ }
+
+ /**
+ * @param c Column name
+ * @return Return array of size two whose first element has the family
+ * prefix of passed column <code>c</code> and whose second element is the
+ * column qualifier.
+ * @throws ColumnNameParseException
+ */
+ public static byte [][] parseColumn(final byte [] c)
+ throws ColumnNameParseException {
+ final byte [][] result = new byte [2][];
+ // TODO: Change this so don't do parse but instead use the comparator
+ // inside in KeyValue which just looks at column family.
+ final int index = KeyValue.getFamilyDelimiterIndex(c, 0, c.length);
+ if (index == -1) {
+ throw new ColumnNameParseException("Impossible column name: " + Bytes.toString(c));
+ }
+ result[0] = new byte [index];
+ System.arraycopy(c, 0, result[0], 0, index);
+ final int len = c.length - (index + 1);
+ result[1] = new byte[len];
+ System.arraycopy(c, index + 1 /*Skip delimiter*/, result[1], 0,
+ len);
+ return result;
+ }
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/HLog.java b/src/java/org/apache/hadoop/hbase/regionserver/HLog.java
new file mode 100644
index 0000000..83f2a98
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/regionserver/HLog.java
@@ -0,0 +1,950 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.EOFException;
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.SortedMap;
+import java.util.TreeMap;
+import java.util.TreeSet;
+import java.util.concurrent.ConcurrentSkipListMap;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.concurrent.locks.Lock;
+import java.util.concurrent.locks.ReentrantLock;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.Syncable;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.RemoteExceptionHandler;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.io.SequenceFile;
+import org.apache.hadoop.io.SequenceFile.CompressionType;
+import org.apache.hadoop.io.SequenceFile.Metadata;
+import org.apache.hadoop.io.SequenceFile.Reader;
+import org.apache.hadoop.io.compress.DefaultCodec;
+
+/**
+ * HLog stores all the edits to the HStore.
+ *
+ * It performs logfile-rolling, so external callers are not aware that the
+ * underlying file is being rolled.
+ *
+ * <p>
+ * A single HLog is used by several HRegions simultaneously.
+ *
+ * <p>
+ * Each HRegion is identified by a unique <code>long</code> id. HRegions do
+ * not need to declare themselves before using the HLog; they simply include
+ * their HRegion-id in the <code>append</code> or
+ * <code>completeCacheFlush</code> calls.
+ *
+ * <p>
+ * An HLog consists of multiple on-disk files, which have a chronological order.
+ * As data is flushed to other (better) on-disk structures, the log becomes
+ * obsolete. We can destroy all the log messages for a given HRegion-id up to
+ * the most-recent CACHEFLUSH message from that HRegion.
+ *
+ * <p>
+ * It's only practical to delete entire files. Thus, we delete an entire on-disk
+ * file F when all of the messages in F have a log-sequence-id that's older
+ * (smaller) than the most-recent CACHEFLUSH message for every HRegion that has
+ * a message in F.
+ *
+ * <p>
+ * Synchronized methods can never execute in parallel. However, between the
+ * start of a cache flush and the completion point, appends are allowed but log
+ * rolling is not. To prevent log rolling taking place during this period, a
+ * separate reentrant lock is used.
+ *
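+ * <p>
+ * A condensed usage sketch; <code>fs</code>, <code>logDir</code>,
+ * <code>conf</code>, <code>listener</code>, <code>regionInfo</code>,
+ * <code>row</code> and <code>kv</code> are assumed to already exist:
+ * <pre>
+ *   HLog log = new HLog(fs, logDir, conf, listener);
+ *   log.append(regionInfo, row, kv);          // edit goes to the current file
+ *   byte [] regionToFlush = log.rollWriter(); // start a new file; maybe flush
+ *   log.closeAndDelete();                     // shut down, remove log directory
+ * </pre>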
+ */
+public class HLog implements HConstants, Syncable {
+ private static final Log LOG = LogFactory.getLog(HLog.class);
+ private static final String HLOG_DATFILE = "hlog.dat.";
+ static final byte [] METACOLUMN = Bytes.toBytes("METACOLUMN:");
+ static final byte [] METAROW = Bytes.toBytes("METAROW");
+ private final FileSystem fs;
+ private final Path dir;
+ private final Configuration conf;
+ private final LogRollListener listener;
+ private final int maxlogentries;
+ private final long optionalFlushInterval;
+ private final long blocksize;
+ private final int flushlogentries;
+ private final AtomicInteger unflushedEntries = new AtomicInteger(0);
+ private volatile long lastLogFlushTime;
+
+ /*
+ * Current log file.
+ */
+ SequenceFile.Writer writer;
+
+ /*
+ * Map of all log files but the current one.
+ */
+ final SortedMap<Long, Path> outputfiles =
+ Collections.synchronizedSortedMap(new TreeMap<Long, Path>());
+
+ /*
+ * Map of region to last sequence/edit id.
+ */
+ private final ConcurrentSkipListMap<byte [], Long> lastSeqWritten =
+ new ConcurrentSkipListMap<byte [], Long>(Bytes.BYTES_COMPARATOR);
+
+ private volatile boolean closed = false;
+
+ private final AtomicLong logSeqNum = new AtomicLong(0);
+
+ private volatile long filenum = 0;
+ private volatile long old_filenum = -1;
+
+ private final AtomicInteger numEntries = new AtomicInteger(0);
+
+ // This lock prevents starting a log roll during a cache flush.
+ // synchronized is insufficient because a cache flush spans two method calls.
+ private final Lock cacheFlushLock = new ReentrantLock();
+
+ // We synchronize on updateLock to prevent updates and to prevent a log roll
+ // during an update
+ private final Object updateLock = new Object();
+
+ /*
+ * If more than this many logs, force flush of oldest region to oldest edit
+ * goes to disk. If too many and we crash, then will take forever replaying.
+ * Keep the number of logs tidy.
+ */
+ private final int maxLogs;
+
+ static byte [] COMPLETE_CACHE_FLUSH;
+ static {
+ try {
+ COMPLETE_CACHE_FLUSH = "HBASE::CACHEFLUSH".getBytes(UTF8_ENCODING);
+ } catch (UnsupportedEncodingException e) {
+ assert(false);
+ }
+ }
+
+ /**
+ * Create an edit log at the given <code>dir</code> location.
+ *
+ * You should never have to load an existing log. If there is a log at
+ * startup, it should have already been processed and deleted by the time the
+ * HLog object is started up.
+ *
+ * @param fs
+ * @param dir
+ * @param conf
+ * @param listener
+ * @throws IOException
+ */
+ public HLog(final FileSystem fs, final Path dir, final Configuration conf,
+ final LogRollListener listener)
+ throws IOException {
+ super();
+ this.fs = fs;
+ this.dir = dir;
+ this.conf = conf;
+ this.listener = listener;
+ this.maxlogentries =
+ conf.getInt("hbase.regionserver.maxlogentries", 100000);
+ this.flushlogentries =
+ conf.getInt("hbase.regionserver.flushlogentries", 100);
+ this.blocksize = conf.getLong("hbase.regionserver.hlog.blocksize",
+ this.fs.getDefaultBlockSize());
+ this.optionalFlushInterval =
+ conf.getLong("hbase.regionserver.optionallogflushinterval", 10 * 1000);
+ this.lastLogFlushTime = System.currentTimeMillis();
+ if (fs.exists(dir)) {
+ throw new IOException("Target HLog directory already exists: " + dir);
+ }
+ fs.mkdirs(dir);
+ this.maxLogs = conf.getInt("hbase.regionserver.maxlogs", 64);
+ LOG.info("HLog configuration: blocksize=" + this.blocksize +
+ ", maxlogentries=" + this.maxlogentries + ", flushlogentries=" +
+ this.flushlogentries + ", optionallogflushinterval=" +
+ this.optionalFlushInterval + "ms");
+ rollWriter();
+ }
+
+ /**
+ * Accessor for tests. Not a part of the public API.
+ * @return Current state of the monotonically increasing file id.
+ */
+ public long getFilenum() {
+ return this.filenum;
+ }
+
+ /**
+ * Get the compression type for the hlog files.
+ * Commit logs SHOULD NOT be compressed. You'll lose edits if the compression
+ * record is not complete. In gzip, a record is 32k, so you could lose up to
+ * 32k of edits. (All of this is moot until we have sync/flush in HDFS, but
+ * still...)
+ * @param c Configuration to use.
+ * @return the kind of compression to use
+ */
+ private static CompressionType getCompressionType(final Configuration c) {
+ String name = c.get("hbase.io.seqfile.compression.type");
+ return name == null? CompressionType.NONE: CompressionType.valueOf(name);
+ }
+
+ /**
+ * Called by HRegionServer when it opens a new region to ensure that log
+ * sequence numbers are always greater than the latest sequence number of the
+ * region being brought on-line.
+ *
+ * @param newvalue We'll set log edit/sequence number to this value if it
+ * is greater than the current value.
+ */
+ void setSequenceNumber(final long newvalue) {
+ for (long id = this.logSeqNum.get(); id < newvalue &&
+ !this.logSeqNum.compareAndSet(id, newvalue); id = this.logSeqNum.get()) {
+ // This could spin on occasion but better the occasional spin than locking
+ // every increment of sequence number.
+ LOG.debug("Change sequence number from " + logSeqNum + " to " + newvalue);
+ }
+ }
+
+ /**
+ * @return log sequence number
+ */
+ public long getSequenceNumber() {
+ return logSeqNum.get();
+ }
+
+ /**
+ * Roll the log writer. That is, start writing log messages to a new file.
+ *
+ * Because a log cannot be rolled during a cache flush, and a cache flush
+ * spans two method calls, a special lock needs to be obtained so that a cache
+ * flush cannot start when the log is being rolled and the log cannot be
+ * rolled during a cache flush.
+ *
+ * <p>Note that this method cannot be synchronized, because it is possible that
+ * startCacheFlush runs, obtaining the cacheFlushLock, and then this method
+ * starts, obtaining the lock on this object but blocking on the cacheFlushLock.
+ * completeCacheFlush could then be called and would wait for the lock on this
+ * object, so the cacheFlushLock would never be released.
+ *
+ * @return If there are too many logs, the region to flush so that next time
+ * through we can clean logs; returns null if nothing needs flushing.
+ * @throws FailedLogCloseException
+ * @throws IOException
+ */
+ public byte [] rollWriter() throws FailedLogCloseException, IOException {
+ byte [] regionToFlush = null;
+ this.cacheFlushLock.lock();
+ try {
+ if (closed) {
+ return regionToFlush;
+ }
+ synchronized (updateLock) {
+ // Clean up current writer.
+ Path oldFile = cleanupCurrentWriter();
+ // Create a new one.
+ this.old_filenum = this.filenum;
+ this.filenum = System.currentTimeMillis();
+ Path newPath = computeFilename(this.filenum);
+
+ this.writer = SequenceFile.createWriter(this.fs, this.conf, newPath,
+ HLogKey.class, KeyValue.class,
+ fs.getConf().getInt("io.file.buffer.size", 4096),
+ fs.getDefaultReplication(), this.blocksize,
+ SequenceFile.CompressionType.NONE, new DefaultCodec(), null,
+ new Metadata());
+
+ LOG.info((oldFile != null?
+ "Closed " + oldFile + ", entries=" + this.numEntries.get() + ". ": "") +
+ "New log writer: " + FSUtils.getPath(newPath));
+
+ // Can we delete any of the old log files?
+ if (this.outputfiles.size() > 0) {
+ if (this.lastSeqWritten.size() <= 0) {
+ LOG.debug("Last sequence written is empty. Deleting all old hlogs");
+ // If so, then no new writes have come in since all regions were
+ // flushed (and removed from the lastSeqWritten map). Means can
+ // remove all but currently open log file.
+ for (Map.Entry<Long, Path> e : this.outputfiles.entrySet()) {
+ deleteLogFile(e.getValue(), e.getKey());
+ }
+ this.outputfiles.clear();
+ } else {
+ regionToFlush = cleanOldLogs();
+ }
+ }
+ this.numEntries.set(0);
+ updateLock.notifyAll();
+ }
+ } finally {
+ this.cacheFlushLock.unlock();
+ }
+ return regionToFlush;
+ }
+
+ /*
+ * Clean up old commit logs.
+ * @return If there are too many logs, the region to flush so that next time
+ * through we can clean logs; returns null if nothing needs flushing.
+ * @throws IOException
+ */
+ private byte [] cleanOldLogs() throws IOException {
+ byte [] regionToFlush = null;
+ Long oldestOutstandingSeqNum = getOldestOutstandingSeqNum();
+ // Get the set of all log files whose final ID is older than or
+ // equal to the oldest pending region operation
+ TreeSet<Long> sequenceNumbers =
+ new TreeSet<Long>(this.outputfiles.headMap(
+ (Long.valueOf(oldestOutstandingSeqNum.longValue() + 1L))).keySet());
+ // Now remove old log files (if any)
+ byte [] oldestRegion = null;
+ if (LOG.isDebugEnabled()) {
+ // Find region associated with oldest key -- helps debugging.
+ oldestRegion = getOldestRegion(oldestOutstandingSeqNum);
+ LOG.debug("Found " + sequenceNumbers.size() + " logs to remove " +
+ " out of total " + this.outputfiles.size() + "; " +
+ "oldest outstanding seqnum is " + oldestOutstandingSeqNum +
+ " from region " + Bytes.toString(oldestRegion));
+ }
+ if (sequenceNumbers.size() > 0) {
+ for (Long seq : sequenceNumbers) {
+ deleteLogFile(this.outputfiles.remove(seq), seq);
+ }
+ }
+ int countOfLogs = this.outputfiles.size() - sequenceNumbers.size();
+ if (countOfLogs > this.maxLogs) {
+ regionToFlush = oldestRegion != null?
+ oldestRegion: getOldestRegion(oldestOutstandingSeqNum);
+ LOG.info("Too many logs: logs=" + countOfLogs + ", maxlogs=" +
+ this.maxLogs + "; forcing flush of region with oldest edits: " +
+ Bytes.toString(regionToFlush));
+ }
+ return regionToFlush;
+ }
+
+ /*
+ * @return Logs older than this id are safe to remove.
+ */
+ private Long getOldestOutstandingSeqNum() {
+ return Collections.min(this.lastSeqWritten.values());
+ }
+
+ private byte [] getOldestRegion(final Long oldestOutstandingSeqNum) {
+ byte [] oldestRegion = null;
+ for (Map.Entry<byte [], Long> e: this.lastSeqWritten.entrySet()) {
+ if (e.getValue().longValue() == oldestOutstandingSeqNum.longValue()) {
+ oldestRegion = e.getKey();
+ break;
+ }
+ }
+ return oldestRegion;
+ }
+
+ /*
+ * Cleans up current writer closing and adding to outputfiles.
+ * Presumes we're operating inside an updateLock scope.
+ * @return Path to current writer or null if none.
+ * @throws IOException
+ */
+ private Path cleanupCurrentWriter() throws IOException {
+ Path oldFile = null;
+ if (this.writer != null) {
+ // Close the current writer, get a new one.
+ try {
+ this.writer.close();
+ } catch (IOException e) {
+ // Failed close of log file. Means we're losing edits. For now,
+ // shut ourselves down to minimize loss. Alternative is to try and
+ // keep going. See HBASE-930.
+ FailedLogCloseException flce =
+ new FailedLogCloseException("#" + this.filenum);
+ flce.initCause(e);
+ throw flce;
+ }
+ oldFile = computeFilename(old_filenum);
+ if (filenum > 0) {
+ this.outputfiles.put(Long.valueOf(this.logSeqNum.get() - 1), oldFile);
+ }
+ }
+ return oldFile;
+ }
+
+ private void deleteLogFile(final Path p, final Long seqno) throws IOException {
+ LOG.info("removing old log file " + FSUtils.getPath(p) +
+ " whose highest sequence/edit id is " + seqno);
+ this.fs.delete(p, true);
+ }
+
+ /**
+ * This is a convenience method that computes a new filename with a given
+ * file-number.
+ * @param fn
+ * @return Path
+ */
+ public Path computeFilename(final long fn) {
+ return new Path(dir, HLOG_DATFILE + fn);
+ }
+
+ /**
+ * Shut down the log and delete the log directory
+ *
+ * @throws IOException
+ */
+ public void closeAndDelete() throws IOException {
+ close();
+ fs.delete(dir, true);
+ }
+
+ /**
+ * Shut down the log.
+ *
+ * @throws IOException
+ */
+ public void close() throws IOException {
+ cacheFlushLock.lock();
+ try {
+ synchronized (updateLock) {
+ this.closed = true;
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("closing log writer in " + this.dir.toString());
+ }
+ this.writer.close();
+ updateLock.notifyAll();
+ }
+ } finally {
+ cacheFlushLock.unlock();
+ }
+ }
+
+
+ /** Append an entry without a row to the log.
+ *
+ * @param regionInfo
+ * @param logEdit
+ * @throws IOException
+ */
+ public void append(HRegionInfo regionInfo, KeyValue logEdit)
+ throws IOException {
+ this.append(regionInfo, new byte[0], logEdit);
+ }
+
+ /** Append an entry to the log.
+ *
+ * @param regionInfo
+ * @param row
+ * @param logEdit
+ * @throws IOException
+ */
+ public void append(HRegionInfo regionInfo, byte [] row, KeyValue logEdit)
+ throws IOException {
+ if (this.closed) {
+ throw new IOException("Cannot append; log is closed");
+ }
+ byte [] regionName = regionInfo.getRegionName();
+ byte [] tableName = regionInfo.getTableDesc().getName();
+ synchronized (updateLock) {
+ long seqNum = obtainSeqNum();
+ // The 'lastSeqWritten' map holds the sequence number of the oldest
+ // write for each region. When the cache is flushed, the entry for the
+ // region being flushed is removed if the sequence number of the flush
+ // is greater than or equal to the value in lastSeqWritten.
+ this.lastSeqWritten.putIfAbsent(regionName, Long.valueOf(seqNum));
+ HLogKey logKey = new HLogKey(regionName, tableName, seqNum);
+ boolean sync = regionInfo.isMetaRegion() || regionInfo.isRootRegion();
+ doWrite(logKey, logEdit, sync);
+ this.numEntries.incrementAndGet();
+ updateLock.notifyAll();
+ }
+
+ if (this.numEntries.get() > this.maxlogentries) {
+ if (listener != null) {
+ listener.logRollRequested();
+ }
+ }
+ }
+
+ /**
+ * Append a set of edits to the log. Log edits are keyed by regionName,
+ * rowname, and log-sequence-id.
+ *
+ * Later, if we sort by these keys, we obtain all the relevant edits for a
+ * given key-range of the HRegion (TODO). Any edits that do not have a
+ * matching {@link HConstants#COMPLETE_CACHEFLUSH} message can be discarded.
+ *
+ * <p>
+ * Logs cannot be restarted once closed, or once the HLog process dies. Each
+ * time the HLog starts, it must create a new log. This means that other
+ * systems should process the log appropriately upon each startup (and prior
+ * to initializing HLog).
+ *
+ * Synchronizing on updateLock prevents appends during the completion of a
+ * cache flush or for the duration of a log roll.
+ *
+ * @param regionName
+ * @param tableName
+ * @param edits
+ * @param sync
+ * @throws IOException
+ */
+ void append(byte [] regionName, byte [] tableName, List<KeyValue> edits,
+ boolean sync)
+ throws IOException {
+ if (this.closed) {
+ throw new IOException("Cannot append; log is closed");
+ }
+ long seqNum [] = obtainSeqNum(edits.size());
+ synchronized (this.updateLock) {
+ // The 'lastSeqWritten' map holds the sequence number of the oldest
+ // write for each region. When the cache is flushed, the entry for the
+ // region being flushed is removed if the sequence number of the flush
+ // is greater than or equal to the value in lastSeqWritten.
+ this.lastSeqWritten.putIfAbsent(regionName, Long.valueOf(seqNum[0]));
+ int counter = 0;
+ for (KeyValue kv: edits) {
+ HLogKey logKey = new HLogKey(regionName, tableName, seqNum[counter++]);
+ doWrite(logKey, kv, sync);
+ this.numEntries.incrementAndGet();
+ }
+ updateLock.notifyAll();
+ }
+ if (this.numEntries.get() > this.maxlogentries) {
+ requestLogRoll();
+ }
+ }
+
+ public void sync() throws IOException {
+ lastLogFlushTime = System.currentTimeMillis();
+ this.writer.sync();
+ this.unflushedEntries.set(0);
+ }
+
+ void optionalSync() {
+ if (!this.closed) {
+ long now = System.currentTimeMillis();
+ synchronized (updateLock) {
+ if (((now - this.optionalFlushInterval) >
+ this.lastLogFlushTime) && this.unflushedEntries.get() > 0) {
+ try {
+ sync();
+ } catch (IOException e) {
+ LOG.error("Error flushing HLog", e);
+ }
+ }
+ }
+ long took = System.currentTimeMillis() - now;
+ if (took > 1000) {
+ LOG.warn(Thread.currentThread().getName() + " took " + took +
+ "ms optional sync'ing HLog; editcount=" + this.numEntries.get());
+ }
+ }
+ }
+
+ private void requestLogRoll() {
+ if (this.listener != null) {
+ this.listener.logRollRequested();
+ }
+ }
+
+ private void doWrite(HLogKey logKey, KeyValue logEdit, boolean sync)
+ throws IOException {
+ try {
+ long now = System.currentTimeMillis();
+ this.writer.append(logKey, logEdit);
+ if (sync || this.unflushedEntries.incrementAndGet() >= flushlogentries) {
+ sync();
+ }
+ long took = System.currentTimeMillis() - now;
+ if (took > 1000) {
+ LOG.warn(Thread.currentThread().getName() + " took " + took +
+ "ms appending an edit to HLog; editcount=" + this.numEntries.get());
+ }
+ } catch (IOException e) {
+ LOG.fatal("Could not append. Requesting close of log", e);
+ requestLogRoll();
+ throw e;
+ }
+ }
+
+ /** @return How many items have been added to the log */
+ int getNumEntries() {
+ return numEntries.get();
+ }
+
+ /**
+ * Obtain a log sequence number.
+ */
+ private long obtainSeqNum() {
+ return this.logSeqNum.incrementAndGet();
+ }
+
+ /** @return the number of log files in use */
+ int getNumLogFiles() {
+ return outputfiles.size();
+ }
+
+ /*
+ * Obtain a specified number of sequence numbers
+ *
+ * @param num number of sequence numbers to obtain
+ * @return array of sequence numbers
+ */
+ private long [] obtainSeqNum(int num) {
+ long [] results = new long[num];
+ for (int i = 0; i < num; i++) {
+ results[i] = this.logSeqNum.incrementAndGet();
+ }
+ return results;
+ }
+
+ /**
+ * By acquiring a log sequence ID, we can allow log messages to continue while
+ * we flush the cache.
+ *
+ * Acquire a lock so that we do not roll the log between the start and
+ * completion of a cache-flush. Otherwise the log-seq-id for the flush will
+ * not appear in the correct logfile.
+ *
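+ * <p>The expected calling pattern, sketched with placeholder names
+ * (<code>log</code>, <code>regionName</code>, <code>tableName</code>):
+ * <pre>
+ *   long seqId = log.startCacheFlush();
+ *   boolean flushed = false;
+ *   try {
+ *     // ... write the memcache snapshot out as a new store file ...
+ *     flushed = true;
+ *   } finally {
+ *     if (flushed) {
+ *       log.completeCacheFlush(regionName, tableName, seqId); // releases the lock
+ *     } else {
+ *       log.abortCacheFlush();                                // just releases the lock
+ *     }
+ *   }
+ * </pre>
+ *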
+ * @return sequence ID to pass to {@link #completeCacheFlush(byte[], byte[], long)}
+ * @see #completeCacheFlush(byte[], byte[], long)
+ * @see #abortCacheFlush()
+ */
+ long startCacheFlush() {
+ this.cacheFlushLock.lock();
+ return obtainSeqNum();
+ }
+
+ /**
+ * Complete the cache flush
+ *
+ * Protected by cacheFlushLock
+ *
+ * @param regionName
+ * @param tableName
+ * @param logSeqId
+ * @throws IOException
+ */
+ void completeCacheFlush(final byte [] regionName, final byte [] tableName,
+ final long logSeqId)
+ throws IOException {
+ try {
+ if (this.closed) {
+ return;
+ }
+ synchronized (updateLock) {
+ this.writer.append(new HLogKey(regionName, tableName, logSeqId),
+ completeCacheFlushLogEdit());
+ this.numEntries.incrementAndGet();
+ Long seq = this.lastSeqWritten.get(regionName);
+ if (seq != null && logSeqId >= seq.longValue()) {
+ this.lastSeqWritten.remove(regionName);
+ }
+ updateLock.notifyAll();
+ }
+ } finally {
+ this.cacheFlushLock.unlock();
+ }
+ }
+
+ private KeyValue completeCacheFlushLogEdit() {
+ return new KeyValue(METAROW, METACOLUMN, System.currentTimeMillis(),
+ COMPLETE_CACHE_FLUSH);
+ }
+
+ /**
+ * Abort a cache flush.
+ * Call if the flush fails. Note that the only recovery for an aborted flush
+ * currently is a restart of the regionserver so the snapshot content dropped
+ * by the failure gets restored to the memcache.
+ */
+ void abortCacheFlush() {
+ this.cacheFlushLock.unlock();
+ }
+
+ /**
+ * @param column
+ * @return true if the column is a meta column
+ */
+ public static boolean isMetaColumn(byte [] column) {
+ return Bytes.equals(METACOLUMN, column);
+ }
+
+ /**
+ * Split up a bunch of regionserver commit log files that are no longer
+ * being written to, into new files, one per region for region to replay on
+ * startup. Delete the old log files when finished.
+ *
+ * @param rootDir qualified root directory of the HBase instance
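+ * <p>A sketch of how a caller might invoke it; <code>host</code> and
+ * <code>port</code> identify the dead region server and are placeholders:
+ * <pre>
+ *   Path srcDir = new Path(rootDir, "log_" + host + "_" + port);
+ *   HLog.splitLog(rootDir, srcDir, fs, conf);
+ * </pre>
+ *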
+ * @param srcDir Directory of log files to split: e.g.
+ * <code>${ROOTDIR}/log_HOST_PORT</code>
+ * @param fs FileSystem
+ * @param conf HBaseConfiguration
+ * @throws IOException
+ */
+ public static void splitLog(final Path rootDir, final Path srcDir,
+ final FileSystem fs, final Configuration conf)
+ throws IOException {
+ if (!fs.exists(srcDir)) {
+ // Nothing to do
+ return;
+ }
+ FileStatus [] logfiles = fs.listStatus(srcDir);
+ if (logfiles == null || logfiles.length == 0) {
+ // Nothing to do
+ return;
+ }
+ LOG.info("Splitting " + logfiles.length + " log(s) in " +
+ srcDir.toString());
+ splitLog(rootDir, logfiles, fs, conf);
+ try {
+ fs.delete(srcDir, true);
+ } catch (IOException e) {
+ e = RemoteExceptionHandler.checkIOException(e);
+ IOException io = new IOException("Cannot delete: " + srcDir);
+ io.initCause(e);
+ throw io;
+ }
+ LOG.info("log file splitting completed for " + srcDir.toString());
+ }
+
+ /*
+ * @param rootDir
+ * @param logfiles
+ * @param fs
+ * @param conf
+ * @throws IOException
+ */
+ private static void splitLog(final Path rootDir, final FileStatus [] logfiles,
+ final FileSystem fs, final Configuration conf)
+ throws IOException {
+ Map<byte [], SequenceFile.Writer> logWriters =
+ new TreeMap<byte [], SequenceFile.Writer>(Bytes.BYTES_COMPARATOR);
+ try {
+ for (int i = 0; i < logfiles.length; i++) {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Splitting " + (i + 1) + " of " + logfiles.length + ": " +
+ logfiles[i].getPath() + ", length=" + logfiles[i].getLen());
+ }
+ // Check for possibly empty file. With appends, currently Hadoop reports
+ // a zero length even if the file has been sync'd. Revisit if
+ // HADOOP-4751 is committed.
+ long length = logfiles[i].getLen();
+ HLogKey key = new HLogKey();
+ KeyValue val = new KeyValue();
+ try {
+ SequenceFile.Reader in =
+ new SequenceFile.Reader(fs, logfiles[i].getPath(), conf);
+ try {
+ int count = 0;
+ for (; in.next(key, val); count++) {
+ byte [] tableName = key.getTablename();
+ byte [] regionName = key.getRegionName();
+ SequenceFile.Writer w = logWriters.get(regionName);
+ if (w == null) {
+ Path logfile = new Path(
+ HRegion.getRegionDir(
+ HTableDescriptor.getTableDir(rootDir, tableName),
+ HRegionInfo.encodeRegionName(regionName)),
+ HREGION_OLDLOGFILE_NAME);
+ Path oldlogfile = null;
+ SequenceFile.Reader old = null;
+ if (fs.exists(logfile)) {
+ LOG.warn("Old log file " + logfile +
+ " already exists. Copying existing file to new file");
+ oldlogfile = new Path(logfile.toString() + ".old");
+ fs.rename(logfile, oldlogfile);
+ old = new SequenceFile.Reader(fs, oldlogfile, conf);
+ }
+ w = SequenceFile.createWriter(fs, conf, logfile, HLogKey.class,
+ KeyValue.class, getCompressionType(conf));
+ // The regionName reference can be used as the map key directly:
+ // HLogKey.readFields allocates a new byte array for each entry, so the
+ // array contents do not change as we iterate.
+ logWriters.put(regionName, w);
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Creating new log file writer for path " + logfile +
+ " and region " + Bytes.toString(regionName));
+ }
+
+ if (old != null) {
+ // Copy from existing log file
+ HLogKey oldkey = new HLogKey();
+ KeyValue oldval = new KeyValue();
+ for (; old.next(oldkey, oldval); count++) {
+ if (LOG.isDebugEnabled() && count > 0 && count % 10000 == 0) {
+ LOG.debug("Copied " + count + " edits");
+ }
+ w.append(oldkey, oldval);
+ }
+ old.close();
+ fs.delete(oldlogfile, true);
+ }
+ }
+ w.append(key, val);
+ }
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Applied " + count + " total edits from " +
+ logfiles[i].getPath().toString());
+ }
+ } catch (IOException e) {
+ e = RemoteExceptionHandler.checkIOException(e);
+ if (!(e instanceof EOFException)) {
+ LOG.warn("Exception processing " + logfiles[i].getPath() +
+ " -- continuing. Possible DATA LOSS!", e);
+ }
+ } finally {
+ try {
+ in.close();
+ } catch (IOException e) {
+ LOG.warn("Close in finally threw exception -- continuing", e);
+ }
+ // Delete the input file now so we do not replay edits. We could
+ // have gotten here because of an exception. If so, probably
+ // nothing we can do about it. Replaying it, it could work but we
+ // could be stuck replaying for ever. Just continue though we
+ // could have lost some edits.
+ fs.delete(logfiles[i].getPath(), true);
+ }
+ } catch (IOException e) {
+ if (length <= 0) {
+ LOG.warn("Empty log, continuing: " + logfiles[i]);
+ continue;
+ }
+ throw e;
+ }
+ }
+ } finally {
+ for (SequenceFile.Writer w : logWriters.values()) {
+ w.close();
+ }
+ }
+ }
+
+ /**
+ * Construct the HLog directory name
+ *
+ * @param info HServerInfo for server
+ * @return the HLog directory name
+ */
+ public static String getHLogDirectoryName(HServerInfo info) {
+ return getHLogDirectoryName(HServerInfo.getServerName(info));
+ }
+
+ /**
+ * Construct the HLog directory name
+ *
+ * @param serverAddress
+ * @param startCode
+ * @return the HLog directory name
+ */
+ public static String getHLogDirectoryName(String serverAddress,
+ long startCode) {
+ if (serverAddress == null || serverAddress.length() == 0) {
+ return null;
+ }
+ return getHLogDirectoryName(
+ HServerInfo.getServerName(serverAddress, startCode));
+ }
+
+ /**
+ * Construct the HLog directory name
+ *
+ * @param serverName
+ * @return the HLog directory name
+ */
+ public static String getHLogDirectoryName(String serverName) {
+ StringBuilder dirName = new StringBuilder(HConstants.HREGION_LOGDIR_NAME);
+ dirName.append("/");
+ dirName.append(serverName);
+ return dirName.toString();
+ }
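+
+ /*
+ * For example (server name shown is illustrative), the directory name is
+ * simply the log-directory constant with the server name appended:
+ *
+ * String dir = getHLogDirectoryName("example.host.com,60020,1234567890");
+ * // dir == HConstants.HREGION_LOGDIR_NAME + "/example.host.com,60020,1234567890"
+ */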
+
+ private static void usage() {
+ System.err.println("Usage: java org.apache.hadoop.hbase.regionserver.HLog" +
+ " {--dump <logfile>... | --split <logdir>...}");
+ }
+
+ /**
+ * Pass one or more log file names and it will dump out a text version of
+ * the contents on <code>stdout</code>; pass one or more log directories and
+ * it will split the log files found there.
+ *
+ * @param args
+ * @throws IOException
+ */
+ public static void main(String[] args) throws IOException {
+ if (args.length < 2) {
+ usage();
+ System.exit(-1);
+ }
+ boolean dump = true;
+ if (!args[0].equals("--dump")) {
+ if (args[0].equals("--split")) {
+ dump = false;
+ } else {
+ usage();
+ System.exit(-1);
+ }
+ }
+ Configuration conf = new HBaseConfiguration();
+ FileSystem fs = FileSystem.get(conf);
+ Path baseDir = new Path(conf.get(HBASE_DIR));
+
+ for (int i = 1; i < args.length; i++) {
+ Path logPath = new Path(args[i]);
+ if (!fs.exists(logPath)) {
+ throw new FileNotFoundException(args[i] + " does not exist");
+ }
+ if (dump) {
+ if (!fs.isFile(logPath)) {
+ throw new IOException(args[i] + " is not a file");
+ }
+ SequenceFile.Reader log = new SequenceFile.Reader(fs, logPath, conf);
+ try {
+ HLogKey key = new HLogKey();
+ KeyValue val = new KeyValue();
+ while (log.next(key, val)) {
+ System.out.println(key.toString() + " " + val.toString());
+ }
+ } finally {
+ log.close();
+ }
+ } else {
+ if (!fs.getFileStatus(logPath).isDir()) {
+ throw new IOException(args[i] + " is not a directory");
+ }
+ splitLog(baseDir, logPath, fs, conf);
+ }
+ }
+ }
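+
+ /*
+ * Example invocations (the paths are illustrative); --dump expects log
+ * files, --split expects log directories:
+ *
+ * java org.apache.hadoop.hbase.regionserver.HLog --dump /hbase/log_HOST_PORT/hlog.dat.0
+ * java org.apache.hadoop.hbase.regionserver.HLog --split /hbase/log_HOST_PORT
+ */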
+}
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/HLogKey.java b/src/java/org/apache/hadoop/hbase/regionserver/HLogKey.java
new file mode 100644
index 0000000..8e6aded
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/regionserver/HLogKey.java
@@ -0,0 +1,137 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.WritableComparable;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Arrays;
+
+/**
+ * A Key for an entry in the change log.
+ *
+ * The log intermingles edits to many regions and tables, so each log entry
+ * identifies the region and table it belongs to. Within a region, entries
+ * are ordered by log sequence number.
+ *
+ * <p>Some Transactional edits (START, COMMIT, ABORT) will not have an
+ * associated row.
+ */
+public class HLogKey implements WritableComparable<HLogKey> {
+ private byte [] regionName;
+ private byte [] tablename;
+ private long logSeqNum;
+
+ /** Create an empty key useful when deserializing */
+ public HLogKey() {
+ this(null, null, 0L);
+ }
+
+ /**
+ * Create the log key.
+ * We maintain the tablename mainly for debugging purposes;
+ * a region always belongs to exactly one table.
+ *
+ * @param regionName name of region
+ * @param tablename name of table
+ * @param logSeqNum log sequence number
+ */
+ public HLogKey(final byte [] regionName, final byte [] tablename,
+ long logSeqNum) {
+ this.regionName = regionName;
+ this.tablename = tablename;
+ this.logSeqNum = logSeqNum;
+ }
+
+ //////////////////////////////////////////////////////////////////////////////
+ // A bunch of accessors
+ //////////////////////////////////////////////////////////////////////////////
+
+ /** @return region name */
+ public byte [] getRegionName() {
+ return regionName;
+ }
+
+ /** @return table name */
+ public byte [] getTablename() {
+ return tablename;
+ }
+
+ /** @return log sequence number */
+ public long getLogSeqNum() {
+ return logSeqNum;
+ }
+
+ @Override
+ public String toString() {
+ return Bytes.toString(tablename) + "/" + Bytes.toString(regionName) + "/" +
+ logSeqNum;
+ }
+
+ @Override
+ public boolean equals(Object obj) {
+ if (this == obj) {
+ return true;
+ }
+ if (obj == null || getClass() != obj.getClass()) {
+ return false;
+ }
+ return compareTo((HLogKey)obj) == 0;
+ }
+
+ @Override
+ public int hashCode() {
+ // Hash the contents of regionName (not its identity) so that hashCode
+ // stays consistent with equals, which compares array contents.
+ int result = Arrays.hashCode(this.regionName);
+ result ^= (int) (this.logSeqNum ^ (this.logSeqNum >>> 32));
+ return result;
+ }
+
+ //
+ // Comparable
+ //
+
+ public int compareTo(HLogKey o) {
+ int result = Bytes.compareTo(this.regionName, o.regionName);
+ if(result == 0) {
+ if (this.logSeqNum < o.logSeqNum) {
+ result = -1;
+ } else if (this.logSeqNum > o.logSeqNum) {
+ result = 1;
+ }
+ }
+ return result;
+ }
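+
+ /*
+ * A small illustration of the ordering: keys sort first by region name and
+ * then by log sequence number, so for the same region
+ *
+ * new HLogKey(region, table, 1L).compareTo(new HLogKey(region, table, 2L)) < 0
+ *
+ * while keys from different regions order by Bytes.compareTo of the names.
+ */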
+
+ //
+ // Writable
+ //
+
+ public void write(DataOutput out) throws IOException {
+ Bytes.writeByteArray(out, this.regionName);
+ Bytes.writeByteArray(out, this.tablename);
+ out.writeLong(logSeqNum);
+ }
+
+ public void readFields(DataInput in) throws IOException {
+ this.regionName = Bytes.readByteArray(in);
+ this.tablename = Bytes.readByteArray(in);
+ this.logSeqNum = in.readLong();
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/HRegion.java b/src/java/org/apache/hadoop/hbase/regionserver/HRegion.java
new file mode 100644
index 0000000..5aff1b3
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/regionserver/HRegion.java
@@ -0,0 +1,2693 @@
+ /**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.TreeSet;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentSkipListMap;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+import java.util.regex.Pattern;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.DoNotRetryIOException;
+import org.apache.hadoop.hbase.DroppedSnapshotException;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.NotServingRegionException;
+import org.apache.hadoop.hbase.RegionHistorian;
+import org.apache.hadoop.hbase.ValueOverMaxLengthException;
+import org.apache.hadoop.hbase.filter.RowFilterInterface;
+import org.apache.hadoop.hbase.io.BatchOperation;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.HbaseMapWritable;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.io.Reference.Range;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.hbase.util.Writables;
+import org.apache.hadoop.io.WritableUtils;
+import org.apache.hadoop.util.Progressable;
+import org.apache.hadoop.util.StringUtils;
+
+/**
+ * HRegion stores data for a certain region of a table. It stores all columns
+ * for each row. A given table consists of one or more HRegions.
+ *
+ * <p>We maintain multiple HStores for a single HRegion.
+ *
+ * <p>An HStore is a set of rows with some column data; together,
+ * they make up all the data for the rows.
+ *
+ * <p>Each HRegion has a 'startKey' and 'endKey'.
+ * <p>The first is inclusive, the second is exclusive (except for
+ * the final region). The endKey of region 0 is the same as the
+ * startKey for region 1 (if it exists). The startKey for the
+ * first region is null. The endKey for the final region is null.
+ *
+ * <p>Locking at the HRegion level serves only one purpose: preventing the
+ * region from being closed (and consequently split) while other operations
+ * are ongoing. Each row level operation obtains both a row lock and a region
+ * read lock for the duration of the operation. While a scanner is being
+ * constructed, getScanner holds a read lock. If the scanner is successfully
+ * constructed, it holds a read lock until it is closed. A close takes out a
+ * write lock and consequently will block for ongoing operations and will block
+ * new operations from starting while the close is in progress.
+ *
+ * <p>An HRegion is defined by its table and its key extent.
+ *
+ * <p>It consists of at least one HStore. The number of HStores should be
+ * configurable, so that data which is accessed together is stored in the same
+ * HStore. Right now, we approximate that by building a single HStore for
+ * each column family. (This config info will be communicated via the
+ * tabledesc.)
+ *
+ * <p>The HTableDescriptor contains metainfo about the HRegion's table.
+ * regionName is a unique identifier for this HRegion. [startKey, endKey)
+ * defines the keyspace for this HRegion.
+ */
+public class HRegion implements HConstants {
+ static final Log LOG = LogFactory.getLog(HRegion.class);
+ static final String SPLITDIR = "splits";
+ static final String MERGEDIR = "merges";
+ final AtomicBoolean closed = new AtomicBoolean(false);
+ /* Closing can take some time; use the closing flag if there is stuff we don't want
+ * to do while in closing state; e.g. offering this region up to the master as a region
+ * to close if the carrying regionserver is overloaded. Once set, it is never cleared.
+ */
+ private final AtomicBoolean closing = new AtomicBoolean(false);
+ private final RegionHistorian historian;
+
+ //////////////////////////////////////////////////////////////////////////////
+ // Members
+ //////////////////////////////////////////////////////////////////////////////
+
+ private final Map<Integer, byte []> locksToRows =
+ new ConcurrentHashMap<Integer, byte []>();
+ protected final Map<byte [], Store> stores =
+ new ConcurrentSkipListMap<byte [], Store>(KeyValue.FAMILY_COMPARATOR);
+ final AtomicLong memcacheSize = new AtomicLong(0);
+
+ // This is the table subdirectory.
+ final Path basedir;
+ final HLog log;
+ final FileSystem fs;
+ final HBaseConfiguration conf;
+ final HRegionInfo regionInfo;
+ final Path regiondir;
+ private final Path regionCompactionDir;
+ KeyValue.KVComparator comparator;
+ private KeyValue.KVComparator comparatorIgnoreTimestamp;
+
+ /*
+ * Set this when scheduling compaction if we want the next compaction to be
+ * a major compaction. Cleared each time through compaction code.
+ */
+ private volatile boolean forceMajorCompaction = false;
+
+ /*
+ * Data structure of write state flags used coordinating flushes,
+ * compactions and closes.
+ */
+ static class WriteState {
+ // Set while a memcache flush is happening.
+ volatile boolean flushing = false;
+ // Set when a flush has been requested.
+ volatile boolean flushRequested = false;
+ // Set while a compaction is running.
+ volatile boolean compacting = false;
+ // Cleared in close. Once false, we can no longer compact or flush.
+ volatile boolean writesEnabled = true;
+ // Set if region is read-only
+ volatile boolean readOnly = false;
+
+ /**
+ * Set flags that make this region read-only.
+ */
+ synchronized void setReadOnly(final boolean onOff) {
+ this.writesEnabled = !onOff;
+ this.readOnly = onOff;
+ }
+
+ boolean isReadOnly() {
+ return this.readOnly;
+ }
+
+ boolean isFlushRequested() {
+ return this.flushRequested;
+ }
+ }
+
+ private volatile WriteState writestate = new WriteState();
+
+ final int memcacheFlushSize;
+ private volatile long lastFlushTime;
+ final FlushRequester flushListener;
+ private final int blockingMemcacheSize;
+ final long threadWakeFrequency;
+ // Used to guard splits and closes
+ private final ReentrantReadWriteLock splitsAndClosesLock =
+ new ReentrantReadWriteLock();
+ private final ReentrantReadWriteLock newScannerLock =
+ new ReentrantReadWriteLock();
+
+ // Stop updates lock
+ private final ReentrantReadWriteLock updatesLock =
+ new ReentrantReadWriteLock();
+ private final Object splitLock = new Object();
+ private long minSequenceId;
+ final AtomicInteger activeScannerCount = new AtomicInteger(0);
+
+ /**
+ * Name of the region info file that resides just under the region directory.
+ */
+ public final static String REGIONINFO_FILE = ".regioninfo";
+
+ /**
+ * REGIONINFO_FILE as byte array.
+ */
+ public final static byte [] REGIONINFO_FILE_BYTES =
+ Bytes.toBytes(REGIONINFO_FILE);
+
+ /**
+ * HRegion constructor.
+ *
+ * @param basedir qualified path of directory where region should be located,
+ * usually the table directory.
+ * @param log The HLog is the outbound log for any updates to the HRegion
+ * (There's a single HLog for all the HRegions on a single HRegionServer.)
+ * The log file is a logfile from the previous execution that's
+ * custom-computed for this HRegion. The HRegionServer computes and sorts the
+ * appropriate log info for this HRegion. If there is a previous log file
+ * (implying that the HRegion has been written-to before), then read it from
+ * the supplied path.
+ * @param fs is the filesystem.
+ * @param conf is global configuration settings.
+ * @param regionInfo HRegionInfo that describes the region
+ * @param flushListener an object that implements FlushRequester, used to
+ * request cache flushes as updates accumulate. Can be null.
+ */
+ public HRegion(Path basedir, HLog log, FileSystem fs, HBaseConfiguration conf,
+ HRegionInfo regionInfo, FlushRequester flushListener) {
+ this.basedir = basedir;
+ this.comparator = regionInfo.getComparator();
+ this.comparatorIgnoreTimestamp =
+ this.comparator.getComparatorIgnoringTimestamps();
+ this.log = log;
+ this.fs = fs;
+ this.conf = conf;
+ this.regionInfo = regionInfo;
+ this.flushListener = flushListener;
+ this.threadWakeFrequency = conf.getLong(THREAD_WAKE_FREQUENCY, 10 * 1000);
+ String encodedNameStr = Integer.toString(this.regionInfo.getEncodedName());
+ this.regiondir = new Path(basedir, encodedNameStr);
+ this.historian = RegionHistorian.getInstance();
+ if (LOG.isDebugEnabled()) {
+ // Write out region name as string and its encoded name.
+ LOG.debug("Opening region " + this + ", encoded=" +
+ this.regionInfo.getEncodedName());
+ }
+ this.regionCompactionDir =
+ new Path(getCompactionDir(basedir), encodedNameStr);
+ int flushSize = regionInfo.getTableDesc().getMemcacheFlushSize();
+ if (flushSize == HTableDescriptor.DEFAULT_MEMCACHE_FLUSH_SIZE) {
+ flushSize = conf.getInt("hbase.hregion.memcache.flush.size",
+ HTableDescriptor.DEFAULT_MEMCACHE_FLUSH_SIZE);
+ }
+ this.memcacheFlushSize = flushSize;
+ this.blockingMemcacheSize = this.memcacheFlushSize *
+ conf.getInt("hbase.hregion.memcache.block.multiplier", 1);
+ }
+
+ /**
+ * Initialize this region and get it ready to roll.
+ * Called after construction.
+ *
+ * @param initialFiles path to a directory of prefab store files (e.g. from
+ * a split) to move into place, or null if there are none
+ * @param reporter progress reporter used while replaying any reconstruction
+ * log; may be null
+ * @throws IOException
+ */
+ public void initialize(Path initialFiles, final Progressable reporter)
+ throws IOException {
+ Path oldLogFile = new Path(regiondir, HREGION_OLDLOGFILE_NAME);
+
+ // Move prefab HStore files into place (if any). This picks up split files
+ // and any merges from splits and merges dirs.
+ if (initialFiles != null && fs.exists(initialFiles)) {
+ fs.rename(initialFiles, this.regiondir);
+ }
+
+ // Write HRI to a file in case we need to recover .META.
+ checkRegioninfoOnFilesystem();
+
+ // Load in all the HStores.
+ long maxSeqId = -1;
+ long minSeqId = Long.MAX_VALUE;
+ for (HColumnDescriptor c : this.regionInfo.getTableDesc().getFamilies()) {
+ Store store = instantiateHStore(this.basedir, c, oldLogFile, reporter);
+ this.stores.put(c.getName(), store);
+ long storeSeqId = store.getMaxSequenceId();
+ if (storeSeqId > maxSeqId) {
+ maxSeqId = storeSeqId;
+ }
+ if (storeSeqId < minSeqId) {
+ minSeqId = storeSeqId;
+ }
+ }
+
+ // Play log if one. Delete when done.
+ doReconstructionLog(oldLogFile, minSeqId, maxSeqId, reporter);
+ if (fs.exists(oldLogFile)) {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Deleting old log file: " + oldLogFile);
+ }
+ fs.delete(oldLogFile, false);
+ }
+
+ // Add one to the current maximum sequence id so new edits are beyond.
+ this.minSequenceId = maxSeqId + 1;
+
+ // Get rid of any splits or merges that were lost in-progress
+ FSUtils.deleteDirectory(this.fs, new Path(regiondir, SPLITDIR));
+ FSUtils.deleteDirectory(this.fs, new Path(regiondir, MERGEDIR));
+
+ // See if region is meant to run read-only.
+ if (this.regionInfo.getTableDesc().isReadOnly()) {
+ this.writestate.setReadOnly(true);
+ }
+
+ // HRegion is ready to go!
+ this.writestate.compacting = false;
+ this.lastFlushTime = System.currentTimeMillis();
+ LOG.info("region " + this + "/" + this.regionInfo.getEncodedName() +
+ " available; sequence id is " + this.minSequenceId);
+ }
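+
+ /*
+ * A minimal sketch of bringing a region online using the two-step
+ * construct-then-initialize protocol above (the reporter is optional):
+ *
+ * HRegion r = new HRegion(HTableDescriptor.getTableDir(rootDir, tableName),
+ * log, fs, conf, regionInfo, null);
+ * r.initialize(null, null);
+ */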
+
+ /*
+ * Write out an info file under the region directory. Useful for recovering
+ * mangled regions.
+ * @throws IOException
+ */
+ private void checkRegioninfoOnFilesystem() throws IOException {
+ // The file name starts with a dot (.regioninfo) so that it does not clash
+ // with a store/family directory name.
+ Path regioninfo = new Path(this.regiondir, REGIONINFO_FILE);
+ if (this.fs.exists(regioninfo) &&
+ this.fs.getFileStatus(regioninfo).getLen() > 0) {
+ return;
+ }
+ FSDataOutputStream out = this.fs.create(regioninfo, true);
+ try {
+ this.regionInfo.write(out);
+ out.write('\n');
+ out.write('\n');
+ out.write(Bytes.toBytes(this.regionInfo.toString()));
+ } finally {
+ out.close();
+ }
+ }
+
+ /**
+ * @return Minimum sequence id; updates to this region need a sequence id
+ * that is greater than or equal to this number.
+ */
+ long getMinSequenceId() {
+ return this.minSequenceId;
+ }
+
+ /** @return a HRegionInfo object for this region */
+ public HRegionInfo getRegionInfo() {
+ return this.regionInfo;
+ }
+
+ /** @return true if region is closed */
+ public boolean isClosed() {
+ return this.closed.get();
+ }
+
+ /**
+ * @return True if closing process has started.
+ */
+ public boolean isClosing() {
+ return this.closing.get();
+ }
+
+ /**
+ * Close down this HRegion. Flush the cache, shut down each HStore, don't
+ * service any more calls.
+ *
+ * <p>This method could take some time to execute, so don't call it from a
+ * time-sensitive thread.
+ *
+ * @return List of all the storage files that the HRegion's component
+ * Stores make use of. It's a list of StoreFile objects. Returns null if
+ * the region is already closed or if it is judged that it should not close.
+ *
+ * @throws IOException
+ */
+ public List<StoreFile> close() throws IOException {
+ return close(false);
+ }
+
+ /**
+ * Close down this HRegion. Flush the cache unless the abort parameter is
+ * true, shut down each HStore, and don't service any more calls.
+ *
+ * This method could take some time to execute, so don't call it from a
+ * time-sensitive thread.
+ *
+ * @param abort true if server is aborting (only during testing)
+ * @return List of all the storage files that the HRegion's component
+ * Stores make use of. It's a list of StoreFile objects. Can be null if
+ * we are not to close at this time or we are already closed.
+ *
+ * @throws IOException
+ */
+ List<StoreFile> close(final boolean abort) throws IOException {
+ if (isClosed()) {
+ LOG.warn("region " + this + " already closed");
+ return null;
+ }
+ this.closing.set(true);
+ synchronized (splitLock) {
+ synchronized (writestate) {
+ // Disable compacting and flushing by background threads for this
+ // region.
+ writestate.writesEnabled = false;
+ LOG.debug("Closing " + this + ": compactions & flushes disabled ");
+ while (writestate.compacting || writestate.flushing) {
+ LOG.debug("waiting for" +
+ (writestate.compacting ? " compaction" : "") +
+ (writestate.flushing ?
+ (writestate.compacting ? "," : "") + " cache flush" :
+ "") + " to complete for region " + this);
+ try {
+ writestate.wait();
+ } catch (InterruptedException iex) {
+ // continue
+ }
+ }
+ }
+ newScannerLock.writeLock().lock();
+ try {
+ // Wait for active scanners to finish. The write lock we hold will
+ // prevent new scanners from being created.
+ synchronized (activeScannerCount) {
+ while (activeScannerCount.get() != 0) {
+ LOG.debug("waiting for " + activeScannerCount.get() +
+ " scanners to finish");
+ try {
+ activeScannerCount.wait();
+ } catch (InterruptedException e) {
+ // continue
+ }
+ }
+ }
+ splitsAndClosesLock.writeLock().lock();
+ LOG.debug("Updates disabled for region, no outstanding scanners on " +
+ this);
+ try {
+ // Write lock means no more row locks can be given out. Wait on
+ // outstanding row locks to come in before we close so we do not drop
+ // outstanding updates.
+ waitOnRowLocks();
+ LOG.debug("No more row locks outstanding on region " + this);
+
+ // Don't flush the cache if we are aborting
+ if (!abort) {
+ internalFlushcache();
+ }
+
+ List<StoreFile> result = new ArrayList<StoreFile>();
+ for (Store store: stores.values()) {
+ result.addAll(store.close());
+ }
+ this.closed.set(true);
+ LOG.info("Closed " + this);
+ return result;
+ } finally {
+ splitsAndClosesLock.writeLock().unlock();
+ }
+ } finally {
+ newScannerLock.writeLock().unlock();
+ }
+ }
+ }
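+
+ /*
+ * A minimal usage sketch: close the region and collect the store files it
+ * was serving (a null return means the close was skipped):
+ *
+ * List<StoreFile> files = region.close();
+ * if (files == null) {
+ * LOG.warn("Region was already closed or should not close");
+ * }
+ */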
+
+ //////////////////////////////////////////////////////////////////////////////
+ // HRegion accessors
+ //////////////////////////////////////////////////////////////////////////////
+
+ /** @return start key for region */
+ public byte [] getStartKey() {
+ return this.regionInfo.getStartKey();
+ }
+
+ /** @return end key for region */
+ public byte [] getEndKey() {
+ return this.regionInfo.getEndKey();
+ }
+
+ /** @return region id */
+ public long getRegionId() {
+ return this.regionInfo.getRegionId();
+ }
+
+ /** @return region name */
+ public byte [] getRegionName() {
+ return this.regionInfo.getRegionName();
+ }
+
+ /** @return HTableDescriptor for this region */
+ public HTableDescriptor getTableDesc() {
+ return this.regionInfo.getTableDesc();
+ }
+
+ /** @return HLog in use for this region */
+ public HLog getLog() {
+ return this.log;
+ }
+
+ /** @return Configuration object */
+ public HBaseConfiguration getConf() {
+ return this.conf;
+ }
+
+ /** @return region directory Path */
+ public Path getRegionDir() {
+ return this.regiondir;
+ }
+
+ /** @return FileSystem being used by this region */
+ public FileSystem getFilesystem() {
+ return this.fs;
+ }
+
+ /** @return the last time the region was flushed */
+ public long getLastFlushTime() {
+ return this.lastFlushTime;
+ }
+
+ //////////////////////////////////////////////////////////////////////////////
+ // HRegion maintenance.
+ //
+ // These methods are meant to be called periodically by the HRegionServer for
+ // upkeep.
+ //////////////////////////////////////////////////////////////////////////////
+
+ /** @return size of the largest HStore. */
+ public long getLargestHStoreSize() {
+ long size = 0;
+ for (Store h: stores.values()) {
+ long storeSize = h.getSize();
+ if (storeSize > size) {
+ size = storeSize;
+ }
+ }
+ return size;
+ }
+
+ /*
+ * Split the HRegion to create two brand-new ones. This also closes the
+ * current HRegion. Split should be fast since we don't rewrite store files
+ * but instead create new 'reference' store files that read off the top and
+ * bottom ranges of parent store files.
+ * @param splitRow row on which to split region
+ * @return two brand-new HRegions (closed again once their split files are
+ * in place) or null if a split is not needed
+ * @throws IOException
+ */
+ HRegion [] splitRegion(final byte [] splitRow) throws IOException {
+ synchronized (splitLock) {
+ if (closed.get()) {
+ return null;
+ }
+ // Add start/end key checking: hbase-428.
+ byte [] startKey = this.regionInfo.getStartKey();
+ byte [] endKey = this.regionInfo.getEndKey();
+ if (this.comparator.matchingRows(startKey, 0, startKey.length,
+ splitRow, 0, splitRow.length)) {
+ LOG.debug("Startkey and midkey are same, not splitting");
+ return null;
+ }
+ if (this.comparator.matchingRows(splitRow, 0, splitRow.length,
+ endKey, 0, endKey.length)) {
+ LOG.debug("Endkey and midkey are same, not splitting");
+ return null;
+ }
+ LOG.info("Starting split of region " + this);
+ Path splits = new Path(this.regiondir, SPLITDIR);
+ if(!this.fs.exists(splits)) {
+ this.fs.mkdirs(splits);
+ }
+ // Calculate regionid to use. Can't be less than that of parent else
+ // it'll insert into wrong location over in .META. table: HBASE-710.
+ long rid = System.currentTimeMillis();
+ if (rid < this.regionInfo.getRegionId()) {
+ LOG.warn("Clock skew; parent regions id is " +
+ this.regionInfo.getRegionId() + " but current time here is " + rid);
+ rid = this.regionInfo.getRegionId() + 1;
+ }
+ HRegionInfo regionAInfo = new HRegionInfo(this.regionInfo.getTableDesc(),
+ startKey, splitRow, false, rid);
+ Path dirA =
+ new Path(splits, Integer.toString(regionAInfo.getEncodedName()));
+ if(fs.exists(dirA)) {
+ throw new IOException("Cannot split; target file collision at " + dirA);
+ }
+ HRegionInfo regionBInfo = new HRegionInfo(this.regionInfo.getTableDesc(),
+ splitRow, endKey, false, rid);
+ Path dirB =
+ new Path(splits, Integer.toString(regionBInfo.getEncodedName()));
+ if(this.fs.exists(dirB)) {
+ throw new IOException("Cannot split; target file collision at " + dirB);
+ }
+
+ // Now close the HRegion. Close returns all store files or null if not
+ // supposed to close (? What to do in this case? Implement abort of close?)
+ // Close also does wait on outstanding rows and calls a flush just-in-case.
+ List<StoreFile> hstoreFilesToSplit = close(false);
+ if (hstoreFilesToSplit == null) {
+ LOG.warn("Close came back null (Implement abort of close?)");
+ throw new RuntimeException("close returned empty vector of HStoreFiles");
+ }
+
+ // Split each store file.
+ for(StoreFile h: hstoreFilesToSplit) {
+ StoreFile.split(fs,
+ Store.getStoreHomedir(splits, regionAInfo.getEncodedName(),
+ h.getFamily()),
+ h, splitRow, Range.bottom);
+ StoreFile.split(fs,
+ Store.getStoreHomedir(splits, regionBInfo.getEncodedName(),
+ h.getFamily()),
+ h, splitRow, Range.top);
+ }
+
+ // Done!
+ // Opening the region copies the splits files from the splits directory
+ // under each region.
+ HRegion regionA = new HRegion(basedir, log, fs, conf, regionAInfo, null);
+ regionA.initialize(dirA, null);
+ regionA.close();
+ HRegion regionB = new HRegion(basedir, log, fs, conf, regionBInfo, null);
+ regionB.initialize(dirB, null);
+ regionB.close();
+
+ // Cleanup
+ boolean deleted = fs.delete(splits, true); // Get rid of splits directory
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Cleaned up " + FSUtils.getPath(splits) + " " + deleted);
+ }
+ HRegion regions[] = new HRegion [] {regionA, regionB};
+ this.historian.addRegionSplit(this.regionInfo,
+ regionA.getRegionInfo(), regionB.getRegionInfo());
+ return regions;
+ }
+ }
+
+ /*
+ * @param dir
+ * @return compaction directory for the passed in <code>dir</code>
+ */
+ static Path getCompactionDir(final Path dir) {
+ return new Path(dir, HREGION_COMPACTIONDIR_NAME);
+ }
+
+ /*
+ * Do preparation for pending compaction.
+ * Clean out any vestiges of previous failed compactions.
+ * @throws IOException
+ */
+ private void doRegionCompactionPrep() throws IOException {
+ doRegionCompactionCleanup();
+ }
+
+ /*
+ * Removes the compaction directory for this Store.
+ * @throws IOException
+ */
+ private void doRegionCompactionCleanup() throws IOException {
+ FSUtils.deleteDirectory(this.fs, this.regionCompactionDir);
+ }
+
+ void setForceMajorCompaction(final boolean b) {
+ this.forceMajorCompaction = b;
+ }
+
+ boolean getForceMajorCompaction() {
+ return this.forceMajorCompaction;
+ }
+
+ /**
+ * Called by compaction thread and after region is opened to compact the
+ * HStores if necessary.
+ *
+ * <p>This operation could block for a long time, so don't call it from a
+ * time-sensitive thread.
+ *
+ * Note that no locking is necessary at this level because compaction only
+ * conflicts with a region split, and that cannot happen because the region
+ * server does them sequentially and not in parallel.
+ *
+ * @return mid key if split is needed
+ * @throws IOException
+ */
+ public byte [] compactStores() throws IOException {
+ boolean majorCompaction = this.forceMajorCompaction;
+ this.forceMajorCompaction = false;
+ return compactStores(majorCompaction);
+ }
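+
+ /*
+ * A minimal sketch of forcing a major compaction through the methods above:
+ *
+ * region.setForceMajorCompaction(true);
+ * byte [] splitRow = region.compactStores();
+ * if (splitRow != null) {
+ * // Region has grown large enough that a split at splitRow is advised.
+ * }
+ */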
+
+ /*
+ * Called by compaction thread and after region is opened to compact the
+ * HStores if necessary.
+ *
+ * <p>This operation could block for a long time, so don't call it from a
+ * time-sensitive thread.
+ *
+ * Note that no locking is necessary at this level because compaction only
+ * conflicts with a region split, and that cannot happen because the region
+ * server does them sequentially and not in parallel.
+ *
+ * @param majorCompaction True to force a major compaction regardless of thresholds
+ * @return split row if split is needed
+ * @throws IOException
+ */
+ byte [] compactStores(final boolean majorCompaction)
+ throws IOException {
+ splitsAndClosesLock.readLock().lock();
+ try {
+ byte [] splitRow = null;
+ if (this.closed.get()) {
+ return splitRow;
+ }
+ try {
+ synchronized (writestate) {
+ if (!writestate.compacting && writestate.writesEnabled) {
+ writestate.compacting = true;
+ } else {
+ LOG.info("NOT compacting region " + this +
+ ": compacting=" + writestate.compacting + ", writesEnabled=" +
+ writestate.writesEnabled);
+ return splitRow;
+ }
+ }
+ LOG.info("Starting" + (majorCompaction? " major " : " ") +
+ "compaction on region " + this);
+ long startTime = System.currentTimeMillis();
+ doRegionCompactionPrep();
+ long maxSize = -1;
+ for (Store store: stores.values()) {
+ final Store.StoreSize ss = store.compact(majorCompaction);
+ if (ss != null && ss.getSize() > maxSize) {
+ maxSize = ss.getSize();
+ splitRow = ss.getSplitRow();
+ }
+ }
+ doRegionCompactionCleanup();
+ String timeTaken = StringUtils.formatTimeDiff(System.currentTimeMillis(),
+ startTime);
+ LOG.info("compaction completed on region " + this + " in " + timeTaken);
+ this.historian.addRegionCompaction(regionInfo, timeTaken);
+ } finally {
+ synchronized (writestate) {
+ writestate.compacting = false;
+ writestate.notifyAll();
+ }
+ }
+ return splitRow;
+ } finally {
+ splitsAndClosesLock.readLock().unlock();
+ }
+ }
+
+ /**
+ * Flush the cache.
+ *
+ * When this method is called the cache will be flushed unless:
+ * <ol>
+ * <li>the cache is empty</li>
+ * <li>the region is closed.</li>
+ * <li>a flush is already in progress</li>
+ * <li>writes are disabled</li>
+ * </ol>
+ *
+ * <p>This method may block for some time, so it should not be called from a
+ * time-sensitive thread.
+ *
+ * @return true if cache was flushed
+ *
+ * @throws IOException
+ * @throws DroppedSnapshotException Thrown when replay of hlog is required
+ * because a Snapshot was not properly persisted.
+ */
+ public boolean flushcache() throws IOException {
+ if (this.closed.get()) {
+ return false;
+ }
+ synchronized (writestate) {
+ if (!writestate.flushing && writestate.writesEnabled) {
+ this.writestate.flushing = true;
+ } else {
+ if(LOG.isDebugEnabled()) {
+ LOG.debug("NOT flushing memcache for region " + this +
+ ", flushing=" +
+ writestate.flushing + ", writesEnabled=" +
+ writestate.writesEnabled);
+ }
+ return false;
+ }
+ }
+ try {
+ // Prevent splits and closes
+ splitsAndClosesLock.readLock().lock();
+ try {
+ return internalFlushcache();
+ } finally {
+ splitsAndClosesLock.readLock().unlock();
+ }
+ } finally {
+ synchronized (writestate) {
+ writestate.flushing = false;
+ this.writestate.flushRequested = false;
+ writestate.notifyAll();
+ }
+ }
+ }
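+
+ /*
+ * A minimal usage sketch; a true return value means the memcache was
+ * actually flushed to store files:
+ *
+ * if (region.flushcache()) {
+ * LOG.debug("Flushed region " + region);
+ * }
+ */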
+
+ /**
+ * Flushing the cache is a little tricky. We have a lot of updates in the
+ * HMemcache, all of which have also been written to the log. We need to
+ * write those updates in the HMemcache out to disk, while being able to
+ * process reads/writes as much as possible during the flush operation. Also,
+ * the log has to state clearly the point in time at which the HMemcache was
+ * flushed. (That way, during recovery, we know when we can rely on the
+ * on-disk flushed structures and when we have to recover the HMemcache from
+ * the log.)
+ *
+ * <p>So, we have a three-step process:
+ *
+ * <ul><li>A. Flush the memcache to the on-disk stores, noting the current
+ * sequence ID for the log.</li>
+ *
+ * <li>B. Write a FLUSHCACHE-COMPLETE message to the log, using the sequence
+ * ID that was current at the time of memcache-flush.</li>
+ *
+ * <li>C. Get rid of the memcache structures that are now redundant, as
+ * they've been flushed to the on-disk HStores.</li>
+ * </ul>
+ * <p>This method is protected, but can be accessed via several public
+ * routes.
+ *
+ * <p> This method may block for some time.
+ *
+ * @return true if the region needs compacting
+ *
+ * @throws IOException
+ * @throws DroppedSnapshotException Thrown when replay of hlog is required
+ * because a Snapshot was not properly persisted.
+ */
+ private boolean internalFlushcache() throws IOException {
+ final long startTime = System.currentTimeMillis();
+ // Clear flush flag.
+ // Record latest flush time
+ this.lastFlushTime = startTime;
+ // If nothing to flush, return and avoid logging start/stop flush.
+ if (this.memcacheSize.get() <= 0) {
+ return false;
+ }
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Started memcache flush for region " + this +
+ ". Current region memcache size " +
+ StringUtils.humanReadableInt(this.memcacheSize.get()));
+ }
+
+ // Stop updates while we snapshot the memcache of all stores. We only have
+ // to do this for a moment. It's quick. The subsequent sequence id that
+ // goes into the HLog after we've flushed all these snapshots also goes
+ // into the info file that sits beside the flushed files.
+ // Once the flush succeeds we subtract the snapshotted amount from the
+ // memcache size, so the remaining value represents the updates received
+ // during the flush.
+ long sequenceId = -1L;
+ long completeSequenceId = -1L;
+ this.updatesLock.writeLock().lock();
+ // Get current size of memcaches.
+ final long currentMemcacheSize = this.memcacheSize.get();
+ try {
+ for (Store s: stores.values()) {
+ s.snapshot();
+ }
+ sequenceId = log.startCacheFlush();
+ completeSequenceId = this.getCompleteCacheFlushSequenceId(sequenceId);
+ } finally {
+ this.updatesLock.writeLock().unlock();
+ }
+
+ // Any failure from here on out will be catastrophic, requiring a server
+ // restart so hlog content can be replayed and put back into the memcache.
+ // Otherwise, the snapshot content, while backed up in the hlog, will not
+ // be part of the current running server's state.
+ boolean compactionRequested = false;
+ try {
+ // A. Flush memcache to all the HStores.
+ // Keep running vector of all store files that includes both old and the
+ // just-made new flush store file.
+ for (Store hstore: stores.values()) {
+ boolean needsCompaction = hstore.flushCache(completeSequenceId);
+ if (needsCompaction) {
+ compactionRequested = true;
+ }
+ }
+ // Set down the memcache size by amount of flush.
+ this.memcacheSize.addAndGet(-currentMemcacheSize);
+ } catch (Throwable t) {
+ // An exception here means that the snapshot was not persisted.
+ // The hlog needs to be replayed so its content is restored to memcache.
+ // Currently, only a server restart will do this.
+ // We used to only catch IOEs but it's possible that we'd get other
+ // exceptions -- e.g. HBASE-659 was about an NPE -- so now we catch
+ // all and sundry.
+ this.log.abortCacheFlush();
+ DroppedSnapshotException dse = new DroppedSnapshotException("region: " +
+ Bytes.toString(getRegionName()));
+ dse.initCause(t);
+ throw dse;
+ }
+
+ // If we get to here, the HStores have been written. If we get an
+ // error in completeCacheFlush it will release the lock it is holding
+
+ // B. Write a FLUSHCACHE-COMPLETE message to the log.
+ // This tells future readers that the HStores were emitted correctly,
+ // and that all updates to the log for this regionName that have lower
+ // log-sequence-ids can be safely ignored.
+ this.log.completeCacheFlush(getRegionName(),
+ regionInfo.getTableDesc().getName(), completeSequenceId);
+
+ // C. Finally notify anyone waiting on memcache to clear:
+ // e.g. checkResources().
+ synchronized (this) {
+ notifyAll();
+ }
+
+ if (LOG.isDebugEnabled()) {
+ long now = System.currentTimeMillis();
+ String timeTaken = StringUtils.formatTimeDiff(now, startTime);
+ LOG.debug("Finished memcache flush of ~" +
+ StringUtils.humanReadableInt(currentMemcacheSize) + " for region " +
+ this + " in " + (now - startTime) + "ms, sequence id=" + sequenceId +
+ ", compaction requested=" + compactionRequested);
+ if (!regionInfo.isMetaRegion()) {
+ this.historian.addRegionFlush(regionInfo, timeTaken);
+ }
+ }
+ return compactionRequested;
+ }
+
+ /**
+ * Get the sequence number to be associated with this cache flush. Used by
+ * TransactionalRegion to avoid completing pending transactions.
+ *
+ * @param currentSequenceId the current log sequence id
+ * @return sequence id to complete the cache flush with
+ */
+ protected long getCompleteCacheFlushSequenceId(long currentSequenceId) {
+ return currentSequenceId;
+ }
+
+ //////////////////////////////////////////////////////////////////////////////
+ // get() methods for client use.
+ //////////////////////////////////////////////////////////////////////////////
+
+ /**
+ * Fetch multiple versions of a single data item, with timestamp.
+ *
+ * @param row row key
+ * @param column column name
+ * @param ts timestamp, or -1 for the latest timestamp
+ * @param nv number of versions to fetch, or -1 for the default of one
+ * @return Results or null if none.
+ * @throws IOException
+ */
+ public List<KeyValue> get(final byte[] row, final byte[] column, final long ts,
+ final int nv)
+ throws IOException {
+ long timestamp = ts == -1? HConstants.LATEST_TIMESTAMP : ts;
+ int numVersions = nv == -1? 1 : nv;
+ splitsAndClosesLock.readLock().lock();
+ try {
+ if (this.closed.get()) {
+ throw new IOException("Region " + this + " closed");
+ }
+ // Make sure this is a valid row and valid column
+ checkRow(row);
+ checkColumn(column);
+ // Don't need a row lock for a simple get
+ List<KeyValue> result = getStore(column).
+ get(KeyValue.createFirstOnRow(row, column, timestamp), numVersions);
+ // Guarantee that we return null instead of a zero-length array,
+ // if there are no results to return.
+ return (result == null || result.isEmpty())? null : result;
+ } finally {
+ splitsAndClosesLock.readLock().unlock();
+ }
+ }
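+
+ /*
+ * A minimal usage sketch (the column name is illustrative); -1 for the
+ * timestamp and number of versions selects the defaults described above:
+ *
+ * List<KeyValue> result = region.get(row, Bytes.toBytes("info:server"), -1, -1);
+ * if (result != null) {
+ * byte [] value = result.get(0).getValue();
+ * }
+ */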
+
+ /**
+ * Simple mutable counter, used so we do not have to create a new Integer
+ * every time we want to bump the count. Initializes at count 1.
+ */
+ static class Counter {
+ int counter = 1;
+ }
+
+ /*
+ * Check to see if we've not gone over threshold for this particular
+ * column.
+ * @param kv
+ * @param versions
+ * @param versionsCount
+ * @return True if its ok to add current value.
+ */
+ static boolean okToAddResult(final KeyValue kv, final int versions,
+ final Map<KeyValue, HRegion.Counter> versionsCount) {
+ if (versionsCount == null) {
+ return true;
+ }
+ if (versionsCount.containsKey(kv)) {
+ if (versionsCount.get(kv).counter < versions) {
+ return true;
+ }
+ } else {
+ return true;
+ }
+ return false;
+ }
+
+ /*
+ * Add a found item to the list of results, updating the version count.
+ * @param kv
+ * @param versionsCount
+ * @param results
+ */
+ static void addResult(final KeyValue kv,
+ final Map<KeyValue, HRegion.Counter> versionsCount,
+ final List<KeyValue> results) {
+ // Don't add if already present; i.e. ignore second entry.
+ if (results.contains(kv)) return;
+ results.add(kv);
+ if (versionsCount == null) {
+ return;
+ }
+ if (!versionsCount.containsKey(kv)) {
+ versionsCount.put(kv, new HRegion.Counter());
+ } else {
+ versionsCount.get(kv).counter++;
+ }
+ }
+
+ /*
+ * @param versions Number of versions to get.
+ * @param versionsCount May be null.
+ * @param columns Columns we want to fetch.
+ * @return True if has enough versions.
+ */
+ static boolean hasEnoughVersions(final int versions,
+ final Map<KeyValue, HRegion.Counter> versionsCount,
+ final Set<byte []> columns) {
+ if (columns == null || versionsCount == null) {
+ // Wants all columns so just keep going
+ return false;
+ }
+ if (columns.size() > versionsCount.size()) {
+ return false;
+ }
+ if (versions == 1) {
+ return true;
+ }
+ // Need to look at each to make sure at least versions.
+ for (Map.Entry<KeyValue, HRegion.Counter> e: versionsCount.entrySet()) {
+ if (e.getValue().counter < versions) {
+ return false;
+ }
+ }
+ return true;
+ }
+
+ /**
+ * Fetch all the columns for the indicated row at a specified timestamp.
+ * Returns an HbaseMapWritable that maps column names to values.
+ *
+ * We should eventually use Bloom filters here, to reduce running time. If
+ * the database has many column families and is very sparse, then we could be
+ * checking many files needlessly. A small Bloom for each row would help us
+ * determine which column groups are useful for that row. That would let us
+ * avoid a bunch of disk activity.
+ *
+ * @param row
+ * @param columns Set of columns you'd like to retrieve. When null, get all.
+ * @param ts
+ * @param numVersions number of versions to retrieve
+ * @param lockid
+ * @return HbaseMapWritable<columnName, Cell> values
+ * @throws IOException
+ */
+ public HbaseMapWritable<byte [], Cell> getFull(final byte [] row,
+ final NavigableSet<byte []> columns, final long ts,
+ final int numVersions, final Integer lockid)
+ throws IOException {
+ // Check columns passed
+ if (columns != null) {
+ for (byte [] column: columns) {
+ checkColumn(column);
+ }
+ }
+ List<KeyValue> keyvalues = new ArrayList<KeyValue>();
+ Map<KeyValue, Counter> versionCounter =
+ new TreeMap<KeyValue, Counter>(this.comparatorIgnoreTimestamp);
+ Integer lid = getLock(lockid,row);
+ HashSet<Store> storeSet = new HashSet<Store>();
+ try {
+ // Get the concerned columns or all of them
+ if (columns != null) {
+ for (byte[] bs : columns) {
+ Store store = stores.get(bs);
+ if (store != null) {
+ storeSet.add(store);
+ }
+ }
+ } else {
+ storeSet.addAll(stores.values());
+ }
+ long timestamp =
+ (ts == HConstants.LATEST_TIMESTAMP)? System.currentTimeMillis(): ts;
+ KeyValue key = KeyValue.createFirstOnRow(row, timestamp);
+ // For each column name that is just a column family, open the store
+ // related to it and fetch everything for that row. HBASE-631
+ // Also remove each store from storeSet so that these stores
+ // won't be opened for no reason. HBASE-783
+ if (columns != null) {
+ for (byte [] bs : columns) {
+ // TODO: Fix so we use comparator in KeyValue that looks at
+ // column family portion only.
+ if (KeyValue.getFamilyDelimiterIndex(bs, 0, bs.length) == (bs.length - 1)) {
+ Store store = stores.get(bs);
+ store.getFull(key, null, null, numVersions, versionCounter,
+ keyvalues, timestamp);
+ storeSet.remove(store);
+ }
+ }
+ }
+ for (Store targetStore: storeSet) {
+ targetStore.getFull(key, columns, null, numVersions, versionCounter,
+ keyvalues, timestamp);
+ }
+
+ return Cell.createCells(keyvalues);
+ } finally {
+ if(lockid == null) releaseRowLock(lid);
+ }
+ }
+
+ /**
+ * Return all the data for the row that matches <i>row</i> exactly,
+ * or the one that immediately precedes it, at or immediately before
+ * <i>ts</i>.
+ *
+ * @param row row key
+ * @return map of values
+ * @throws IOException
+ */
+ RowResult getClosestRowBefore(final byte [] row)
+ throws IOException{
+ return getClosestRowBefore(row, HConstants.COLUMN_FAMILY);
+ }
+
+ /**
+ * Return all the data for the row that matches <i>row</i> exactly,
+ * or the one that immediately precedes it, at or immediately before
+ * <i>ts</i>.
+ *
+ * @param row row key
+ * @param columnFamily Must include the column family delimiter character.
+ * @return map of values
+ * @throws IOException
+ */
+ public RowResult getClosestRowBefore(final byte [] row,
+ final byte [] columnFamily)
+ throws IOException{
+ // look across all the HStores for this region and determine what the
+ // closest key is across all column families, since the data may be sparse
+ KeyValue key = null;
+ checkRow(row);
+ splitsAndClosesLock.readLock().lock();
+ try {
+ Store store = getStore(columnFamily);
+ KeyValue kv = new KeyValue(row, HConstants.LATEST_TIMESTAMP);
+ // get the closest key. (HStore.getRowKeyAtOrBefore can return null)
+ key = store.getRowKeyAtOrBefore(kv);
+ if (key == null) {
+ return null;
+ }
+ List<KeyValue> results = new ArrayList<KeyValue>();
+ // This will get all results for this store. TODO: Do I have to make a
+ // new key?
+ if (!this.comparator.matchingRows(kv, key)) {
+ kv = new KeyValue(key.getRow(), HConstants.LATEST_TIMESTAMP);
+ }
+ store.getFull(kv, null, null, 1, null, results, System.currentTimeMillis());
+ // Convert to RowResult. TODO: Remove need to do this.
+ return RowResult.createRowResult(results);
+ } finally {
+ splitsAndClosesLock.readLock().unlock();
+ }
+ }
+
+ /**
+ * Return an iterator that scans over the HRegion, returning the indicated
+ * columns for only the rows that match the data filter. This Iterator must
+ * be closed by the caller.
+ *
+ * @param cols columns to scan. If column name is a column family, all
+ * columns of the specified column family are returned. It's also possible
+ * to pass a regex in the column qualifier. A column qualifier is judged to
+ * be a regex if it contains at least one of the following characters:
+ * <code>\+|^&*$[]]}{)(</code>.
+ * @param firstRow row which is the starting point of the scan
+ * @param timestamp only return rows whose timestamp is <= this value
+ * @param filter row filter
+ * @return InternalScanner
+ * @throws IOException
+ */
+ public InternalScanner getScanner(byte[][] cols, byte [] firstRow,
+ long timestamp, RowFilterInterface filter)
+ throws IOException {
+ newScannerLock.readLock().lock();
+ try {
+ if (this.closed.get()) {
+ throw new IOException("Region " + this + " closed");
+ }
+ HashSet<Store> storeSet = new HashSet<Store>();
+ NavigableSet<byte []> columns =
+ new TreeSet<byte []>(Bytes.BYTES_COMPARATOR);
+ // Below we make up set of stores we want scanners on and we fill out the
+ // list of columns.
+ for (int i = 0; i < cols.length; i++) {
+ columns.add(cols[i]);
+ Store s = stores.get(cols[i]);
+ if (s != null) {
+ storeSet.add(s);
+ }
+ }
+ return new HScanner(columns, firstRow, timestamp,
+ storeSet.toArray(new Store [storeSet.size()]), filter);
+ } finally {
+ newScannerLock.readLock().unlock();
+ }
+ }
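+
+ /*
+ * A minimal scanner sketch (the column family name is illustrative, and the
+ * next(List) idiom is assumed); the caller is responsible for closing the
+ * scanner:
+ *
+ * byte [][] cols = new byte [][] {Bytes.toBytes("info:")};
+ * InternalScanner s = region.getScanner(cols, HConstants.EMPTY_START_ROW,
+ * HConstants.LATEST_TIMESTAMP, null);
+ * try {
+ * List<KeyValue> values = new ArrayList<KeyValue>();
+ * while (s.next(values)) {
+ * // process values, then clear for the next row
+ * values.clear();
+ * }
+ * } finally {
+ * s.close();
+ * }
+ */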
+
+ //////////////////////////////////////////////////////////////////////////////
+ // set() methods for client use.
+ //////////////////////////////////////////////////////////////////////////////
+
+ /**
+ * @param b
+ * @throws IOException
+ */
+ public void batchUpdate(BatchUpdate b) throws IOException {
+ this.batchUpdate(b, null, true);
+ }
+
+ /**
+ * @param b
+ * @param writeToWAL
+ * @throws IOException
+ */
+ public void batchUpdate(BatchUpdate b, boolean writeToWAL) throws IOException {
+ this.batchUpdate(b, null, writeToWAL);
+ }
+
+
+ /**
+ * @param b
+ * @param lockid
+ * @throws IOException
+ */
+ public void batchUpdate(BatchUpdate b, Integer lockid) throws IOException {
+ this.batchUpdate(b, lockid, true);
+ }
+
+ /**
+ * @param b
+ * @param lockid
+ * @param writeToWAL if true, then we write this update to the log
+ * @throws IOException
+ */
+ public void batchUpdate(BatchUpdate b, Integer lockid, boolean writeToWAL)
+ throws IOException {
+ checkReadOnly();
+ validateValuesLength(b);
+
+ // Do a rough check that we have resources to accept a write. The check is
+ // 'rough' in that between the resource check and the call to obtain a
+ // read lock, resources may run out. For now, the thought is that this
+ // will be extremely rare; we'll deal with it when it happens.
+ checkResources();
+ splitsAndClosesLock.readLock().lock();
+ try {
+ // We obtain a per-row lock, so other clients will block while one client
+ // performs an update. The read lock is released by the client calling
+ // #commit or #abort or if the HRegionServer lease on the lock expires.
+ // See HRegionServer#RegionListener for how the expire on HRegionServer
+ // invokes a HRegion#abort.
+ byte [] row = b.getRow();
+ // If we did not pass an existing row lock, obtain a new one
+ Integer lid = getLock(lockid, row);
+ long now = System.currentTimeMillis();
+ long commitTime = b.getTimestamp() == LATEST_TIMESTAMP?
+ now: b.getTimestamp();
+ Set<byte []> latestTimestampDeletes = null;
+ List<KeyValue> edits = new ArrayList<KeyValue>();
+ try {
+ for (BatchOperation op: b) {
+ byte [] column = op.getColumn();
+ checkColumn(column);
+ KeyValue kv = null;
+ if (op.isPut()) {
+ kv = new KeyValue(row, column, commitTime, op.getValue());
+ } else {
+ // Its a delete.
+ if (b.getTimestamp() == LATEST_TIMESTAMP) {
+ // Save off these deletes of the most recent thing added on the
+ // family.
+ if (latestTimestampDeletes == null) {
+ latestTimestampDeletes =
+ new TreeSet<byte []>(Bytes.BYTES_RAWCOMPARATOR);
+ }
+ latestTimestampDeletes.add(op.getColumn());
+ continue;
+ }
+ // Its an explicit timestamp delete
+ kv = new KeyValue(row, column, commitTime, KeyValue.Type.Delete,
+ HConstants.EMPTY_BYTE_ARRAY);
+ }
+ edits.add(kv);
+ }
+ if (!edits.isEmpty()) {
+ update(edits, writeToWAL);
+ }
+ if (latestTimestampDeletes != null &&
+ !latestTimestampDeletes.isEmpty()) {
+ // We have some LATEST_TIMESTAMP deletes to run. Can't do them inline
+ // as edits. Need to do individually after figuring which is latest
+ // timestamp to delete.
+ for (byte [] column: latestTimestampDeletes) {
+ deleteMultiple(row, column, LATEST_TIMESTAMP, 1);
+ }
+ }
+ } finally {
+ if(lockid == null) releaseRowLock(lid);
+ }
+ } finally {
+ splitsAndClosesLock.readLock().unlock();
+ }
+ }
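+
+ /*
+ * A minimal sketch of building and committing an update (the column and
+ * value are illustrative); BatchUpdate#put and #delete queue the operations
+ * that the loop above turns into KeyValue edits:
+ *
+ * BatchUpdate bu = new BatchUpdate(row);
+ * bu.put(Bytes.toBytes("info:server"), Bytes.toBytes("host:60020"));
+ * bu.delete(Bytes.toBytes("info:splitA"));
+ * region.batchUpdate(bu);
+ */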
+
+ /**
+ * Performs an atomic check and save operation. Checks if
+ * the specified expected values have changed, and if not
+ * applies the update.
+ *
+ * @param b the update to apply
+ * @param expectedValues the expected values to check
+ * @param lockid
+ * @param writeToWAL whether or not to write to the write ahead log
+ * @return true if update was applied
+ * @throws IOException
+ */
+ public boolean checkAndSave(BatchUpdate b,
+ HbaseMapWritable<byte[], byte[]> expectedValues, Integer lockid,
+ boolean writeToWAL)
+ throws IOException {
+ // This is basically a copy of batchUpdate with the atomic check and save
+ // added in, so read this method alongside batchUpdate. The areas that
+ // differ are commented; where nothing is commented, refer to the comments
+ // in the batchUpdate method.
+ boolean success = true;
+ checkReadOnly();
+ validateValuesLength(b);
+ checkResources();
+ splitsAndClosesLock.readLock().lock();
+ try {
+ byte[] row = b.getRow();
+ Integer lid = getLock(lockid,row);
+ try {
+ NavigableSet<byte []> keySet =
+ new TreeSet<byte []>(Bytes.BYTES_COMPARATOR);
+ keySet.addAll(expectedValues.keySet());
+ Map<byte[],Cell> actualValues = getFull(row, keySet,
+ HConstants.LATEST_TIMESTAMP, 1,lid);
+ for (byte[] key : keySet) {
+ // If test fails exit
+ if(!Bytes.equals(actualValues.get(key).getValue(),
+ expectedValues.get(key))) {
+ success = false;
+ break;
+ }
+ }
+ if (success) {
+ long commitTime = (b.getTimestamp() == LATEST_TIMESTAMP)?
+ System.currentTimeMillis(): b.getTimestamp();
+ Set<byte []> latestTimestampDeletes = null;
+ List<KeyValue> edits = new ArrayList<KeyValue>();
+ for (BatchOperation op: b) {
+ byte [] column = op.getColumn();
+ KeyValue kv = null;
+ if (op.isPut()) {
+ kv = new KeyValue(row, column, commitTime, op.getValue());
+ } else {
+ // It's a delete.
+ if (b.getTimestamp() == LATEST_TIMESTAMP) {
+ // Save off these deletes of the most recent thing added on
+ // the family.
+ if (latestTimestampDeletes == null) {
+ latestTimestampDeletes =
+ new TreeSet<byte []>(Bytes.BYTES_RAWCOMPARATOR);
+ }
+ latestTimestampDeletes.add(op.getColumn());
+ } else {
+ // It's an explicit timestamp delete
+ kv = new KeyValue(row, column, commitTime,
+ KeyValue.Type.Delete, HConstants.EMPTY_BYTE_ARRAY);
+ }
+ }
+ // Unlike batchUpdate, kv is null for a LATEST_TIMESTAMP delete here, so
+ // guard against adding a null edit.
+ if (kv != null) {
+ edits.add(kv);
+ }
+ }
+ if (!edits.isEmpty()) {
+ update(edits, writeToWAL);
+ }
+ if (latestTimestampDeletes != null &&
+ !latestTimestampDeletes.isEmpty()) {
+ // We have some LATEST_TIMESTAMP deletes to run. Can't do them inline
+ // as edits. Need to do individually after figuring which is latest
+ // timestamp to delete.
+ for (byte [] column: latestTimestampDeletes) {
+ deleteMultiple(row, column, LATEST_TIMESTAMP, 1);
+ }
+ }
+ }
+ } finally {
+ if(lockid == null) releaseRowLock(lid);
+ }
+ } finally {
+ splitsAndClosesLock.readLock().unlock();
+ }
+ return success;
+ }
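+
+ // A minimal checkAndSave sketch. The 'region' instance, row and the
+ // expected/updated values are hypothetical illustrations, not part of
+ // this patch:
+ //
+ //   HbaseMapWritable<byte[], byte[]> expected =
+ //     new HbaseMapWritable<byte[], byte[]>();
+ //   expected.put(Bytes.toBytes("info:state"), Bytes.toBytes("old"));
+ //   BatchUpdate b = new BatchUpdate(Bytes.toBytes("row1"));
+ //   b.put(Bytes.toBytes("info:state"), Bytes.toBytes("new"));
+ //   // Applied only if info:state currently equals "old"
+ //   boolean applied = region.checkAndSave(b, expected, null, true);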
+
+ /*
+ * Utility method to verify values length
+ * @param batchUpdate The update to verify
+ * @throws IOException Thrown if a value is too long
+ */
+ private void validateValuesLength(BatchUpdate batchUpdate)
+ throws IOException {
+ for (Iterator<BatchOperation> iter =
+ batchUpdate.iterator(); iter.hasNext();) {
+ BatchOperation operation = iter.next();
+ if (operation.getValue() != null) {
+ HColumnDescriptor fam = this.regionInfo.getTableDesc().
+ getFamily(operation.getColumn());
+ if (fam != null) {
+ int maxLength = fam.getMaxValueLength();
+ if (operation.getValue().length > maxLength) {
+ throw new ValueOverMaxLengthException("Value in column "
+ + Bytes.toString(operation.getColumn()) + " is too long. "
+ + operation.getValue().length + " instead of " + maxLength);
+ }
+ }
+ }
+ }
+ }
+
+ /*
+ * Check whether we have the resources to support an update.
+ *
+ * Here we synchronize on HRegion, a broad-scoped lock. It's appropriate
+ * given that we're figuring out here whether this region is able to take on
+ * writes. At the time of writing, the only synchronizations are this one and
+ * the synchronize on 'this' inside internalFlushCache that sends the notify.
+ */
+ private void checkResources() {
+ boolean blocked = false;
+ while (this.memcacheSize.get() > this.blockingMemcacheSize) {
+ requestFlush();
+ if (!blocked) {
+ LOG.info("Blocking updates for '" + Thread.currentThread().getName() +
+ "' on region " + Bytes.toString(getRegionName()) +
+ ": Memcache size " +
+ StringUtils.humanReadableInt(this.memcacheSize.get()) +
+ " is >= than blocking " +
+ StringUtils.humanReadableInt(this.blockingMemcacheSize) + " size");
+ }
+ blocked = true;
+ synchronized(this) {
+ try {
+ wait(threadWakeFrequency);
+ } catch (InterruptedException e) {
+ // continue;
+ }
+ }
+ }
+ if (blocked) {
+ LOG.info("Unblocking updates for region " + this + " '"
+ + Thread.currentThread().getName() + "'");
+ }
+ }
+
+ /**
+ * Delete all cells of the same age as the passed timestamp or older.
+ * @param row
+ * @param column
+ * @param ts Delete all entries that have this timestamp or older
+ * @param lockid Row lock
+ * @throws IOException
+ */
+ public void deleteAll(final byte [] row, final byte [] column, final long ts,
+ final Integer lockid)
+ throws IOException {
+ checkColumn(column);
+ checkReadOnly();
+ Integer lid = getLock(lockid,row);
+ try {
+ // Delete ALL versions rather than only the column family's VERSIONS. If we
+ // just deleted VERSIONS cells and there were more than VERSIONS cells,
+ // subsequent gets would return old stuff.
+ deleteMultiple(row, column, ts, ALL_VERSIONS);
+ } finally {
+ if(lockid == null) releaseRowLock(lid);
+ }
+ }
+
+ /**
+ * Delete all cells of the same age as the passed timestamp or older.
+ * @param row
+ * @param ts Delete all entries that have this timestamp or older
+ * @param lockid Row lock
+ * @throws IOException
+ */
+ public void deleteAll(final byte [] row, final long ts, final Integer lockid)
+ throws IOException {
+ checkReadOnly();
+ Integer lid = getLock(lockid, row);
+ long time = ts;
+ if (ts == HConstants.LATEST_TIMESTAMP) {
+ time = System.currentTimeMillis();
+ }
+ KeyValue kv = KeyValue.createFirstOnRow(row, time);
+ try {
+ for (Store store : stores.values()) {
+ List<KeyValue> keyvalues = new ArrayList<KeyValue>();
+ store.getFull(kv, null, null, ALL_VERSIONS, null, keyvalues, time);
+ List<KeyValue> edits = new ArrayList<KeyValue>();
+ for (KeyValue key: keyvalues) {
+ // This is UGLY. COPY OF KEY PART OF KeyValue.
+ edits.add(key.cloneDelete());
+ }
+ update(edits);
+ }
+ } finally {
+ if (lockid == null) releaseRowLock(lid);
+ }
+ }
+
+ /**
+ * Delete all cells for a row with matching columns with timestamps
+ * less than or equal to <i>timestamp</i>.
+ *
+ * @param row The row to operate on
+ * @param columnRegex The column regex
+ * @param timestamp Timestamp to match
+ * @param lockid Row lock
+ * @throws IOException
+ */
+ public void deleteAllByRegex(final byte [] row, final String columnRegex,
+ final long timestamp, final Integer lockid) throws IOException {
+ checkReadOnly();
+ Pattern columnPattern = Pattern.compile(columnRegex);
+ Integer lid = getLock(lockid, row);
+ long now = System.currentTimeMillis();
+ KeyValue kv = new KeyValue(row, timestamp);
+ try {
+ for (Store store : stores.values()) {
+ List<KeyValue> keyvalues = new ArrayList<KeyValue>();
+ store.getFull(kv, null, columnPattern, ALL_VERSIONS, null, keyvalues,
+ now);
+ List<KeyValue> edits = new ArrayList<KeyValue>();
+ for (KeyValue key: keyvalues) {
+ edits.add(key.cloneDelete());
+ }
+ update(edits);
+ }
+ } finally {
+ if(lockid == null) releaseRowLock(lid);
+ }
+ }
+
+ /**
+ * Delete all cells for a row with matching column family with timestamps
+ * less than or equal to <i>timestamp</i>.
+ *
+ * @param row The row to operate on
+ * @param family The column family to match
+ * @param timestamp Timestamp to match
+ * @param lockid Row lock
+ * @throws IOException
+ */
+ public void deleteFamily(byte [] row, byte [] family, long timestamp,
+ final Integer lockid)
+ throws IOException{
+ checkReadOnly();
+ Integer lid = getLock(lockid, row);
+ long now = System.currentTimeMillis();
+ try {
+ // find the HStore for the column family
+ Store store = getStore(family);
+ // find all the keys that match our criteria
+ List<KeyValue> keyvalues = new ArrayList<KeyValue>();
+ store.getFull(new KeyValue(row, timestamp), null, null, ALL_VERSIONS,
+ null, keyvalues, now);
+ // delete all the cells
+ List<KeyValue> edits = new ArrayList<KeyValue>();
+ for (KeyValue kv: keyvalues) {
+ edits.add(kv.cloneDelete());
+ }
+ update(edits);
+ } finally {
+ if(lockid == null) releaseRowLock(lid);
+ }
+ }
+
+ /**
+ * Delete all cells for a row with all the matching column families by
+ * familyRegex with timestamps less than or equal to <i>timestamp</i>.
+ *
+ * @param row The row to operate on
+ * @param familyRegex The column family regex to match. The regex matches
+ * the family name only; it does not include the <code>:</code> delimiter
+ * @param timestamp Timestamp to match
+ * @param lockid Row lock
+ * @throws IOException
+ */
+ public void deleteFamilyByRegex(byte [] row, String familyRegex,
+ final long timestamp, final Integer lockid)
+ throws IOException {
+ checkReadOnly();
+ // construct the family regex pattern
+ Pattern familyPattern = Pattern.compile(familyRegex);
+ Integer lid = getLock(lockid, row);
+ long now = System.currentTimeMillis();
+ KeyValue kv = new KeyValue(row, timestamp);
+ try {
+ for(Store store: stores.values()) {
+ String familyName = Bytes.toString(store.getFamily().getName());
+ // Check that the family name matches the family pattern.
+ if(!(familyPattern.matcher(familyName).matches()))
+ continue;
+
+ List<KeyValue> keyvalues = new ArrayList<KeyValue>();
+ store.getFull(kv, null, null, ALL_VERSIONS, null, keyvalues, now);
+ List<KeyValue> edits = new ArrayList<KeyValue>();
+ for (KeyValue k: keyvalues) {
+ edits.add(k.cloneDelete());
+ }
+ update(edits);
+ }
+ } finally {
+ if(lockid == null) releaseRowLock(lid);
+ }
+ }
+
+ /*
+ * Delete one or many cells.
+ * Used to support {@link #deleteAll(byte [], byte [], long)} and deletion of
+ * latest cell.
+ * @param row
+ * @param column
+ * @param ts Timestamp to start search on.
+ * @param versions How many versions to delete. Pass
+ * {@link HConstants#ALL_VERSIONS} to delete all.
+ * @throws IOException
+ */
+ private void deleteMultiple(final byte [] row, final byte [] column,
+ final long ts, final int versions)
+ throws IOException {
+ checkReadOnly();
+ // We used to have a getKeys method that purportedly only got the keys and
+ // not the keys and values. We now just do getFull. For memcache values,
+ // shouldn't matter if we get key and value since it'll be the entry that
+ // is in memcache. For the keyvalues from a storefile, there could be a
+ // saving if we only returned the key component. TODO.
+ List<KeyValue> keys = get(row, column, ts, versions);
+ if (keys != null && keys.size() > 0) {
+ // I think the below edits don't have to be sorted. They're deletes; they
+ // don't have to go in in exact sorted order (we don't have to worry
+ // about the meta or root sort comparator here).
+ List<KeyValue> edits = new ArrayList<KeyValue>();
+ for (KeyValue key: keys) {
+ edits.add(key.cloneDelete());
+ }
+ update(edits);
+ }
+ }
+
+ /**
+ * Tests for the existence of any cells for a given coordinate.
+ *
+ * @param row the row
+ * @param column the column, or null
+ * @param timestamp the timestamp, or HConstants.LATEST_TIMESTAMP for any
+ * @param lockid the existing lock, or null
+ * @return true if cells exist for the row, false otherwise
+ * @throws IOException
+ */
+ public boolean exists(final byte[] row, final byte[] column,
+ final long timestamp, final Integer lockid)
+ throws IOException {
+ checkRow(row);
+ Integer lid = getLock(lockid, row);
+ try {
+ NavigableSet<byte []> columns = null;
+ if (column != null) {
+ columns = new TreeSet<byte []>(Bytes.BYTES_COMPARATOR);
+ columns.add(column);
+ }
+ return !getFull(row, columns, timestamp, 1, lid).isEmpty();
+ } finally {
+ if (lockid == null) releaseRowLock(lid);
+ }
+ }
+
+ /**
+ * @throws IOException Throws exception if region is in read-only mode.
+ */
+ protected void checkReadOnly() throws IOException {
+ if (this.writestate.isReadOnly()) {
+ throw new IOException("region is read only");
+ }
+ }
+
+ /**
+ * Add updates first to the hlog and then add values to memcache.
+ * Warning: Assumption is caller has lock on passed in row.
+ * @param edits Cell updates by column
+ * @throws IOException
+ */
+ private void update(final List<KeyValue> edits) throws IOException {
+ this.update(edits, true);
+ }
+
+ /**
+ * Add updates first to the hlog (if writeToWAL) and then add values to memcache.
+ * Warning: Assumption is caller has lock on passed in row.
+ * @param edits Cell updates by column
+ * @param writeToWAL if true, then we should write to the log
+ * @throws IOException
+ */
+ private void update(final List<KeyValue> edits, boolean writeToWAL)
+ throws IOException {
+ if (edits == null || edits.isEmpty()) {
+ return;
+ }
+ boolean flush = false;
+ this.updatesLock.readLock().lock();
+ try {
+ if (writeToWAL) {
+ this.log.append(regionInfo.getRegionName(),
+ regionInfo.getTableDesc().getName(), edits,
+ (regionInfo.isMetaRegion() || regionInfo.isRootRegion()));
+ }
+ long size = 0;
+ for (KeyValue kv: edits) {
+ // TODO: Fix -- do I have to do a getColumn here?
+ size = this.memcacheSize.addAndGet(getStore(kv.getColumn()).add(kv));
+ }
+ flush = isFlushSize(size);
+ } finally {
+ this.updatesLock.readLock().unlock();
+ }
+ if (flush) {
+ // Request a cache flush. Do it outside update lock.
+ requestFlush();
+ }
+ }
+
+ private void requestFlush() {
+ if (this.flushListener == null) {
+ return;
+ }
+ synchronized (writestate) {
+ if (this.writestate.isFlushRequested()) {
+ return;
+ }
+ writestate.flushRequested = true;
+ }
+ // Make request outside of synchronize block; HBASE-818.
+ this.flushListener.request(this);
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Flush requested on " + this);
+ }
+ }
+
+ /*
+ * @param size
+ * @return True if size is over the flush threshold
+ */
+ private boolean isFlushSize(final long size) {
+ return size > this.memcacheFlushSize;
+ }
+
+ // Do any reconstruction needed from the log
+ @SuppressWarnings("unused")
+ protected void doReconstructionLog(Path oldLogFile, long minSeqId, long maxSeqId,
+ Progressable reporter)
+ throws UnsupportedEncodingException, IOException {
+ // Nothing to do (Replaying is done in HStores)
+ }
+
+ protected Store instantiateHStore(Path baseDir,
+ HColumnDescriptor c, Path oldLogFile, Progressable reporter)
+ throws IOException {
+ return new Store(baseDir, this.regionInfo, c, this.fs, oldLogFile,
+ this.conf, reporter);
+ }
+
+ /**
+ * Return HStore instance.
+ * Use with caution. Exposed for use of fixup utilities.
+ * @param column Name of column family hosted by this region.
+ * @return Store that goes with the family on passed <code>column</code>.
+ * TODO: Make this lookup faster.
+ */
+ public Store getStore(final byte [] column) {
+ return this.stores.get(column);
+ }
+
+ //////////////////////////////////////////////////////////////////////////////
+ // Support code
+ //////////////////////////////////////////////////////////////////////////////
+
+ /** Make sure this is a valid row for the HRegion */
+ private void checkRow(final byte [] row) throws IOException {
+ if(!rowIsInRange(regionInfo, row)) {
+ throw new WrongRegionException("Requested row out of range for " +
+ "HRegion " + this + ", startKey='" +
+ Bytes.toString(regionInfo.getStartKey()) + "', getEndKey()='" +
+ Bytes.toString(regionInfo.getEndKey()) + "', row='" +
+ Bytes.toString(row) + "'");
+ }
+ }
+
+ /*
+ * Make sure this is a valid column for the current table
+ * @param columnName
+ * @throws NoSuchColumnFamilyException
+ */
+ private void checkColumn(final byte [] column)
+ throws NoSuchColumnFamilyException {
+ if (column == null) {
+ return;
+ }
+ if (!regionInfo.getTableDesc().hasFamily(column)) {
+ throw new NoSuchColumnFamilyException("Column family on " +
+ Bytes.toString(column) + " does not exist in region " + this
+ + " in table " + regionInfo.getTableDesc());
+ }
+ }
+
+ /**
+ * Obtain a lock on the given row. Blocks until success.
+ *
+ * I know it's strange to have two mappings:
+ * <pre>
+ * ROWS ==> LOCKS
+ * </pre>
+ * as well as
+ * <pre>
+ * LOCKS ==> ROWS
+ * </pre>
+ *
+ * But it acts as a guard on the client; a miswritten client just can't
+ * submit the name of a row and start writing to it; it must know the correct
+ * lockid, which matches the lock list in memory.
+ *
+ * <p>It would be more memory-efficient to assume a correctly-written client,
+ * which maybe we'll do in the future.
+ *
+ * @param row Name of row to lock.
+ * @throws IOException
+ * @return The id of the held lock.
+ */
+ Integer obtainRowLock(final byte [] row) throws IOException {
+ checkRow(row);
+ splitsAndClosesLock.readLock().lock();
+ try {
+ if (this.closed.get()) {
+ throw new NotServingRegionException("Region " + this + " closed");
+ }
+ Integer key = Bytes.mapKey(row);
+ synchronized (locksToRows) {
+ while (locksToRows.containsKey(key)) {
+ try {
+ locksToRows.wait();
+ } catch (InterruptedException ie) {
+ // Empty
+ }
+ }
+ locksToRows.put(key, row);
+ locksToRows.notifyAll();
+ return key;
+ }
+ } finally {
+ splitsAndClosesLock.readLock().unlock();
+ }
+ }
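+
+ // A minimal sketch of the explicit row-lock pattern described above. The
+ // 'region', 'row' and 'b' names are hypothetical; a caller that obtains
+ // its own lock is responsible for releasing it:
+ //
+ //   Integer lockid = region.obtainRowLock(row);
+ //   try {
+ //     region.batchUpdate(b, lockid, true);
+ //   } finally {
+ //     region.releaseRowLock(lockid);
+ //   }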
+
+ /**
+ * Used by unit tests.
+ * @param lockid
+ * @return Row that goes with <code>lockid</code>
+ */
+ byte [] getRowFromLock(final Integer lockid) {
+ return locksToRows.get(lockid);
+ }
+
+ /**
+ * Release the row lock!
+ * @param lockid The lock id of the row lock to release
+ */
+ void releaseRowLock(final Integer lockid) {
+ synchronized (locksToRows) {
+ locksToRows.remove(lockid);
+ locksToRows.notifyAll();
+ }
+ }
+
+ /**
+ * See if row is currently locked.
+ * @param lockid lock id to check
+ * @return true if the lock is currently held
+ */
+ private boolean isRowLocked(final Integer lockid) {
+ synchronized (locksToRows) {
+ if(locksToRows.containsKey(lockid)) {
+ return true;
+ }
+ return false;
+ }
+ }
+
+ /**
+ * Returns existing row lock if found, otherwise
+ * obtains a new row lock and returns it.
+ * @param lockid existing lock id, or null to obtain a new row lock
+ * @param row row to lock when lockid is null
+ * @return the lock id that is held
+ */
+ private Integer getLock(Integer lockid, byte [] row)
+ throws IOException {
+ Integer lid = null;
+ if (lockid == null) {
+ lid = obtainRowLock(row);
+ } else {
+ if (!isRowLocked(lockid)) {
+ throw new IOException("Invalid row lock");
+ }
+ lid = lockid;
+ }
+ return lid;
+ }
+
+ private void waitOnRowLocks() {
+ synchronized (locksToRows) {
+ while (this.locksToRows.size() > 0) {
+ LOG.debug("waiting for " + this.locksToRows.size() + " row locks");
+ try {
+ this.locksToRows.wait();
+ } catch (InterruptedException e) {
+ // Catch. Let while test determine loop-end.
+ }
+ }
+ }
+ }
+
+ @Override
+ public boolean equals(Object o) {
+ if (!(o instanceof HRegion)) {
+ return false;
+ }
+ return this.hashCode() == ((HRegion)o).hashCode();
+ }
+
+ @Override
+ public int hashCode() {
+ return this.regionInfo.getRegionName().hashCode();
+ }
+
+ @Override
+ public String toString() {
+ return this.regionInfo.getRegionNameAsString();
+ }
+
+ /** @return Path of region base directory */
+ public Path getBaseDir() {
+ return this.basedir;
+ }
+
+ /**
+ * HScanner is an iterator through a bunch of rows in an HRegion.
+ */
+ private class HScanner implements InternalScanner {
+ private InternalScanner[] scanners;
+ private List<KeyValue> [] resultSets;
+ private RowFilterInterface filter;
+
+ /** Create an HScanner with a handle on many HStores. */
+ @SuppressWarnings("unchecked")
+ HScanner(final NavigableSet<byte []> columns, byte [] firstRow,
+ long timestamp, final Store [] stores, final RowFilterInterface filter)
+ throws IOException {
+ this.filter = filter;
+ this.scanners = new InternalScanner[stores.length];
+ try {
+ for (int i = 0; i < stores.length; i++) {
+ // Only pass relevant columns to each store
+ NavigableSet<byte[]> columnSubset =
+ new TreeSet<byte[]>(Bytes.BYTES_COMPARATOR);
+ for (byte [] c: columns) {
+ if (KeyValue.FAMILY_COMPARATOR.compare(stores[i].storeName, c) == 0) {
+ columnSubset.add(c);
+ }
+ }
+ RowFilterInterface f = filter;
+ if (f != null) {
+ // Need to replicate filters.
+ // At least WhileMatchRowFilter will mess up the scan if only
+ // one instance is shared across many rows. See HADOOP-2467.
+ f = (RowFilterInterface) WritableUtils.clone(filter, conf);
+ }
+ scanners[i] = stores[i].getScanner(timestamp, columnSubset, firstRow, f);
+ }
+ } catch (IOException e) {
+ for (int i = 0; i < this.scanners.length; i++) {
+ if (scanners[i] != null) {
+ closeScanner(i);
+ }
+ }
+ throw e;
+ }
+
+ // Advance to the first key in each store.
+ // All results will match the required column-set and scanTime.
+ this.resultSets = new List[scanners.length];
+ for (int i = 0; i < scanners.length; i++) {
+ resultSets[i] = new ArrayList<KeyValue>();
+ if(scanners[i] != null && !scanners[i].next(resultSets[i])) {
+ closeScanner(i);
+ }
+ }
+
+ // As we have now successfully completed initialization, increment the
+ // activeScanner count.
+ activeScannerCount.incrementAndGet();
+ }
+
+ public boolean next(List<KeyValue> results)
+ throws IOException {
+ boolean moreToFollow = false;
+ boolean filtered = false;
+ do {
+ // Find the lowest key across all stores.
+ KeyValue chosen = null;
+ long chosenTimestamp = -1;
+ for (int i = 0; i < this.scanners.length; i++) {
+ if (this.resultSets[i] == null || this.resultSets[i].isEmpty()) {
+ continue;
+ }
+ KeyValue kv = this.resultSets[i].get(0);
+ if (chosen == null ||
+ (comparator.compareRows(kv, chosen) < 0) ||
+ ((comparator.compareRows(kv, chosen) == 0) &&
+ (kv.getTimestamp() > chosenTimestamp))) {
+ chosen = kv;
+ chosenTimestamp = chosen.getTimestamp();
+ }
+ }
+
+ // Store results from each sub-scanner.
+ if (chosenTimestamp >= 0) {
+ for (int i = 0; i < scanners.length; i++) {
+ if (this.resultSets[i] == null || this.resultSets[i].isEmpty()) {
+ continue;
+ }
+ KeyValue kv = this.resultSets[i].get(0);
+ if (comparator.compareRows(kv, chosen) == 0) {
+ results.addAll(this.resultSets[i]);
+ resultSets[i].clear();
+ if (!scanners[i].next(resultSets[i])) {
+ closeScanner(i);
+ }
+ }
+ }
+ }
+
+ moreToFollow = chosenTimestamp >= 0;
+ if (results == null || results.size() <= 0) {
+ // If we got no results, then there is no more to follow.
+ moreToFollow = false;
+ }
+
+ filtered = filter == null ? false : filter.filterRow(results);
+ if (filter != null && filter.filterAllRemaining()) {
+ moreToFollow = false;
+ }
+
+ if (moreToFollow) {
+ if (filter != null) {
+ filter.rowProcessed(filtered, chosen.getBuffer(), chosen.getRowOffset(),
+ chosen.getRowLength());
+ }
+ if (filtered) {
+ results.clear();
+ }
+ }
+ } while(filtered && moreToFollow);
+
+ // Make sure scanners closed if no more results
+ if (!moreToFollow) {
+ for (int i = 0; i < scanners.length; i++) {
+ if (null != scanners[i]) {
+ closeScanner(i);
+ }
+ }
+ }
+
+ return moreToFollow;
+ }
+
+ /** Shut down a single scanner */
+ void closeScanner(int i) {
+ try {
+ try {
+ scanners[i].close();
+ } catch (IOException e) {
+ LOG.warn("Failed closing scanner " + i, e);
+ }
+ } finally {
+ scanners[i] = null;
+ // These data members can be null if exception in constructor
+ if (resultSets != null) {
+ resultSets[i] = null;
+ }
+ }
+ }
+
+ public void close() {
+ try {
+ for(int i = 0; i < scanners.length; i++) {
+ if(scanners[i] != null) {
+ closeScanner(i);
+ }
+ }
+ } finally {
+ synchronized (activeScannerCount) {
+ int count = activeScannerCount.decrementAndGet();
+ if (count < 0) {
+ LOG.error("active scanner count less than zero: " + count +
+ " resetting to zero");
+ activeScannerCount.set(0);
+ count = 0;
+ }
+ if (count == 0) {
+ activeScannerCount.notifyAll();
+ }
+ }
+ }
+ }
+
+ public boolean isWildcardScanner() {
+ throw new UnsupportedOperationException("Unimplemented on HScanner");
+ }
+
+ public boolean isMultipleMatchScanner() {
+ throw new UnsupportedOperationException("Unimplemented on HScanner");
+ }
+ }
+
+ // Utility methods
+
+ /**
+ * Convenience method creating new HRegions. Used by createTable and by the
+ * bootstrap code in the HMaster constructor.
+ * Note, this method creates an {@link HLog} for the created region. It
+ * needs to be closed explicitly. Use {@link HRegion#getLog()} to get
+ * access.
+ * @param info Info for region to create.
+ * @param rootDir Root directory for HBase instance
+ * @param conf
+ * @return new HRegion
+ *
+ * @throws IOException
+ */
+ public static HRegion createHRegion(final HRegionInfo info, final Path rootDir,
+ final HBaseConfiguration conf)
+ throws IOException {
+ Path tableDir =
+ HTableDescriptor.getTableDir(rootDir, info.getTableDesc().getName());
+ Path regionDir = HRegion.getRegionDir(tableDir, info.getEncodedName());
+ FileSystem fs = FileSystem.get(conf);
+ fs.mkdirs(regionDir);
+ // Note in historian the creation of new region.
+ if (!info.isMetaRegion()) {
+ RegionHistorian.getInstance().addRegionCreation(info);
+ }
+ HRegion region = new HRegion(tableDir,
+ new HLog(fs, new Path(regionDir, HREGION_LOGDIR_NAME), conf, null),
+ fs, conf, info, null);
+ region.initialize(null, null);
+ return region;
+ }
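+
+ // A minimal sketch of the explicit-close pattern the javadoc above asks
+ // for; the 'info', 'rootDir' and 'conf' values are hypothetical
+ // placeholders:
+ //
+ //   HRegion r = HRegion.createHRegion(info, rootDir, conf);
+ //   try {
+ //     // ... bootstrap work against r ...
+ //   } finally {
+ //     r.close();
+ //     // close the HLog created by createHRegion
+ //     r.getLog().closeAndDelete();
+ //   }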
+
+ /**
+ * Convenience method to open a HRegion outside of an HRegionServer context.
+ * @param info Info for region to be opened.
+ * @param rootDir Root directory for HBase instance
+ * @param log HLog for region to use. This method will call
+ * HLog#setSequenceNumber(long) passing the result of the call to
+ * HRegion#getMinSequenceId() to ensure the log id is properly kept
+ * up. The HRegionServer does this every time it opens a new region.
+ * @param conf
+ * @return new HRegion
+ *
+ * @throws IOException
+ */
+ public static HRegion openHRegion(final HRegionInfo info, final Path rootDir,
+ final HLog log, final HBaseConfiguration conf)
+ throws IOException {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Opening region: " + info);
+ }
+ if (info == null) {
+ throw new NullPointerException("Passed region info is null");
+ }
+ HRegion r = new HRegion(
+ HTableDescriptor.getTableDir(rootDir, info.getTableDesc().getName()),
+ log, FileSystem.get(conf), conf, info, null);
+ r.initialize(null, null);
+ if (log != null) {
+ log.setSequenceNumber(r.getMinSequenceId());
+ }
+ return r;
+ }
+
+ /**
+ * Inserts a new region's meta information into the passed
+ * <code>meta</code> region. Used by the HMaster bootstrap code when adding
+ * a new table to the ROOT table.
+ *
+ * @param meta META HRegion to be updated
+ * @param r HRegion to add to <code>meta</code>
+ *
+ * @throws IOException
+ */
+ public static void addRegionToMETA(HRegion meta, HRegion r)
+ throws IOException {
+ meta.checkResources();
+ // The row key is the region name
+ byte [] row = r.getRegionName();
+ Integer lid = meta.obtainRowLock(row);
+ try {
+ List<KeyValue> edits = new ArrayList<KeyValue>();
+ edits.add(new KeyValue(row, COL_REGIONINFO, System.currentTimeMillis(),
+ Writables.getBytes(r.getRegionInfo())));
+ meta.update(edits);
+ } finally {
+ meta.releaseRowLock(lid);
+ }
+ }
+
+ /**
+ * Delete a region's meta information from the passed
+ * <code>meta</code> region. Removes content in the 'info' column family.
+ * Does not remove region historian info.
+ *
+ * @param srvr META server to be updated
+ * @param metaRegionName Meta region name
+ * @param regionName HRegion to remove from <code>meta</code>
+ *
+ * @throws IOException
+ */
+ public static void removeRegionFromMETA(final HRegionInterface srvr,
+ final byte [] metaRegionName, final byte [] regionName)
+ throws IOException {
+ srvr.deleteFamily(metaRegionName, regionName, HConstants.COLUMN_FAMILY,
+ HConstants.LATEST_TIMESTAMP, -1L);
+ }
+
+ /**
+ * Utility method used by HMaster marking regions offlined.
+ * @param srvr META server to be updated
+ * @param metaRegionName Meta region name
+ * @param info HRegion to update in <code>meta</code>
+ *
+ * @throws IOException
+ */
+ public static void offlineRegionInMETA(final HRegionInterface srvr,
+ final byte [] metaRegionName, final HRegionInfo info)
+ throws IOException {
+ BatchUpdate b = new BatchUpdate(info.getRegionName());
+ info.setOffline(true);
+ b.put(COL_REGIONINFO, Writables.getBytes(info));
+ b.delete(COL_SERVER);
+ b.delete(COL_STARTCODE);
+ // If carrying splits, they'll be in place when we show up on new
+ // server.
+ srvr.batchUpdate(metaRegionName, b, -1L);
+ }
+
+ /**
+ * Clean COL_SERVER and COL_STARTCODE for passed <code>info</code> in
+ * <code>.META.</code>
+ * @param srvr
+ * @param metaRegionName
+ * @param info
+ * @throws IOException
+ */
+ public static void cleanRegionInMETA(final HRegionInterface srvr,
+ final byte [] metaRegionName, final HRegionInfo info)
+ throws IOException {
+ BatchUpdate b = new BatchUpdate(info.getRegionName());
+ b.delete(COL_SERVER);
+ b.delete(COL_STARTCODE);
+ // If carrying splits, they'll be in place when we show up on new
+ // server.
+ srvr.batchUpdate(metaRegionName, b, LATEST_TIMESTAMP);
+ }
+
+ /**
+ * Deletes all the files for a HRegion
+ *
+ * @param fs the file system object
+ * @param rootdir qualified path of HBase root directory
+ * @param info HRegionInfo for region to be deleted
+ * @throws IOException
+ */
+ public static void deleteRegion(FileSystem fs, Path rootdir, HRegionInfo info)
+ throws IOException {
+ deleteRegion(fs, HRegion.getRegionDir(rootdir, info));
+ }
+
+ private static void deleteRegion(FileSystem fs, Path regiondir)
+ throws IOException {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("DELETING region " + regiondir.toString());
+ }
+ if (!fs.delete(regiondir, true)) {
+ LOG.warn("Failed delete of " + regiondir);
+ }
+ }
+
+ /**
+ * Computes the Path of the HRegion
+ *
+ * @param tabledir qualified path for table
+ * @param name ENCODED region name
+ * @return Path of HRegion directory
+ */
+ public static Path getRegionDir(final Path tabledir, final int name) {
+ return new Path(tabledir, Integer.toString(name));
+ }
+
+ /**
+ * Computes the Path of the HRegion
+ *
+ * @param rootdir qualified path of HBase root directory
+ * @param info HRegionInfo for the region
+ * @return qualified path of region directory
+ */
+ public static Path getRegionDir(final Path rootdir, final HRegionInfo info) {
+ return new Path(
+ HTableDescriptor.getTableDir(rootdir, info.getTableDesc().getName()),
+ Integer.toString(info.getEncodedName()));
+ }
+
+ /**
+ * Determines if the specified row is within the row range specified by the
+ * specified HRegionInfo
+ *
+ * @param info HRegionInfo that specifies the row range
+ * @param row row to be checked
+ * @return true if the row is within the range specified by the HRegionInfo
+ */
+ public static boolean rowIsInRange(HRegionInfo info, final byte [] row) {
+ return ((info.getStartKey().length == 0) ||
+ (Bytes.compareTo(info.getStartKey(), row) <= 0)) &&
+ ((info.getEndKey().length == 0) ||
+ (Bytes.compareTo(info.getEndKey(), row) > 0));
+ }
+
+ /**
+ * Make the directories for a specific column family
+ *
+ * @param fs the file system
+ * @param tabledir base directory where region will live (usually the table dir)
+ * @param hri
+ * @param colFamily the column family
+ * @throws IOException
+ */
+ public static void makeColumnFamilyDirs(FileSystem fs, Path tabledir,
+ final HRegionInfo hri, byte [] colFamily)
+ throws IOException {
+ Path dir = Store.getStoreHomedir(tabledir, hri.getEncodedName(), colFamily);
+ if (!fs.mkdirs(dir)) {
+ LOG.warn("Failed to create " + dir);
+ }
+ }
+
+ /**
+ * Merge two HRegions. The regions must be adjacent and must not overlap.
+ *
+ * @param srcA
+ * @param srcB
+ * @return new merged HRegion
+ * @throws IOException
+ */
+ public static HRegion mergeAdjacent(final HRegion srcA, final HRegion srcB)
+ throws IOException {
+ HRegion a = srcA;
+ HRegion b = srcB;
+
+ // Make sure that srcA comes first; important for key-ordering during
+ // write of the merged file.
+ if (srcA.getStartKey() == null) {
+ if (srcB.getStartKey() == null) {
+ throw new IOException("Cannot merge two regions with null start key");
+ }
+ // A's start key is null but B's isn't. Assume A comes before B
+ } else if ((srcB.getStartKey() == null) ||
+ (Bytes.compareTo(srcA.getStartKey(), srcB.getStartKey()) > 0)) {
+ a = srcB;
+ b = srcA;
+ }
+
+ if (!(Bytes.compareTo(a.getEndKey(), b.getStartKey()) == 0)) {
+ throw new IOException("Cannot merge non-adjacent regions");
+ }
+ return merge(a, b);
+ }
+
+ /**
+ * Merge two regions whether they are adjacent or not.
+ *
+ * @param a region a
+ * @param b region b
+ * @return new merged region
+ * @throws IOException
+ */
+ public static HRegion merge(HRegion a, HRegion b) throws IOException {
+ if (!a.getRegionInfo().getTableDesc().getNameAsString().equals(
+ b.getRegionInfo().getTableDesc().getNameAsString())) {
+ throw new IOException("Regions do not belong to the same table");
+ }
+
+ FileSystem fs = a.getFilesystem();
+
+ // Make sure each region's cache is empty
+
+ a.flushcache();
+ b.flushcache();
+
+ // Compact each region so we only have one store file per family
+
+ a.compactStores(true);
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Files for region: " + a);
+ listPaths(fs, a.getRegionDir());
+ }
+ b.compactStores(true);
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Files for region: " + b);
+ listPaths(fs, b.getRegionDir());
+ }
+
+ HBaseConfiguration conf = a.getConf();
+ HTableDescriptor tabledesc = a.getTableDesc();
+ HLog log = a.getLog();
+ Path basedir = a.getBaseDir();
+ // Presume both are of same region type -- i.e. both user or catalog
+ // table regions. This way we can use the comparator.
+ final byte [] startKey = a.comparator.matchingRows(a.getStartKey(), 0,
+ a.getStartKey().length,
+ EMPTY_BYTE_ARRAY, 0, EMPTY_BYTE_ARRAY.length) ||
+ b.comparator.matchingRows(b.getStartKey(), 0, b.getStartKey().length,
+ EMPTY_BYTE_ARRAY, 0, EMPTY_BYTE_ARRAY.length)?
+ EMPTY_BYTE_ARRAY:
+ a.comparator.compareRows(a.getStartKey(), 0, a.getStartKey().length,
+ b.getStartKey(), 0, b.getStartKey().length) <= 0?
+ a.getStartKey(): b.getStartKey();
+ final byte [] endKey = a.comparator.matchingRows(a.getEndKey(), 0,
+ a.getEndKey().length, EMPTY_BYTE_ARRAY, 0, EMPTY_BYTE_ARRAY.length) ||
+ a.comparator.matchingRows(b.getEndKey(), 0, b.getEndKey().length,
+ EMPTY_BYTE_ARRAY, 0, EMPTY_BYTE_ARRAY.length)?
+ EMPTY_BYTE_ARRAY:
+ a.comparator.compareRows(a.getEndKey(), 0, a.getEndKey().length,
+ b.getEndKey(), 0, b.getEndKey().length) <= 0?
+ b.getEndKey(): a.getEndKey();
+
+ HRegionInfo newRegionInfo = new HRegionInfo(tabledesc, startKey, endKey);
+ LOG.info("Creating new region " + newRegionInfo.toString());
+ int encodedName = newRegionInfo.getEncodedName();
+ Path newRegionDir = HRegion.getRegionDir(a.getBaseDir(), encodedName);
+ if(fs.exists(newRegionDir)) {
+ throw new IOException("Cannot merge; target file collision at " +
+ newRegionDir);
+ }
+ fs.mkdirs(newRegionDir);
+
+ LOG.info("starting merge of regions: " + a + " and " + b +
+ " into new region " + newRegionInfo.toString() +
+ " with start key <" + Bytes.toString(startKey) + "> and end key <" +
+ Bytes.toString(endKey) + ">");
+
+ // Move HStoreFiles under new region directory
+ Map<byte [], List<StoreFile>> byFamily =
+ new TreeMap<byte [], List<StoreFile>>(Bytes.BYTES_COMPARATOR);
+ byFamily = filesByFamily(byFamily, a.close());
+ byFamily = filesByFamily(byFamily, b.close());
+ for (Map.Entry<byte [], List<StoreFile>> es : byFamily.entrySet()) {
+ byte [] colFamily = es.getKey();
+ makeColumnFamilyDirs(fs, basedir, newRegionInfo, colFamily);
+ // Because we compacted the source regions we should have no more than two
+ // HStoreFiles per family, and there will be no reference store files.
+ List<StoreFile> srcFiles = es.getValue();
+ if (srcFiles.size() == 2) {
+ long seqA = srcFiles.get(0).getMaxSequenceId();
+ long seqB = srcFiles.get(1).getMaxSequenceId();
+ if (seqA == seqB) {
+ // Can't have same sequenceid since on open of a store, this is what
+ // distinguishes the files (see the map of stores and how it is keyed by
+ // sequenceid).
+ throw new IOException("Files have same sequenceid: " + seqA);
+ }
+ }
+ for (StoreFile hsf: srcFiles) {
+ StoreFile.rename(fs, hsf.getPath(),
+ StoreFile.getUniqueFile(fs, Store.getStoreHomedir(basedir,
+ newRegionInfo.getEncodedName(), colFamily)));
+ }
+ }
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Files for new region");
+ listPaths(fs, newRegionDir);
+ }
+ HRegion dstRegion = new HRegion(basedir, log, fs, conf, newRegionInfo, null);
+ dstRegion.initialize(null, null);
+ dstRegion.compactStores();
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Files for new region");
+ listPaths(fs, dstRegion.getRegionDir());
+ }
+ deleteRegion(fs, a.getRegionDir());
+ deleteRegion(fs, b.getRegionDir());
+
+ LOG.info("merge completed. New region is " + dstRegion);
+
+ return dstRegion;
+ }
+
+ /*
+ * Fills a map with a vector of store files keyed by column family.
+ * @param byFamily Map to fill.
+ * @param storeFiles Store files to process.
+ * @param family
+ * @return Returns <code>byFamily</code>
+ */
+ private static Map<byte [], List<StoreFile>> filesByFamily(
+ Map<byte [], List<StoreFile>> byFamily, List<StoreFile> storeFiles) {
+ for (StoreFile src: storeFiles) {
+ byte [] family = src.getFamily();
+ List<StoreFile> v = byFamily.get(family);
+ if (v == null) {
+ v = new ArrayList<StoreFile>();
+ byFamily.put(family, v);
+ }
+ v.add(src);
+ }
+ return byFamily;
+ }
+
+ /**
+ * @return True if the region needs a major compaction.
+ * @throws IOException
+ */
+ boolean isMajorCompaction() throws IOException {
+ for (Store store: this.stores.values()) {
+ if (store.isMajorCompaction()) {
+ return true;
+ }
+ }
+ return false;
+ }
+
+ /*
+ * List the files under the specified directory
+ *
+ * @param fs
+ * @param dir
+ * @throws IOException
+ */
+ private static void listPaths(FileSystem fs, Path dir) throws IOException {
+ if (LOG.isDebugEnabled()) {
+ FileStatus[] stats = fs.listStatus(dir);
+ if (stats == null || stats.length == 0) {
+ return;
+ }
+ for (int i = 0; i < stats.length; i++) {
+ String path = stats[i].getPath().toString();
+ if (stats[i].isDir()) {
+ LOG.debug("d " + path);
+ listPaths(fs, stats[i].getPath());
+ } else {
+ LOG.debug("f " + path + " size=" + stats[i].getLen());
+ }
+ }
+ }
+ }
+
+ public long incrementColumnValue(byte[] row, byte[] column, long amount)
+ throws IOException {
+ checkRow(row);
+ checkColumn(column);
+
+ Integer lid = obtainRowLock(row);
+ splitsAndClosesLock.readLock().lock();
+ try {
+ KeyValue kv = new KeyValue(row, column);
+ long ts = System.currentTimeMillis();
+ byte [] value = null;
+
+ Store store = getStore(column);
+
+ List<KeyValue> c;
+ // Try the memcache first.
+ store.lock.readLock().lock();
+ try {
+ c = store.memcache.get(kv, 1);
+ } finally {
+ store.lock.readLock().unlock();
+ }
+ // Pick the latest value out of List<KeyValue> c:
+ if (c.size() >= 1) {
+ // Use the memcache timestamp value.
+ LOG.debug("Overwriting the memcache value for " + Bytes.toString(row) +
+ "/" + Bytes.toString(column));
+ ts = c.get(0).getTimestamp();
+ value = c.get(0).getValue();
+ }
+
+ if (value == null) {
+ // Check the store (including disk) for the previous value.
+ c = store.get(kv, 1);
+ if (c != null && c.size() == 1) {
+ LOG.debug("Using HFile previous value for " + Bytes.toString(row) +
+ "/" + Bytes.toString(column));
+ value = c.get(0).getValue();
+ } else if (c != null && c.size() > 1) {
+ throw new DoNotRetryIOException("more than 1 value returned in " +
+ "incrementColumnValue from Store");
+ }
+ }
+
+ if (value == null) {
+ // Doesn't exist
+ LOG.debug("Creating new counter value for " + Bytes.toString(row) +
+ "/"+ Bytes.toString(column));
+ value = Bytes.toBytes(amount);
+ } else {
+ if (amount == 0) return Bytes.toLong(value);
+ value = Bytes.incrementBytes(value, amount);
+ }
+
+ BatchUpdate b = new BatchUpdate(row, ts);
+ b.put(column, value);
+ batchUpdate(b, lid, true);
+ return Bytes.toLong(value);
+ } finally {
+ splitsAndClosesLock.readLock().unlock();
+ releaseRowLock(lid);
+ }
+ }
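+
+ // A minimal counter usage sketch for incrementColumnValue above; the
+ // 'region' instance and the row/column names are hypothetical:
+ //
+ //   byte [] row = Bytes.toBytes("row1");
+ //   byte [] column = Bytes.toBytes("info:hits");
+ //   long hits = region.incrementColumnValue(row, column, 1);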
+}
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java b/src/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
new file mode 100644
index 0000000..fdace88
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
@@ -0,0 +1,2456 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.lang.Thread.UncaughtExceptionHandler;
+import java.lang.management.ManagementFactory;
+import java.lang.management.MemoryUsage;
+import java.lang.management.RuntimeMXBean;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.Field;
+import java.net.BindException;
+import java.net.InetSocketAddress;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.Random;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.SortedSet;
+import java.util.TreeMap;
+import java.util.TreeSet;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.Chore;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HMsg;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.HServerLoad;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.LeaseListener;
+import org.apache.hadoop.hbase.Leases;
+import org.apache.hadoop.hbase.LocalHBaseCluster;
+import org.apache.hadoop.hbase.NotServingRegionException;
+import org.apache.hadoop.hbase.RegionHistorian;
+import org.apache.hadoop.hbase.RemoteExceptionHandler;
+import org.apache.hadoop.hbase.UnknownRowLockException;
+import org.apache.hadoop.hbase.UnknownScannerException;
+import org.apache.hadoop.hbase.HMsg.Type;
+import org.apache.hadoop.hbase.Leases.LeaseStillHeldException;
+import org.apache.hadoop.hbase.client.ServerConnection;
+import org.apache.hadoop.hbase.client.ServerConnectionManager;
+import org.apache.hadoop.hbase.filter.RowFilterInterface;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.HbaseMapWritable;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.ipc.HBaseRPC;
+import org.apache.hadoop.hbase.ipc.HBaseRPCErrorHandler;
+import org.apache.hadoop.hbase.ipc.HBaseRPCProtocolVersion;
+import org.apache.hadoop.hbase.ipc.HBaseServer;
+import org.apache.hadoop.hbase.ipc.HMasterRegionInterface;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.regionserver.metrics.RegionServerMetrics;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.hbase.util.InfoServer;
+import org.apache.hadoop.hbase.util.Sleeper;
+import org.apache.hadoop.hbase.util.Threads;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper;
+import org.apache.hadoop.io.MapWritable;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.net.DNS;
+import org.apache.hadoop.util.Progressable;
+import org.apache.hadoop.util.StringUtils;
+import org.apache.zookeeper.WatchedEvent;
+import org.apache.zookeeper.Watcher;
+import org.apache.zookeeper.Watcher.Event.EventType;
+import org.apache.zookeeper.Watcher.Event.KeeperState;
+
+/**
+ * HRegionServer makes a set of HRegions available to clients. It checks in with
+ * the HMaster. There are many HRegionServers in a single HBase deployment.
+ */
+public class HRegionServer implements HConstants, HRegionInterface,
+ HBaseRPCErrorHandler, Runnable, Watcher {
+ static final Log LOG = LogFactory.getLog(HRegionServer.class);
+ private static final HMsg REPORT_EXITING = new HMsg(Type.MSG_REPORT_EXITING);
+ private static final HMsg REPORT_QUIESCED = new HMsg(Type.MSG_REPORT_QUIESCED);
+
+ // Set when a report to the master comes back with a message asking us to
+ // shutdown. Also set by call to stop when debugging or running unit tests
+ // of HRegionServer in isolation. We use AtomicBoolean rather than
+ // plain boolean so we can pass a reference to Chore threads. Otherwise,
+ // Chore threads need to know about the hosting class.
+ protected final AtomicBoolean stopRequested = new AtomicBoolean(false);
+
+ protected final AtomicBoolean quiesced = new AtomicBoolean(false);
+
+ protected final AtomicBoolean safeMode = new AtomicBoolean(true);
+
+ // Go down hard. Used if file system becomes unavailable and also in
+ // debugging and unit tests.
+ protected volatile boolean abortRequested;
+
+ // If false, the file system has become unavailable
+ protected volatile boolean fsOk;
+
+ protected HServerInfo serverInfo;
+ protected final HBaseConfiguration conf;
+
+ private final ServerConnection connection;
+ protected final AtomicBoolean haveRootRegion = new AtomicBoolean(false);
+ private FileSystem fs;
+ private Path rootDir;
+ private final Random rand = new Random();
+
+ // Key is Bytes.hashCode of region name byte array and the value is HRegion
+ // in both of the maps below. Use Bytes.mapKey(byte []) to generate keys
+ // for the maps below.
+ protected final Map<Integer, HRegion> onlineRegions =
+ new ConcurrentHashMap<Integer, HRegion>();
+
+ protected final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
+ private final List<HMsg> outboundMsgs =
+ Collections.synchronizedList(new ArrayList<HMsg>());
+
+ final int numRetries;
+ protected final int threadWakeFrequency;
+ private final int msgInterval;
+ private final int serverLeaseTimeout;
+
+ protected final int numRegionsToReport;
+
+ // Remote HMaster
+ private HMasterRegionInterface hbaseMaster;
+
+ // Server to handle client requests. Default access so can be accessed by
+ // unit tests.
+ HBaseServer server;
+
+ // Leases
+ private Leases leases;
+
+ // Request counter
+ private volatile AtomicInteger requestCount = new AtomicInteger();
+
+ // Info server. Default access so can be used by unit tests. REGIONSERVER
+ // is name of the webapp and the attribute name used stuffing this instance
+ // into web context.
+ InfoServer infoServer;
+
+ /** region server process name */
+ public static final String REGIONSERVER = "regionserver";
+
+ /*
+ * Space is reserved in HRS constructor and then released when aborting
+ * to recover from an OOME. See HBASE-706. TODO: Make this percentage of the
+ * heap or a minimum.
+ */
+ private final LinkedList<byte[]> reservedSpace = new LinkedList<byte []>();
+
+ private RegionServerMetrics metrics;
+
+ // Compactions
+ CompactSplitThread compactSplitThread;
+
+ // Cache flushing
+ MemcacheFlusher cacheFlusher;
+
+ /* Check for major compactions.
+ */
+ Chore majorCompactionChecker;
+
+ // HLog and HLog roller. log is protected rather than private to avoid
+ // eclipse warning when accessed by inner classes
+ protected volatile HLog log;
+ LogRoller logRoller;
+ LogFlusher logFlusher;
+
+ // limit compactions while starting up
+ CompactionLimitThread compactionLimitThread;
+
+ // flag set after we're done setting up server threads (used for testing)
+ protected volatile boolean isOnline;
+
+ final Map<String, InternalScanner> scanners =
+ new ConcurrentHashMap<String, InternalScanner>();
+
+ private ZooKeeperWrapper zooKeeperWrapper;
+
+ // A sleeper that sleeps for msgInterval.
+ private final Sleeper sleeper;
+
+ private final long rpcTimeout;
+
+ // Address passed in to constructor.
+ private final HServerAddress address;
+
+ // The main region server thread.
+ private Thread regionServerThread;
+
+ // Run HDFS shutdown thread on exit if this is set. We clear this out when
+ // doing a restart() to prevent closing of HDFS.
+ private final AtomicBoolean shutdownHDFS = new AtomicBoolean(true);
+
+ /**
+ * Starts a HRegionServer at the default location
+ * @param conf
+ * @throws IOException
+ */
+ public HRegionServer(HBaseConfiguration conf) throws IOException {
+ this(new HServerAddress(conf.get(REGIONSERVER_ADDRESS,
+ DEFAULT_REGIONSERVER_ADDRESS)), conf);
+ }
+
+ /**
+ * Starts a HRegionServer at the specified location
+ * @param address
+ * @param conf
+ * @throws IOException
+ */
+ public HRegionServer(HServerAddress address, HBaseConfiguration conf)
+ throws IOException {
+ this.address = address;
+ this.abortRequested = false;
+ this.fsOk = true;
+ this.conf = conf;
+ this.connection = ServerConnectionManager.getConnection(conf);
+
+ this.isOnline = false;
+
+ // Config'ed params
+ this.numRetries = conf.getInt("hbase.client.retries.number", 2);
+ this.threadWakeFrequency = conf.getInt(THREAD_WAKE_FREQUENCY, 10 * 1000);
+ this.msgInterval = conf.getInt("hbase.regionserver.msginterval", 3 * 1000);
+ this.serverLeaseTimeout =
+ conf.getInt("hbase.master.lease.period", 120 * 1000);
+
+ sleeper = new Sleeper(this.msgInterval, this.stopRequested);
+
+ // Task thread to process requests from Master
+ this.worker = new Worker();
+
+ this.numRegionsToReport =
+ conf.getInt("hbase.regionserver.numregionstoreport", 10);
+
+ this.rpcTimeout = conf.getLong("hbase.regionserver.lease.period", 60000);
+
+ reinitialize();
+ }
+
+ /**
+ * Creates all of the state that needs to be reconstructed in case we are
+ * doing a restart. This is shared between the constructor and restart().
+ * @throws IOException
+ */
+ private void reinitialize() throws IOException {
+ abortRequested = false;
+ stopRequested.set(false);
+ shutdownHDFS.set(true);
+
+ // Server to handle client requests
+ this.server = HBaseRPC.getServer(this, address.getBindAddress(),
+ address.getPort(), conf.getInt("hbase.regionserver.handler.count", 10),
+ false, conf);
+ this.server.setErrorHandler(this);
+ String machineName = DNS.getDefaultHost(
+ conf.get("hbase.regionserver.dns.interface","default"),
+ conf.get("hbase.regionserver.dns.nameserver","default"));
+ // Address is given a default IP for the moment. Will be changed after
+ // calling the master.
+ this.serverInfo = new HServerInfo(new HServerAddress(
+ new InetSocketAddress(address.getBindAddress(),
+ this.server.getListenerAddress().getPort())), System.currentTimeMillis(),
+ this.conf.getInt("hbase.regionserver.info.port", 60030), machineName);
+ if (this.serverInfo.getServerAddress() == null) {
+ throw new NullPointerException("Server address cannot be null; " +
+ "hbase-958 debugging");
+ }
+
+ reinitializeThreads();
+
+ reinitializeZooKeeper();
+
+ int nbBlocks = conf.getInt("hbase.regionserver.nbreservationblocks", 4);
+ for(int i = 0; i < nbBlocks; i++) {
+ reservedSpace.add(new byte[DEFAULT_SIZE_RESERVATION_BLOCK]);
+ }
+ }
+
+ private void reinitializeZooKeeper() throws IOException {
+ zooKeeperWrapper = new ZooKeeperWrapper(conf);
+ watchMasterAddress();
+
+ boolean startCodeOk = false;
+ while(!startCodeOk) {
+ serverInfo.setStartCode(System.currentTimeMillis());
+ startCodeOk = zooKeeperWrapper.writeRSLocation(serverInfo);
+ if(!startCodeOk) {
+ LOG.debug("Start code already taken, trying another one");
+ }
+ }
+ }
+
+ private void reinitializeThreads() {
+ this.workerThread = new Thread(worker);
+
+ // Cache flushing thread.
+ this.cacheFlusher = new MemcacheFlusher(conf, this);
+
+ // Compaction thread
+ this.compactSplitThread = new CompactSplitThread(this);
+
+ // Log rolling thread
+ this.logRoller = new LogRoller(this);
+
+ // Log flushing thread
+ this.logFlusher =
+ new LogFlusher(this.threadWakeFrequency, this.stopRequested);
+
+ // Background thread to check for major compactions; needed if region
+ // has not gotten updates in a while. Make it run at a lesser frequency.
+ int multiplier = this.conf.getInt(THREAD_WAKE_FREQUENCY +
+ ".multiplier", 1000);
+ this.majorCompactionChecker = new MajorCompactionChecker(this,
+ this.threadWakeFrequency * multiplier, this.stopRequested);
+
+ this.leases = new Leases(
+ conf.getInt("hbase.regionserver.lease.period", 60 * 1000),
+ this.threadWakeFrequency);
+ }
+
+ /**
+ * We register ourselves as a watcher on the master address ZNode. This is
+ * called by ZooKeeper when we get an event on that ZNode. When this method
+ * is called it means either our master has died, or a new one has come up.
+ * Either way we need to update our knowledge of the master.
+ * @param event WatchedEvent from ZooKeeper.
+ */
+ public void process(WatchedEvent event) {
+ EventType type = event.getType();
+ KeeperState state = event.getState();
+ LOG.info("Got ZooKeeper event, state: " + state + ", type: " +
+ type + ", path: " + event.getPath());
+
+ // Ignore events if we're shutting down.
+ if (stopRequested.get()) {
+ LOG.debug("Ignoring ZooKeeper event while shutting down");
+ return;
+ }
+
+ if (state == KeeperState.Expired) {
+ LOG.error("ZooKeeper session expired");
+ restart();
+ } else if (type == EventType.NodeCreated) {
+ getMaster();
+
+ // ZooKeeper watches are one time only, so we need to re-register our watch.
+ watchMasterAddress();
+ }
+ }
+
+ private void watchMasterAddress() {
+ while (!stopRequested.get() && !zooKeeperWrapper.watchMasterAddress(this)) {
+ LOG.warn("Unable to set watcher on ZooKeeper master address. Retrying.");
+ sleeper.sleep();
+ }
+ }
+
+ private void restart() {
+ LOG.info("Restarting Region Server");
+
+ shutdownHDFS.set(false);
+ abort();
+ Threads.shutdown(regionServerThread);
+
+ boolean done = false;
+ while (!done) {
+ try {
+ reinitialize();
+ done = true;
+ } catch (IOException e) {
+ LOG.debug("Error trying to reinitialize ZooKeeper", e);
+ }
+ }
+
+ Thread t = new Thread(this);
+ String name = regionServerThread.getName();
+ t.setName(name);
+ t.start();
+ }
+
+ /** @return ZooKeeperWrapper used by RegionServer. */
+ public ZooKeeperWrapper getZooKeeperWrapper() {
+ return zooKeeperWrapper;
+ }
+
+ /**
+ * The HRegionServer sticks in this loop until closed. It repeatedly checks
+ * in with the HMaster, sending heartbeats & reports, and receiving HRegion
+ * load/unload instructions.
+ */
+ public void run() {
+ regionServerThread = Thread.currentThread();
+ boolean quiesceRequested = false;
+ try {
+ init(reportForDuty());
+ long lastMsg = 0;
+ // Now ask master what it wants us to do and tell it what we have done
+ for (int tries = 0; !stopRequested.get() && isHealthy();) {
+ // Try to get the root region location from the master.
+ if (!haveRootRegion.get()) {
+ HServerAddress rootServer = zooKeeperWrapper.readRootRegionLocation();
+ if (rootServer != null) {
+ // By setting the root region location, we bypass the wait imposed on
+ // HTable for all regions being assigned.
+ this.connection.setRootRegionLocation(
+ new HRegionLocation(HRegionInfo.ROOT_REGIONINFO, rootServer));
+ haveRootRegion.set(true);
+ }
+ }
+ long now = System.currentTimeMillis();
+ if (lastMsg != 0 && (now - lastMsg) >= serverLeaseTimeout) {
+ // It has been way too long since we last reported to the master.
+ LOG.warn("unable to report to master for " + (now - lastMsg) +
+ " milliseconds - retrying");
+ }
+ if ((now - lastMsg) >= msgInterval) {
+ HMsg outboundArray[] = null;
+ synchronized(this.outboundMsgs) {
+ outboundArray =
+ this.outboundMsgs.toArray(new HMsg[outboundMsgs.size()]);
+ this.outboundMsgs.clear();
+ }
+ try {
+ doMetrics();
+ MemoryUsage memory =
+ ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
+ HServerLoad hsl = new HServerLoad(requestCount.get(),
+ (int)(memory.getUsed()/1024/1024),
+ (int)(memory.getMax()/1024/1024));
+ for (HRegion r: onlineRegions.values()) {
+ hsl.addRegionInfo(createRegionLoad(r));
+ }
+ this.serverInfo.setLoad(hsl);
+ this.requestCount.set(0);
+ HMsg msgs[] = hbaseMaster.regionServerReport(
+ serverInfo, outboundArray, getMostLoadedRegions());
+ lastMsg = System.currentTimeMillis();
+ if (this.quiesced.get() && onlineRegions.size() == 0) {
+ // We've just told the master we're exiting because we aren't
+ // serving any regions. So set the stop bit and exit.
+ LOG.info("Server quiesced and not serving any regions. " +
+ "Starting shutdown");
+ stopRequested.set(true);
+ this.outboundMsgs.clear();
+ continue;
+ }
+
+ // Queue up the HMaster's instruction stream for processing
+ boolean restart = false;
+ for(int i = 0;
+ !restart && !stopRequested.get() && i < msgs.length;
+ i++) {
+ LOG.info(msgs[i].toString());
+ if (safeMode.get()) {
+ if (zooKeeperWrapper.checkOutOfSafeMode()) {
+ this.connection.unsetRootRegionLocation();
+ synchronized (safeMode) {
+ safeMode.set(false);
+ safeMode.notifyAll();
+ }
+ }
+ }
+ switch(msgs[i].getType()) {
+ case MSG_CALL_SERVER_STARTUP:
+ // We get MSG_CALL_SERVER_STARTUP on startup, but we can also
+ // get it when the master is panicking because, for instance,
+ // HDFS has been yanked out from under it. Be wary of
+ // this message.
+ if (checkFileSystem()) {
+ closeAllRegions();
+ try {
+ log.closeAndDelete();
+ } catch (Exception e) {
+ LOG.error("error closing and deleting HLog", e);
+ }
+ try {
+ serverInfo.setStartCode(System.currentTimeMillis());
+ log = setupHLog();
+ this.logFlusher.setHLog(log);
+ } catch (IOException e) {
+ this.abortRequested = true;
+ this.stopRequested.set(true);
+ e = RemoteExceptionHandler.checkIOException(e);
+ LOG.fatal("error restarting server", e);
+ break;
+ }
+ reportForDuty();
+ restart = true;
+ } else {
+ LOG.fatal("file system available check failed. " +
+ "Shutting down server.");
+ }
+ break;
+
+ case MSG_REGIONSERVER_STOP:
+ stopRequested.set(true);
+ break;
+
+ case MSG_REGIONSERVER_QUIESCE:
+ if (!quiesceRequested) {
+ try {
+ toDo.put(new ToDoEntry(msgs[i]));
+ } catch (InterruptedException e) {
+ throw new RuntimeException("Putting into msgQueue was " +
+ "interrupted.", e);
+ }
+ quiesceRequested = true;
+ }
+ break;
+
+ default:
+ if (fsOk) {
+ try {
+ toDo.put(new ToDoEntry(msgs[i]));
+ } catch (InterruptedException e) {
+ throw new RuntimeException("Putting into msgQueue was " +
+ "interrupted.", e);
+ }
+ }
+ }
+ }
+ // Reset tries count if we had a successful transaction.
+ tries = 0;
+
+ if (restart || this.stopRequested.get()) {
+ toDo.clear();
+ continue;
+ }
+ } catch (Exception e) {
+ if (e instanceof IOException) {
+ e = RemoteExceptionHandler.checkIOException((IOException) e);
+ }
+ if (tries < this.numRetries) {
+ LOG.warn("Processing message (Retry: " + tries + ")", e);
+ tries++;
+ } else {
+ LOG.error("Exceeded max retries: " + this.numRetries, e);
+ if (checkFileSystem()) {
+ // Filesystem is OK. Something is up w/ ZK or master. Sleep
+ // a little while if only to stop our logging many times a
+ // millisecond.
+ Thread.sleep(1000);
+ }
+ }
+ if (this.stopRequested.get()) {
+ LOG.info("Stop was requested, clearing the toDo " +
+ "despite of the exception");
+ toDo.clear();
+ continue;
+ }
+ }
+ }
+ // Do some housekeeping before going to sleep
+ housekeeping();
+ sleeper.sleep(lastMsg);
+ } // for
+ } catch (Throwable t) {
+ if (!checkOOME(t)) {
+ LOG.fatal("Unhandled exception. Aborting...", t);
+ abort();
+ }
+ }
+ RegionHistorian.getInstance().offline();
+ this.leases.closeAfterLeasesExpire();
+ this.worker.stop();
+ this.server.stop();
+ if (this.infoServer != null) {
+ LOG.info("Stopping infoServer");
+ try {
+ this.infoServer.stop();
+ } catch (Exception e) {
+ e.printStackTrace();
+ }
+ }
+
+ // Send interrupts to wake up threads if sleeping so they notice shutdown.
+ // TODO: Should we check they are alive? If OOME could have exited already
+ cacheFlusher.interruptIfNecessary();
+ logFlusher.interrupt();
+ compactSplitThread.interruptIfNecessary();
+ logRoller.interruptIfNecessary();
+ this.majorCompactionChecker.interrupt();
+
+ if (abortRequested) {
+ if (this.fsOk) {
+ // Only try to clean up if the file system is available
+ try {
+ if (this.log != null) {
+ this.log.close();
+ LOG.info("On abort, closed hlog");
+ }
+ } catch (Throwable e) {
+ LOG.error("Unable to close log in abort",
+ RemoteExceptionHandler.checkThrowable(e));
+ }
+ closeAllRegions(); // Don't leave any open file handles
+ }
+ LOG.info("aborting server at: " +
+ serverInfo.getServerAddress().toString());
+ } else {
+ ArrayList<HRegion> closedRegions = closeAllRegions();
+ try {
+ log.closeAndDelete();
+ } catch (Throwable e) {
+ LOG.error("Close and delete failed",
+ RemoteExceptionHandler.checkThrowable(e));
+ }
+ try {
+ HMsg[] exitMsg = new HMsg[closedRegions.size() + 1];
+ exitMsg[0] = REPORT_EXITING;
+ // Tell the master what regions we are/were serving
+ int i = 1;
+ for (HRegion region: closedRegions) {
+ exitMsg[i++] = new HMsg(HMsg.Type.MSG_REPORT_CLOSE,
+ region.getRegionInfo());
+ }
+
+ LOG.info("telling master that region server is shutting down at: " +
+ serverInfo.getServerAddress().toString());
+ hbaseMaster.regionServerReport(serverInfo, exitMsg, (HRegionInfo[])null);
+ } catch (Throwable e) {
+ LOG.warn("Failed to send exiting message to master: ",
+ RemoteExceptionHandler.checkThrowable(e));
+ }
+ LOG.info("stopping server at: " +
+ serverInfo.getServerAddress().toString());
+ }
+ if (this.hbaseMaster != null) {
+ HBaseRPC.stopProxy(this.hbaseMaster);
+ this.hbaseMaster = null;
+ }
+ join();
+
+ if (shutdownHDFS.get()) {
+ runThread(this.hdfsShutdownThread,
+ this.conf.getLong("hbase.dfs.shutdown.wait", 30000));
+ }
+
+ LOG.info(Thread.currentThread().getName() + " exiting");
+ }
+
+ /**
+ * Run and wait on passed thread in HRS context.
+ * @param t
+ * @param dfsShutdownWait
+ */
+ public void runThread(final Thread t, final long dfsShutdownWait) {
+ if (t == null) {
+ return;
+ }
+ t.start();
+ Threads.shutdown(t, dfsShutdownWait);
+ }
+
+ /**
+ * Set the hdfs shutdown thread to run on exit. Pass null to disable
+ * running of the hdfs shutdown thread on exit. Needed by tests.
+ * @param t Thread to run on exit. Pass null to disable the hdfs shutdown.
+ * @return Previous occupant of the shutdown thread position.
+ */
+ public Thread setHDFSShutdownThreadOnExit(final Thread t) {
+ Thread old = this.hdfsShutdownThread;
+ this.hdfsShutdownThread = t;
+ return old;
+ }
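+
+ // A minimal sketch of how a test might use the hook above to keep a shared
+ // mini-DFS cluster alive across a region server stop. The surrounding test
+ // harness and the "regionServer" variable are hypothetical, not part of
+ // this class:
+ //
+ //   Thread saved = regionServer.setHDFSShutdownThreadOnExit(null);
+ //   try {
+ //     regionServer.stop();   // orderly stop; deferred HDFS hook is not run
+ //   } finally {
+ //     regionServer.setHDFSShutdownThreadOnExit(saved);
+ //   }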
+
+ /*
+ * Run init. Sets up hlog and starts up all server threads.
+ * @param c Extra configuration.
+ */
+ protected void init(final MapWritable c) throws IOException {
+ try {
+ for (Map.Entry<Writable, Writable> e: c.entrySet()) {
+ String key = e.getKey().toString();
+ String value = e.getValue().toString();
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Config from master: " + key + "=" + value);
+ }
+ this.conf.set(key, value);
+ }
+ // Master may have sent us a new address with the other configs.
+ // Update our address in this case. See HBASE-719
+ if(conf.get("hbase.regionserver.address") != null)
+ serverInfo.setServerAddress(new HServerAddress
+ (conf.get("hbase.regionserver.address"),
+ serverInfo.getServerAddress().getPort()));
+ // Master sent us hbase.rootdir to use. Should be fully qualified
+ // path with file system specification included. Set 'fs.default.name'
+ // to match the filesystem on hbase.rootdir else underlying hadoop hdfs
+ // accessors will be going against wrong filesystem (unless all is set
+ // to defaults).
+ this.conf.set("fs.default.name", this.conf.get("hbase.rootdir"));
+ this.fs = FileSystem.get(this.conf);
+
+ // Register shutdown hook for HRegionServer, runs an orderly shutdown
+ // when a kill signal is received
+ Runtime.getRuntime().addShutdownHook(new ShutdownThread(this,
+ Thread.currentThread()));
+ this.hdfsShutdownThread = suppressHdfsShutdownHook();
+
+ this.rootDir = new Path(this.conf.get(HConstants.HBASE_DIR));
+ this.log = setupHLog();
+ this.logFlusher.setHLog(log);
+ // Init in here rather than in constructor after thread name has been set
+ this.metrics = new RegionServerMetrics();
+ startServiceThreads();
+ isOnline = true;
+ } catch (Throwable e) {
+ this.isOnline = false;
+ this.stopRequested.set(true);
+ throw convertThrowableToIOE(cleanup(e, "Failed init"),
+ "Region server startup failed");
+ }
+ }
+
+ /*
+ * @param r Region to get RegionLoad for.
+ * @return RegionLoad instance.
+ * @throws IOException
+ */
+ private HServerLoad.RegionLoad createRegionLoad(final HRegion r)
+ throws IOException {
+ byte[] name = r.getRegionName();
+ int stores = 0;
+ int storefiles = 0;
+ int memcacheSizeMB = (int)(r.memcacheSize.get()/1024/1024);
+ int storefileIndexSizeMB = 0;
+ synchronized (r.stores) {
+ stores += r.stores.size();
+ for (Store store: r.stores.values()) {
+ storefiles += store.getStorefilesCount();
+ storefileIndexSizeMB +=
+ (int)(store.getStorefilesIndexSize()/1024/1024);
+ }
+ }
+ return new HServerLoad.RegionLoad(name, stores, storefiles, memcacheSizeMB,
+ storefileIndexSizeMB);
+ }
+
+ /**
+ * @param regionName
+ * @return An instance of RegionLoad.
+ * @throws IOException
+ */
+ public HServerLoad.RegionLoad createRegionLoad(final byte [] regionName)
+ throws IOException {
+ return createRegionLoad(this.onlineRegions.get(Bytes.mapKey(regionName)));
+ }
+
+ /*
+ * Cleanup after Throwable caught invoking method. Converts <code>t</code>
+ * to IOE if it isn't already.
+ * @param t Throwable
+ * @return Throwable converted to an IOE; methods can only let out IOEs.
+ */
+ private Throwable cleanup(final Throwable t) {
+ return cleanup(t, null);
+ }
+
+ /*
+ * Cleanup after Throwable caught invoking method. Converts <code>t</code>
+ * to IOE if it isn't already.
+ * @param t Throwable
+ * @param msg Message to log in error. Can be null.
+ * @return Throwable converted to an IOE; methods can only let out IOEs.
+ */
+ private Throwable cleanup(final Throwable t, final String msg) {
+ if (msg == null) {
+ LOG.error(RemoteExceptionHandler.checkThrowable(t));
+ } else {
+ LOG.error(msg, RemoteExceptionHandler.checkThrowable(t));
+ }
+ if (!checkOOME(t)) {
+ checkFileSystem();
+ }
+ return t;
+ }
+
+ /*
+ * @param t
+ * @return Make <code>t</code> an IOE if it isn't already.
+ */
+ private IOException convertThrowableToIOE(final Throwable t) {
+ return convertThrowableToIOE(t, null);
+ }
+
+ /*
+ * @param t
+ * @param msg Message to put in new IOE if passed <code>t</code> is not an IOE
+ * @return Make <code>t</code> an IOE if it isn't already.
+ */
+ private IOException convertThrowableToIOE(final Throwable t,
+ final String msg) {
+ return (t instanceof IOException? (IOException)t:
+ msg == null || msg.length() == 0?
+ new IOException(t): new IOException(msg, t));
+ }
+ /*
+ * Check if an OOME and if so, call abort.
+ * @param e
+ * @return True if we OOME'd and are aborting.
+ */
+ public boolean checkOOME(final Throwable e) {
+ boolean stop = false;
+ if (e instanceof OutOfMemoryError ||
+ (e.getCause() != null && e.getCause() instanceof OutOfMemoryError) ||
+ (e.getMessage() != null &&
+ e.getMessage().contains("java.lang.OutOfMemoryError"))) {
+ LOG.fatal("OutOfMemoryError, aborting.", e);
+ abort();
+ stop = true;
+ }
+ return stop;
+ }
+
+
+ /**
+ * Checks to see if the file system is still accessible.
+ * If not, sets abortRequested and stopRequested
+ *
+ * @return false if file system is not available
+ */
+ protected boolean checkFileSystem() {
+ if (this.fsOk && this.fs != null) {
+ try {
+ FSUtils.checkFileSystemAvailable(this.fs);
+ } catch (IOException e) {
+ LOG.fatal("Shutting down HRegionServer: file system not available", e);
+ abort();
+ this.fsOk = false;
+ }
+ }
+ return this.fsOk;
+ }
+
+ /**
+ * Thread that waits until the region server leaves safe mode and then
+ * gradually raises the per-cycle compaction limit until it is unlimited.
+ */
+ private class CompactionLimitThread extends Thread {
+ protected CompactionLimitThread() {}
+
+ @Override
+ public void run() {
+ // First wait until we exit safe mode
+ synchronized (safeMode) {
+ while(safeMode.get()) {
+ LOG.debug("Waiting to exit safe mode");
+ try {
+ safeMode.wait();
+ } catch (InterruptedException e) {
+ // ignore
+ }
+ }
+ }
+
+ // now that safemode is off, slowly increase the per-cycle compaction
+ // limit, finally setting it to unlimited (-1)
+
+ int compactionCheckInterval =
+ conf.getInt("hbase.regionserver.thread.splitcompactcheckfrequency",
+ 20 * 1000);
+ final int limitSteps[] = {
+ 1, 1, 1, 1,
+ 2, 2, 2, 2, 2, 2,
+ 3, 3, 3, 3, 3, 3, 3, 3,
+ 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
+ -1
+ };
+ for (int i = 0; i < limitSteps.length; i++) {
+ // Just log changes.
+ if (compactSplitThread.getLimit() != limitSteps[i] &&
+ LOG.isDebugEnabled()) {
+ LOG.debug("setting compaction limit to " + limitSteps[i]);
+ }
+ compactSplitThread.setLimit(limitSteps[i]);
+ try {
+ Thread.sleep(compactionCheckInterval);
+ } catch (InterruptedException ex) {
+ // unlimit compactions before exiting
+ compactSplitThread.setLimit(-1);
+ if (LOG.isDebugEnabled()) {
+ LOG.debug(this.getName() + " exiting on interrupt");
+ }
+ return;
+ }
+ }
+ LOG.info("compactions no longer limited");
+ }
+ }
+
+ /*
+ * Thread to shutdown the region server in an orderly manner. This thread
+ * is registered as a shutdown hook in the HRegionServer constructor and is
+ * only called when the HRegionServer receives a kill signal.
+ */
+ private static class ShutdownThread extends Thread {
+ private final HRegionServer instance;
+ private final Thread mainThread;
+
+ /**
+ * @param instance
+ * @param mainThread
+ */
+ public ShutdownThread(HRegionServer instance, Thread mainThread) {
+ this.instance = instance;
+ this.mainThread = mainThread;
+ }
+
+ @Override
+ public void run() {
+ LOG.info("Starting shutdown thread.");
+
+ // tell the region server to stop
+ instance.stop();
+
+ // Wait for main thread to exit.
+ Threads.shutdown(mainThread);
+
+ LOG.info("Shutdown thread complete");
+ }
+ }
+
+ // We need to call HDFS shutdown when we are done shutting down
+ private Thread hdfsShutdownThread;
+
+ /*
+ * Inner class that runs on a long period checking if regions need major
+ * compaction.
+ */
+ private static class MajorCompactionChecker extends Chore {
+ private final HRegionServer instance;
+
+ MajorCompactionChecker(final HRegionServer h,
+ final int sleepTime, final AtomicBoolean stopper) {
+ super(sleepTime, stopper);
+ this.instance = h;
+ LOG.info("Runs every " + sleepTime + "ms");
+ }
+
+ @Override
+ protected void chore() {
+ Set<Integer> keys = this.instance.onlineRegions.keySet();
+ for (Integer i: keys) {
+ HRegion r = this.instance.onlineRegions.get(i);
+ try {
+ if (r != null && r.isMajorCompaction()) {
+ // Queue a compaction. Will recognize if major is needed.
+ this.instance.compactSplitThread.
+ compactionRequested(r, getName() + " requests major compaction");
+ }
+ } catch (IOException e) {
+ LOG.warn("Failed major compaction check on " + r, e);
+ }
+ }
+ }
+ }
+
+ /**
+ * So, HDFS caches FileSystems so when you call FileSystem.get it's fast. In
+ * order to make sure things are cleaned up, it also creates a shutdown hook
+ * so that all filesystems can be closed when the process is terminated. This
+ * conveniently runs concurrently with our own shutdown handler, and
+ * therefore causes all the filesystems to be closed before the server can do
+ * all its necessary cleanup.
+ *
+ * The crazy dirty reflection in this method sneaks into the FileSystem cache
+ * and grabs the shutdown hook, removes it from the list of active shutdown
+ * hooks, and hangs onto it until later. Then, after we're properly done with
+ * our graceful shutdown, we can execute the hdfs hook manually to make sure
+ * loose ends are tied up.
+ *
+ * This seems quite fragile and susceptible to breaking if Hadoop changes
+ * anything about the way this cleanup is managed. Keep an eye on things.
+ */
+ private Thread suppressHdfsShutdownHook() {
+ try {
+ Field field = FileSystem.class.getDeclaredField ("clientFinalizer");
+ field.setAccessible(true);
+ Thread hdfsClientFinalizer = (Thread)field.get(null);
+ if (hdfsClientFinalizer == null) {
+ throw new RuntimeException("client finalizer is null, can't suppress!");
+ }
+ Runtime.getRuntime().removeShutdownHook(hdfsClientFinalizer);
+ return hdfsClientFinalizer;
+
+ } catch (NoSuchFieldException nsfe) {
+ LOG.fatal("Couldn't find field 'clientFinalizer' in FileSystem!", nsfe);
+ throw new RuntimeException("Failed to suppress HDFS shutdown hook");
+ } catch (IllegalAccessException iae) {
+ LOG.fatal("Couldn't access field 'clientFinalizer' in FileSystem!", iae);
+ throw new RuntimeException("Failed to suppress HDFS shutdown hook");
+ }
+ }
+
+ /**
+ * Report the status of the server. A server is online once all the startup
+ * is completed (setting up filesystem, starting service threads, etc.). This
+ * method is designed mostly to be useful in tests.
+ * @return true if online, false if not.
+ */
+ public boolean isOnline() {
+ return isOnline;
+ }
+
+ private HLog setupHLog() throws RegionServerRunningException,
+ IOException {
+
+ Path logdir = new Path(rootDir, HLog.getHLogDirectoryName(serverInfo));
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Log dir " + logdir);
+ }
+ if (fs.exists(logdir)) {
+ throw new RegionServerRunningException("region server already " +
+ "running at " + this.serverInfo.getServerAddress().toString() +
+ " because logdir " + logdir.toString() + " exists");
+ }
+ HLog newlog = new HLog(fs, logdir, conf, logRoller);
+ return newlog;
+ }
+
+ /*
+ * Update metrics for this region server. Called from the main run loop on each report interval.
+ */
+ protected void doMetrics() {
+ this.metrics.regions.set(this.onlineRegions.size());
+ this.metrics.incrementRequests(this.requestCount.get());
+ // Is this too expensive, every three seconds getting a lock on onlineRegions
+ // and then iterating over every store? Can I make metrics sloppier and avoid
+ // the synchronizations?
+ int stores = 0;
+ int storefiles = 0;
+ long memcacheSize = 0;
+ long storefileIndexSize = 0;
+ synchronized (this.onlineRegions) {
+ for (Map.Entry<Integer, HRegion> e: this.onlineRegions.entrySet()) {
+ HRegion r = e.getValue();
+ memcacheSize += r.memcacheSize.get();
+ synchronized (r.stores) {
+ stores += r.stores.size();
+ for(Map.Entry<byte [], Store> ee: r.stores.entrySet()) {
+ Store store = ee.getValue();
+ storefiles += store.getStorefilesCount();
+ try {
+ storefileIndexSize += store.getStorefilesIndexSize();
+ } catch (IOException ex) {
+ LOG.warn("error getting store file index size for " + store +
+ ": " + StringUtils.stringifyException(ex));
+ }
+ }
+ }
+ }
+ }
+ this.metrics.stores.set(stores);
+ this.metrics.storefiles.set(storefiles);
+ this.metrics.memcacheSizeMB.set((int)(memcacheSize/(1024*1024)));
+ this.metrics.storefileIndexSizeMB.set((int)(storefileIndexSize/(1024*1024)));
+ }
+
+ /**
+ * @return Region server metrics instance.
+ */
+ public RegionServerMetrics getMetrics() {
+ return this.metrics;
+ }
+
+ /*
+ * Start maintenance threads, the Server, the Worker and the lease checker.
+ * Install an UncaughtExceptionHandler that calls abort on the RegionServer
+ * if we get an unhandled exception. We cannot set the handler on all
+ * threads; Server's internal Listener thread is off limits. For Server, if
+ * there is an OOME, it waits a while then retries. Meantime, a flush or a
+ * compaction that tries to run should trigger the same critical condition
+ * and the shutdown will run. On its way out, this server will shut down
+ * Server. Leases is somewhere in between: it runs an internal thread which,
+ * while it inherits from Chore, keeps its own stop mechanism, so it needs
+ * to be stopped by this hosting server. Worker logs the exception and exits.
+ */
+ private void startServiceThreads() throws IOException {
+ String n = Thread.currentThread().getName();
+ UncaughtExceptionHandler handler = new UncaughtExceptionHandler() {
+ public void uncaughtException(Thread t, Throwable e) {
+ abort();
+ LOG.fatal("Set stop flag in " + t.getName(), e);
+ }
+ };
+ Threads.setDaemonThreadRunning(this.logRoller, n + ".logRoller",
+ handler);
+ Threads.setDaemonThreadRunning(this.logFlusher, n + ".logFlusher",
+ handler);
+ Threads.setDaemonThreadRunning(this.cacheFlusher, n + ".cacheFlusher",
+ handler);
+ Threads.setDaemonThreadRunning(this.compactSplitThread, n + ".compactor",
+ handler);
+ Threads.setDaemonThreadRunning(this.workerThread, n + ".worker", handler);
+ Threads.setDaemonThreadRunning(this.majorCompactionChecker,
+ n + ".majorCompactionChecker", handler);
+
+ // Leases is not a Thread. Internally it runs a daemon thread. If it gets
+ // an unhandled exception, it will just exit.
+ this.leases.setName(n + ".leaseChecker");
+ this.leases.start();
+ // Put up info server.
+ int port = this.conf.getInt("hbase.regionserver.info.port", 60030);
+ // -1 is for disabling info server
+ if (port >= 0) {
+ String addr = this.conf.get("hbase.master.info.bindAddress", "0.0.0.0");
+ // check if auto port bind enabled
+ boolean auto = this.conf.getBoolean("hbase.regionserver.info.port.auto",
+ false);
+ while (true) {
+ try {
+ this.infoServer = new InfoServer("regionserver", addr, port, false);
+ this.infoServer.setAttribute("regionserver", this);
+ this.infoServer.start();
+ break;
+ } catch (BindException e) {
+ if (!auto){
+ // auto bind disabled throw BindException
+ throw e;
+ }
+ // auto bind enabled, try to use another port
+ LOG.info("Failed binding http info server to port: " + port);
+ port++;
+ // update HRS server info
+ serverInfo.setInfoPort(port);
+ }
+ }
+ }
+
+ // Set up the safe mode handler if safe mode has been configured.
+ if (!conf.getBoolean("hbase.regionserver.safemode", true)) {
+ safeMode.set(false);
+ compactSplitThread.setLimit(-1);
+ LOG.debug("skipping safe mode");
+ } else {
+ this.compactionLimitThread = new CompactionLimitThread();
+ Threads.setDaemonThreadRunning(this.compactionLimitThread, n + ".safeMode",
+ handler);
+ }
+
+ // Start Server. This service is like leases in that it internally runs
+ // a thread.
+ this.server.start();
+ LOG.info("HRegionServer started at: " +
+ serverInfo.getServerAddress().toString());
+ }
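+
+ // The info-server knobs consulted above, shown as an illustrative sketch of
+ // setting them programmatically (the values are only examples; -1 disables
+ // the UI, and the ".auto" flag makes the loop above probe successive ports
+ // on BindException):
+ //
+ //   HBaseConfiguration conf = new HBaseConfiguration();
+ //   conf.setInt("hbase.regionserver.info.port", 60030);
+ //   conf.setBoolean("hbase.regionserver.info.port.auto", true);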
+
+ /*
+ * Verify that server is healthy
+ */
+ private boolean isHealthy() {
+ if (!fsOk) {
+ // File system problem
+ return false;
+ }
+ // Verify that all threads are alive
+ if (!(leases.isAlive() && compactSplitThread.isAlive() &&
+ cacheFlusher.isAlive() && logRoller.isAlive() &&
+ workerThread.isAlive() && this.majorCompactionChecker.isAlive())) {
+ // One or more threads are no longer alive - shut down
+ stop();
+ return false;
+ }
+ return true;
+ }
+
+ /*
+ * Run some housekeeping tasks before we go into 'hibernation' sleeping at
+ * the end of the main HRegionServer run loop.
+ */
+ private void housekeeping() {
+ // If the todo list has > 0 messages, iterate looking for open region
+ // messages. Send the master a message that we're working on its
+ // processing so it doesn't assign the region elsewhere.
+ if (this.toDo.isEmpty()) {
+ return;
+ }
+ // This iterator is 'safe'. We are guaranteed a view on state of the
+ // queue at time iterator was taken out. Apparently goes from oldest.
+ for (ToDoEntry e: this.toDo) {
+ HMsg msg = e.msg;
+ if (msg == null) {
+ LOG.warn("Message is empty: " + e);
+ }
+ if (e.msg.isType(HMsg.Type.MSG_REGION_OPEN)) {
+ addProcessingMessage(e.msg.getRegionInfo());
+ }
+ }
+ }
+
+ /** @return the HLog */
+ HLog getLog() {
+ return this.log;
+ }
+
+ /**
+ * Sets a flag that will cause all the HRegionServer threads to shut down
+ * in an orderly fashion. Used by unit tests.
+ */
+ public void stop() {
+ this.stopRequested.set(true);
+ synchronized(this) {
+ notifyAll(); // Wakes run() if it is sleeping
+ }
+ }
+
+ /**
+ * Cause the server to exit without closing the regions it is serving, the
+ * log it is using and without notifying the master.
+ * Used in unit testing and on catastrophic events such as HDFS being yanked
+ * out from under hbase or an OOME.
+ */
+ public void abort() {
+ this.abortRequested = true;
+ this.reservedSpace.clear();
+ LOG.info("Dump of metrics: " + this.metrics.toString());
+ stop();
+ }
+
+ /**
+ * Wait on all threads to finish.
+ * Presumption is that all closes and stops have already been called.
+ */
+ void join() {
+ Threads.shutdown(this.majorCompactionChecker);
+ Threads.shutdown(this.workerThread);
+ Threads.shutdown(this.cacheFlusher);
+ Threads.shutdown(this.compactSplitThread);
+ Threads.shutdown(this.logRoller);
+ }
+
+ private boolean getMaster() {
+ HServerAddress masterAddress = null;
+ while (masterAddress == null) {
+ if (stopRequested.get()) {
+ return false;
+ }
+ try {
+ masterAddress = zooKeeperWrapper.readMasterAddressOrThrow();
+ } catch (IOException e) {
+ LOG.warn("Unable to read master address from ZooKeeper. Retrying." +
+ " Error was:", e);
+ sleeper.sleep();
+ }
+ }
+
+ LOG.info("Telling master at " + masterAddress + " that we are up");
+ HMasterRegionInterface master = null;
+ while (!stopRequested.get() && master == null) {
+ try {
+ // Do initial RPC setup. The final argument indicates that the RPC
+ // should retry indefinitely.
+ master = (HMasterRegionInterface)HBaseRPC.waitForProxy(
+ HMasterRegionInterface.class, HBaseRPCProtocolVersion.versionID,
+ masterAddress.getInetSocketAddress(),
+ this.conf, -1, this.rpcTimeout);
+ } catch (IOException e) {
+ LOG.warn("Unable to connect to master. Retrying. Error was:", e);
+ sleeper.sleep();
+ }
+ }
+ this.hbaseMaster = master;
+ return true;
+ }
+
+ /*
+ * Let the master know we're here
+ * Run initialization using parameters passed us by the master.
+ */
+ private MapWritable reportForDuty() {
+ if (!getMaster()) {
+ return null;
+ }
+ MapWritable result = null;
+ long lastMsg = 0;
+ while(!stopRequested.get()) {
+ try {
+ this.requestCount.set(0);
+ MemoryUsage memory =
+ ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
+ HServerLoad hsl = new HServerLoad(0, (int)(memory.getUsed()/1024/1024),
+ (int)(memory.getMax()/1024/1024));
+ this.serverInfo.setLoad(hsl);
+ if (LOG.isDebugEnabled())
+ LOG.debug("sending initial server load: " + hsl);
+ lastMsg = System.currentTimeMillis();
+ result = this.hbaseMaster.regionServerStartup(serverInfo);
+ break;
+ } catch (Leases.LeaseStillHeldException e) {
+ LOG.info("Lease " + e.getName() + " already held on master. Check " +
+ "DNS configuration so that all region servers are" +
+ "reporting their true IPs and not 127.0.0.1. Otherwise, this" +
+ "problem should resolve itself after the lease period of " +
+ this.conf.get("hbase.master.lease.period")
+ + " seconds expires over on the master");
+ } catch (IOException e) {
+ LOG.warn("error telling master we are up", e);
+ }
+ sleeper.sleep(lastMsg);
+ }
+ return result;
+ }
+
+ /* Add to the outbound message buffer */
+ private void reportOpen(HRegionInfo region) {
+ outboundMsgs.add(new HMsg(HMsg.Type.MSG_REPORT_OPEN, region));
+ }
+
+ /* Add to the outbound message buffer */
+ private void reportClose(HRegionInfo region) {
+ reportClose(region, null);
+ }
+
+ /* Add to the outbound message buffer */
+ private void reportClose(final HRegionInfo region, final byte[] message) {
+ outboundMsgs.add(new HMsg(HMsg.Type.MSG_REPORT_CLOSE, region, message));
+ }
+
+ /**
+ * Add to the outbound message buffer
+ *
+ * When a region splits, we need to tell the master that there are two new
+ * regions that need to be assigned.
+ *
+ * We do not need to inform the master about the old region, because we've
+ * updated the meta or root regions, and the master will pick that up on its
+ * next rescan of the root or meta tables.
+ */
+ void reportSplit(HRegionInfo oldRegion, HRegionInfo newRegionA,
+ HRegionInfo newRegionB) {
+ outboundMsgs.add(new HMsg(HMsg.Type.MSG_REPORT_SPLIT, oldRegion,
+ ("Daughters; " +
+ newRegionA.getRegionNameAsString() + ", " +
+ newRegionB.getRegionNameAsString()).getBytes()));
+ outboundMsgs.add(new HMsg(HMsg.Type.MSG_REPORT_OPEN, newRegionA));
+ outboundMsgs.add(new HMsg(HMsg.Type.MSG_REPORT_OPEN, newRegionB));
+ }
+
+ //////////////////////////////////////////////////////////////////////////////
+ // HMaster-given operations
+ //////////////////////////////////////////////////////////////////////////////
+
+ /*
+ * Data structure to hold a HMsg and retries count.
+ */
+ private static final class ToDoEntry {
+ protected volatile int tries;
+ protected final HMsg msg;
+
+ ToDoEntry(final HMsg msg) {
+ this.tries = 0;
+ this.msg = msg;
+ }
+ }
+
+ final BlockingQueue<ToDoEntry> toDo = new LinkedBlockingQueue<ToDoEntry>();
+ private Worker worker;
+ private Thread workerThread;
+
+ /** Thread that performs long running requests from the master */
+ class Worker implements Runnable {
+ void stop() {
+ synchronized(toDo) {
+ toDo.notifyAll();
+ }
+ }
+
+ public void run() {
+ try {
+ while(!stopRequested.get()) {
+ ToDoEntry e = null;
+ try {
+ e = toDo.poll(threadWakeFrequency, TimeUnit.MILLISECONDS);
+ if(e == null || stopRequested.get()) {
+ continue;
+ }
+ LOG.info("Worker: " + e.msg);
+ HRegion region = null;
+ HRegionInfo info = e.msg.getRegionInfo();
+ switch(e.msg.getType()) {
+
+ case MSG_REGIONSERVER_QUIESCE:
+ closeUserRegions();
+ break;
+
+ case MSG_REGION_OPEN:
+ // Open a region
+ if (!haveRootRegion.get() && !info.isRootRegion()) {
+ // root region is not online yet. requeue this task
+ LOG.info("putting region open request back into queue because" +
+ " root region is not yet available");
+ try {
+ toDo.put(e);
+ } catch (InterruptedException ex) {
+ LOG.warn("insertion into toDo queue was interrupted", ex);
+ break;
+ }
+ }
+ openRegion(info);
+ break;
+
+ case MSG_REGION_CLOSE:
+ // Close a region
+ closeRegion(e.msg.getRegionInfo(), true);
+ break;
+
+ case MSG_REGION_CLOSE_WITHOUT_REPORT:
+ // Close a region, don't reply
+ closeRegion(e.msg.getRegionInfo(), false);
+ break;
+
+ case MSG_REGION_SPLIT:
+ region = getRegion(info.getRegionName());
+ region.flushcache();
+ region.regionInfo.shouldSplit(true);
+ // force a compaction; split will be side-effect.
+ compactSplitThread.compactionRequested(region,
+ e.msg.getType().name());
+ break;
+
+ case MSG_REGION_MAJOR_COMPACT:
+ case MSG_REGION_COMPACT:
+ // Compact a region
+ region = getRegion(info.getRegionName());
+ compactSplitThread.compactionRequested(region,
+ e.msg.isType(Type.MSG_REGION_MAJOR_COMPACT),
+ e.msg.getType().name());
+ break;
+
+ case MSG_REGION_FLUSH:
+ region = getRegion(info.getRegionName());
+ region.flushcache();
+ break;
+
+ default:
+ throw new AssertionError(
+ "Impossible state during msg processing. Instruction: "
+ + e.msg.toString());
+ }
+ } catch (InterruptedException ex) {
+ // continue
+ } catch (Exception ex) {
+ if (ex instanceof IOException) {
+ ex = RemoteExceptionHandler.checkIOException((IOException) ex);
+ }
+ if(e != null && e.tries < numRetries) {
+ LOG.warn(ex);
+ e.tries++;
+ try {
+ toDo.put(e);
+ } catch (InterruptedException ie) {
+ throw new RuntimeException("Putting into msgQueue was " +
+ "interrupted.", ex);
+ }
+ } else {
+ LOG.error("unable to process message" +
+ (e != null ? (": " + e.msg.toString()) : ""), ex);
+ if (!checkFileSystem()) {
+ break;
+ }
+ }
+ }
+ }
+ } catch(Throwable t) {
+ if (!checkOOME(t)) {
+ LOG.fatal("Unhandled exception", t);
+ }
+ } finally {
+ LOG.info("worker thread exiting");
+ }
+ }
+ }
+
+ void openRegion(final HRegionInfo regionInfo) {
+ // If historian is not online and this is not a meta region, online it.
+ if (!regionInfo.isMetaRegion() &&
+ !RegionHistorian.getInstance().isOnline()) {
+ RegionHistorian.getInstance().online(this.conf);
+ }
+ Integer mapKey = Bytes.mapKey(regionInfo.getRegionName());
+ HRegion region = this.onlineRegions.get(mapKey);
+ if (region == null) {
+ try {
+ region = instantiateRegion(regionInfo);
+ // Startup a compaction early if one is needed.
+ this.compactSplitThread.
+ compactionRequested(region, "Region open check");
+ } catch (Throwable e) {
+ Throwable t = cleanup(e,
+ "Error opening " + regionInfo.getRegionNameAsString());
+ // TODO: add an extra field in HRegionInfo to indicate that there is
+ // an error. We can't do that now because that would be an incompatible
+ // change that would require a migration
+ reportClose(regionInfo, StringUtils.stringifyException(t).getBytes());
+ return;
+ }
+ this.lock.writeLock().lock();
+ try {
+ this.log.setSequenceNumber(region.getMinSequenceId());
+ this.onlineRegions.put(mapKey, region);
+ } finally {
+ this.lock.writeLock().unlock();
+ }
+ }
+ reportOpen(regionInfo);
+ }
+
+ protected HRegion instantiateRegion(final HRegionInfo regionInfo)
+ throws IOException {
+ HRegion r = new HRegion(HTableDescriptor.getTableDir(rootDir, regionInfo
+ .getTableDesc().getName()), this.log, this.fs, conf, regionInfo,
+ this.cacheFlusher);
+ r.initialize(null, new Progressable() {
+ public void progress() {
+ addProcessingMessage(regionInfo);
+ }
+ });
+ return r;
+ }
+
+ /**
+ * Add a MSG_REPORT_PROCESS_OPEN to the outbound queue.
+ * This method is called while the region is in the queue of regions to
+ * process, and again while the region is being opened, from the Worker
+ * thread that is running the region open.
+ * @param hri Region to add the message for
+ */
+ public void addProcessingMessage(final HRegionInfo hri) {
+ getOutboundMsgs().add(new HMsg(HMsg.Type.MSG_REPORT_PROCESS_OPEN, hri));
+ }
+
+ void closeRegion(final HRegionInfo hri, final boolean reportWhenCompleted)
+ throws IOException {
+ HRegion region = this.removeFromOnlineRegions(hri);
+ if (region != null) {
+ region.close();
+ if(reportWhenCompleted) {
+ reportClose(hri);
+ }
+ }
+ }
+
+ /** Called either when the master tells us to restart or from stop() */
+ ArrayList<HRegion> closeAllRegions() {
+ ArrayList<HRegion> regionsToClose = new ArrayList<HRegion>();
+ this.lock.writeLock().lock();
+ try {
+ regionsToClose.addAll(onlineRegions.values());
+ onlineRegions.clear();
+ } finally {
+ this.lock.writeLock().unlock();
+ }
+ for(HRegion region: regionsToClose) {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("closing region " + Bytes.toString(region.getRegionName()));
+ }
+ try {
+ region.close(abortRequested);
+ } catch (Throwable e) {
+ cleanup(e, "Error closing " + Bytes.toString(region.getRegionName()));
+ }
+ }
+ return regionsToClose;
+ }
+
+ /*
+ * Thread to run close of a region.
+ */
+ private static class RegionCloserThread extends Thread {
+ private final HRegion r;
+
+ protected RegionCloserThread(final HRegion r) {
+ super(Thread.currentThread().getName() + ".regionCloser." + r.toString());
+ this.r = r;
+ }
+
+ @Override
+ public void run() {
+ try {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Closing region " + r.toString());
+ }
+ r.close();
+ } catch (Throwable e) {
+ LOG.error("Error closing region " + r.toString(),
+ RemoteExceptionHandler.checkThrowable(e));
+ }
+ }
+ }
+
+ /** Called as the first stage of cluster shutdown. */
+ void closeUserRegions() {
+ ArrayList<HRegion> regionsToClose = new ArrayList<HRegion>();
+ this.lock.writeLock().lock();
+ try {
+ synchronized (onlineRegions) {
+ for (Iterator<Map.Entry<Integer, HRegion>> i =
+ onlineRegions.entrySet().iterator(); i.hasNext();) {
+ Map.Entry<Integer, HRegion> e = i.next();
+ HRegion r = e.getValue();
+ if (!r.getRegionInfo().isMetaRegion()) {
+ regionsToClose.add(r);
+ i.remove();
+ }
+ }
+ }
+ } finally {
+ this.lock.writeLock().unlock();
+ }
+ // Run region closes in parallel.
+ Set<Thread> threads = new HashSet<Thread>();
+ try {
+ for (final HRegion r : regionsToClose) {
+ RegionCloserThread t = new RegionCloserThread(r);
+ t.start();
+ threads.add(t);
+ }
+ } finally {
+ for (Thread t : threads) {
+ while (t.isAlive()) {
+ try {
+ t.join();
+ } catch (InterruptedException e) {
+ e.printStackTrace();
+ }
+ }
+ }
+ }
+ this.quiesced.set(true);
+ if (onlineRegions.size() == 0) {
+ outboundMsgs.add(REPORT_EXITING);
+ } else {
+ outboundMsgs.add(REPORT_QUIESCED);
+ }
+ }
+
+ //
+ // HRegionInterface
+ //
+
+ public HRegionInfo getRegionInfo(final byte [] regionName)
+ throws NotServingRegionException {
+ requestCount.incrementAndGet();
+ return getRegion(regionName).getRegionInfo();
+ }
+
+ public Cell [] get(final byte [] regionName, final byte [] row,
+ final byte [] column, final long timestamp, final int numVersions)
+ throws IOException {
+ checkOpen();
+ requestCount.incrementAndGet();
+ try {
+ List<KeyValue> results =
+ getRegion(regionName).get(row, column, timestamp, numVersions);
+ return Cell.createSingleCellArray(results);
+ } catch (Throwable t) {
+ throw convertThrowableToIOE(cleanup(t));
+ }
+ }
+
+ public RowResult getRow(final byte [] regionName, final byte [] row,
+ final byte [][] columns, final long ts,
+ final int numVersions, final long lockId)
+ throws IOException {
+ checkOpen();
+ requestCount.incrementAndGet();
+ try {
+ // convert the columns array into a set so it's easy to check later.
+ NavigableSet<byte []> columnSet = null;
+ if (columns != null) {
+ columnSet = new TreeSet<byte []>(Bytes.BYTES_COMPARATOR);
+ columnSet.addAll(Arrays.asList(columns));
+ }
+ HRegion region = getRegion(regionName);
+ HbaseMapWritable<byte [], Cell> result =
+ region.getFull(row, columnSet, ts, numVersions, getLockFromId(lockId));
+ if (result == null || result.isEmpty())
+ return null;
+ return new RowResult(row, result);
+ } catch (Throwable t) {
+ throw convertThrowableToIOE(cleanup(t));
+ }
+ }
+
+ public RowResult getClosestRowBefore(final byte [] regionName,
+ final byte [] row, final byte [] columnFamily)
+ throws IOException {
+ checkOpen();
+ requestCount.incrementAndGet();
+ try {
+ // locate the region we're operating on
+ HRegion region = getRegion(regionName);
+ // ask the region for all the data
+ RowResult rr = region.getClosestRowBefore(row, columnFamily);
+ return rr;
+ } catch (Throwable t) {
+ throw convertThrowableToIOE(cleanup(t));
+ }
+ }
+
+ public RowResult next(final long scannerId) throws IOException {
+ RowResult[] rrs = next(scannerId, 1);
+ return rrs.length == 0 ? null : rrs[0];
+ }
+
+ public RowResult [] next(final long scannerId, int nbRows) throws IOException {
+ checkOpen();
+ List<List<KeyValue>> results = new ArrayList<List<KeyValue>>();
+ try {
+ String scannerName = String.valueOf(scannerId);
+ InternalScanner s = scanners.get(scannerName);
+ if (s == null) {
+ throw new UnknownScannerException("Name: " + scannerName);
+ }
+ this.leases.renewLease(scannerName);
+ for (int i = 0; i < nbRows; i++) {
+ requestCount.incrementAndGet();
+ // Collect values to be returned here
+ List<KeyValue> values = new ArrayList<KeyValue>();
+ while (s.next(values)) {
+ if (!values.isEmpty()) {
+ // Row has something in it. Return the value.
+ results.add(values);
+ break;
+ }
+ }
+ }
+ return RowResult.createRowResultArray(results);
+ } catch (Throwable t) {
+ throw convertThrowableToIOE(cleanup(t));
+ }
+ }
+
+ public void batchUpdate(final byte [] regionName, BatchUpdate b, long lockId)
+ throws IOException {
+ if (b.getRow() == null)
+ throw new IllegalArgumentException("update has null row");
+
+ checkOpen();
+ this.requestCount.incrementAndGet();
+ HRegion region = getRegion(regionName);
+ try {
+ cacheFlusher.reclaimMemcacheMemory();
+ region.batchUpdate(b, getLockFromId(b.getRowLock()));
+ } catch (Throwable t) {
+ throw convertThrowableToIOE(cleanup(t));
+ }
+ }
+
+ public int batchUpdates(final byte[] regionName, final BatchUpdate [] b)
+ throws IOException {
+ int i = 0;
+ checkOpen();
+ try {
+ HRegion region = getRegion(regionName);
+ this.cacheFlusher.reclaimMemcacheMemory();
+ Integer[] locks = new Integer[b.length];
+ for (i = 0; i < b.length; i++) {
+ this.requestCount.incrementAndGet();
+ locks[i] = getLockFromId(b[i].getRowLock());
+ region.batchUpdate(b[i], locks[i]);
+ }
+ } catch(WrongRegionException ex) {
+ return i;
+ } catch (NotServingRegionException ex) {
+ return i;
+ } catch (Throwable t) {
+ throw convertThrowableToIOE(cleanup(t));
+ }
+ return -1;
+ }
+
+ public boolean checkAndSave(final byte [] regionName, final BatchUpdate b,
+ final HbaseMapWritable<byte[],byte[]> expectedValues)
+ throws IOException {
+ if (b.getRow() == null)
+ throw new IllegalArgumentException("update has null row");
+ checkOpen();
+ this.requestCount.incrementAndGet();
+ HRegion region = getRegion(regionName);
+ try {
+ cacheFlusher.reclaimMemcacheMemory();
+ return region.checkAndSave(b,
+ expectedValues,getLockFromId(b.getRowLock()), true);
+ } catch (Throwable t) {
+ throw convertThrowableToIOE(cleanup(t));
+ }
+ }
+
+ //
+ // remote scanner interface
+ //
+
+ public long openScanner(byte [] regionName, byte [][] cols, byte [] firstRow,
+ final long timestamp, final RowFilterInterface filter)
+ throws IOException {
+ checkOpen();
+ NullPointerException npe = null;
+ if (regionName == null) {
+ npe = new NullPointerException("regionName is null");
+ } else if (cols == null) {
+ npe = new NullPointerException("columns to scan is null");
+ } else if (firstRow == null) {
+ npe = new NullPointerException("firstRow for scanner is null");
+ }
+ if (npe != null) {
+ throw new IOException("Invalid arguments to openScanner", npe);
+ }
+ requestCount.incrementAndGet();
+ try {
+ HRegion r = getRegion(regionName);
+ InternalScanner s =
+ r.getScanner(cols, firstRow, timestamp, filter);
+ long scannerId = addScanner(s);
+ return scannerId;
+ } catch (Throwable t) {
+ throw convertThrowableToIOE(cleanup(t, "Failed openScanner"));
+ }
+ }
+
+ protected long addScanner(InternalScanner s) throws LeaseStillHeldException {
+ long scannerId = -1L;
+ scannerId = rand.nextLong();
+ String scannerName = String.valueOf(scannerId);
+ synchronized(scanners) {
+ scanners.put(scannerName, s);
+ }
+ this.leases.
+ createLease(scannerName, new ScannerListener(scannerName));
+ return scannerId;
+ }
+
+ public void close(final long scannerId) throws IOException {
+ try {
+ checkOpen();
+ requestCount.incrementAndGet();
+ String scannerName = String.valueOf(scannerId);
+ InternalScanner s = null;
+ synchronized(scanners) {
+ s = scanners.remove(scannerName);
+ }
+ if(s == null) {
+ throw new UnknownScannerException(scannerName);
+ }
+ s.close();
+ this.leases.cancelLease(scannerName);
+ } catch (Throwable t) {
+ throw convertThrowableToIOE(cleanup(t));
+ }
+ }
+
+ /**
+ * Instantiated as a scanner lease.
+ * If the lease times out, the scanner is closed
+ */
+ private class ScannerListener implements LeaseListener {
+ private final String scannerName;
+
+ ScannerListener(final String n) {
+ this.scannerName = n;
+ }
+
+ public void leaseExpired() {
+ LOG.info("Scanner " + this.scannerName + " lease expired");
+ InternalScanner s = null;
+ synchronized(scanners) {
+ s = scanners.remove(this.scannerName);
+ }
+ if (s != null) {
+ try {
+ s.close();
+ } catch (IOException e) {
+ LOG.error("Closing scanner", e);
+ }
+ }
+ }
+ }
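+
+ // Sketch of the scanner lease protocol from the client's side of
+ // HRegionInterface (error handling omitted; "server" is assumed to be an
+ // RPC proxy to this region server, "cols" and "startRow" caller-supplied):
+ //
+ //   long sid = server.openScanner(regionName, cols, startRow,
+ //       HConstants.LATEST_TIMESTAMP, null);        // lease created
+ //   try {
+ //     for (RowResult rr = server.next(sid); rr != null; rr = server.next(sid)) {
+ //       // each next() renews the lease; process rr here
+ //     }
+ //   } finally {
+ //     server.close(sid);                           // lease cancelled
+ //   }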
+
+ //
+ // Methods that do the actual work for the remote API
+ //
+
+ public void deleteAll(final byte [] regionName, final byte [] row,
+ final byte [] column, final long timestamp, final long lockId)
+ throws IOException {
+ HRegion region = getRegion(regionName);
+ region.deleteAll(row, column, timestamp, getLockFromId(lockId));
+ }
+
+ public void deleteAll(final byte [] regionName, final byte [] row,
+ final long timestamp, final long lockId)
+ throws IOException {
+ HRegion region = getRegion(regionName);
+ region.deleteAll(row, timestamp, getLockFromId(lockId));
+ }
+
+ public void deleteAllByRegex(byte[] regionName, byte[] row, String colRegex,
+ long timestamp, long lockId) throws IOException {
+ getRegion(regionName).deleteAllByRegex(row, colRegex, timestamp,
+ getLockFromId(lockId));
+ }
+
+ public void deleteFamily(byte [] regionName, byte [] row, byte [] family,
+ long timestamp, final long lockId)
+ throws IOException{
+ getRegion(regionName).deleteFamily(row, family, timestamp,
+ getLockFromId(lockId));
+ }
+
+ public void deleteFamilyByRegex(byte[] regionName, byte[] row, String familyRegex,
+ long timestamp, long lockId) throws IOException {
+ getRegion(regionName).deleteFamilyByRegex(row, familyRegex, timestamp,
+ getLockFromId(lockId));
+ }
+
+ public boolean exists(byte[] regionName, byte[] row, byte[] column,
+ long timestamp, long lockId)
+ throws IOException {
+ return getRegion(regionName).exists(row, column, timestamp,
+ getLockFromId(lockId));
+ }
+
+ public long lockRow(byte [] regionName, byte [] row)
+ throws IOException {
+ checkOpen();
+ NullPointerException npe = null;
+ if(regionName == null) {
+ npe = new NullPointerException("regionName is null");
+ } else if(row == null) {
+ npe = new NullPointerException("row to lock is null");
+ }
+ if(npe != null) {
+ IOException io = new IOException("Invalid arguments to lockRow");
+ io.initCause(npe);
+ throw io;
+ }
+ requestCount.incrementAndGet();
+ try {
+ HRegion region = getRegion(regionName);
+ Integer r = region.obtainRowLock(row);
+ long lockId = addRowLock(r,region);
+ LOG.debug("Row lock " + lockId + " explicitly acquired by client");
+ return lockId;
+ } catch (Throwable t) {
+ throw convertThrowableToIOE(cleanup(t,
+ "Error obtaining row lock (fsOk: " + this.fsOk + ")"));
+ }
+ }
+
+ protected long addRowLock(Integer r, HRegion region) throws LeaseStillHeldException {
+ long lockId = -1L;
+ lockId = rand.nextLong();
+ String lockName = String.valueOf(lockId);
+ synchronized(rowlocks) {
+ rowlocks.put(lockName, r);
+ }
+ this.leases.
+ createLease(lockName, new RowLockListener(lockName, region));
+ return lockId;
+ }
+
+ /**
+ * Method to get the Integer lock identifier used internally
+ * from the long lock identifier used by the client.
+ * @param lockId long row lock identifier from client
+ * @return Integer row lock used internally in HRegion
+ * @throws IOException Thrown if this is not a valid client lock id.
+ */
+ private Integer getLockFromId(long lockId)
+ throws IOException {
+ if(lockId == -1L) {
+ return null;
+ }
+ String lockName = String.valueOf(lockId);
+ Integer rl = null;
+ synchronized(rowlocks) {
+ rl = rowlocks.get(lockName);
+ }
+ if(rl == null) {
+ throw new IOException("Invalid row lock");
+ }
+ this.leases.renewLease(lockName);
+ return rl;
+ }
+
+ public void unlockRow(byte [] regionName, long lockId)
+ throws IOException {
+ checkOpen();
+ NullPointerException npe = null;
+ if(regionName == null) {
+ npe = new NullPointerException("regionName is null");
+ } else if(lockId == -1L) {
+ npe = new NullPointerException("lockId is null");
+ }
+ if(npe != null) {
+ IOException io = new IOException("Invalid arguments to unlockRow");
+ io.initCause(npe);
+ throw io;
+ }
+ requestCount.incrementAndGet();
+ try {
+ HRegion region = getRegion(regionName);
+ String lockName = String.valueOf(lockId);
+ Integer r = null;
+ synchronized(rowlocks) {
+ r = rowlocks.remove(lockName);
+ }
+ if(r == null) {
+ throw new UnknownRowLockException(lockName);
+ }
+ region.releaseRowLock(r);
+ this.leases.cancelLease(lockName);
+ LOG.debug("Row lock " + lockId + " has been explicitly released by client");
+ } catch (Throwable t) {
+ throw convertThrowableToIOE(cleanup(t));
+ }
+ }
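+
+ // Client-side lifecycle of an explicit row lock against this interface
+ // (sketch only; "server" is a hypothetical RPC proxy and "timestamp" a
+ // caller-chosen value):
+ //
+ //   long lockId = server.lockRow(regionName, row);          // lease created
+ //   server.deleteAll(regionName, row, timestamp, lockId);   // each use renews the lease
+ //   server.unlockRow(regionName, lockId);                   // lease cancelled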
+
+ Map<String, Integer> rowlocks =
+ new ConcurrentHashMap<String, Integer>();
+
+ /**
+ * Instantiated as a row lock lease.
+ * If the lease times out, the row lock is released
+ */
+ private class RowLockListener implements LeaseListener {
+ private final String lockName;
+ private final HRegion region;
+
+ RowLockListener(final String lockName, final HRegion region) {
+ this.lockName = lockName;
+ this.region = region;
+ }
+
+ public void leaseExpired() {
+ LOG.info("Row Lock " + this.lockName + " lease expired");
+ Integer r = null;
+ synchronized(rowlocks) {
+ r = rowlocks.remove(this.lockName);
+ }
+ if(r != null) {
+ region.releaseRowLock(r);
+ }
+ }
+ }
+
+ /**
+ * @return Info on this server.
+ */
+ public HServerInfo getServerInfo() {
+ return this.serverInfo;
+ }
+
+ /** @return the info server */
+ public InfoServer getInfoServer() {
+ return infoServer;
+ }
+
+ /**
+ * @return true if a stop has been requested.
+ */
+ public boolean isStopRequested() {
+ return stopRequested.get();
+ }
+
+ /**
+ * @return true if the region server is in safe mode
+ */
+ public boolean isInSafeMode() {
+ return safeMode.get();
+ }
+
+ /**
+ *
+ * @return the configuration
+ */
+ public HBaseConfiguration getConfiguration() {
+ return conf;
+ }
+
+ /** @return the write lock for the server */
+ ReentrantReadWriteLock.WriteLock getWriteLock() {
+ return lock.writeLock();
+ }
+
+ /**
+ * @return Immutable list of this servers regions.
+ */
+ public Collection<HRegion> getOnlineRegions() {
+ return Collections.unmodifiableCollection(onlineRegions.values());
+ }
+
+ /**
+ * @return The HRegionInfos from online regions sorted
+ */
+ public SortedSet<HRegionInfo> getSortedOnlineRegionInfos() {
+ SortedSet<HRegionInfo> result = new TreeSet<HRegionInfo>();
+ synchronized(this.onlineRegions) {
+ for (HRegion r: this.onlineRegions.values()) {
+ result.add(r.getRegionInfo());
+ }
+ }
+ return result;
+ }
+
+ /**
+ * This method removes HRegion corresponding to hri from the Map of onlineRegions.
+ *
+ * @param hri the HRegionInfo corresponding to the HRegion to-be-removed.
+ * @return the removed HRegion, or null if the HRegion was not in onlineRegions.
+ */
+ HRegion removeFromOnlineRegions(HRegionInfo hri) {
+ this.lock.writeLock().lock();
+ HRegion toReturn = null;
+ try {
+ toReturn = onlineRegions.remove(Bytes.mapKey(hri.getRegionName()));
+ } finally {
+ this.lock.writeLock().unlock();
+ }
+ return toReturn;
+ }
+
+ /**
+ * @return A new Map of online regions sorted by region size with the first
+ * entry being the biggest.
+ */
+ public SortedMap<Long, HRegion> getCopyOfOnlineRegionsSortedBySize() {
+ // we'll sort the regions in reverse
+ SortedMap<Long, HRegion> sortedRegions = new TreeMap<Long, HRegion>(
+ new Comparator<Long>() {
+ public int compare(Long a, Long b) {
+ return -1 * a.compareTo(b);
+ }
+ });
+ // Copy over all regions. Regions are sorted by size with biggest first.
+ synchronized (this.onlineRegions) {
+ for (HRegion region : this.onlineRegions.values()) {
+ sortedRegions.put(Long.valueOf(region.memcacheSize.get()), region);
+ }
+ }
+ return sortedRegions;
+ }
+
+ /**
+ * @param regionName
+ * @return HRegion for the passed <code>regionName</code> or null if named
+ * region is not member of the online regions.
+ */
+ public HRegion getOnlineRegion(final byte [] regionName) {
+ return onlineRegions.get(Bytes.mapKey(regionName));
+ }
+
+ /** @return the request count */
+ public AtomicInteger getRequestCount() {
+ return this.requestCount;
+ }
+
+ /** @return reference to FlushRequester */
+ public FlushRequester getFlushRequester() {
+ return this.cacheFlusher;
+ }
+
+ /**
+ * Protected utility method for safely obtaining an HRegion handle.
+ * @param regionName Name of online {@link HRegion} to return
+ * @return {@link HRegion} for <code>regionName</code>
+ * @throws NotServingRegionException
+ */
+ protected HRegion getRegion(final byte [] regionName)
+ throws NotServingRegionException {
+ HRegion region = null;
+ this.lock.readLock().lock();
+ try {
+ region = onlineRegions.get(Integer.valueOf(Bytes.hashCode(regionName)));
+ if (region == null) {
+ throw new NotServingRegionException(regionName);
+ }
+ return region;
+ } finally {
+ this.lock.readLock().unlock();
+ }
+ }
+
+ /**
+ * Get the top N most loaded regions this server is serving so we can
+ * tell the master which regions it can reallocate if we're overloaded.
+ * TODO: actually calculate which regions are most loaded. (Right now, we're
+ * just grabbing the first N regions being served regardless of load.)
+ */
+ protected HRegionInfo[] getMostLoadedRegions() {
+ ArrayList<HRegionInfo> regions = new ArrayList<HRegionInfo>();
+ synchronized (onlineRegions) {
+ for (HRegion r : onlineRegions.values()) {
+ if (r.isClosed() || r.isClosing()) {
+ continue;
+ }
+ if (regions.size() < numRegionsToReport) {
+ regions.add(r.getRegionInfo());
+ } else {
+ break;
+ }
+ }
+ }
+ return regions.toArray(new HRegionInfo[regions.size()]);
+ }
+
+ /**
+ * Called to verify that this server is up and running.
+ *
+ * @throws IOException
+ */
+ protected void checkOpen() throws IOException {
+ if (this.stopRequested.get() || this.abortRequested) {
+ throw new IOException("Server not running" +
+ (this.abortRequested? ", aborting": ""));
+ }
+ if (!fsOk) {
+ throw new IOException("File system not available");
+ }
+ }
+
+ /**
+ * @return Returns list of non-closed regions hosted on this server. If no
+ * regions to check, returns an empty list.
+ */
+ protected Set<HRegion> getRegionsToCheck() {
+ HashSet<HRegion> regionsToCheck = new HashSet<HRegion>();
+ //TODO: is this locking necessary?
+ lock.readLock().lock();
+ try {
+ regionsToCheck.addAll(this.onlineRegions.values());
+ } finally {
+ lock.readLock().unlock();
+ }
+ // Purge closed regions.
+ for (final Iterator<HRegion> i = regionsToCheck.iterator(); i.hasNext();) {
+ HRegion r = i.next();
+ if (r.isClosed()) {
+ i.remove();
+ }
+ }
+ return regionsToCheck;
+ }
+
+ public long getProtocolVersion(final String protocol,
+ final long clientVersion)
+ throws IOException {
+ if (protocol.equals(HRegionInterface.class.getName())) {
+ return HBaseRPCProtocolVersion.versionID;
+ }
+ throw new IOException("Unknown protocol to name node: " + protocol);
+ }
+
+ /**
+ * @return Queue to which you can add outbound messages.
+ */
+ protected List<HMsg> getOutboundMsgs() {
+ return this.outboundMsgs;
+ }
+
+ /**
+ * Return the total size of all memcaches in every region.
+ * @return memcache size in bytes
+ */
+ public long getGlobalMemcacheSize() {
+ long total = 0;
+ synchronized (onlineRegions) {
+ for (HRegion region : onlineRegions.values()) {
+ total += region.memcacheSize.get();
+ }
+ }
+ return total;
+ }
+
+ /**
+ * @return Return the leases.
+ */
+ protected Leases getLeases() {
+ return leases;
+ }
+
+ /**
+ * @return Return the rootDir.
+ */
+ protected Path getRootDir() {
+ return rootDir;
+ }
+
+ /**
+ * @return Return the fs.
+ */
+ protected FileSystem getFileSystem() {
+ return fs;
+ }
+
+ //
+ // Main program and support routines
+ //
+
+ private static void printUsageAndExit() {
+ printUsageAndExit(null);
+ }
+
+ private static void printUsageAndExit(final String message) {
+ if (message != null) {
+ System.err.println(message);
+ }
+ System.err.println("Usage: java " +
+ "org.apache.hbase.HRegionServer [--bind=hostname:port] start");
+ System.exit(0);
+ }
+
+ /**
+ * Do class main.
+ * @param args
+ * @param regionServerClass HRegionServer to instantiate.
+ */
+ protected static void doMain(final String [] args,
+ final Class<? extends HRegionServer> regionServerClass) {
+ if (args.length < 1) {
+ printUsageAndExit();
+ }
+ Configuration conf = new HBaseConfiguration();
+
+ // Process command-line args. TODO: Better cmd-line processing
+ // (but hopefully something not as painful as cli options).
+ final String addressArgKey = "--bind=";
+ for (String cmd: args) {
+ if (cmd.startsWith(addressArgKey)) {
+ conf.set(REGIONSERVER_ADDRESS, cmd.substring(addressArgKey.length()));
+ continue;
+ }
+
+ if (cmd.equals("start")) {
+ try {
+ // If 'local', don't start a region server here. Defer to
+ // LocalHBaseCluster. It manages 'local' clusters.
+ if (LocalHBaseCluster.isLocal(conf)) {
+ LOG.warn("Not starting a distinct region server because " +
+ "hbase.master is set to 'local' mode");
+ } else {
+ RuntimeMXBean runtime = ManagementFactory.getRuntimeMXBean();
+ if (runtime != null) {
+ LOG.info("vmInputArguments=" + runtime.getInputArguments());
+ }
+ Constructor<? extends HRegionServer> c =
+ regionServerClass.getConstructor(HBaseConfiguration.class);
+ HRegionServer hrs = c.newInstance(conf);
+ Thread t = new Thread(hrs);
+ t.setName("regionserver" + hrs.server.getListenerAddress());
+ t.start();
+ }
+ } catch (Throwable t) {
+ LOG.error( "Can not start region server because "+
+ StringUtils.stringifyException(t) );
+ System.exit(-1);
+ }
+ break;
+ }
+
+ if (cmd.equals("stop")) {
+ printUsageAndExit("To shutdown the regionserver run " +
+ "bin/hbase-daemon.sh stop regionserver or send a kill signal to" +
+ "the regionserver pid");
+ }
+
+ // Print out usage if we get to here.
+ printUsageAndExit();
+ }
+ }
+
+ /**
+ * @param args
+ */
+ public static void main(String [] args) {
+ Configuration conf = new HBaseConfiguration();
+ @SuppressWarnings("unchecked")
+ Class<? extends HRegionServer> regionServerClass = (Class<? extends HRegionServer>) conf
+ .getClass(HConstants.REGION_SERVER_IMPL, HRegionServer.class);
+ doMain(args, regionServerClass);
+ }
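+
+ // doMain() above loads the region server class named by
+ // HConstants.REGION_SERVER_IMPL and reflectively invokes its
+ // HBaseConfiguration constructor, so a custom server can be plugged in with
+ // a subclass along these (hypothetical) lines:
+ //
+ //   public class MyRegionServer extends HRegionServer {
+ //     public MyRegionServer(final HBaseConfiguration conf) throws IOException {
+ //       super(conf);   // exception clause assumed to mirror the superclass constructor
+ //     }
+ //   }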
+
+ /** {@inheritDoc} */
+ public long incrementColumnValue(byte[] regionName, byte[] row,
+ byte[] column, long amount) throws IOException {
+ checkOpen();
+
+ NullPointerException npe = null;
+ if (regionName == null) {
+ npe = new NullPointerException("regionName is null");
+ } else if (row == null) {
+ npe = new NullPointerException("row is null");
+ } else if (column == null) {
+ npe = new NullPointerException("column is null");
+ }
+ if (npe != null) {
+ IOException io = new IOException(
+ "Invalid arguments to incrementColumnValue", npe);
+ throw io;
+ }
+ requestCount.incrementAndGet();
+ try {
+ HRegion region = getRegion(regionName);
+ return region.incrementColumnValue(row, column, amount);
+ } catch (IOException e) {
+ checkFileSystem();
+ throw e;
+ }
+
+
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/InternalScanner.java b/src/java/org/apache/hadoop/hbase/regionserver/InternalScanner.java
new file mode 100644
index 0000000..eff23bd
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/regionserver/InternalScanner.java
@@ -0,0 +1,65 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.Closeable;
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.hadoop.hbase.KeyValue;
+
+/**
+ * Internal scanners differ from client-side scanners in that they operate on
+ * KeyValues instead of RowResults. This is because they are actually closer
+ * to how the data is physically stored, and therefore it is more convenient
+ * to interact with them that way. It is also much easier to merge the
+ * results at this level than it is with RowResults.
+ *
+ * <p>Additionally, we need to be able to determine if the scanner is doing
+ * wildcard column matches (when only a column family is specified or if a
+ * column regex is specified) or if multiple members of the same column family
+ * were specified. If so, we need to ignore the timestamp to ensure that we get
+ * all the family members, as they may have been last updated at different
+ * times.
+ */
+public interface InternalScanner extends Closeable {
+ /**
+ * Grab the next row's worth of values. The scanner will return the most
+ * recent data value for each row that is not newer than the target time
+ * passed when the scanner was created.
+ * @param results
+ * @return true if data was returned
+ * @throws IOException
+ */
+ public boolean next(List<KeyValue> results)
+ throws IOException;
+
+ /**
+ * Closes the scanner and releases any resources it has allocated
+ * @throws IOException
+ */
+ public void close() throws IOException;
+
+ /** @return true if the scanner is matching a column family or regex */
+ public boolean isWildcardScanner();
+
+ /** @return true if the scanner is matching multiple column family members */
+ public boolean isMultipleMatchScanner();
+}
\ No newline at end of file
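For orientation, a sketch of the typical way an InternalScanner is drained row by row; how the scanner instance is obtained is outside this interface and assumed here:

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.hadoop.hbase.KeyValue;
    import org.apache.hadoop.hbase.regionserver.InternalScanner;

    class InternalScannerDrainSketch {
      static void drain(InternalScanner scanner) throws IOException {
        try {
          List<KeyValue> results = new ArrayList<KeyValue>();
          boolean more;
          do {
            results.clear();              // reuse the buffer between rows
            more = scanner.next(results); // one row's worth of KeyValues
            // process `results` here
          } while (more);
        } finally {
          scanner.close();                // always release scanner resources
        }
      }
    }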
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/LogFlusher.java b/src/java/org/apache/hadoop/hbase/regionserver/LogFlusher.java
new file mode 100644
index 0000000..5620458
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/regionserver/LogFlusher.java
@@ -0,0 +1,60 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver;
+
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicReference;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.Chore;
+
+/**
+ * LogFlusher is a Chore that wakes every threadWakeInterval and calls
+ * the HLog to do an optional sync if there are unflushed entries, and the
+ * optionalFlushInterval has passed since the last flush.
+ */
+public class LogFlusher extends Chore {
+ static final Log LOG = LogFactory.getLog(LogFlusher.class);
+
+ private final AtomicReference<HLog> log =
+ new AtomicReference<HLog>(null);
+
+ LogFlusher(final int period, final AtomicBoolean stop) {
+ super(period, stop);
+ }
+
+ void setHLog(HLog log) {
+    // Lock the field (the parameter shadows it) so chore() and setHLog share a monitor.
+    synchronized (this.log) {
+ this.log.set(log);
+ }
+ }
+
+ @Override
+ protected void chore() {
+ synchronized (log) {
+ HLog hlog = log.get();
+ if (hlog != null) {
+ hlog.optionalSync();
+ }
+ }
+ }
+}
\ No newline at end of file
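A sketch of how a LogFlusher might be wired up, assuming a caller in the same org.apache.hadoop.hbase.regionserver package (the constructor and setHLog are package-private) and that Chore runs as a thread; the one-second period is an illustrative value:

    import java.util.concurrent.atomic.AtomicBoolean;

    class LogFlusherWiringSketch {
      static LogFlusher startFlusher(HLog hlog, AtomicBoolean stopRequested) {
        LogFlusher flusher = new LogFlusher(1000, stopRequested); // wake every 1s
        flusher.setHLog(hlog); // hand the flusher the log it should sync
        flusher.start();       // Chore then calls chore() on each wake interval
        return flusher;
      }
    }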
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/LogRollListener.java b/src/java/org/apache/hadoop/hbase/regionserver/LogRollListener.java
new file mode 100644
index 0000000..588c9fe
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/regionserver/LogRollListener.java
@@ -0,0 +1,29 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver;
+
+/**
+ * Mechanism by which the HLog requests a log roll
+ */
+public interface LogRollListener {
+ /** Request that the log be rolled */
+ public void logRollRequested();
+}
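An illustrative listener, e.g. for a test or standalone tool; the printing body is hypothetical:

    class LogRollListenerSketch {
      static LogRollListener printingListener() {
        return new LogRollListener() {
          public void logRollRequested() {
            System.out.println("HLog asked for a roll");
          }
        };
      }
    }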
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/LogRoller.java b/src/java/org/apache/hadoop/hbase/regionserver/LogRoller.java
new file mode 100644
index 0000000..f39df01
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/regionserver/LogRoller.java
@@ -0,0 +1,130 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.locks.ReentrantLock;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.RemoteExceptionHandler;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Runs periodically to determine if the HLog should be rolled.
+ *
+ * NOTE: This class extends Thread rather than Chore because its sleep can be
+ * cut short when there is something to do, whereas a Chore's sleep interval
+ * is fixed.
+ */
+class LogRoller extends Thread implements LogRollListener {
+ static final Log LOG = LogFactory.getLog(LogRoller.class);
+ private final ReentrantLock rollLock = new ReentrantLock();
+ private final AtomicBoolean rollLog = new AtomicBoolean(false);
+ private final HRegionServer server;
+
+ /** @param server */
+ public LogRoller(final HRegionServer server) {
+ super();
+ this.server = server;
+ }
+
+ @Override
+ public void run() {
+ while (!server.isStopRequested()) {
+ if (!rollLog.get()) {
+ synchronized (rollLog) {
+ try {
+ rollLog.wait(server.threadWakeFrequency);
+ } catch (InterruptedException e) {
+ continue;
+ }
+ }
+ continue;
+ }
+ rollLock.lock(); // Don't interrupt us. We're working
+ try {
+ byte [] regionToFlush = server.getLog().rollWriter();
+ if (regionToFlush != null) {
+ scheduleFlush(regionToFlush);
+ }
+ } catch (FailedLogCloseException e) {
+ LOG.fatal("Forcing server shutdown", e);
+ server.checkFileSystem();
+ server.abort();
+ } catch (java.net.ConnectException e) {
+ LOG.fatal("Forcing server shutdown", e);
+ server.checkFileSystem();
+ server.abort();
+ } catch (IOException ex) {
+ LOG.fatal("Log rolling failed with ioe: ",
+ RemoteExceptionHandler.checkIOException(ex));
+ server.checkFileSystem();
+ // Abort if we get here. We probably won't recover an IOE. HBASE-1132
+ server.abort();
+ } catch (Exception ex) {
+ LOG.error("Log rolling failed", ex);
+ server.checkFileSystem();
+ } finally {
+ rollLog.set(false);
+ rollLock.unlock();
+ }
+ }
+ LOG.info("LogRoller exiting.");
+ }
+
+ private void scheduleFlush(final byte [] region) {
+ boolean scheduled = false;
+ HRegion r = this.server.getOnlineRegion(region);
+ FlushRequester requester = null;
+ if (r != null) {
+ requester = this.server.getFlushRequester();
+ if (requester != null) {
+ requester.request(r);
+ scheduled = true;
+ }
+ }
+ if (!scheduled) {
+ LOG.warn("Failed to schedule flush of " +
+ Bytes.toString(region) + "r=" + r + ", requester=" + requester);
+ }
+ }
+
+ public void logRollRequested() {
+ synchronized (rollLog) {
+ rollLog.set(true);
+ rollLog.notifyAll();
+ }
+ }
+
+ /**
+   * Called by region server to wake up this thread if it is sleeping.
+ * It is sleeping if rollLock is not held.
+ */
+ public void interruptIfNecessary() {
+ try {
+ rollLock.lock();
+ this.interrupt();
+ } finally {
+ rollLock.unlock();
+ }
+ }
+}
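A lifecycle sketch from the server's point of view, assuming a caller in the same package: start the roller, let the HLog call logRollRequested() when its file grows, and use interruptIfNecessary() at shutdown so the thread exits promptly. The thread name is illustrative:

    class LogRollerLifecycleSketch {
      static LogRoller startRoller(HRegionServer server) {
        LogRoller roller = new LogRoller(server);
        roller.setName("regionserver.logRoller");
        roller.start();                // loops until server.isStopRequested()
        return roller;
      }

      static void shutdown(LogRoller roller) throws InterruptedException {
        roller.interruptIfNecessary(); // wake it if it is sleeping on rollLog
        roller.join();                 // wait for "LogRoller exiting."
      }
    }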
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/LruHashMap.java b/src/java/org/apache/hadoop/hbase/regionserver/LruHashMap.java
new file mode 100644
index 0000000..0fa23fb
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/regionserver/LruHashMap.java
@@ -0,0 +1,1099 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import org.apache.hadoop.hbase.io.HeapSize;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+/**
+ * The LruHashMap is a memory-aware HashMap with a configurable maximum
+ * memory footprint.
+ * <p>
+ * It maintains a list of all entries in the map ordered by
+ * access time. When space needs to be freed because the maximum has been
+ * reached, or the application has asked to free memory, entries will be
+ * evicted according to an LRU (least-recently-used) algorithm. That is,
+ * those entries which have not been accessed the longest will be evicted
+ * first.
+ * <p>
+ * Both the Key and Value Objects used for this class must implement
+ * <code>HeapSize</code> in order to track heap usage.
+ * <p>
+ * This class contains internal synchronization and is thread-safe.
+ */
+public class LruHashMap<K extends HeapSize, V extends HeapSize>
+implements HeapSize, Map<K,V> {
+
+ static final Log LOG = LogFactory.getLog(LruHashMap.class);
+
+ /** The default size (in bytes) of the LRU */
+ private static final long DEFAULT_MAX_MEM_USAGE = 50000;
+ /** The default capacity of the hash table */
+ private static final int DEFAULT_INITIAL_CAPACITY = 16;
+  /** The maximum capacity of the hash table */
+ private static final int MAXIMUM_CAPACITY = 1 << 30;
+ /** The default load factor to use */
+ private static final float DEFAULT_LOAD_FACTOR = 0.75f;
+
+ /** Memory overhead of this Object (for HeapSize) */
+ private static final int OVERHEAD = 5 * HeapSize.LONG + 2 * HeapSize.INT +
+ 2 * HeapSize.FLOAT + 3 * HeapSize.REFERENCE + 1 * HeapSize.ARRAY;
+
+ /** Load factor allowed (usually 75%) */
+ private final float loadFactor;
+ /** Number of key/vals in the map */
+ private int size;
+ /** Size at which we grow hash */
+ private int threshold;
+ /** Entries in the map */
+ private Entry [] entries;
+
+ /** Pointer to least recently used entry */
+ private Entry<K,V> headPtr;
+ /** Pointer to most recently used entry */
+ private Entry<K,V> tailPtr;
+
+ /** Maximum memory usage of this map */
+ private long memTotal = 0;
+ /** Amount of available memory */
+ private long memFree = 0;
+
+ /** Number of successful (found) get() calls */
+ private long hitCount = 0;
+ /** Number of unsuccessful (not found) get() calls */
+ private long missCount = 0;
+
+ /**
+ * Constructs a new, empty map with the specified initial capacity,
+ * load factor, and maximum memory usage.
+ *
+ * @param initialCapacity the initial capacity
+ * @param loadFactor the load factor
+ * @param maxMemUsage the maximum total memory usage
+ * @throws IllegalArgumentException if the initial capacity is less than one
+ * @throws IllegalArgumentException if the initial capacity is greater than
+ * the maximum capacity
+ * @throws IllegalArgumentException if the load factor is <= 0
+ * @throws IllegalArgumentException if the max memory usage is too small
+ * to support the base overhead
+ */
+ public LruHashMap(int initialCapacity, float loadFactor,
+ long maxMemUsage) {
+ if (initialCapacity < 1) {
+ throw new IllegalArgumentException("Initial capacity must be > 0");
+ }
+ if (initialCapacity > MAXIMUM_CAPACITY) {
+ throw new IllegalArgumentException("Initial capacity is too large");
+ }
+ if (loadFactor <= 0 || Float.isNaN(loadFactor)) {
+ throw new IllegalArgumentException("Load factor must be > 0");
+ }
+ if (maxMemUsage <= (OVERHEAD + initialCapacity * HeapSize.REFERENCE)) {
+ throw new IllegalArgumentException("Max memory usage too small to " +
+ "support base overhead");
+ }
+
+ /** Find a power of 2 >= initialCapacity */
+ int capacity = calculateCapacity(initialCapacity);
+ this.loadFactor = loadFactor;
+ this.threshold = calculateThreshold(capacity,loadFactor);
+ this.entries = new Entry[capacity];
+ this.memFree = maxMemUsage;
+ this.memTotal = maxMemUsage;
+ init();
+ }
+
+ /**
+ * Constructs a new, empty map with the specified initial capacity and
+ * load factor, and default maximum memory usage.
+ *
+ * @param initialCapacity the initial capacity
+ * @param loadFactor the load factor
+ * @throws IllegalArgumentException if the initial capacity is less than one
+ * @throws IllegalArgumentException if the initial capacity is greater than
+ * the maximum capacity
+ * @throws IllegalArgumentException if the load factor is <= 0
+ */
+ public LruHashMap(int initialCapacity, float loadFactor) {
+ this(initialCapacity, loadFactor, DEFAULT_MAX_MEM_USAGE);
+ }
+
+ /**
+ * Constructs a new, empty map with the specified initial capacity and
+ * with the default load factor and maximum memory usage.
+ *
+ * @param initialCapacity the initial capacity
+ * @throws IllegalArgumentException if the initial capacity is less than one
+ * @throws IllegalArgumentException if the initial capacity is greater than
+ * the maximum capacity
+ */
+ public LruHashMap(int initialCapacity) {
+ this(initialCapacity, DEFAULT_LOAD_FACTOR, DEFAULT_MAX_MEM_USAGE);
+ }
+
+ /**
+ * Constructs a new, empty map with the specified maximum memory usage
+ * and with default initial capacity and load factor.
+ *
+ * @param maxMemUsage the maximum total memory usage
+ * @throws IllegalArgumentException if the max memory usage is too small
+ * to support the base overhead
+ */
+ public LruHashMap(long maxMemUsage) {
+ this(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR,
+ maxMemUsage);
+ }
+
+ /**
+ * Constructs a new, empty map with the default initial capacity,
+ * load factor and maximum memory usage.
+ */
+ public LruHashMap() {
+ this(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR,
+ DEFAULT_MAX_MEM_USAGE);
+ }
+
+ //--------------------------------------------------------------------------
+ /**
+ * Get the currently available memory for this LRU in bytes.
+ * This is (maxAllowed - currentlyUsed).
+ *
+ * @return currently available bytes
+ */
+ public long getMemFree() {
+ return memFree;
+ }
+
+ /**
+ * Get the maximum memory allowed for this LRU in bytes.
+ *
+ * @return maximum allowed bytes
+ */
+ public long getMemMax() {
+ return memTotal;
+ }
+
+ /**
+ * Get the currently used memory for this LRU in bytes.
+ *
+ * @return currently used memory in bytes
+ */
+ public long getMemUsed() {
+ return (memTotal - memFree);
+ }
+
+ /**
+ * Get the number of hits to the map. This is the number of times
+ * a call to get() returns a matched key.
+ *
+ * @return number of hits
+ */
+ public long getHitCount() {
+ return hitCount;
+ }
+
+ /**
+ * Get the number of misses to the map. This is the number of times
+ * a call to get() returns null.
+ *
+ * @return number of misses
+ */
+ public long getMissCount() {
+ return missCount;
+ }
+
+ /**
+ * Get the hit ratio. This is the number of hits divided by the
+ * total number of requests.
+ *
+ * @return hit ratio (double between 0 and 1)
+ */
+ public double getHitRatio() {
+    return (double) hitCount / (hitCount + missCount);
+ }
+
+ /**
+ * Free the requested amount of memory from the LRU map.
+ *
+ * This will do LRU eviction from the map until at least as much
+ * memory as requested is freed. This does not affect the maximum
+ * memory usage parameter.
+ *
+ * @param requestedAmount memory to free from LRU in bytes
+ * @return actual amount of memory freed in bytes
+ */
+ public synchronized long freeMemory(long requestedAmount) throws Exception {
+    long minMemory = getMinimumUsage();
+    if(requestedAmount > (getMemUsed() - minMemory)) {
+ return clearAll();
+ }
+ long freedMemory = 0;
+ while(freedMemory < requestedAmount) {
+ freedMemory += evictFromLru();
+ }
+ return freedMemory;
+ }
+
+ /**
+ * The total memory usage of this map
+ *
+ * @return memory usage of map in bytes
+ */
+ public long heapSize() {
+ return (memTotal - memFree);
+ }
+
+ //--------------------------------------------------------------------------
+ /**
+ * Retrieves the value associated with the specified key.
+ *
+ * If an entry is found, it is updated in the LRU as the most recently
+ * used (last to be evicted) entry in the map.
+ *
+ * @param key the key
+ * @return the associated value, or null if none found
+ * @throws NullPointerException if key is null
+ */
+ public synchronized V get(Object key) {
+ checkKey((K)key);
+ int hash = hash(key);
+ int i = hashIndex(hash, entries.length);
+ Entry<K,V> e = entries[i];
+ while (true) {
+ if (e == null) {
+ missCount++;
+ return null;
+ }
+ if (e.hash == hash && isEqual(key, e.key)) {
+ // Hit! Update position in LRU
+ hitCount++;
+ updateLru(e);
+ return e.value;
+ }
+ e = e.next;
+ }
+ }
+
+ /**
+ * Insert a key-value mapping into the map.
+ *
+ * Entry will be inserted as the most recently used.
+ *
+ * Both the key and value are required to be Objects and must
+ * implement the HeapSize interface.
+ *
+ * @param key the key
+ * @param value the value
+ * @return the value that was previously mapped to this key, null if none
+   * @throws UnsupportedOperationException if either object does not
+   * implement HeapSize
+ * @throws NullPointerException if the key or value is null
+ */
+ public synchronized V put(K key, V value) {
+ checkKey(key);
+ checkValue(value);
+ int hash = hash(key);
+ int i = hashIndex(hash, entries.length);
+
+ // For old values
+ for (Entry<K,V> e = entries[i]; e != null; e = e.next) {
+ if (e.hash == hash && isEqual(key, e.key)) {
+ V oldValue = e.value;
+ long memChange = e.replaceValue(value);
+ checkAndFreeMemory(memChange);
+ // If replacing an old value for this key, update in LRU
+ updateLru(e);
+ return oldValue;
+ }
+ }
+ long memChange = addEntry(hash, key, value, i);
+ checkAndFreeMemory(memChange);
+ return null;
+ }
+
+ /**
+ * Deletes the mapping for the specified key if it exists.
+ *
+ * @param key the key of the entry to be removed from the map
+ * @return the value associated with the specified key, or null
+ * if no mapping exists.
+ */
+ public synchronized V remove(Object key) {
+ Entry<K,V> e = removeEntryForKey((K)key);
+ if(e == null) return null;
+ // Add freed memory back to available
+ memFree += e.heapSize();
+ return e.value;
+ }
+
+ /**
+ * Gets the size (number of entries) of the map.
+ *
+ * @return size of the map
+ */
+ public int size() {
+ return size;
+ }
+
+ /**
+ * Checks whether the map is currently empty.
+ *
+ * @return true if size of map is zero
+ */
+ public boolean isEmpty() {
+ return size == 0;
+ }
+
+ /**
+ * Clears all entries from the map.
+ *
+ * This frees all entries, tracking memory usage along the way.
+ * All references to entries are removed so they can be GC'd.
+ */
+ public synchronized void clear() {
+ memFree += clearAll();
+ }
+
+ //--------------------------------------------------------------------------
+ /**
+ * Checks whether there is a value in the map for the specified key.
+ *
+ * Does not affect the LRU.
+ *
+ * @param key the key to check
+ * @return true if the map contains a value for this key, false if not
+ * @throws NullPointerException if the key is null
+ */
+ public synchronized boolean containsKey(Object key) {
+ checkKey((K)key);
+ int hash = hash(key);
+ int i = hashIndex(hash, entries.length);
+ Entry e = entries[i];
+ while (e != null) {
+ if (e.hash == hash && isEqual(key, e.key))
+ return true;
+ e = e.next;
+ }
+ return false;
+ }
+
+ /**
+ * Checks whether this is a mapping which contains the specified value.
+ *
+ * Does not affect the LRU. This is an inefficient operation.
+ *
+ * @param value the value to check
+ * @return true if the map contains an entry for this value, false
+ * if not
+ * @throws NullPointerException if the value is null
+ */
+ public synchronized boolean containsValue(Object value) {
+ checkValue((V)value);
+ Entry[] tab = entries;
+ for (int i = 0; i < tab.length ; i++)
+ for (Entry e = tab[i] ; e != null ; e = e.next)
+ if (value.equals(e.value))
+ return true;
+ return false;
+ }
+
+ //--------------------------------------------------------------------------
+ /**
+ * Enforces key constraints. Null keys are not permitted and key must
+   * implement HeapSize. Verifying the second constraint at runtime is
+   * unnecessary because it is enforced by the generic type bounds.
+ *
+ * Can add other constraints in the future.
+ *
+ * @param key the key
+ * @throws NullPointerException if the key is null
+ * @throws UnsupportedOperationException if the key class does not
+ * implement the HeapSize interface
+ */
+ private void checkKey(K key) {
+ if(key == null) {
+ throw new NullPointerException("null keys are not allowed");
+ }
+ }
+
+ /**
+ * Enforces value constraints. Null values are not permitted and value must
+   * implement HeapSize. Verifying the second constraint at runtime is
+   * unnecessary because it is enforced by the generic type bounds.
+   *
+   * Can add other constraints in the future.
+ *
+ * @param value the value
+ * @throws NullPointerException if the value is null
+ * @throws UnsupportedOperationException if the value class does not
+ * implement the HeapSize interface
+ */
+ private void checkValue(V value) {
+ if(value == null) {
+ throw new NullPointerException("null values are not allowed");
+ }
+ }
+
+ /**
+ * Returns the minimum memory usage of the base map structure.
+ *
+ * @return baseline memory overhead of object in bytes
+ */
+ private long getMinimumUsage() {
+ return OVERHEAD + (entries.length * HeapSize.REFERENCE);
+ }
+
+ //--------------------------------------------------------------------------
+ /**
+ * Evicts and frees based on LRU until at least as much memory as requested
+ * is available.
+ *
+ * @param memNeeded the amount of memory needed in bytes
+ */
+ private void checkAndFreeMemory(long memNeeded) {
+ while(memFree < memNeeded) {
+ evictFromLru();
+ }
+ memFree -= memNeeded;
+ }
+
+ /**
+ * Evicts based on LRU. This removes all references and updates available
+ * memory.
+ *
+ * @return amount of memory freed in bytes
+ */
+ private long evictFromLru() {
+ long freed = headPtr.heapSize();
+ memFree += freed;
+ removeEntry(headPtr);
+ return freed;
+ }
+
+ /**
+ * Moves the specified entry to the most recently used slot of the
+ * LRU. This is called whenever an entry is fetched.
+ *
+ * @param e entry that was accessed
+ */
+ private void updateLru(Entry<K,V> e) {
+ Entry<K,V> prev = e.getPrevPtr();
+ Entry<K,V> next = e.getNextPtr();
+ if(next != null) {
+ if(prev != null) {
+ prev.setNextPtr(next);
+ next.setPrevPtr(prev);
+ } else {
+ headPtr = next;
+ headPtr.setPrevPtr(null);
+ }
+ e.setNextPtr(null);
+ e.setPrevPtr(tailPtr);
+ tailPtr.setNextPtr(e);
+ tailPtr = e;
+ }
+ }
+
+ /**
+ * Removes the specified entry from the map and LRU structure.
+ *
+ * @param entry entry to be removed
+ */
+ private void removeEntry(Entry<K,V> entry) {
+ K k = entry.key;
+ int hash = entry.hash;
+ int i = hashIndex(hash, entries.length);
+ Entry<K,V> prev = entries[i];
+ Entry<K,V> e = prev;
+
+ while (e != null) {
+ Entry<K,V> next = e.next;
+ if (e.hash == hash && isEqual(k, e.key)) {
+ size--;
+ if (prev == e) {
+ entries[i] = next;
+ } else {
+ prev.next = next;
+ }
+
+ Entry<K,V> prevPtr = e.getPrevPtr();
+ Entry<K,V> nextPtr = e.getNextPtr();
+
+ if(prevPtr != null && nextPtr != null) {
+ prevPtr.setNextPtr(nextPtr);
+ nextPtr.setPrevPtr(prevPtr);
+ } else if(prevPtr != null) {
+ tailPtr = prevPtr;
+ prevPtr.setNextPtr(null);
+ } else if(nextPtr != null) {
+ headPtr = nextPtr;
+ nextPtr.setPrevPtr(null);
+ }
+
+ return;
+ }
+ prev = e;
+ e = next;
+ }
+ }
+
+ /**
+ * Removes and returns the entry associated with the specified
+ * key.
+ *
+ * @param key key of the entry to be deleted
+ * @return entry that was removed, or null if none found
+ */
+ private Entry<K,V> removeEntryForKey(K key) {
+ int hash = hash(key);
+ int i = hashIndex(hash, entries.length);
+ Entry<K,V> prev = entries[i];
+ Entry<K,V> e = prev;
+
+ while (e != null) {
+ Entry<K,V> next = e.next;
+ if (e.hash == hash && isEqual(key, e.key)) {
+ size--;
+ if (prev == e) {
+ entries[i] = next;
+ } else {
+ prev.next = next;
+ }
+
+ // Updating LRU
+ Entry<K,V> prevPtr = e.getPrevPtr();
+ Entry<K,V> nextPtr = e.getNextPtr();
+ if(prevPtr != null && nextPtr != null) {
+ prevPtr.setNextPtr(nextPtr);
+ nextPtr.setPrevPtr(prevPtr);
+ } else if(prevPtr != null) {
+ tailPtr = prevPtr;
+ prevPtr.setNextPtr(null);
+ } else if(nextPtr != null) {
+ headPtr = nextPtr;
+ nextPtr.setPrevPtr(null);
+ }
+
+ return e;
+ }
+ prev = e;
+ e = next;
+ }
+
+ return e;
+ }
+
+ /**
+ * Adds a new entry with the specified key, value, hash code, and
+ * bucket index to the map.
+ *
+ * Also puts it in the bottom (most-recent) slot of the list and
+ * checks to see if we need to grow the array.
+ *
+ * @param hash hash value of key
+ * @param key the key
+ * @param value the value
+ * @param bucketIndex index into hash array to store this entry
+ * @return the amount of heap size used to store the new entry
+ */
+ private long addEntry(int hash, K key, V value, int bucketIndex) {
+ Entry<K,V> e = entries[bucketIndex];
+ Entry<K,V> newE = new Entry<K,V>(hash, key, value, e, tailPtr);
+ entries[bucketIndex] = newE;
+ // add as most recently used in lru
+ if (size == 0) {
+ headPtr = newE;
+ tailPtr = newE;
+ } else {
+ newE.setPrevPtr(tailPtr);
+ tailPtr.setNextPtr(newE);
+ tailPtr = newE;
+ }
+ // Grow table if we are past the threshold now
+ if (size++ >= threshold) {
+ growTable(2 * entries.length);
+ }
+ return newE.heapSize();
+ }
+
+ /**
+ * Clears all the entries in the map. Tracks the amount of memory being
+ * freed along the way and returns the total.
+ *
+ * Cleans up all references to allow old entries to be GC'd.
+ *
+ * @return total memory freed in bytes
+ */
+ private long clearAll() {
+ Entry cur;
+ Entry prev;
+ long freedMemory = 0;
+ for(int i=0; i<entries.length; i++) {
+ cur = entries[i];
+ while(cur != null) {
+ freedMemory += cur.heapSize();
+ cur = cur.next;
+ }
+ entries[i] = null;
+ }
+ headPtr = null;
+ tailPtr = null;
+ size = 0;
+ return freedMemory;
+ }
+
+ //--------------------------------------------------------------------------
+ /**
+ * Recreates the entire contents of the hashmap into a new array
+ * with double the capacity. This method is called when the number of
+ * keys in the map reaches the current threshold.
+ *
+ * @param newCapacity the new size of the hash entries
+ */
+ private void growTable(int newCapacity) {
+ Entry [] oldTable = entries;
+ int oldCapacity = oldTable.length;
+
+ // Do not allow growing the table beyond the max capacity
+ if (oldCapacity == MAXIMUM_CAPACITY) {
+ threshold = Integer.MAX_VALUE;
+ return;
+ }
+
+ // Determine how much additional space will be required to grow the array
+ long requiredSpace = (newCapacity - oldCapacity) * HeapSize.REFERENCE;
+
+ // Verify/enforce we have sufficient memory to grow
+ checkAndFreeMemory(requiredSpace);
+
+ Entry [] newTable = new Entry[newCapacity];
+
+ // Transfer existing entries to new hash table
+ for(int i=0; i < oldCapacity; i++) {
+ Entry<K,V> entry = oldTable[i];
+ if(entry != null) {
+ // Set to null for GC
+ oldTable[i] = null;
+ do {
+ Entry<K,V> next = entry.next;
+ int idx = hashIndex(entry.hash, newCapacity);
+ entry.next = newTable[idx];
+ newTable[idx] = entry;
+ entry = next;
+ } while(entry != null);
+ }
+ }
+
+ entries = newTable;
+ threshold = (int)(newCapacity * loadFactor);
+ }
+
+ /**
+ * Gets the hash code for the specified key.
+ * This implementation uses the additional hashing routine
+ * from JDK 1.4.
+ *
+ * @param key the key to get a hash value for
+ * @return the hash value
+ */
+ private int hash(Object key) {
+ int h = key.hashCode();
+ h += ~(h << 9);
+ h ^= (h >>> 14);
+ h += (h << 4);
+ h ^= (h >>> 10);
+ return h;
+ }
+
+ /**
+ * Compares two objects for equality. Method uses equals method and
+ * assumes neither value is null.
+ *
+ * @param x the first value
+ * @param y the second value
+ * @return true if equal
+ */
+ private boolean isEqual(Object x, Object y) {
+ return (x == y || x.equals(y));
+ }
+
+ /**
+ * Determines the index into the current hash table for the specified
+ * hashValue.
+ *
+ * @param hashValue the hash value
+ * @param length the current number of hash buckets
+ * @return the index of the current hash array to use
+ */
+ private int hashIndex(int hashValue, int length) {
+ return hashValue & (length - 1);
+ }
+
+ /**
+ * Calculates the capacity of the array backing the hash
+ * by normalizing capacity to a power of 2 and enforcing
+ * capacity limits.
+ *
+ * @param proposedCapacity the proposed capacity
+ * @return the normalized capacity
+ */
+ private int calculateCapacity(int proposedCapacity) {
+ int newCapacity = 1;
+ if(proposedCapacity > MAXIMUM_CAPACITY) {
+ newCapacity = MAXIMUM_CAPACITY;
+ } else {
+ while(newCapacity < proposedCapacity) {
+ newCapacity <<= 1;
+ }
+ if(newCapacity > MAXIMUM_CAPACITY) {
+ newCapacity = MAXIMUM_CAPACITY;
+ }
+ }
+ return newCapacity;
+ }
+
+ /**
+ * Calculates the threshold of the map given the capacity and load
+ * factor. Once the number of entries in the map grows to the
+ * threshold we will double the size of the array.
+ *
+ * @param capacity the size of the array
+ * @param factor the load factor of the hash
+ */
+ private int calculateThreshold(int capacity, float factor) {
+ return (int)(capacity * factor);
+ }
+
+ /**
+ * Set the initial heap usage of this class. Includes class variable
+ * overhead and the entry array.
+ */
+ private void init() {
+ memFree -= OVERHEAD;
+ memFree -= (entries.length * HeapSize.REFERENCE);
+ }
+
+ //--------------------------------------------------------------------------
+ /**
+ * Debugging function that returns a List sorted by access time.
+ *
+ * The order is oldest to newest (first in list is next to be evicted).
+ *
+ * @return Sorted list of entries
+ */
+ public List<Entry<K,V>> entryLruList() {
+ List<Entry<K,V>> entryList = new ArrayList<Entry<K,V>>();
+ Entry<K,V> entry = headPtr;
+ while(entry != null) {
+ entryList.add(entry);
+ entry = entry.getNextPtr();
+ }
+ return entryList;
+ }
+
+ /**
+ * Debugging function that returns a Set of all entries in the hash table.
+ *
+ * @return Set of entries in hash
+ */
+ public Set<Entry<K,V>> entryTableSet() {
+ Set<Entry<K,V>> entrySet = new HashSet<Entry<K,V>>();
+ Entry [] table = entries;
+ for(int i=0;i<table.length;i++) {
+ for(Entry e = table[i]; e != null; e = e.next) {
+ entrySet.add(e);
+ }
+ }
+ return entrySet;
+ }
+
+ /**
+ * Get the head of the linked list (least recently used).
+ *
+ * @return head of linked list
+ */
+ public Entry getHeadPtr() {
+ return headPtr;
+ }
+
+ /**
+ * Get the tail of the linked list (most recently used).
+ *
+ * @return tail of linked list
+ */
+ public Entry getTailPtr() {
+ return tailPtr;
+ }
+
+ //--------------------------------------------------------------------------
+ /**
+ * To best optimize this class, some of the methods that are part of a
+ * Map implementation are not supported. This is primarily related
+ * to being able to get Sets and Iterators of this map which require
+ * significant overhead and code complexity to support and are
+ * unnecessary for the requirements of this class.
+ */
+
+ /**
+ * Intentionally unimplemented.
+ */
+ public Set<Map.Entry<K,V>> entrySet() {
+ throw new UnsupportedOperationException(
+ "entrySet() is intentionally unimplemented");
+ }
+
+ /**
+ * Intentionally unimplemented.
+ */
+ public boolean equals(Object o) {
+ throw new UnsupportedOperationException(
+ "equals(Object) is intentionally unimplemented");
+ }
+
+ /**
+ * Intentionally unimplemented.
+ */
+ public int hashCode() {
+ throw new UnsupportedOperationException(
+ "hashCode(Object) is intentionally unimplemented");
+ }
+
+ /**
+ * Intentionally unimplemented.
+ */
+ public Set<K> keySet() {
+ throw new UnsupportedOperationException(
+ "keySet() is intentionally unimplemented");
+ }
+
+ /**
+ * Intentionally unimplemented.
+ */
+ public void putAll(Map<? extends K, ? extends V> m) {
+ throw new UnsupportedOperationException(
+ "putAll() is intentionally unimplemented");
+ }
+
+ /**
+ * Intentionally unimplemented.
+ */
+ public Collection<V> values() {
+ throw new UnsupportedOperationException(
+ "values() is intentionally unimplemented");
+ }
+
+ //--------------------------------------------------------------------------
+ /**
+ * Entry to store key/value mappings.
+ * <p>
+ * Contains previous and next pointers for the doubly linked-list which is
+ * used for LRU eviction.
+ * <p>
+ * Instantiations of this class are memory aware. Both the key and value
+ * classes used must also implement <code>HeapSize</code>.
+ */
+ protected static class Entry<K extends HeapSize, V extends HeapSize>
+ implements Map.Entry<K,V>, HeapSize {
+ /** The baseline overhead memory usage of this class */
+ static final int OVERHEAD = 1 * HeapSize.LONG + 5 * HeapSize.REFERENCE +
+ 2 * HeapSize.INT;
+
+ /** The key */
+ protected final K key;
+ /** The value */
+ protected V value;
+ /** The hash value for this entries key */
+ protected final int hash;
+ /** The next entry in the hash chain (for collisions) */
+ protected Entry<K,V> next;
+
+ /** The previous entry in the LRU list (towards LRU) */
+ protected Entry<K,V> prevPtr;
+ /** The next entry in the LRU list (towards MRU) */
+ protected Entry<K,V> nextPtr;
+
+ /** The precomputed heap size of this entry */
+ protected long heapSize;
+
+ /**
+ * Create a new entry.
+ *
+ * @param h the hash value of the key
+ * @param k the key
+ * @param v the value
+ * @param nextChainPtr the next entry in the hash chain, null if none
+ * @param prevLruPtr the previous entry in the LRU
+ */
+ Entry(int h, K k, V v, Entry<K,V> nextChainPtr, Entry<K,V> prevLruPtr) {
+ value = v;
+ next = nextChainPtr;
+ key = k;
+ hash = h;
+ prevPtr = prevLruPtr;
+ nextPtr = null;
+ // Pre-compute heap size
+ heapSize = OVERHEAD + k.heapSize() + v.heapSize();
+ }
+
+ /**
+ * Get the key of this entry.
+ *
+ * @return the key associated with this entry
+ */
+ public K getKey() {
+ return key;
+ }
+
+ /**
+ * Get the value of this entry.
+ *
+ * @return the value currently associated with this entry
+ */
+ public V getValue() {
+ return value;
+ }
+
+ /**
+ * Set the value of this entry.
+ *
+ * It is not recommended to use this method when changing the value.
+ * Rather, using <code>replaceValue</code> will return the difference
+ * in heap usage between the previous and current values.
+ *
+ * @param newValue the new value to associate with this entry
+ * @return the value previously associated with this entry
+ */
+ public V setValue(V newValue) {
+ V oldValue = value;
+ value = newValue;
+ return oldValue;
+ }
+
+ /**
+ * Replace the value of this entry.
+ *
+ * Computes and returns the difference in heap size when changing
+ * the value associated with this entry.
+ *
+ * @param newValue the new value to associate with this entry
+ * @return the change in heap usage of this entry in bytes
+ */
+ protected long replaceValue(V newValue) {
+ long sizeDiff = newValue.heapSize() - value.heapSize();
+ value = newValue;
+ heapSize += sizeDiff;
+ return sizeDiff;
+ }
+
+ /**
+     * Returns true if the specified entry has the same key and the
+ * same value as this entry.
+ *
+ * @param o entry to test against current
+     * @return true if entries have equal key and value, false if not
+ */
+ public boolean equals(Object o) {
+ if (!(o instanceof Map.Entry))
+ return false;
+ Map.Entry e = (Map.Entry)o;
+ Object k1 = getKey();
+ Object k2 = e.getKey();
+ if (k1 == k2 || (k1 != null && k1.equals(k2))) {
+ Object v1 = getValue();
+ Object v2 = e.getValue();
+ if (v1 == v2 || (v1 != null && v1.equals(v2)))
+ return true;
+ }
+ return false;
+ }
+
+ /**
+ * Returns the hash code of the entry by xor'ing the hash values
+ * of the key and value of this entry.
+ *
+ * @return hash value of this entry
+ */
+ public int hashCode() {
+ return (key.hashCode() ^ value.hashCode());
+ }
+
+ /**
+ * Returns String representation of the entry in form "key=value"
+ *
+ * @return string value of entry
+ */
+ public String toString() {
+ return getKey() + "=" + getValue();
+ }
+
+ //------------------------------------------------------------------------
+ /**
+ * Sets the previous pointer for the entry in the LRU.
+ * @param prevPtr previous entry
+ */
+ protected void setPrevPtr(Entry<K,V> prevPtr){
+ this.prevPtr = prevPtr;
+ }
+
+ /**
+ * Returns the previous pointer for the entry in the LRU.
+ * @return previous entry
+ */
+ protected Entry<K,V> getPrevPtr(){
+ return prevPtr;
+ }
+
+ /**
+ * Sets the next pointer for the entry in the LRU.
+ * @param nextPtr next entry
+ */
+ protected void setNextPtr(Entry<K,V> nextPtr){
+ this.nextPtr = nextPtr;
+ }
+
+ /**
+     * Returns the next pointer for the entry in the LRU.
+ * @return next entry
+ */
+ protected Entry<K,V> getNextPtr(){
+ return nextPtr;
+ }
+
+ /**
+ * Returns the pre-computed and "deep" size of the Entry
+ * @return size of the entry in bytes
+ */
+ public long heapSize() {
+ return heapSize;
+ }
+ }
+}
+
+
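A usage sketch for the map above. The Box wrapper is purely illustrative (real callers supply keys and values that already implement HeapSize), and the 16 KB cap and size estimate are made-up numbers:

    import org.apache.hadoop.hbase.io.HeapSize;
    import org.apache.hadoop.hbase.regionserver.LruHashMap;

    public class LruHashMapUsageSketch {
      // Minimal HeapSize carrier for the example; equals/hashCode delegate to
      // the payload so lookups behave like a normal map.
      static class Box implements HeapSize {
        final String payload;
        Box(String payload) { this.payload = payload; }
        public long heapSize() { return 32 + 2L * payload.length(); } // rough guess
        public boolean equals(Object o) {
          return o instanceof Box && ((Box) o).payload.equals(payload);
        }
        public int hashCode() { return payload.hashCode(); }
        public String toString() { return payload; }
      }

      public static void main(String[] args) {
        // Cap accounted heap at ~16 KB; least-recently-used entries are
        // evicted once the cap would be exceeded.
        LruHashMap<Box, Box> map = new LruHashMap<Box, Box>(16 * 1024);
        map.put(new Box("row1"), new Box("value1"));
        Box hit = map.get(new Box("row1"));
        System.out.println("hit=" + hit + " hitRatio=" + map.getHitRatio());
      }
    }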
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/Memcache.java b/src/java/org/apache/hadoop/hbase/regionserver/Memcache.java
new file mode 100644
index 0000000..f2dc207
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/regionserver/Memcache.java
@@ -0,0 +1,758 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.lang.management.ManagementFactory;
+import java.lang.management.RuntimeMXBean;
+import java.rmi.UnexpectedException;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.SortedSet;
+import java.util.TreeMap;
+import java.util.TreeSet;
+import java.util.concurrent.ConcurrentSkipListSet;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+import java.util.regex.Pattern;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.regionserver.HRegion.Counter;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * The Memcache holds in-memory modifications to the HRegion. Modifications
+ * are {@link KeyValue}s. When asked to flush, current memcache is moved
+ * to snapshot and is cleared. We continue to serve edits out of new memcache
+ * and backing snapshot until flusher reports in that the flush succeeded. At
+ * this point we let the snapshot go.
+ * TODO: Adjust size of the memcache when we remove items because they have
+ * been deleted.
+ */
+class Memcache {
+ private static final Log LOG = LogFactory.getLog(Memcache.class);
+
+ private final long ttl;
+
+ // Note that since these structures are always accessed with a lock held,
+ // no additional synchronization is required.
+
+ // The currently active sorted set of edits. Using explicit type because
+ // if I use NavigableSet, I lose some facility -- I can't get a NavigableSet
+ // when I do tailSet or headSet.
+ volatile ConcurrentSkipListSet<KeyValue> memcache;
+
+ // Snapshot of memcache. Made for flusher.
+ volatile ConcurrentSkipListSet<KeyValue> snapshot;
+
+ private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
+
+ final KeyValue.KVComparator comparator;
+
+ // Used comparing versions -- same r/c and ts but different type.
+ final KeyValue.KVComparator comparatorIgnoreType;
+
+ // Used comparing versions -- same r/c and type but different timestamp.
+ final KeyValue.KVComparator comparatorIgnoreTimestamp;
+
+ // TODO: Fix this guess by studying jprofiler
+ private final static int ESTIMATED_KV_HEAP_TAX = 60;
+
+ /**
+ * Default constructor. Used for tests.
+ */
+ public Memcache() {
+ this(HConstants.FOREVER, KeyValue.COMPARATOR);
+ }
+
+ /**
+ * Constructor.
+ * @param ttl The TTL for cache entries, in milliseconds.
+ * @param c
+ */
+ public Memcache(final long ttl, final KeyValue.KVComparator c) {
+ this.ttl = ttl;
+ this.comparator = c;
+ this.comparatorIgnoreTimestamp =
+ this.comparator.getComparatorIgnoringTimestamps();
+ this.comparatorIgnoreType = this.comparator.getComparatorIgnoringType();
+ this.memcache = createSet(c);
+ this.snapshot = createSet(c);
+ }
+
+ static ConcurrentSkipListSet<KeyValue> createSet(final KeyValue.KVComparator c) {
+ return new ConcurrentSkipListSet<KeyValue>(c);
+ }
+
+ void dump() {
+ for (KeyValue kv: this.memcache) {
+ LOG.info(kv);
+ }
+ for (KeyValue kv: this.snapshot) {
+ LOG.info(kv);
+ }
+ }
+
+ /**
+ * Creates a snapshot of the current Memcache.
+   * Snapshot must be cleared by call to {@link #clearSnapshot(Set)}.
+ * To get the snapshot made by this method, use {@link #getSnapshot}.
+ */
+ void snapshot() {
+ this.lock.writeLock().lock();
+ try {
+ // If snapshot currently has entries, then flusher failed or didn't call
+ // cleanup. Log a warning.
+ if (!this.snapshot.isEmpty()) {
+ LOG.warn("Snapshot called again without clearing previous. " +
+ "Doing nothing. Another ongoing flush or did we fail last attempt?");
+ } else {
+ // We used to synchronize on the memcache here but we're inside a
+ // write lock so removed it. Comment is left in case removal was a
+ // mistake. St.Ack
+ if (!this.memcache.isEmpty()) {
+ this.snapshot = this.memcache;
+ this.memcache = createSet(this.comparator);
+ }
+ }
+ } finally {
+ this.lock.writeLock().unlock();
+ }
+ }
+
+ /**
+ * Return the current snapshot.
+ * Called by flusher to get current snapshot made by a previous
+   * call to {@link #snapshot()}.
+ * @return Return snapshot.
+ * @see {@link #snapshot()}
+ * @see {@link #clearSnapshot(NavigableSet)}
+ */
+ ConcurrentSkipListSet<KeyValue> getSnapshot() {
+ return this.snapshot;
+ }
+
+ /**
+ * The passed snapshot was successfully persisted; it can be let go.
+ * @param ss The snapshot to clean out.
+ * @throws UnexpectedException
+ * @see {@link #snapshot()}
+ */
+ void clearSnapshot(final Set<KeyValue> ss)
+ throws UnexpectedException {
+ this.lock.writeLock().lock();
+ try {
+ if (this.snapshot != ss) {
+ throw new UnexpectedException("Current snapshot is " +
+ this.snapshot + ", was passed " + ss);
+ }
+ // OK. Passed in snapshot is same as current snapshot. If not-empty,
+ // create a new snapshot and let the old one go.
+ if (!ss.isEmpty()) {
+ this.snapshot = createSet(this.comparator);
+ }
+ } finally {
+ this.lock.writeLock().unlock();
+ }
+ }
+
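The snapshot lifecycle described in the class comment, sketched from the flusher's side (same-package access assumed; the store-file write is elided):

    import java.rmi.UnexpectedException;
    import java.util.concurrent.ConcurrentSkipListSet;
    import org.apache.hadoop.hbase.KeyValue;

    class MemcacheFlushSketch {
      static void flush(Memcache memcache) throws UnexpectedException {
        memcache.snapshot();                 // move current edits to snapshot
        ConcurrentSkipListSet<KeyValue> ss = memcache.getSnapshot();
        // ... persist `ss` to a store file here ...
        memcache.clearSnapshot(ss);          // flush succeeded; let snapshot go
      }
    }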
+ /**
+ * Write an update
+ * @param kv
+ * @return approximate size of the passed key and value.
+ */
+ long add(final KeyValue kv) {
+ long size = -1;
+ this.lock.readLock().lock();
+ try {
+ boolean notpresent = this.memcache.add(kv);
+      // if false then memcache is not changed (see memcache.add(kv) docs);
+ // need to remove kv and add again to replace it
+ if (!notpresent && this.memcache.remove(kv)) {
+ this.memcache.add(kv);
+ }
+ size = heapSize(kv, notpresent);
+ } finally {
+ this.lock.readLock().unlock();
+ }
+ return size;
+ }
+
+ /*
+ * Calculate how the memcache size has changed, approximately. Be careful.
+ * If class changes, be sure to change the size calculation.
+ * Add in tax of Map.Entry.
+ * @param kv
+ * @param notpresent True if the kv was NOT present in the set.
+ * @return Size
+ */
+ long heapSize(final KeyValue kv, final boolean notpresent) {
+ return notpresent?
+ // Add overhead for value byte array and for Map.Entry -- 57 bytes
+ // on x64 according to jprofiler.
+ ESTIMATED_KV_HEAP_TAX + 57 + kv.getLength(): 0; // Guess no change in size.
+ }
+
+ /**
+   * Look back through the memcache and snapshot to find the target.
+ * @param kv
+ * @param numVersions
+   * @return List of KeyValues; empty (not null) if no results.
+ */
+ List<KeyValue> get(final KeyValue kv, final int numVersions) {
+ List<KeyValue> results = new ArrayList<KeyValue>();
+ get(kv, numVersions, results,
+ new TreeSet<KeyValue>(this.comparatorIgnoreType),
+ System.currentTimeMillis());
+ return results;
+ }
+
+ /**
+   * Look back through the memcache and snapshot to find the target.
+ * @param key
+ * @param versions
+ * @param results
+ * @param deletes Pass a Set that has a Comparator that ignores key type.
+ * @param now
+ * @return True if enough versions.
+ */
+ boolean get(final KeyValue key, final int versions,
+ List<KeyValue> results, final NavigableSet<KeyValue> deletes,
+ final long now) {
+ this.lock.readLock().lock();
+ try {
+ if (get(this.memcache, key, versions, results, deletes, now)) {
+ return true;
+ }
+ return get(this.snapshot, key, versions , results, deletes, now);
+ } finally {
+ this.lock.readLock().unlock();
+ }
+ }
+
+ /**
+ * @param kv Find the row that comes after this one. If null, we return the
+ * first.
+ * @return Next row or null if none found.
+ */
+ KeyValue getNextRow(final KeyValue kv) {
+ this.lock.readLock().lock();
+ try {
+ return getLowest(getNextRow(kv, this.memcache),
+ getNextRow(kv, this.snapshot));
+ } finally {
+ this.lock.readLock().unlock();
+ }
+ }
+
+ /*
+ * @param a
+ * @param b
+ * @return Return lowest of a or b or null if both a and b are null
+ */
+ private KeyValue getLowest(final KeyValue a, final KeyValue b) {
+ if (a == null) {
+ return b;
+ }
+ if (b == null) {
+ return a;
+ }
+ return comparator.compareRows(a, b) <= 0? a: b;
+ }
+
+ /*
+ * @param kv Find row that follows this one. If null, return first.
+ * @param set Set to look in for a row beyond <code>row</code>.
+ * @return Next row or null if none found. If one found, will be a new
+ * KeyValue -- can be destroyed by subsequent calls to this method.
+ */
+ private KeyValue getNextRow(final KeyValue kv,
+ final NavigableSet<KeyValue> set) {
+ KeyValue result = null;
+ SortedSet<KeyValue> tailset = kv == null? set: set.tailSet(kv);
+ // Iterate until we fall into the next row; i.e. move off current row
+ for (KeyValue i : tailset) {
+ if (comparator.compareRows(i, kv) <= 0)
+ continue;
+ // Note: Not suppressing deletes or expired cells. Needs to be handled
+ // by higher up functions.
+ result = i;
+ break;
+ }
+ return result;
+ }
+
+ /**
+ * Return all the available columns for the given key. The key indicates a
+ * row and timestamp, but not a column name.
+   * @param key Where to start searching. Specifies a row and timestamp.
+ * Columns are specified in following arguments.
+ * @param columns Pass null for all columns else the wanted subset.
+ * @param columnPattern Column pattern to match.
+ * @param numVersions number of versions to retrieve
+ * @param versionsCount Map of KV to Count. Uses a Comparator that doesn't
+ * look at timestamps so only Row/Column are compared.
+ * @param deletes Pass a Set that has a Comparator that ignores key type.
+ * @param results Where to stick row results found.
+ * @return True if we found enough results for passed <code>columns</code>
+ * and <code>numVersions</code>.
+ */
+ boolean getFull(final KeyValue key, NavigableSet<byte []> columns,
+ final Pattern columnPattern,
+ int numVersions, final Map<KeyValue, HRegion.Counter> versionsCount,
+ final NavigableSet<KeyValue> deletes,
+ final List<KeyValue> results, final long now) {
+ this.lock.readLock().lock();
+ try {
+ // Used to be synchronized but now with weak iteration, no longer needed.
+ if (getFull(this.memcache, key, columns, columnPattern, numVersions,
+ versionsCount, deletes, results, now)) {
+ // Has enough results.
+ return true;
+ }
+ return getFull(this.snapshot, key, columns, columnPattern, numVersions,
+ versionsCount, deletes, results, now);
+ } finally {
+ this.lock.readLock().unlock();
+ }
+ }
+
+ /*
+ * @param set
+ * @param target Where to start searching.
+ * @param columns
+ * @param versions
+ * @param versionCounter
+ * @param deletes Pass a Set that has a Comparator that ignores key type.
+ * @param keyvalues
+ * @return True if enough results found.
+ */
+ private boolean getFull(final ConcurrentSkipListSet<KeyValue> set,
+ final KeyValue target, final Set<byte []> columns,
+ final Pattern columnPattern,
+ final int versions, final Map<KeyValue, HRegion.Counter> versionCounter,
+ final NavigableSet<KeyValue> deletes, List<KeyValue> keyvalues,
+ final long now) {
+ boolean hasEnough = false;
+ if (target == null) {
+ return hasEnough;
+ }
+ NavigableSet<KeyValue> tailset = set.tailSet(target);
+ if (tailset == null || tailset.isEmpty()) {
+ return hasEnough;
+ }
+ // TODO: This loop same as in HStore.getFullFromStoreFile. Make sure they
+ // are the same.
+ for (KeyValue kv: tailset) {
+ // Make sure we have not passed out the row. If target key has a
+      // column on it, then we are looking for an explicit key+column combination. If
+ // we've passed it out, also break.
+ if (target.isEmptyColumn()? !this.comparator.matchingRows(target, kv):
+ !this.comparator.matchingRowColumn(target, kv)) {
+ break;
+ }
+ if (!Store.getFullCheck(this.comparator, target, kv, columns, columnPattern)) {
+ continue;
+ }
+ if (Store.doKeyValue(kv, versions, versionCounter, columns, deletes, now,
+ this.ttl, keyvalues, tailset)) {
+ hasEnough = true;
+ break;
+ }
+ }
+ return hasEnough;
+ }
+
+ /**
+ * @param row Row to look for.
+ * @param candidateKeys Map of candidate keys (Accumulation over lots of
+ * lookup over stores and memcaches)
+ */
+ void getRowKeyAtOrBefore(final KeyValue row,
+ final NavigableSet<KeyValue> candidateKeys) {
+ getRowKeyAtOrBefore(row, candidateKeys,
+ new TreeSet<KeyValue>(this.comparator), System.currentTimeMillis());
+ }
+
+ /**
+ * @param kv Row to look for.
+ * @param candidates Map of candidate keys (Accumulation over lots of
+ * lookup over stores and memcaches). Pass a Set with a Comparator that
+ * ignores key Type so we can do Set.remove using a delete, i.e. a KeyValue
+ * with a different Type to the candidate key.
+ * @param deletes Pass a Set that has a Comparator that ignores key type.
+ */
+ void getRowKeyAtOrBefore(final KeyValue kv,
+ final NavigableSet<KeyValue> candidates,
+ final NavigableSet<KeyValue> deletes, final long now) {
+ this.lock.readLock().lock();
+ try {
+ getRowKeyAtOrBefore(memcache, kv, candidates, deletes, now);
+ getRowKeyAtOrBefore(snapshot, kv, candidates, deletes, now);
+ } finally {
+ this.lock.readLock().unlock();
+ }
+ }
+
+ private void getRowKeyAtOrBefore(final ConcurrentSkipListSet<KeyValue> set,
+ final KeyValue kv, final NavigableSet<KeyValue> candidates,
+ final NavigableSet<KeyValue> deletes, final long now) {
+ if (set.isEmpty()) {
+ return;
+ }
+ // We want the earliest possible to start searching from. Start before
+ // the candidate key in case it turns out a delete came in later.
+ KeyValue search = candidates.isEmpty()? kv: candidates.first();
+
+ // Get all the entries that come equal or after our search key
+ SortedSet<KeyValue> tailset = set.tailSet(search);
+
+ // if there are items in the tail map, there's either a direct match to
+ // the search key, or a range of values between the first candidate key
+ // and the ultimate search key (or the end of the cache)
+ if (!tailset.isEmpty() &&
+ this.comparator.compareRows(tailset.first(), search) <= 0) {
+ // Keep looking at cells as long as they are no greater than the
+ // ultimate search key and there's still records left in the map.
+ KeyValue deleted = null;
+ KeyValue found = null;
+ for (Iterator<KeyValue> iterator = tailset.iterator();
+ iterator.hasNext() && (found == null ||
+ this.comparator.compareRows(found, kv) <= 0);) {
+ found = iterator.next();
+ if (this.comparator.compareRows(found, kv) <= 0) {
+ if (found.isDeleteType()) {
+ Store.handleDeletes(found, candidates, deletes);
+ if (deleted == null) {
+ deleted = found;
+ }
+ } else {
+ if (Store.notExpiredAndNotInDeletes(this.ttl, found, now, deletes)) {
+ candidates.add(found);
+ } else {
+ if (deleted == null) {
+ deleted = found;
+ }
+ // TODO: Check this removes the right key.
+ // Its expired. Remove it.
+ iterator.remove();
+ }
+ }
+ }
+ }
+ if (candidates.isEmpty() && deleted != null) {
+ getRowKeyBefore(set, deleted, candidates, deletes, now);
+ }
+ } else {
+ // The tail didn't contain any keys that matched our criteria, or was
+      // empty. Examine all the keys that precede our splitting point.
+ getRowKeyBefore(set, search, candidates, deletes, now);
+ }
+ }
+
+ /*
+   * Get the row key that comes before the passed <code>search</code> key.
+   * Use when we know the search key is not in the set and we need to search
+   * earlier in the cache.
+ * @param set
+ * @param search
+ * @param candidates
+ * @param deletes Pass a Set that has a Comparator that ignores key type.
+ * @param now
+ */
+ private void getRowKeyBefore(ConcurrentSkipListSet<KeyValue> set,
+ KeyValue search, NavigableSet<KeyValue> candidates,
+ final NavigableSet<KeyValue> deletes, final long now) {
+ NavigableSet<KeyValue> headSet = set.headSet(search);
+    // If the head set is empty, then there are no keys before the search
+    // key, so we're done.
+ if (headSet.isEmpty()) {
+ return;
+ }
+
+ // If there aren't any candidate keys at this point, we need to search
+    // backwards until we find at least one candidate or run out of the head set.
+ if (candidates.isEmpty()) {
+ KeyValue lastFound = null;
+ for (Iterator<KeyValue> i = headSet.descendingIterator(); i.hasNext();) {
+ KeyValue found = i.next();
+ // if the last row we found a candidate key for is different than
+        // the row of the current candidate, we can stop looking -- if it's
+ // not a delete record.
+ boolean deleted = found.isDeleteType();
+ if (lastFound != null &&
+ this.comparator.matchingRows(lastFound, found) && !deleted) {
+ break;
+ }
+ // If this isn't a delete, record it as a candidate key. Also
+ // take note of this candidate so that we'll know when
+ // we cross the row boundary into the previous row.
+ if (!deleted) {
+ if (Store.notExpiredAndNotInDeletes(this.ttl, found, now, deletes)) {
+ lastFound = found;
+ candidates.add(found);
+ } else {
+ // Its expired.
+ Store.expiredOrDeleted(set, found);
+ }
+ } else {
+ // We are encountering items in reverse. We may have just added
+ // an item to candidates that this later item deletes. Check. If we
+ // found something in candidates, remove it from the set.
+ if (Store.handleDeletes(found, candidates, deletes)) {
+ remove(set, found);
+ }
+ }
+ }
+ } else {
+ // If there are already some candidate keys, we only need to consider
+      // the very last row's worth of keys in the head set, because any
+ // smaller acceptable candidate keys would have caused us to start
+ // our search earlier in the list, and we wouldn't be searching here.
+ SortedSet<KeyValue> rowTailMap =
+ headSet.tailSet(headSet.last().cloneRow(HConstants.LATEST_TIMESTAMP));
+ Iterator<KeyValue> i = rowTailMap.iterator();
+ do {
+ KeyValue found = i.next();
+ if (found.isDeleteType()) {
+ Store.handleDeletes(found, candidates, deletes);
+ } else {
+          if (Store.notExpiredAndNotInDeletes(this.ttl, found, now, deletes)) {
+ candidates.add(found);
+ } else {
+ Store.expiredOrDeleted(set, found);
+ }
+ }
+ } while (i.hasNext());
+ }
+ }
+
+ /*
+ * Examine a single map for the desired key.
+ *
+ * TODO - This is kinda slow. We need a data structure that allows for
+ * proximity-searches, not just precise-matches.
+ *
+ * @param set
+ * @param key
+ * @param results
+ * @param versions
+ * @param keyvalues
+ * @param deletes Pass a Set that has a Comparator that ignores key type.
+ * @param now
+ * @return True if enough versions.
+ */
+ private boolean get(final ConcurrentSkipListSet<KeyValue> set,
+ final KeyValue key, final int versions,
+ final List<KeyValue> keyvalues,
+ final NavigableSet<KeyValue> deletes,
+ final long now) {
+ NavigableSet<KeyValue> tailset = set.tailSet(key);
+ if (tailset.isEmpty()) {
+ return false;
+ }
+ boolean enoughVersions = false;
+ for (KeyValue kv : tailset) {
+ if (this.comparator.matchingRowColumn(kv, key)) {
+ if (Store.doKeyValue(kv, versions, deletes, now, this.ttl, keyvalues,
+ tailset)) {
+ enoughVersions = true;
+ break;
+ }
+ } else {
+        // By L.N. HBASE-684: the set is sorted, so we can't find a match any more.
+ break;
+ }
+ }
+ return enoughVersions;
+ }
+
+ /*
+ * @param set
+ * @param kv This is a delete record. Remove anything behind this of same
+ * r/c/ts.
+ * @return True if we removed anything.
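+   * For example, if a Delete and a Put share the same row, column and
+   * timestamp, passing the Delete here removes the matching Put (and the
+   * Delete itself if it is in the set), since comparatorIgnoreType compares
+   * them as equal.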
+ */
+ private boolean remove(final NavigableSet<KeyValue> set, final KeyValue kv) {
+ SortedSet<KeyValue> s = set.tailSet(kv);
+ if (s.isEmpty()) {
+ return false;
+ }
+ boolean removed = false;
+ for (KeyValue k: s) {
+ if (this.comparatorIgnoreType.compare(k, kv) == 0) {
+ // Same r/c/ts. Remove it.
+ s.remove(k);
+ removed = true;
+ continue;
+ }
+ break;
+ }
+ return removed;
+ }
+
+ /**
+ * @return a scanner over the keys in the Memcache
+ */
+ InternalScanner getScanner(long timestamp,
+ final NavigableSet<byte []> targetCols, final byte [] firstRow)
+ throws IOException {
+ this.lock.readLock().lock();
+ try {
+ return new MemcacheScanner(timestamp, targetCols, firstRow);
+ } finally {
+ this.lock.readLock().unlock();
+ }
+ }
+
+ //////////////////////////////////////////////////////////////////////////////
+ // MemcacheScanner implements the InternalScanner.
+ // It lets the caller scan the contents of the Memcache.
+ //////////////////////////////////////////////////////////////////////////////
+
+ private class MemcacheScanner extends HAbstractScanner {
+ private KeyValue current;
+ private final NavigableSet<byte []> columns;
+ private final NavigableSet<KeyValue> deletes;
+ private final Map<KeyValue, Counter> versionCounter;
+ private final long now = System.currentTimeMillis();
+
+ MemcacheScanner(final long timestamp, final NavigableSet<byte []> columns,
+ final byte [] firstRow)
+ throws IOException {
+      // The call to super will create ColumnMatchers and determine whether
+      // this is a regex scanner or not. It also saves away the timestamp and
+      // sorts the rows.
+ super(timestamp, columns);
+ this.deletes = new TreeSet<KeyValue>(comparatorIgnoreType);
+ this.versionCounter =
+ new TreeMap<KeyValue, Counter>(comparatorIgnoreTimestamp);
+ this.current = KeyValue.createFirstOnRow(firstRow, timestamp);
+ // If we're being asked to scan explicit columns rather than all in
+ // a family or columns that match regexes, cache the sorted array of
+ // columns.
+ this.columns = isWildcardScanner()? null: columns;
+ }
+
+ @Override
+ public boolean next(final List<KeyValue> keyvalues)
+ throws IOException {
+ if (this.scannerClosed) {
+ return false;
+ }
+ while (keyvalues.isEmpty() && this.current != null) {
+ // Deletes are per row.
+ if (!deletes.isEmpty()) {
+ deletes.clear();
+ }
+ if (!versionCounter.isEmpty()) {
+ versionCounter.clear();
+ }
+ // The getFull will take care of expired and deletes inside memcache.
+ // The first getFull when row is the special empty bytes will return
+        // nothing, so we go around again. An alternative is calling getNextRow
+        // if row is null, but that looks like it would take the same amount of
+        // work, so leave it for now.
+ getFull(this.current, isWildcardScanner()? null: this.columns, null, 1,
+ versionCounter, deletes, keyvalues, this.now);
+        for (Iterator<KeyValue> it = keyvalues.iterator(); it.hasNext();) {
+          KeyValue bb = it.next();
+          if (isWildcardScanner()) {
+            // Check the results match. We only check columns, not timestamps.
+            // We presume that timestamps have been handled properly when we
+            // called getFull.
+            if (!columnMatch(bb)) {
+              // Remove via the iterator to avoid a
+              // ConcurrentModificationException on the backing list.
+              it.remove();
+            }
+          }
+        }
+ // Add any deletes found so they are available to the StoreScanner#next.
+ if (!this.deletes.isEmpty()) {
+ keyvalues.addAll(deletes);
+ }
+ this.current = getNextRow(this.current);
+ // Change current to be column-less and to have the scanners' now. We
+ // do this because first item on 'next row' may not have the scanners'
+ // now time which will cause trouble down in getFull; same reason no
+ // column.
+ if (this.current != null) this.current = this.current.cloneRow(this.now);
+ }
+ return !keyvalues.isEmpty();
+ }
+
+ public void close() {
+ if (!scannerClosed) {
+ scannerClosed = true;
+ }
+ }
+ }
+
+ /**
+   * Code to help figure out if our approximation of object heap sizes is close
+   * enough. See hbase-900. Fills memcaches then waits so the user can take a
+   * heap dump and bring up the resultant hprof in something like jprofiler,
+   * which allows you to get 'deep size' on objects.
+ * @param args
+ * @throws InterruptedException
+ * @throws IOException
+ */
+ public static void main(String [] args)
+ throws InterruptedException, IOException {
+ RuntimeMXBean runtime = ManagementFactory.getRuntimeMXBean();
+ LOG.info("vmName=" + runtime.getVmName() + ", vmVendor=" +
+ runtime.getVmVendor() + ", vmVersion=" + runtime.getVmVersion());
+ LOG.info("vmInputArguments=" + runtime.getInputArguments());
+ Memcache memcache1 = new Memcache();
+ // TODO: x32 vs x64
+ long size = 0;
+ final int count = 10000;
+ byte [] column = Bytes.toBytes("col:umn");
+ for (int i = 0; i < count; i++) {
+ // Give each its own ts
+ size += memcache1.add(new KeyValue(Bytes.toBytes(i), column, i));
+ }
+ LOG.info("memcache1 estimated size=" + size);
+ for (int i = 0; i < count; i++) {
+ size += memcache1.add(new KeyValue(Bytes.toBytes(i), column, i));
+ }
+ LOG.info("memcache1 estimated size (2nd loading of same data)=" + size);
+ // Make a variably sized memcache.
+ Memcache memcache2 = new Memcache();
+ for (int i = 0; i < count; i++) {
+ size += memcache2.add(new KeyValue(Bytes.toBytes(i), column, i,
+ new byte[i]));
+ }
+ LOG.info("memcache2 estimated size=" + size);
+ final int seconds = 30;
+ LOG.info("Waiting " + seconds + " seconds while heap dump is taken");
+ for (int i = 0; i < seconds; i++) {
+      Thread.sleep(1000);
+ }
+ LOG.info("Exiting.");
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/MemcacheFlusher.java b/src/java/org/apache/hadoop/hbase/regionserver/MemcacheFlusher.java
new file mode 100644
index 0000000..00aa48d
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/regionserver/MemcacheFlusher.java
@@ -0,0 +1,336 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.lang.management.ManagementFactory;
+import java.util.ArrayList;
+import java.util.ConcurrentModificationException;
+import java.util.HashSet;
+import java.util.SortedMap;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.locks.ReentrantLock;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.DroppedSnapshotException;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.RemoteExceptionHandler;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.util.StringUtils;
+
+/**
+ * Thread that flushes cache on request
+ *
+ * NOTE: This class extends Thread rather than Chore because its sleep can be
+ * interrupted when there is something to do, whereas a Chore's sleep time is
+ * invariant.
+ *
+ * @see FlushRequester
+ */
+class MemcacheFlusher extends Thread implements FlushRequester {
+ static final Log LOG = LogFactory.getLog(MemcacheFlusher.class);
+ private final BlockingQueue<HRegion> flushQueue =
+ new LinkedBlockingQueue<HRegion>();
+
+ private final HashSet<HRegion> regionsInQueue = new HashSet<HRegion>();
+
+ private final long threadWakeFrequency;
+ private final HRegionServer server;
+ private final ReentrantLock lock = new ReentrantLock();
+
+ protected final long globalMemcacheLimit;
+ protected final long globalMemcacheLimitLowMark;
+
+ public static final float DEFAULT_UPPER = 0.4f;
+ public static final float DEFAULT_LOWER = 0.25f;
+ public static final String UPPER_KEY =
+ "hbase.regionserver.globalMemcache.upperLimit";
+ public static final String LOWER_KEY =
+ "hbase.regionserver.globalMemcache.lowerLimit";
+ private long blockingStoreFilesNumber;
+ private long blockingWaitTime;
+
+ /**
+ * @param conf
+ * @param server
+ */
+ public MemcacheFlusher(final HBaseConfiguration conf,
+ final HRegionServer server) {
+ super();
+ this.server = server;
+ this.threadWakeFrequency =
+ conf.getLong(HConstants.THREAD_WAKE_FREQUENCY, 10 * 1000);
+ long max = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage().getMax();
+ this.globalMemcacheLimit = globalMemcacheLimit(max, DEFAULT_UPPER,
+ UPPER_KEY, conf);
+ long lower = globalMemcacheLimit(max, DEFAULT_LOWER, LOWER_KEY, conf);
+ if (lower > this.globalMemcacheLimit) {
+ lower = this.globalMemcacheLimit;
+ LOG.info("Setting globalMemcacheLimitLowMark == globalMemcacheLimit " +
+ "because supplied " + LOWER_KEY + " was > " + UPPER_KEY);
+ }
+ this.globalMemcacheLimitLowMark = lower;
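+    // For example, assuming a 1 GB max heap and the default limits of 0.4
+    // and 0.25, flushing kicks in once the regionserver-wide memcache
+    // footprint passes roughly 429 MB and continues, biggest memcaches first,
+    // until it drops back under roughly 268 MB (see reclaimMemcacheMemory()
+    // and flushSomeRegions() below).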
+ this.blockingStoreFilesNumber =
+ conf.getInt("hbase.hstore.blockingStoreFiles", -1);
+ if (this.blockingStoreFilesNumber == -1) {
+ this.blockingStoreFilesNumber = 1 +
+ conf.getInt("hbase.hstore.compactionThreshold", 3);
+ }
+ this.blockingWaitTime = conf.getInt("hbase.hstore.blockingWaitTime",
+      90000); // default of 90 seconds
+ LOG.info("globalMemcacheLimit=" +
+ StringUtils.humanReadableInt(this.globalMemcacheLimit) +
+ ", globalMemcacheLimitLowMark=" +
+ StringUtils.humanReadableInt(this.globalMemcacheLimitLowMark) +
+ ", maxHeap=" + StringUtils.humanReadableInt(max));
+ }
+
+ /**
+ * Calculate size using passed <code>key</code> for configured
+ * percentage of <code>max</code>.
+ * @param max
+ * @param defaultLimit
+ * @param key
+ * @param c
+ * @return Limit.
+ */
+ static long globalMemcacheLimit(final long max,
+ final float defaultLimit, final String key, final HBaseConfiguration c) {
+ float limit = c.getFloat(key, defaultLimit);
+ return getMemcacheLimit(max, limit, defaultLimit);
+ }
+
+ static long getMemcacheLimit(final long max, final float limit,
+ final float defaultLimit) {
+    if (limit >= 0.9f || limit < 0.1f) {
+      LOG.warn("Setting global memcache limit to default of " + defaultLimit +
+        " because supplied value outside allowed range of 0.1 -> 0.9");
+      return (long)(max * defaultLimit);
+    }
+ return (long)(max * limit);
+ }
+
+ @Override
+ public void run() {
+ while (!this.server.isStopRequested() && this.server.isInSafeMode()) {
+ try {
+ Thread.sleep(threadWakeFrequency);
+ } catch (InterruptedException ex) {
+ continue;
+ }
+ }
+ while (!server.isStopRequested()) {
+ HRegion r = null;
+ try {
+ r = flushQueue.poll(threadWakeFrequency, TimeUnit.MILLISECONDS);
+ if (r == null) {
+ continue;
+ }
+ if (!flushRegion(r, false)) {
+ break;
+ }
+ } catch (InterruptedException ex) {
+ continue;
+ } catch (ConcurrentModificationException ex) {
+ continue;
+ } catch (Exception ex) {
+ LOG.error("Cache flush failed" +
+ (r != null ? (" for region " + Bytes.toString(r.getRegionName())) : ""),
+ ex);
+ if (!server.checkFileSystem()) {
+ break;
+ }
+ }
+ }
+ regionsInQueue.clear();
+ flushQueue.clear();
+ LOG.info(getName() + " exiting");
+ }
+
+ public void request(HRegion r) {
+ synchronized (regionsInQueue) {
+ if (!regionsInQueue.contains(r)) {
+ regionsInQueue.add(r);
+ flushQueue.add(r);
+ }
+ }
+ }
+
+ /**
+ * Only interrupt once it's done with a run through the work loop.
+ */
+ void interruptIfNecessary() {
+ lock.lock();
+ try {
+ this.interrupt();
+ } finally {
+ lock.unlock();
+ }
+ }
+
+ /*
+ * Flush a region.
+ *
+ * @param region the region to be flushed
+ * @param removeFromQueue True if the region needs to be removed from the
+ * flush queue. False if called from the main flusher run loop and true if
+ * called from flushSomeRegions to relieve memory pressure from the region
+ * server. If <code>true</code>, we are in a state of emergency; we are not
+ * taking on updates regionserver-wide, not until memory is flushed. In this
+ * case, do not let a compaction run inline with blocked updates. Compactions
+   * can take a long time. By holding compactions off, there is a danger that
+   * the number of flushes will overwhelm compaction on a busy server; we'll
+   * have to see. Because compactions do not run when we are called out of
+   * flushSomeRegions, they can be reported by the historian without danger of
+   * deadlock (HBASE-670).
+ *
+   * <p>In the main run loop, the region has already been removed from the
+   * flush queue; if this method is called to relieve memory pressure, that is
+   * not necessarily the case. We want to avoid removing a region from the
+   * queue unnecessarily, because if it has already been removed, it takes a
+   * sequential scan of the queue to determine that it is not there.
+ *
+ * <p>If called from flushSomeRegions, the region may be in the queue but
+ * it may have been determined that the region had a significant amount of
+ * memory in use and needed to be flushed to relieve memory pressure. In this
+ * case, its flush may preempt the pending request in the queue, and if so,
+ * it needs to be removed from the queue to avoid flushing the region
+ * multiple times.
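+   *
+   * <p>For example, the main run loop calls flushRegion(r, false) for a
+   * region it has just polled off the flush queue, while flushSomeRegions()
+   * calls flushRegion(region, true) for the region with the biggest memcache,
+   * which may still be sitting in the queue; in that case the region is also
+   * pulled out of flushQueue so it is not flushed a second time.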
+ *
+ * @return true if the region was successfully flushed, false otherwise. If
+   * false, there will be accompanying log messages explaining why the region
+   * was not flushed.
+ */
+ private boolean flushRegion(HRegion region, boolean removeFromQueue) {
+ // Wait until it is safe to flush.
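+    // For example, with the default hbase.hstore.compactionThreshold of 3,
+    // blockingStoreFilesNumber defaults to 4, so a store holding 5 or more
+    // store files makes us request a compaction and sleep blockingWaitTime
+    // (90 seconds by default) before checking again.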
+ boolean toomany;
+ do {
+ toomany = false;
+ for (Store hstore: region.stores.values()) {
+ int files = hstore.getStorefilesCount();
+ if (files > this.blockingStoreFilesNumber) {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Too many store files in store " + hstore + ": " +
+ files + ", waiting");
+ }
+ toomany = true;
+ server.compactSplitThread.compactionRequested(region, getName());
+ try {
+ Thread.sleep(blockingWaitTime);
+ } catch (InterruptedException e) {
+ // ignore
+ }
+ }
+ }
+ } while (toomany);
+ synchronized (regionsInQueue) {
+      // See the comment on removeFromQueue above for why we do not always
+      // take the region out of the flush queue. If removeFromQueue is true,
+      // remove it from the queue too if it is there. This didn't use to be a
+      // constraint, but now that HBASE-512 is in play, we need to try and
+      // limit double-flushing of regions.
+ if (regionsInQueue.remove(region) && removeFromQueue) {
+ flushQueue.remove(region);
+ }
+ lock.lock();
+ }
+ try {
+ // See comment above for removeFromQueue on why we do not
+      // compact if removeFromQueue is true. Note that region.flushcache()
+      // only returns true if a flush is done and a compaction is needed.
+ if (region.flushcache() && !removeFromQueue) {
+ server.compactSplitThread.compactionRequested(region, getName());
+ }
+ } catch (DroppedSnapshotException ex) {
+ // Cache flush can fail in a few places. If it fails in a critical
+ // section, we get a DroppedSnapshotException and a replay of hlog
+ // is required. Currently the only way to do this is a restart of
+ // the server. Abort because hdfs is probably bad (HBASE-644 is a case
+ // where hdfs was bad but passed the hdfs check).
+ LOG.fatal("Replay of hlog required. Forcing server shutdown", ex);
+ server.abort();
+ return false;
+ } catch (IOException ex) {
+ LOG.error("Cache flush failed"
+ + (region != null ? (" for region " + Bytes.toString(region.getRegionName())) : ""),
+ RemoteExceptionHandler.checkIOException(ex));
+ if (!server.checkFileSystem()) {
+ return false;
+ }
+ } finally {
+ lock.unlock();
+ }
+
+ return true;
+ }
+
+ /**
+ * Check if the regionserver's memcache memory usage is greater than the
+ * limit. If so, flush regions with the biggest memcaches until we're down
+ * to the lower limit. This method blocks callers until we're down to a safe
+ * amount of memcache consumption.
+ */
+ public synchronized void reclaimMemcacheMemory() {
+ if (server.getGlobalMemcacheSize() >= globalMemcacheLimit) {
+ flushSomeRegions();
+ }
+ }
+
+ /*
+ * Emergency! Need to flush memory.
+ */
+ private synchronized void flushSomeRegions() {
+ // keep flushing until we hit the low water mark
+ long globalMemcacheSize = -1;
+ ArrayList<HRegion> regionsToCompact = new ArrayList<HRegion>();
+ for (SortedMap<Long, HRegion> m =
+ this.server.getCopyOfOnlineRegionsSortedBySize();
+ (globalMemcacheSize = server.getGlobalMemcacheSize()) >=
+ this.globalMemcacheLimitLowMark;) {
+ // flush the region with the biggest memcache
+ if (m.size() <= 0) {
+ LOG.info("No online regions to flush though we've been asked flush " +
+ "some; globalMemcacheSize=" +
+ StringUtils.humanReadableInt(globalMemcacheSize) +
+ ", globalMemcacheLimitLowMark=" +
+ StringUtils.humanReadableInt(this.globalMemcacheLimitLowMark));
+ break;
+ }
+ HRegion biggestMemcacheRegion = m.remove(m.firstKey());
+ LOG.info("Forced flushing of " + biggestMemcacheRegion.toString() +
+ " because global memcache limit of " +
+ StringUtils.humanReadableInt(this.globalMemcacheLimit) +
+ " exceeded; currently " +
+ StringUtils.humanReadableInt(globalMemcacheSize) + " and flushing till " +
+ StringUtils.humanReadableInt(this.globalMemcacheLimitLowMark));
+ if (!flushRegion(biggestMemcacheRegion, true)) {
+ LOG.warn("Flush failed");
+ break;
+ }
+ regionsToCompact.add(biggestMemcacheRegion);
+ }
+ for (HRegion region : regionsToCompact) {
+ server.compactSplitThread.compactionRequested(region, getName());
+ }
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/NoSuchColumnFamilyException.java b/src/java/org/apache/hadoop/hbase/regionserver/NoSuchColumnFamilyException.java
new file mode 100644
index 0000000..f152371
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/regionserver/NoSuchColumnFamilyException.java
@@ -0,0 +1,41 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.hadoop.hbase.DoNotRetryIOException;
+
+/**
+ * Thrown if request for nonexistent column family.
+ */
+public class NoSuchColumnFamilyException extends DoNotRetryIOException {
+ private static final long serialVersionUID = -6569952730832331274L;
+
+ /** default constructor */
+ public NoSuchColumnFamilyException() {
+ super();
+ }
+
+ /**
+ * @param message
+ */
+ public NoSuchColumnFamilyException(String message) {
+ super(message);
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/RegionServerRunningException.java b/src/java/org/apache/hadoop/hbase/regionserver/RegionServerRunningException.java
new file mode 100644
index 0000000..8bdfedb
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/regionserver/RegionServerRunningException.java
@@ -0,0 +1,44 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+
+/**
+ * Thrown if the region server log directory exists (which indicates another
+ * region server is running at the same address)
+ */
+public class RegionServerRunningException extends IOException {
+ private static final long serialVersionUID = 1L << 31 - 1L;
+
+ /** Default Constructor */
+ public RegionServerRunningException() {
+ super();
+ }
+
+ /**
+ * Constructs the exception and supplies a string as the message
+ * @param s - message
+ */
+ public RegionServerRunningException(String s) {
+ super(s);
+ }
+
+}
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/Store.java b/src/java/org/apache/hadoop/hbase/regionserver/Store.java
new file mode 100644
index 0000000..18cf44e
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/regionserver/Store.java
@@ -0,0 +1,1800 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.EOFException;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.NavigableMap;
+import java.util.NavigableSet;
+import java.util.Set;
+import java.util.SortedSet;
+import java.util.TreeSet;
+import java.util.concurrent.ConcurrentSkipListMap;
+import java.util.concurrent.ConcurrentSkipListSet;
+import java.util.concurrent.CopyOnWriteArraySet;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+import java.util.regex.Pattern;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.RemoteExceptionHandler;
+import org.apache.hadoop.hbase.filter.RowFilterInterface;
+import org.apache.hadoop.hbase.io.SequenceFile;
+import org.apache.hadoop.hbase.io.hfile.Compression;
+import org.apache.hadoop.hbase.io.hfile.HFile;
+import org.apache.hadoop.hbase.io.hfile.HFileScanner;
+import org.apache.hadoop.hbase.regionserver.HRegion.Counter;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.util.Progressable;
+import org.apache.hadoop.util.StringUtils;
+
+/**
+ * A Store holds a column family in a Region. It's a memcache and a set of zero
+ * or more StoreFiles, which stretch backwards over time.
+ *
+ * <p>There's no reason to consider append-logging at this level; all logging
+ * and locking is handled at the HRegion level. Store just provides
+ * services to manage sets of StoreFiles. One of the most important of those
+ * services is compaction, where files are aggregated once they pass
+ * a configurable threshold.
+ *
+ * <p>The only thing having to do with logs that Store needs to deal with is
+ * the reconstructionLog. This is a segment of an HRegion's log that might
+ * NOT be present upon startup. If the param is NULL, there's nothing to do.
+ * If the param is non-NULL, we need to process the log to reconstruct
+ * a TreeMap that might not have been written to disk before the process
+ * died.
+ *
+ * <p>It's assumed that after this constructor returns, the reconstructionLog
+ * file will be deleted (by whoever has instantiated the Store).
+ *
+ * <p>Locking and transactions are handled at a higher level. This API should
+ * not be called directly but by an HRegion manager.
+ */
+public class Store implements HConstants {
+ static final Log LOG = LogFactory.getLog(Store.class);
+  /**
+   * The in-memory cache of edits for this store, flushed periodically into
+   * new StoreFiles.
+   */
+ protected final Memcache memcache;
+ // This stores directory in the filesystem.
+ private final Path homedir;
+ private final HRegionInfo regioninfo;
+ private final HColumnDescriptor family;
+ final FileSystem fs;
+ private final HBaseConfiguration conf;
+ // ttl in milliseconds.
+ protected long ttl;
+ private long majorCompactionTime;
+ private int maxFilesToCompact;
+ private final long desiredMaxFileSize;
+ private volatile long storeSize = 0L;
+ private final Object flushLock = new Object();
+ final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
+ final byte [] storeName;
+ private final String storeNameStr;
+
+ /*
+   * Sorted Map of readers keyed by maximum edit sequence id (most recent
+   * should be last in the list). ConcurrentSkipListMap iterators are weakly
+   * consistent, so there is no need to lock the map when iterating; the
+   * iterator's view is that of the moment it was taken out.
+ */
+ private final NavigableMap<Long, StoreFile> storefiles =
+ new ConcurrentSkipListMap<Long, StoreFile>();
+
+ // All access must be synchronized.
+ private final CopyOnWriteArraySet<ChangedReadersObserver> changedReaderObservers =
+ new CopyOnWriteArraySet<ChangedReadersObserver>();
+
+  // The most-recent log sequence id; we can ignore all log messages up to and
+  // including that id (because they're already reflected in the store files).
+ private volatile long maxSeqId = -1;
+
+ private final Path compactionDir;
+ private final Object compactLock = new Object();
+ private final int compactionThreshold;
+ private final int blocksize;
+ private final boolean bloomfilter;
+ private final Compression.Algorithm compression;
+
+ // Comparing KeyValues
+ final KeyValue.KVComparator comparator;
+ final KeyValue.KVComparator comparatorIgnoringType;
+
+ /**
+ * Constructor
+ * @param basedir qualified path under which the region directory lives;
+ * generally the table subdirectory
+ * @param info HRegionInfo for this region
+ * @param family HColumnDescriptor for this column
+ * @param fs file system object
+ * @param reconstructionLog existing log file to apply if any
+ * @param conf configuration object
+ * @param reporter Call on a period so hosting server can report we're
+ * making progress to master -- otherwise master might think region deploy
+ * failed. Can be null.
+ * @throws IOException
+ */
+ protected Store(Path basedir, HRegionInfo info, HColumnDescriptor family,
+ FileSystem fs, Path reconstructionLog, HBaseConfiguration conf,
+ final Progressable reporter)
+ throws IOException {
+ this.homedir = getStoreHomedir(basedir, info.getEncodedName(),
+ family.getName());
+ this.regioninfo = info;
+ this.family = family;
+ this.fs = fs;
+ this.conf = conf;
+ this.bloomfilter = family.isBloomfilter();
+ this.blocksize = family.getBlocksize();
+ this.compression = family.getCompression();
+ this.comparator = info.getComparator();
+ this.comparatorIgnoringType = this.comparator.getComparatorIgnoringType();
+ // getTimeToLive returns ttl in seconds. Convert to milliseconds.
+ this.ttl = family.getTimeToLive();
+ if (ttl != HConstants.FOREVER) {
+ this.ttl *= 1000;
+ }
+ this.memcache = new Memcache(this.ttl, this.comparator);
+ this.compactionDir = HRegion.getCompactionDir(basedir);
+ this.storeName = this.family.getName();
+ this.storeNameStr = Bytes.toString(this.storeName);
+
+    // By default, we consider a compaction once a Store has
+    // "hbase.hstore.compactionThreshold" (default 3) store files.
+ this.compactionThreshold =
+ conf.getInt("hbase.hstore.compactionThreshold", 3);
+
+ // By default we split region if a file > DEFAULT_MAX_FILE_SIZE.
+ long maxFileSize = info.getTableDesc().getMaxFileSize();
+ if (maxFileSize == HConstants.DEFAULT_MAX_FILE_SIZE) {
+ maxFileSize = conf.getLong("hbase.hregion.max.filesize",
+ HConstants.DEFAULT_MAX_FILE_SIZE);
+ }
+ this.desiredMaxFileSize = maxFileSize;
+
+ this.majorCompactionTime =
+ conf.getLong(HConstants.MAJOR_COMPACTION_PERIOD, 86400000);
+ if (family.getValue(HConstants.MAJOR_COMPACTION_PERIOD) != null) {
+ String strCompactionTime =
+ family.getValue(HConstants.MAJOR_COMPACTION_PERIOD);
+      this.majorCompactionTime = Long.parseLong(strCompactionTime);
+ }
+
+ this.maxFilesToCompact = conf.getInt("hbase.hstore.compaction.max", 10);
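+    // For example, with the defaults read above, this store asks for a
+    // compaction once it holds at least 3 store files, a single compaction
+    // rewrites at most 10 files, a major compaction is considered once the
+    // oldest file is over 86400000 ms (24 hours) old, and desiredMaxFileSize
+    // (hbase.hregion.max.filesize unless overridden on the table) is the
+    // size past which a split is considered.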
+
+    // loadStoreFiles calculates this.maxSeqId as a side-effect.
+ this.storefiles.putAll(loadStoreFiles());
+
+ // Do reconstruction log.
+ runReconstructionLog(reconstructionLog, this.maxSeqId, reporter);
+ }
+
+ HColumnDescriptor getFamily() {
+ return this.family;
+ }
+
+ long getMaxSequenceId() {
+ return this.maxSeqId;
+ }
+
+ /**
+ * @param tabledir
+ * @param encodedName Encoded region name.
+ * @param family
+ * @return Path to family/Store home directory.
+ */
+ public static Path getStoreHomedir(final Path tabledir,
+ final int encodedName, final byte [] family) {
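+    // For example, a (hypothetical) tabledir of /hbase/TestTable, encoded
+    // region name 1028785192 and family "info" map to
+    // /hbase/TestTable/1028785192/info.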
+ return new Path(tabledir, new Path(Integer.toString(encodedName),
+ new Path(Bytes.toString(family))));
+ }
+
+ /*
+ * Run reconstruction log
+ * @param reconstructionLog
+ * @param msid
+ * @param reporter
+ * @throws IOException
+ */
+ private void runReconstructionLog(final Path reconstructionLog,
+ final long msid, final Progressable reporter)
+ throws IOException {
+ try {
+ doReconstructionLog(reconstructionLog, msid, reporter);
+ } catch (EOFException e) {
+ // Presume we got here because of lack of HADOOP-1700; for now keep going
+ // but this is probably not what we want long term. If we got here there
+ // has been data-loss
+ LOG.warn("Exception processing reconstruction log " + reconstructionLog +
+ " opening " + Bytes.toString(this.storeName) +
+ " -- continuing. Probably lack-of-HADOOP-1700 causing DATA LOSS!", e);
+ } catch (IOException e) {
+ // Presume we got here because of some HDFS issue. Don't just keep going.
+      // Fail to open the HStore. This probably means we'll fail over and over
+      // again until human intervention, but the alternative has us skipping
+      // logs and losing edits: HBASE-642.
+ LOG.warn("Exception processing reconstruction log " + reconstructionLog +
+ " opening " + Bytes.toString(this.storeName), e);
+ throw e;
+ }
+ }
+
+ /*
+ * Read the reconstructionLog to see whether we need to build a brand-new
+ * file out of non-flushed log entries.
+ *
+ * We can ignore any log message that has a sequence ID that's equal to or
+   * lower than maxSeqID. (Because we know such log messages are already
+   * reflected in the store files.)
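+   *
+   * For example, if maxSeqID is 100 and the reconstruction log holds edits
+   * with sequence ids 90 through 120, ids 90-100 are counted as skipped and
+   * only the edits 101-120 that belong to this region and family are added
+   * to the reconstructed cache, which is then flushed as a store file tagged
+   * with sequence id 121 (maxSeqIdInLog + 1).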
+ */
+ private void doReconstructionLog(final Path reconstructionLog,
+ final long maxSeqID, final Progressable reporter)
+ throws UnsupportedEncodingException, IOException {
+ if (reconstructionLog == null || !this.fs.exists(reconstructionLog)) {
+ // Nothing to do.
+ return;
+ }
+    // Check it's not empty.
+ FileStatus [] stats = this.fs.listStatus(reconstructionLog);
+ if (stats == null || stats.length == 0) {
+ LOG.warn("Passed reconstruction log " + reconstructionLog +
+ " is zero-length");
+ return;
+ }
+ // TODO: This could grow large and blow heap out. Need to get it into
+ // general memory usage accounting.
+ long maxSeqIdInLog = -1;
+ ConcurrentSkipListSet<KeyValue> reconstructedCache =
+ Memcache.createSet(this.comparator);
+ SequenceFile.Reader logReader = new SequenceFile.Reader(this.fs,
+ reconstructionLog, this.conf);
+ try {
+ HLogKey key = new HLogKey();
+ KeyValue val = new KeyValue();
+ long skippedEdits = 0;
+ long editsCount = 0;
+ // How many edits to apply before we send a progress report.
+ int reportInterval =
+ this.conf.getInt("hbase.hstore.report.interval.edits", 2000);
+ while (logReader.next(key, val)) {
+ maxSeqIdInLog = Math.max(maxSeqIdInLog, key.getLogSeqNum());
+ if (key.getLogSeqNum() <= maxSeqID) {
+ skippedEdits++;
+ continue;
+ }
+ // Check this edit is for me. Also, guard against writing the special
+ // METACOLUMN info such as HBASE::CACHEFLUSH entries
+ if (/* Commented out for now -- St.Ack val.isTransactionEntry() ||*/
+ val.matchingColumnNoDelimiter(HLog.METACOLUMN,
+ HLog.METACOLUMN.length - 1) ||
+ !Bytes.equals(key.getRegionName(), regioninfo.getRegionName()) ||
+ !val.matchingFamily(family.getName())) {
+ continue;
+ }
+ reconstructedCache.add(val);
+ editsCount++;
+ // Every 2k edits, tell the reporter we're making progress.
+        // Have seen 60k edits taking 3 minutes to complete.
+ if (reporter != null && (editsCount % reportInterval) == 0) {
+ reporter.progress();
+ }
+ }
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Applied " + editsCount + ", skipped " + skippedEdits +
+ " because sequence id <= " + maxSeqID);
+ }
+ } finally {
+ logReader.close();
+ }
+
+ if (reconstructedCache.size() > 0) {
+ // We create a "virtual flush" at maxSeqIdInLog+1.
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("flushing reconstructionCache");
+ }
+ internalFlushCache(reconstructedCache, maxSeqIdInLog + 1);
+ }
+ }
+
+ /*
+ * Creates a series of StoreFile loaded from the given directory.
+ * @throws IOException
+ */
+ private Map<Long, StoreFile> loadStoreFiles()
+ throws IOException {
+ Map<Long, StoreFile> results = new HashMap<Long, StoreFile>();
+ FileStatus files[] = this.fs.listStatus(this.homedir);
+ for (int i = 0; files != null && i < files.length; i++) {
+ // Skip directories.
+ if (files[i].isDir()) {
+ continue;
+ }
+ Path p = files[i].getPath();
+ // Check for empty file. Should never be the case but can happen
+ // after data loss in hdfs for whatever reason (upgrade, etc.): HBASE-646
+ if (this.fs.getFileStatus(p).getLen() <= 0) {
+ LOG.warn("Skipping " + p + " because its empty. HBASE-646 DATA LOSS?");
+ continue;
+ }
+ StoreFile curfile = new StoreFile(fs, p);
+ long storeSeqId = curfile.getMaxSequenceId();
+ if (storeSeqId > this.maxSeqId) {
+ this.maxSeqId = storeSeqId;
+ }
+ long length = curfile.getReader().length();
+ this.storeSize += length;
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("loaded " + FSUtils.getPath(p) + ", isReference=" +
+ curfile.isReference() + ", sequence id=" + storeSeqId +
+ ", length=" + length + ", majorCompaction=" +
+ curfile.isMajorCompaction());
+ }
+ results.put(Long.valueOf(storeSeqId), curfile);
+ }
+ return results;
+ }
+
+ /**
+ * Adds a value to the memcache
+ *
+ * @param kv
+ * @return memcache size delta
+ */
+ protected long add(final KeyValue kv) {
+ lock.readLock().lock();
+ try {
+ return this.memcache.add(kv);
+ } finally {
+ lock.readLock().unlock();
+ }
+ }
+
+ /**
+ * @return All store files.
+ */
+ NavigableMap<Long, StoreFile> getStorefiles() {
+ return this.storefiles;
+ }
+
+ /**
+ * Close all the readers
+ *
+ * We don't need to worry about subsequent requests because the HRegion holds
+ * a write lock that will prevent any more reads or writes.
+ *
+ * @throws IOException
+ */
+ List<StoreFile> close() throws IOException {
+ this.lock.writeLock().lock();
+ try {
+ ArrayList<StoreFile> result =
+ new ArrayList<StoreFile>(storefiles.values());
+ // Clear so metrics doesn't find them.
+ this.storefiles.clear();
+ for (StoreFile f: result) {
+ f.close();
+ }
+ LOG.debug("closed " + this.storeNameStr);
+ return result;
+ } finally {
+ this.lock.writeLock().unlock();
+ }
+ }
+
+ /**
+   * Snapshot this store's memcache. Call before running
+ * {@link #flushCache(long)} so it has some work to do.
+ */
+ void snapshot() {
+ this.memcache.snapshot();
+ }
+
+ /**
+ * Write out current snapshot. Presumes {@link #snapshot()} has been called
+ * previously.
+ * @param logCacheFlushId flush sequence number
+ * @return true if a compaction is needed
+ * @throws IOException
+ */
+ boolean flushCache(final long logCacheFlushId) throws IOException {
+ // Get the snapshot to flush. Presumes that a call to
+ // this.memcache.snapshot() has happened earlier up in the chain.
+ ConcurrentSkipListSet<KeyValue> cache = this.memcache.getSnapshot();
+ // If an exception happens flushing, we let it out without clearing
+ // the memcache snapshot. The old snapshot will be returned when we say
+ // 'snapshot', the next time flush comes around.
+ StoreFile sf = internalFlushCache(cache, logCacheFlushId);
+ if (sf == null) {
+ return false;
+ }
+ // Add new file to store files. Clear snapshot too while we have the
+ // Store write lock.
+ int size = updateStorefiles(logCacheFlushId, sf, cache);
+ return size >= this.compactionThreshold;
+ }
+
+ /*
+ * @param cache
+ * @param logCacheFlushId
+ * @return StoreFile created.
+ * @throws IOException
+ */
+ private StoreFile internalFlushCache(final ConcurrentSkipListSet<KeyValue> cache,
+ final long logCacheFlushId)
+ throws IOException {
+ HFile.Writer writer = null;
+ long flushed = 0;
+ // Don't flush if there are no entries.
+ if (cache.size() == 0) {
+ return null;
+ }
+ long now = System.currentTimeMillis();
+ // TODO: We can fail in the below block before we complete adding this
+ // flush to list of store files. Add cleanup of anything put on filesystem
+ // if we fail.
+ synchronized (flushLock) {
+ // A. Write the map out to the disk
+ writer = getWriter();
+ int entries = 0;
+ try {
+ for (KeyValue kv: cache) {
+ if (!isExpired(kv, ttl, now)) {
+ writer.append(kv);
+ entries++;
+ flushed += this.memcache.heapSize(kv, true);
+ }
+ }
+ // B. Write out the log sequence number that corresponds to this output
+        // store file. The file is current up to and including logCacheFlushId.
+ StoreFile.appendMetadata(writer, logCacheFlushId);
+ } finally {
+ writer.close();
+ }
+ }
+ StoreFile sf = new StoreFile(this.fs, writer.getPath());
+ this.storeSize += sf.getReader().length();
+ if(LOG.isDebugEnabled()) {
+ LOG.debug("Added " + sf + ", entries=" + sf.getReader().getEntries() +
+ ", sequenceid=" + logCacheFlushId +
+ ", memsize=" + StringUtils.humanReadableInt(flushed) +
+ ", filesize=" + StringUtils.humanReadableInt(sf.getReader().length()) +
+ " to " + this.regioninfo.getRegionNameAsString());
+ }
+ return sf;
+ }
+
+ /**
+ * @return Writer for this store.
+ * @throws IOException
+ */
+ HFile.Writer getWriter() throws IOException {
+ return getWriter(this.homedir);
+ }
+
+ /*
+ * @return Writer for this store.
+ * @param basedir Directory to put writer in.
+ * @throws IOException
+ */
+ private HFile.Writer getWriter(final Path basedir) throws IOException {
+ return StoreFile.getWriter(this.fs, basedir, this.blocksize,
+ this.compression, this.comparator.getRawComparator(), this.bloomfilter);
+ }
+
+ /*
+ * Change storefiles adding into place the Reader produced by this new flush.
+ * @param logCacheFlushId
+ * @param sf
+   * @param cache The snapshot that was used to make the passed file <code>sf</code>.
+ * @throws IOException
+ * @return Count of store files.
+ */
+ private int updateStorefiles(final long logCacheFlushId,
+ final StoreFile sf, final NavigableSet<KeyValue> cache)
+ throws IOException {
+ int count = 0;
+ this.lock.writeLock().lock();
+ try {
+ this.storefiles.put(Long.valueOf(logCacheFlushId), sf);
+ count = this.storefiles.size();
+ // Tell listeners of the change in readers.
+ notifyChangedReadersObservers();
+ this.memcache.clearSnapshot(cache);
+ return count;
+ } finally {
+ this.lock.writeLock().unlock();
+ }
+ }
+
+ /*
+ * Notify all observers that set of Readers has changed.
+ * @throws IOException
+ */
+ private void notifyChangedReadersObservers() throws IOException {
+ for (ChangedReadersObserver o: this.changedReaderObservers) {
+ o.updateReaders();
+ }
+ }
+
+ /*
+ * @param o Observer who wants to know about changes in set of Readers
+ */
+ void addChangedReaderObserver(ChangedReadersObserver o) {
+ this.changedReaderObservers.add(o);
+ }
+
+ /*
+ * @param o Observer no longer interested in changes in set of Readers.
+ */
+ void deleteChangedReaderObserver(ChangedReadersObserver o) {
+ if (!this.changedReaderObservers.remove(o)) {
+ LOG.warn("Not in set" + o);
+ }
+ }
+
+ //////////////////////////////////////////////////////////////////////////////
+ // Compaction
+ //////////////////////////////////////////////////////////////////////////////
+
+ /**
+ * Compact the StoreFiles. This method may take some time, so the calling
+ * thread must be able to block for long periods.
+ *
+   * <p>During this time, the Store can work as usual, getting values from
+   * store files and writing new store files from the Memcache.
+   *
+   * Existing store files are not destroyed until the new compacted file is
+   * completely written out to disk.
+ *
+ * The compactLock prevents multiple simultaneous compactions.
+ * The structureLock prevents us from interfering with other write operations.
+ *
+ * We don't want to hold the structureLock for the whole time, as a compact()
+ * can be lengthy and we want to allow cache-flushes during this period.
+ *
+ * @param mc True to force a major compaction regardless of
+ * thresholds
+ * @return row to split around if a split is needed, null otherwise
+ * @throws IOException
+ */
+ StoreSize compact(final boolean mc) throws IOException {
+ boolean forceSplit = this.regioninfo.shouldSplit(false);
+ boolean majorcompaction = mc;
+ synchronized (compactLock) {
+ long maxId = -1;
+ // filesToCompact are sorted oldest to newest.
+ List<StoreFile> filesToCompact = null;
+ filesToCompact = new ArrayList<StoreFile>(this.storefiles.values());
+ if (filesToCompact.size() <= 0) {
+ LOG.debug(this.storeNameStr + ": no store files to compact");
+ return null;
+ }
+      // The max-sequenceID in any of the to-be-compacted files is the
+ // last key of storefiles.
+ maxId = this.storefiles.lastKey().longValue();
+ // Check to see if we need to do a major compaction on this region.
+      // If so, change majorcompaction to true to skip the incremental
+      // compacting below. Only check if majorcompaction is not already true.
+ if (!majorcompaction) {
+ majorcompaction = isMajorCompaction(filesToCompact);
+ }
+ boolean references = hasReferences(filesToCompact);
+ if (!majorcompaction && !references &&
+ (forceSplit || (filesToCompact.size() < compactionThreshold))) {
+ return checkSplit(forceSplit);
+ }
+ if (!fs.exists(this.compactionDir) && !fs.mkdirs(this.compactionDir)) {
+ LOG.warn("Mkdir on " + this.compactionDir.toString() + " failed");
+ return checkSplit(forceSplit);
+ }
+
+ // HBASE-745, preparing all store file sizes for incremental compacting
+ // selection.
+ int countOfFiles = filesToCompact.size();
+ long totalSize = 0;
+ long[] fileSizes = new long[countOfFiles];
+ long skipped = 0;
+ int point = 0;
+ for (int i = 0; i < countOfFiles; i++) {
+ StoreFile file = filesToCompact.get(i);
+ Path path = file.getPath();
+ if (path == null) {
+ LOG.warn("Path is null for " + file);
+ return null;
+ }
+ long len = file.getReader().length();
+ fileSizes[i] = len;
+ totalSize += len;
+ }
+ if (!majorcompaction && !references) {
+ // Here we select files for incremental compaction.
+        // The rule is: if the largest (oldest) one is more than twice the
+        // size of the second, skip the largest and continue to the next...,
+        // until we meet the maxFilesToCompact limit.
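+        // For example, assuming fileSizes of [500, 200, 90, 50, 20] MB
+        // (oldest first) and the default maxFilesToCompact of 10: 500 >= 2*200
+        // so it is skipped, 200 >= 2*90 so it is skipped too, but 90 < 2*50,
+        // so the loop stops and only the three newest files are compacted.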
+ for (point = 0; point < countOfFiles - 1; point++) {
+ if ((fileSizes[point] < fileSizes[point + 1] * 2) &&
+ (countOfFiles - point) <= maxFilesToCompact) {
+ break;
+ }
+ skipped += fileSizes[point];
+ }
+ filesToCompact = new ArrayList<StoreFile>(filesToCompact.subList(point,
+ countOfFiles));
+ if (filesToCompact.size() <= 1) {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Skipped compaction of 1 file; compaction size of " +
+ this.storeNameStr + ": " +
+ StringUtils.humanReadableInt(totalSize) + "; Skipped " + point +
+ " files, size: " + skipped);
+ }
+ return checkSplit(forceSplit);
+ }
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Compaction size of " + this.storeNameStr + ": " +
+ StringUtils.humanReadableInt(totalSize) + "; Skipped " + point +
+ " file(s), size: " + skipped);
+ }
+ }
+
+ // Step through them, writing to the brand-new file
+ HFile.Writer writer = getWriter(this.compactionDir);
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Started compaction of " + filesToCompact.size() + " file(s)" +
+ (references? ", hasReferences=true,": " ") + " into " +
+ FSUtils.getPath(writer.getPath()));
+ }
+ try {
+ compact(writer, filesToCompact, majorcompaction);
+ } finally {
+        // Now, write out the compaction metadata (max sequence id and major
+        // compaction flag) for the brand-new store file.
+ StoreFile.appendMetadata(writer, maxId, majorcompaction);
+ writer.close();
+ }
+
+ // Move the compaction into place.
+ completeCompaction(filesToCompact, writer);
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Completed" + (majorcompaction? " major ": " ") +
+ "compaction of " + this.storeNameStr +
+ "; store size is " + StringUtils.humanReadableInt(storeSize));
+ }
+ }
+ return checkSplit(forceSplit);
+ }
+
+ /*
+ * @param files
+ * @return True if any of the files in <code>files</code> are References.
+ */
+ private boolean hasReferences(Collection<StoreFile> files) {
+ if (files != null && files.size() > 0) {
+ for (StoreFile hsf: files) {
+ if (hsf.isReference()) {
+ return true;
+ }
+ }
+ }
+ return false;
+ }
+
+ /*
+ * Gets lowest timestamp from files in a dir
+ *
+ * @param fs
+ * @param dir
+ * @throws IOException
+ */
+ private static long getLowestTimestamp(FileSystem fs, Path dir)
+ throws IOException {
+ FileStatus[] stats = fs.listStatus(dir);
+ if (stats == null || stats.length == 0) {
+      return 0L;
+ }
+ long lowTimestamp = Long.MAX_VALUE;
+ for (int i = 0; i < stats.length; i++) {
+ long timestamp = stats[i].getModificationTime();
+ if (timestamp < lowTimestamp){
+ lowTimestamp = timestamp;
+ }
+ }
+ return lowTimestamp;
+ }
+
+ /*
+ * @return True if we should run a major compaction.
+ */
+ boolean isMajorCompaction() throws IOException {
+ List<StoreFile> filesToCompact = null;
+ // filesToCompact are sorted oldest to newest.
+ filesToCompact = new ArrayList<StoreFile>(this.storefiles.values());
+ return isMajorCompaction(filesToCompact);
+ }
+
+ /*
+ * @param filesToCompact Files to compact. Can be null.
+ * @return True if we should run a major compaction.
+ */
+ private boolean isMajorCompaction(final List<StoreFile> filesToCompact)
+ throws IOException {
+ boolean result = false;
+ if (filesToCompact == null || filesToCompact.size() <= 0) {
+ return result;
+ }
+ long lowTimestamp = getLowestTimestamp(fs,
+ filesToCompact.get(0).getPath().getParent());
+ long now = System.currentTimeMillis();
+    if (lowTimestamp > 0L && lowTimestamp < (now - this.majorCompactionTime)) {
+ // Major compaction time has elapsed.
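+      // For example, with the default majorCompactionTime of 86400000 ms
+      // (24 hours), a store whose oldest file was written 30 hours ago
+      // qualifies, unless (checked below) it holds a single file that is
+      // itself the product of a major compaction and is still within ttl.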
+ long elapsedTime = now - lowTimestamp;
+ if (filesToCompact.size() == 1 &&
+ filesToCompact.get(0).isMajorCompaction() &&
+ (this.ttl == HConstants.FOREVER || elapsedTime < this.ttl)) {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Skipping major compaction of " + this.storeNameStr +
+ " because one (major) compacted file only and elapsedTime " +
+ elapsedTime + "ms is < ttl=" + this.ttl);
+ }
+ } else {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Major compaction triggered on store " + this.storeNameStr +
+ "; time since last major compaction " + (now - lowTimestamp) + "ms");
+ }
+ result = true;
+ }
+ }
+ return result;
+ }
+
+ /*
+ * @param r StoreFile list to reverse
+ * @return A new array of content of <code>readers</code>, reversed.
+ */
+ private StoreFile [] reverse(final List<StoreFile> r) {
+ List<StoreFile> copy = new ArrayList<StoreFile>(r);
+ Collections.reverse(copy);
+ return copy.toArray(new StoreFile[0]);
+ }
+
+ /*
+   * @param rdrs Array of scanners over the StoreFiles
+ * @param keys Current keys
+ * @param done Which readers are done
+ * @return The lowest current key in passed <code>rdrs</code>
+ */
+ private int getLowestKey(final HFileScanner [] rdrs, final KeyValue [] keys,
+ final boolean [] done) {
+ int lowestKey = -1;
+ for (int i = 0; i < rdrs.length; i++) {
+ if (done[i]) {
+ continue;
+ }
+ if (lowestKey < 0) {
+ lowestKey = i;
+ } else {
+ if (this.comparator.compare(keys[i], keys[lowestKey]) < 0) {
+ lowestKey = i;
+ }
+ }
+ }
+ return lowestKey;
+ }
+
+ /*
+ * Compact a list of StoreFiles.
+ *
+ * We work by iterating through the readers in parallel looking at newest
+ * store file first. We always increment the lowest-ranked one. Updates to a
+ * single row/column will appear ranked by timestamp.
+ * @param compactedOut Where to write compaction.
+ * @param pReaders List of readers sorted oldest to newest.
+ * @param majorCompaction True to force a major compaction regardless of
+ * thresholds
+ * @throws IOException
+ */
+ private void compact(final HFile.Writer compactedOut,
+ final List<StoreFile> pReaders, final boolean majorCompaction)
+ throws IOException {
+ // Reverse order so newest store file is first.
+ StoreFile[] files = reverse(pReaders);
+ HFileScanner [] rdrs = new HFileScanner[files.length];
+ KeyValue [] kvs = new KeyValue[rdrs.length];
+ boolean[] done = new boolean[rdrs.length];
+ // Now, advance through the readers in order. This will have the
+ // effect of a run-time sort of the entire dataset.
+ int numDone = 0;
+ for (int i = 0; i < rdrs.length; i++) {
+ rdrs[i] = files[i].getReader().getScanner();
+ done[i] = !rdrs[i].seekTo();
+ if (done[i]) {
+ numDone++;
+ } else {
+ kvs[i] = rdrs[i].getKeyValue();
+ }
+ }
+
+ long now = System.currentTimeMillis();
+ int timesSeen = 0;
+ KeyValue lastSeen = KeyValue.LOWESTKEY;
+ KeyValue lastDelete = null;
+ int maxVersions = family.getMaxVersions();
+ while (numDone < done.length) {
+ // Get lowest key in all store files.
+ int lowestKey = getLowestKey(rdrs, kvs, done);
+ KeyValue kv = kvs[lowestKey];
+ // If its same row and column as last key, increment times seen.
+ if (this.comparator.matchingRowColumn(lastSeen, kv)) {
+ timesSeen++;
+ // Reset last delete if not exact timestamp -- lastDelete only stops
+ // exactly the same key making it out to the compacted store file.
+ if (lastDelete != null &&
+ lastDelete.getTimestamp() != kv.getTimestamp()) {
+ lastDelete = null;
+ }
+ } else {
+ timesSeen = 1;
+ lastDelete = null;
+ }
+
+ // Don't write empty rows or columns. Only remove cells on major
+ // compaction. Remove if expired or > VERSIONS
+ if (kv.nonNullRowAndColumn()) {
+ if (!majorCompaction) {
+ // Write out all values if not a major compaction.
+ compactedOut.append(kv);
+ } else {
+ boolean expired = false;
+ boolean deleted = false;
+ if (timesSeen <= maxVersions && !(expired = isExpired(kv, ttl, now))) {
+ // If this value key is same as a deleted key, skip
+ if (lastDelete != null &&
+ this.comparatorIgnoringType.compare(kv, lastDelete) == 0) {
+ deleted = true;
+ } else if (kv.isDeleteType()) {
+ // If a deleted value, skip
+ deleted = true;
+ lastDelete = kv;
+ } else {
+ compactedOut.append(kv);
+ }
+ }
+ if (expired || deleted) {
+ // HBASE-855 remove one from timesSeen because it did not make it
+ // past expired check -- don't count against max versions.
+ timesSeen--;
+ }
+ }
+ }
+
+ // Update last-seen items
+ lastSeen = kv;
+
+ // Advance the smallest key. If that reader's all finished, then
+ // mark it as done.
+ if (!rdrs[lowestKey].next()) {
+ done[lowestKey] = true;
+ rdrs[lowestKey] = null;
+ numDone++;
+ } else {
+ kvs[lowestKey] = rdrs[lowestKey].getKeyValue();
+ }
+ }
+ }
+
+ /*
+ * It's assumed that the compactLock will be acquired prior to calling this
+ * method! Otherwise, it is not thread-safe!
+ *
+ * It works by processing a compaction that's been written to disk.
+ *
+ * <p>It is usually invoked at the end of a compaction, but might also be
+ * invoked at HStore startup, if the prior execution died midway through.
+ *
+ * <p>Moving the compacted file into place means:
+ * <pre>
+ * 1) Moving the new compacted store file into place
+ * 2) Unloading all replaced store files, closing them, and collecting the list to delete
+ * 3) Loading the new store file map
+ * 4) Computing the new store size
+ * </pre>
+ *
+ * @param compactedFiles list of files that were compacted
+ * @param compactedFile HStoreFile that is the result of the compaction
+ * @throws IOException
+ */
+ private void completeCompaction(final List<StoreFile> compactedFiles,
+ final HFile.Writer compactedFile)
+ throws IOException {
+ // 1. Moving the new files into place.
+ Path p = null;
+ try {
+ p = StoreFile.rename(this.fs, compactedFile.getPath(),
+ StoreFile.getRandomFilename(fs, this.homedir));
+ } catch (IOException e) {
+ LOG.error("Failed move of compacted file " + compactedFile.getPath(), e);
+ return;
+ }
+ StoreFile finalCompactedFile = new StoreFile(this.fs, p);
+ this.lock.writeLock().lock();
+ try {
+ try {
+ // 3. Loading the new store file map.
+ // Change this.storefiles so it reflects new state but do not
+ // delete old store files until we have sent out notification of
+ // change in case old files are still being accessed by outstanding
+ // scanners.
+ for (Map.Entry<Long, StoreFile> e: this.storefiles.entrySet()) {
+ if (compactedFiles.contains(e.getValue())) {
+ this.storefiles.remove(e.getKey());
+ }
+ }
+ // Add new compacted Reader and store file.
+ Long orderVal = Long.valueOf(finalCompactedFile.getMaxSequenceId());
+ this.storefiles.put(orderVal, finalCompactedFile);
+ // Tell observers that list of Readers has changed.
+ notifyChangedReadersObservers();
+ // Finally, delete old store files.
+ for (StoreFile hsf: compactedFiles) {
+ hsf.delete();
+ }
+ } catch (IOException e) {
+ e = RemoteExceptionHandler.checkIOException(e);
+ LOG.error("Failed replacing compacted files for " +
+ this.storeNameStr +
+ ". Compacted file is " + finalCompactedFile.toString() +
+ ". Files replaced are " + compactedFiles.toString() +
+ " some of which may have been already removed", e);
+ }
+ // 4. Compute new store size
+ this.storeSize = 0L;
+ for (StoreFile hsf : this.storefiles.values()) {
+ this.storeSize += hsf.getReader().length();
+ }
+ } finally {
+ this.lock.writeLock().unlock();
+ }
+ }
+
+ //////////////////////////////////////////////////////////////////////////////
+ // Accessors.
+ // (This is the only section that is directly useful!)
+ //////////////////////////////////////////////////////////////////////////////
+
+ /**
+ * Return all the available columns for the given key. The key indicates a
+ * row and timestamp, but not a column name.
+ *
+ * Results are added to the passed <code>keyvalues</code> list.
+ * @param key Where to start searching. Specifies a row.
+ * Columns are specified in the following arguments.
+ * @param columns Can be null which means get all
+ * @param columnPattern Can be null.
+ * @param numVersions
+ * @param versionsCounter Can be null.
+ * @param keyvalues List to which results are added.
+ * @param now The current time in milliseconds, used for TTL and expiry checks.
+ * @throws IOException
+ */
+ public void getFull(KeyValue key, final NavigableSet<byte []> columns,
+ final Pattern columnPattern,
+ final int numVersions, Map<KeyValue, HRegion.Counter> versionsCounter,
+ List<KeyValue> keyvalues, final long now)
+ throws IOException {
+ // if the key is null, we're not even looking for anything. return.
+ if (key == null) {
+ return;
+ }
+ int versions = versionsToReturn(numVersions);
+ NavigableSet<KeyValue> deletes =
+ new TreeSet<KeyValue>(this.comparatorIgnoringType);
+ // Create a Map that has results by column so we can keep count of versions.
+ // It duplicates columns, but when checking columns we don't want to have to
+ // rebuild the column set each time.
+ this.lock.readLock().lock();
+ try {
+ // get from the memcache first.
+ if (this.memcache.getFull(key, columns, columnPattern, versions,
+ versionsCounter, deletes, keyvalues, now)) {
+ // May have gotten enough results, enough to return.
+ return;
+ }
+ Map<Long, StoreFile> m = this.storefiles.descendingMap();
+ for (Iterator<Map.Entry<Long, StoreFile>> i = m.entrySet().iterator();
+ i.hasNext();) {
+ if (getFullFromStoreFile(i.next().getValue(), key, columns,
+ columnPattern, versions, versionsCounter, deletes, keyvalues)) {
+ return;
+ }
+ }
+ } finally {
+ this.lock.readLock().unlock();
+ }
+ }
+
+ /*
+ * @param f
+ * @param target Where to start searching. Specifies a row and timestamp.
+ * Columns are specified in following arguments.
+ * @param columns
+ * @param versions
+ * @param versionCounter
+ * @param deletes
+ * @param keyvalues
+ * @return True if we found enough results to satisfy the <code>versions</code>
+ * and <code>columns</code> passed.
+ * @throws IOException
+ */
+ private boolean getFullFromStoreFile(StoreFile f, KeyValue target,
+ Set<byte []> columns, final Pattern columnPattern, int versions,
+ Map<KeyValue, HRegion.Counter> versionCounter,
+ NavigableSet<KeyValue> deletes,
+ List<KeyValue> keyvalues)
+ throws IOException {
+ long now = System.currentTimeMillis();
+ HFileScanner scanner = f.getReader().getScanner();
+ if (!getClosest(scanner, target)) {
+ return false;
+ }
+ boolean hasEnough = false;
+ do {
+ KeyValue kv = scanner.getKeyValue();
+ // Make sure we have not gone past the row. If the target key has a
+ // column on it, then we are looking for an explicit row+column combination;
+ // if we've gone past it, also break.
+ if (target.isEmptyColumn()? !this.comparator.matchingRows(target, kv):
+ !this.comparator.matchingRowColumn(target, kv)) {
+ break;
+ }
+ if (!Store.getFullCheck(this.comparator, target, kv, columns, columnPattern)) {
+ continue;
+ }
+ if (Store.doKeyValue(kv, versions, versionCounter, columns, deletes, now,
+ this.ttl, keyvalues, null)) {
+ hasEnough = true;
+ break;
+ }
+ } while (scanner.next());
+ return hasEnough;
+ }
+
+ /**
+ * Code shared by {@link Memcache#getFull(KeyValue, NavigableSet, Pattern, int, Map, NavigableSet, List, long)}
+ * and {@link #getFullFromStoreFile(StoreFile, KeyValue, Set, Pattern, int, Map, NavigableSet, List)}
+ * @param c
+ * @param target
+ * @param candidate
+ * @param columns
+ * @param columnPattern
+ * @return True if <code>candidate</code> matches column and timestamp.
+ */
+ static boolean getFullCheck(final KeyValue.KVComparator c,
+ final KeyValue target, final KeyValue candidate,
+ final Set<byte []> columns, final Pattern columnPattern) {
+ // Does column match?
+ if (!Store.matchingColumns(candidate, columns)) {
+ return false;
+ }
+ // if the column pattern is not null, we use it for column matching.
+ // we will skip the keys whose column doesn't match the pattern.
+ if (columnPattern != null) {
+ if (!(columnPattern.matcher(candidate.getColumnString()).matches())) {
+ return false;
+ }
+ }
+ if (c.compareTimestamps(target, candidate) > 0) {
+ return false;
+ }
+ return true;
+ }
+
+ /*
+ * @param wantedVersions How many versions were asked for.
+ * @return wantedVersions, capped at this family's max versions unless
+ * ALL_VERSIONS was asked for.
+ */
+ private int versionsToReturn(final int wantedVersions) {
+ if (wantedVersions <= 0) {
+ throw new IllegalArgumentException("Number of versions must be > 0");
+ }
+ // Make sure we do not return more than maximum versions for this store.
+ int maxVersions = this.family.getMaxVersions();
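+ // For example (illustrative), with a family max of 3: a request for 5
+ // returns 3, a request for 2 returns 2, and ALL_VERSIONS is passed through.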
+ return wantedVersions > maxVersions &&
+ wantedVersions != HConstants.ALL_VERSIONS? maxVersions: wantedVersions;
+ }
+
+ /**
+ * Get the value for the indicated HStoreKey. Grab the target value and the
+ * previous <code>numVersions - 1</code> values, as well.
+ *
+ * Use {@link HConstants#ALL_VERSIONS} to retrieve all versions.
+ * @param key
+ * @param numVersions Number of versions to fetch. Must be > 0.
+ * @return values for the specified versions
+ * @throws IOException
+ */
+ List<KeyValue> get(final KeyValue key, final int numVersions)
+ throws IOException {
+ // This code below is very close to the body of the getKeys method. Any
+ // changes in the flow below should also probably be done in getKeys.
+ // TODO: Refactor so same code used.
+ long now = System.currentTimeMillis();
+ int versions = versionsToReturn(numVersions);
+ // Keep a list of deleted cell keys. We need this because as we go through
+ // the memcache and store files, the cell with the delete marker may be
+ // in one store and the old non-delete cell value in a later store.
+ // If we don't keep around the fact that the cell was deleted in a newer
+ // record, we end up returning the old value if user is asking for more
+ // than one version. This List of deletes should not be large since we
+ // are only keeping rows and columns that match those set on the get and
+ // which have delete values. If memory usage becomes an issue, could
+ // redo as bloom filter. Use sorted set because test for membership should
+ // be faster than calculating a hash. Use a comparator that ignores ts.
+ NavigableSet<KeyValue> deletes =
+ new TreeSet<KeyValue>(this.comparatorIgnoringType);
+ List<KeyValue> keyvalues = new ArrayList<KeyValue>();
+ this.lock.readLock().lock();
+ try {
+ // Check the memcache
+ if (this.memcache.get(key, versions, keyvalues, deletes, now)) {
+ return keyvalues;
+ }
+ Map<Long, StoreFile> m = this.storefiles.descendingMap();
+ boolean hasEnough = false;
+ for (Map.Entry<Long, StoreFile> e: m.entrySet()) {
+ StoreFile f = e.getValue();
+ HFileScanner scanner = f.getReader().getScanner();
+ if (!getClosest(scanner, key)) {
+ // Move to next file.
+ continue;
+ }
+ do {
+ KeyValue kv = scanner.getKeyValue();
+ // Make sure below matches what happens up in Memcache#get.
+ if (this.comparator.matchingRowColumn(kv, key)) {
+ if (doKeyValue(kv, versions, deletes, now, this.ttl, keyvalues, null)) {
+ hasEnough = true;
+ break;
+ }
+ } else {
+ // Row and column don't match. Must have gone past. Move to next file.
+ break;
+ }
+ } while (scanner.next());
+ if (hasEnough) {
+ break; // Break out of files loop.
+ }
+ }
+ return keyvalues.isEmpty()? null: keyvalues;
+ } finally {
+ this.lock.readLock().unlock();
+ }
+ }
+
+ /*
+ * Small method to check if we are over the max number of versions
+ * or we have reached this family's max versions.
+ * The latter happens when we have the situation described in HBASE-621.
+ * @param versions
+ * @param c
+ * @return True if <code>c</code> already holds at least <code>versions</code> results.
+ */
+ static boolean hasEnoughVersions(final int versions, final List<KeyValue> c) {
+ return versions > 0 && !c.isEmpty() && c.size() >= versions;
+ }
+
+ /*
+ * Used when doing getFulls.
+ * @param kv
+ * @param versions
+ * @param versionCounter
+ * @param columns
+ * @param deletes
+ * @param now
+ * @param ttl
+ * @param keyvalues
+ * @param set
+ * @return True if enough versions.
+ */
+ static boolean doKeyValue(final KeyValue kv,
+ final int versions,
+ final Map<KeyValue, Counter> versionCounter,
+ final Set<byte []> columns,
+ final NavigableSet<KeyValue> deletes,
+ final long now,
+ final long ttl,
+ final List<KeyValue> keyvalues,
+ final SortedSet<KeyValue> set) {
+ boolean hasEnough = false;
+ if (kv.isDeleteType()) {
+ if (!deletes.contains(kv)) {
+ deletes.add(kv);
+ }
+ } else if (!deletes.contains(kv)) {
+ // Skip expired cells
+ if (!isExpired(kv, ttl, now)) {
+ if (HRegion.okToAddResult(kv, versions, versionCounter)) {
+ HRegion.addResult(kv, versionCounter, keyvalues);
+ if (HRegion.hasEnoughVersions(versions, versionCounter, columns)) {
+ hasEnough = true;
+ }
+ }
+ } else {
+ // Remove the expired.
+ Store.expiredOrDeleted(set, kv);
+ }
+ }
+ return hasEnough;
+ }
+
+ /*
+ * Used when doing get.
+ * @param kv
+ * @param versions
+ * @param deletes
+ * @param now
+ * @param ttl
+ * @param keyvalues
+ * @param set
+ * @return True if enough versions.
+ */
+ static boolean doKeyValue(final KeyValue kv, final int versions,
+ final NavigableSet<KeyValue> deletes,
+ final long now, final long ttl,
+ final List<KeyValue> keyvalues, final SortedSet<KeyValue> set) {
+ boolean hasEnough = false;
+ if (!kv.isDeleteType()) {
+ // Filter out expired results
+ if (notExpiredAndNotInDeletes(ttl, kv, now, deletes)) {
+ if (!keyvalues.contains(kv)) {
+ keyvalues.add(kv);
+ if (hasEnoughVersions(versions, keyvalues)) {
+ hasEnough = true;
+ }
+ }
+ } else {
+ if (set != null) {
+ expiredOrDeleted(set, kv);
+ }
+ }
+ } else {
+ // Cell holds a delete value.
+ deletes.add(kv);
+ }
+ return hasEnough;
+ }
+
+ /*
+ * Test that the <i>target</i> matches the <i>origin</i>. If the <i>origin</i>
+ * has an empty column, then it just tests row equivalence. Otherwise, it
+ * matches both row and column.
+ * @param c Comparator to use.
+ * @param origin Key we're testing against
+ * @param target Key we're testing
+ */
+ static boolean matchingRowColumn(final KeyValue.KVComparator c,
+ final KeyValue origin, final KeyValue target) {
+ return origin.isEmptyColumn()? c.matchingRows(target, origin):
+ c.matchingRowColumn(target, origin);
+ }
+
+ static void expiredOrDeleted(final Set<KeyValue> set, final KeyValue kv) {
+ boolean b = set.remove(kv);
+ if (LOG.isDebugEnabled()) {
+ LOG.debug(kv.toString() + " expired: " + b);
+ }
+ }
+
+ /**
+ * Find the key that matches <i>row</i> exactly, or the one that immediately
+ * precedes it. WARNING: Only use this method on a table where writes occur
+ * with strictly increasing timestamps. This method assumes this pattern of
+ * writes in order to make it reasonably performant.
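+ * For example (illustrative): if the store holds rows r1 and r3 and r2 is
+ * asked for, r1's key is returned; if r3 is asked for, r3's own key is
+ * returned.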
+ * @param targetkey
+ * @return Found keyvalue
+ * @throws IOException
+ */
+ KeyValue getRowKeyAtOrBefore(final KeyValue targetkey)
+ throws IOException{
+ // Map of keys that are candidates for holding the row key that
+ // most closely matches what we're looking for. We'll have to update it as
+ // deletes are found all over the place as we go along before finally
+ // reading the best key out of it at the end. Use a comparator that
+ // ignores key types. Otherwise, we can't remove deleted items doing
+ // set.remove because of the differing type between insert and delete.
+ NavigableSet<KeyValue> candidates =
+ new TreeSet<KeyValue>(this.comparator.getComparatorIgnoringType());
+
+ // Keep a list of deleted cell keys. We need this because as we go through
+ // the store files, the cell with the delete marker may be in one file and
+ // the old non-delete cell value in a later store file. If we don't keep
+ // around the fact that the cell was deleted in a newer record, we end up
+ // returning the old value if user is asking for more than one version.
+ // This List of deletes should not be large since we are only keeping rows
+ // and columns that match those set on the scanner and which have delete
+ // values. If memory usage becomes an issue, could redo as bloom filter.
+ NavigableSet<KeyValue> deletes =
+ new TreeSet<KeyValue>(this.comparatorIgnoringType);
+ long now = System.currentTimeMillis();
+ this.lock.readLock().lock();
+ try {
+ // First go to the memcache. Pick up deletes and candidates.
+ this.memcache.getRowKeyAtOrBefore(targetkey, candidates, deletes, now);
+ // Process each store file. Run through from newest to oldest.
+ Map<Long, StoreFile> m = this.storefiles.descendingMap();
+ for (Map.Entry<Long, StoreFile> e: m.entrySet()) {
+ // Update the candidate keys from the current map file
+ rowAtOrBeforeFromStoreFile(e.getValue(), targetkey, candidates,
+ deletes, now);
+ }
+ // Return the best key from candidateKeys
+ return candidates.isEmpty()? null: candidates.last();
+ } finally {
+ this.lock.readLock().unlock();
+ }
+ }
+
+ /*
+ * Check an individual store file for the row at or before a given key
+ * and timestamp.
+ * @param f
+ * @param targetkey
+ * @param candidates Pass a Set with a Comparator that
+ * ignores key Type so we can do Set.remove using a delete, i.e. a KeyValue
+ * with a different Type to the candidate key.
+ * @throws IOException
+ */
+ private void rowAtOrBeforeFromStoreFile(final StoreFile f,
+ final KeyValue targetkey, final NavigableSet<KeyValue> candidates,
+ final NavigableSet<KeyValue> deletes, final long now)
+ throws IOException {
+ // if there aren't any candidate keys yet, we'll do some things different
+ if (candidates.isEmpty()) {
+ rowAtOrBeforeCandidate(f, targetkey, candidates, deletes, now);
+ } else {
+ rowAtOrBeforeWithCandidates(f, targetkey, candidates, deletes, now);
+ }
+ }
+
+ /*
+ * @param ttl
+ * @param key
+ * @param now
+ * @param deletes A Set whose Comparator ignores Type.
+ * @return True if key has not expired and is not in passed set of deletes.
+ */
+ static boolean notExpiredAndNotInDeletes(final long ttl,
+ final KeyValue key, final long now, final Set<KeyValue> deletes) {
+ return !isExpired(key, ttl, now) && (deletes == null || deletes.isEmpty() ||
+ !deletes.contains(key));
+ }
+
+ static boolean isExpired(final KeyValue key, final long ttl,
+ final long now) {
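+ // For example (illustrative): with ttl=3000 and now=10000, a key stamped
+ // 6000 is expired (10000 > 6000 + 3000) while one stamped 8000 is not.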
+ return ttl != HConstants.FOREVER && now > key.getTimestamp() + ttl;
+ }
+
+ /* Find a candidate for the row that is at or before the passed targetkey
+ * in the hfile.
+ * @param f
+ * @param targetkey Key to go search the hfile with.
+ * @param candidates
+ * @param deletes
+ * @param now
+ * @throws IOException
+ */
+ private void rowAtOrBeforeCandidate(final StoreFile f,
+ final KeyValue targetkey, final NavigableSet<KeyValue> candidates,
+ final NavigableSet<KeyValue> deletes, final long now)
+ throws IOException {
+ KeyValue search = targetkey;
+ // If the row we're looking for is past the end of this store file, set the
+ // search key to be the last key. If it's a deleted key, then we'll back
+ // up to the row before and return that.
+ // TODO: Cache last key as KV over in the file.
+ byte [] lastkey = f.getReader().getLastKey();
+ KeyValue lastKeyValue =
+ KeyValue.createKeyValueFromKey(lastkey, 0, lastkey.length);
+ if (this.comparator.compareRows(lastKeyValue, targetkey) < 0) {
+ search = lastKeyValue;
+ }
+ KeyValue knownNoGoodKey = null;
+ HFileScanner scanner = f.getReader().getScanner();
+ for (boolean foundCandidate = false; !foundCandidate;) {
+ // Seek to the exact row, or the one that would be immediately before it
+ int result = scanner.seekTo(search.getBuffer(), search.getKeyOffset(),
+ search.getKeyLength());
+ if (result < 0) {
+ // Not in file.
+ break;
+ }
+ KeyValue deletedOrExpiredRow = null;
+ KeyValue kv = null;
+ do {
+ kv = scanner.getKeyValue();
+ if (this.comparator.compareRows(kv, search) <= 0) {
+ if (!kv.isDeleteType()) {
+ if (handleNonDelete(kv, now, deletes, candidates)) {
+ foundCandidate = true;
+ // NOTE! Continue.
+ continue;
+ }
+ }
+ deletes.add(kv);
+ if (deletedOrExpiredRow == null) {
+ deletedOrExpiredRow = kv;
+ }
+ } else if (this.comparator.compareRows(kv, search) > 0) {
+ // if the row key we just read is beyond the key we're searching for,
+ // then we're done.
+ break;
+ } else {
+ // So, the row key doesn't match, but we haven't gone past the row
+ // we're seeking yet, so this row is a candidate for closest
+ // (assuming that it isn't a delete).
+ if (!kv.isDeleteType()) {
+ if (handleNonDelete(kv, now, deletes, candidates)) {
+ foundCandidate = true;
+ // NOTE: Continue
+ continue;
+ }
+ }
+ deletes.add(kv);
+ if (deletedOrExpiredRow == null) {
+ deletedOrExpiredRow = kv;
+ }
+ }
+ } while(scanner.next() && (knownNoGoodKey == null ||
+ this.comparator.compare(kv, knownNoGoodKey) < 0));
+
+ // If we get here and have no candidates but we did find a deleted or
+ // expired candidate, we need to look at the key before that
+ if (!foundCandidate && deletedOrExpiredRow != null) {
+ knownNoGoodKey = deletedOrExpiredRow;
+ if (!scanner.seekBefore(deletedOrExpiredRow.getBuffer(),
+ deletedOrExpiredRow.getKeyOffset(),
+ deletedOrExpiredRow.getKeyLength())) {
+ // Not in file -- what can I do now but break?
+ break;
+ }
+ search = scanner.getKeyValue();
+ } else {
+ // No candidates and no deleted or expired candidates. Give up.
+ break;
+ }
+ }
+
+ // Arriving here just means that we consumed the whole rest of the map
+ // without going "past" the key we're searching for. we can just fall
+ // through here.
+ }
+
+ private void rowAtOrBeforeWithCandidates(final StoreFile f,
+ final KeyValue targetkey,
+ final NavigableSet<KeyValue> candidates,
+ final NavigableSet<KeyValue> deletes, final long now)
+ throws IOException {
+ // if there are already candidate keys, we need to start our search
+ // at the earliest possible key so that we can discover any possible
+ // deletes for keys between the start and the search key. Back up to start
+ // of the row in case there are deletes for this candidate in this store file
+ // BUT do not back up before the first key in the store file.
+ KeyValue firstCandidateKey = candidates.first();
+ KeyValue search = null;
+ if (this.comparator.compareRows(firstCandidateKey, targetkey) < 0) {
+ search = targetkey;
+ } else {
+ search = firstCandidateKey;
+ }
+
+ // Seek to the exact row, or the one that would be immediately before it
+ HFileScanner scanner = f.getReader().getScanner();
+ int result = scanner.seekTo(search.getBuffer(), search.getKeyOffset(),
+ search.getKeyLength());
+ if (result < 0) {
+ // Key is before start of this file. Return.
+ return;
+ }
+ do {
+ KeyValue kv = scanner.getKeyValue();
+ // if we have an exact match on row, and it's not a delete, save this
+ // as a candidate key
+ if (this.comparator.matchingRows(kv, targetkey)) {
+ handleKey(kv, now, deletes, candidates);
+ } else if (this.comparator.compareRows(kv, targetkey) > 0 ) {
+ // if the row key we just read is beyond the key we're searching for,
+ // then we're done.
+ break;
+ } else {
+ // So, the row key doesn't match, but we haven't gone past the row
+ // we're seeking yet, so this row is a candidate for closest
+ // (assuming that it isn't a delete).
+ handleKey(kv, now, deletes, candidates);
+ }
+ } while(scanner.next());
+ }
+
+ /*
+ * Used when calculating keys at or just before a passed key.
+ * @param readkey
+ * @param now
+ * @param deletes Set with Comparator that ignores key type.
+ * @param candidates Set with Comparator that ignores key type.
+ */
+ private void handleKey(final KeyValue readkey, final long now,
+ final NavigableSet<KeyValue> deletes,
+ final NavigableSet<KeyValue> candidates) {
+ if (!readkey.isDeleteType()) {
+ handleNonDelete(readkey, now, deletes, candidates);
+ } else {
+ handleDeletes(readkey, candidates, deletes);
+ }
+ }
+
+ /*
+ * Used when calculating keys at or just before a passed key.
+ * @param readkey
+ * @param now
+ * @param deletes Set with Comparator that ignores key type.
+ * @param candidates Set with Comparator that ignores key type.
+ * @return True if we added a candidate.
+ */
+ private boolean handleNonDelete(final KeyValue readkey, final long now,
+ final NavigableSet<KeyValue> deletes,
+ final NavigableSet<KeyValue> candidates) {
+ if (notExpiredAndNotInDeletes(this.ttl, readkey, now, deletes)) {
+ candidates.add(readkey);
+ return true;
+ }
+ return false;
+ }
+
+ /**
+ * Handle keys whose values hold deletes.
+ * Add to the set of deletes; then, if the candidate keys contain any that
+ * might match, check for a match and remove it. Implies candidates
+ * is made with a Comparator that ignores key type.
+ * @param k
+ * @param candidates
+ * @param deletes
+ * @return True if we removed <code>k</code> from <code>candidates</code>.
+ */
+ static boolean handleDeletes(final KeyValue k,
+ final NavigableSet<KeyValue> candidates,
+ final NavigableSet<KeyValue> deletes) {
+ deletes.add(k);
+ return candidates.remove(k);
+ }
+
+ /**
+ * Determines if HStore can be split
+ * @param force Whether to force a split or not.
+ * @return a StoreSize if store can be split, null otherwise.
+ */
+ StoreSize checkSplit(final boolean force) {
+ this.lock.readLock().lock();
+ try {
+ // Iterate through all store files
+ if (this.storefiles.size() <= 0) {
+ return null;
+ }
+ if (!force && (storeSize < this.desiredMaxFileSize)) {
+ return null;
+ }
+ // Not splittable if we find a reference store file in the store.
+ boolean splitable = true;
+ long maxSize = 0L;
+ Long mapIndex = Long.valueOf(0L);
+ for (Map.Entry<Long, StoreFile> e: storefiles.entrySet()) {
+ StoreFile sf = e.getValue();
+ if (splitable) {
+ splitable = !sf.isReference();
+ if (!splitable) {
+ // RETURN IN MIDDLE OF FUNCTION!!! If not splittable, just return.
+ if (LOG.isDebugEnabled()) {
+ LOG.debug(sf + " is not splittable");
+ }
+ return null;
+ }
+ }
+ long size = sf.getReader().length();
+ if (size > maxSize) {
+ // This is the largest one so far
+ maxSize = size;
+ mapIndex = e.getKey();
+ }
+ }
+
+ HFile.Reader r = this.storefiles.get(mapIndex).getReader();
+ // Get first, last, and mid keys. Midkey is the key that starts block
+ // in middle of hfile. Has column and timestamp. Need to return just
+ // the row we want to split on as midkey.
+ byte [] midkey = r.midkey();
+ if (midkey != null) {
+ KeyValue mk = KeyValue.createKeyValueFromKey(midkey, 0, midkey.length);
+ byte [] fk = r.getFirstKey();
+ KeyValue firstKey = KeyValue.createKeyValueFromKey(fk, 0, fk.length);
+ byte [] lk = r.getLastKey();
+ KeyValue lastKey = KeyValue.createKeyValueFromKey(lk, 0, lk.length);
+ // if the midkey is the same as the first and last keys, then we cannot
+ // (ever) split this region.
+ if (this.comparator.compareRows(mk, firstKey) == 0 &&
+ this.comparator.compareRows(mk, lastKey) == 0) {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("cannot split because midkey is the same as first and " +
+ "last rows");
+ }
+ return null;
+ }
+ return new StoreSize(maxSize, mk.getRow());
+ }
+ } catch(IOException e) {
+ LOG.warn("Failed getting store size for " + this.storeNameStr, e);
+ } finally {
+ this.lock.readLock().unlock();
+ }
+ return null;
+ }
+
+ /** @return aggregate size of HStore */
+ public long getSize() {
+ return storeSize;
+ }
+
+ //////////////////////////////////////////////////////////////////////////////
+ // File administration
+ //////////////////////////////////////////////////////////////////////////////
+
+ /**
+ * Return a scanner for both the memcache and the HStore files
+ */
+ protected InternalScanner getScanner(long timestamp,
+ final NavigableSet<byte []> targetCols,
+ byte [] firstRow, RowFilterInterface filter)
+ throws IOException {
+ lock.readLock().lock();
+ try {
+ return new StoreScanner(this, targetCols, firstRow, timestamp, filter);
+ } finally {
+ lock.readLock().unlock();
+ }
+ }
+
+ @Override
+ public String toString() {
+ return this.storeNameStr;
+ }
+
+ /**
+ * @return Count of store files
+ */
+ int getStorefilesCount() {
+ return this.storefiles.size();
+ }
+
+ /**
+ * @return The size of the store file indexes, in bytes.
+ * @throws IOException if there was a problem getting file sizes from the
+ * filesystem
+ */
+ long getStorefilesIndexSize() throws IOException {
+ long size = 0;
+ for (StoreFile s: storefiles.values())
+ size += s.getReader().indexSize();
+ return size;
+ }
+
+ /*
+ * Data structure that holds the size and the row to split a file around.
+ * TODO: Take a KeyValue rather than row.
+ */
+ static class StoreSize {
+ private final long size;
+ private final byte [] row;
+
+ StoreSize(long size, byte [] row) {
+ this.size = size;
+ this.row = row;
+ }
+ /* @return the size */
+ long getSize() {
+ return size;
+ }
+
+ byte [] getSplitRow() {
+ return this.row;
+ }
+ }
+
+ HRegionInfo getHRegionInfo() {
+ return this.regioninfo;
+ }
+
+ /**
+ * Convenience method that implements the old MapFile.getClosest on top of
+ * HFile Scanners. getClosest used to seek to the asked-for key or just after
+ * (HFile seeks to the key or just before).
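+ * For example (illustrative): asking for r/c/LATEST_TIMESTAMP may land the
+ * underlying seek on r/c-1/someTs; a single next() then positions the
+ * scanner at r/c/someTs, which is what callers of this method expect.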
+ * @param s Scanner to use
+ * @param kv Key to find.
+ * @return True if we were able to seek the scanner to <code>kv</code> or to
+ * the key just after.
+ * @throws IOException
+ */
+ static boolean getClosest(final HFileScanner s, final KeyValue kv)
+ throws IOException {
+ // Pass offsets to key content of a KeyValue; that's what's in the hfile index.
+ int result = s.seekTo(kv.getBuffer(), kv.getKeyOffset(), kv.getKeyLength());
+ if (result < 0) {
+ // Not in file. Will the first key do?
+ if (!s.seekTo()) {
+ return false;
+ }
+ } else if (result > 0) {
+ // The scanner is on a key less than what was asked for, perhaps because we
+ // asked for r/c/LATEST_TIMESTAMP and what was returned was r/c-1/SOME_TS...
+ // A next will get us to r/c/SOME_TS.
+ if (!s.next()) {
+ return false;
+ }
+ }
+ return true;
+ }
+
+ /**
+ * @param kv
+ * @param columns Can be null
+ * @return True if column matches.
+ */
+ static boolean matchingColumns(final KeyValue kv, final Set<byte []> columns) {
+ if (columns == null) {
+ return true;
+ }
+ // Only instantiate the kv's column (via getColumn) when there are lots of
+ // columns to test; otherwise compare in place with matchingColumn.
+ if (columns.size() > 100) {
+ return columns.contains(kv.getColumn());
+ }
+ for (byte [] column: columns) {
+ if (kv.matchingColumn(column)) {
+ return true;
+ }
+ }
+ return false;
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/StoreFile.java b/src/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
new file mode 100644
index 0000000..038bb7d
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
@@ -0,0 +1,491 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.util.Map;
+import java.util.Random;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.io.HalfHFileReader;
+import org.apache.hadoop.hbase.io.Reference;
+import org.apache.hadoop.hbase.io.hfile.BlockCache;
+import org.apache.hadoop.hbase.io.hfile.Compression;
+import org.apache.hadoop.hbase.io.hfile.HFile;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * A Store data file. Stores usually have one or more of these files. They
+ * are produced by flushing the memcache to disk. To
+ * create, call {@link #getWriter(FileSystem, Path)} and append data. Be
+ * sure to add any metadata before calling close on the Writer
+ * (Use the appendMetadata convenience methods). On close, a StoreFile is
+ * sitting in the Filesystem. To refer to it, create a StoreFile instance
+ * passing filesystem and path. To read, call {@link #getReader()}.
+ * <p>StoreFiles may also reference store files in another Store.
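+ * <p>A minimal write/read sketch (illustrative only; variable names here are
+ * assumptions, not part of this API):
+ * <pre>
+ * HFile.Writer w = StoreFile.getWriter(fs, familyDir);
+ * // append KeyValues to w, then add metadata before closing
+ * StoreFile.appendMetadata(w, maxSequenceId);
+ * w.close();
+ * StoreFile sf = new StoreFile(fs, w.getPath());
+ * HFile.Reader r = sf.getReader();
+ * </pre>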
+ */
+public class StoreFile implements HConstants {
+ static final Log LOG = LogFactory.getLog(StoreFile.class.getName());
+
+ // Make the default block size for StoreFiles 8k while testing. TODO: FIX!
+ private static final int DEFAULT_BLOCKSIZE_SMALL = 8 * 1024;
+
+ private final FileSystem fs;
+ // This file's path.
+ private final Path path;
+ // If this storefile references another, this is the reference instance.
+ private Reference reference;
+ // If this StoreFile references another, this is the other files path.
+ private Path referencePath;
+
+ // Keys for metadata stored in backing HFile.
+ private static final byte [] MAX_SEQ_ID_KEY = Bytes.toBytes("MAX_SEQ_ID_KEY");
+ // Set when we obtain a Reader.
+ private long sequenceid = -1;
+
+ private static final byte [] MAJOR_COMPACTION_KEY =
+ Bytes.toBytes("MAJOR_COMPACTION_KEY");
+ // If true, this file was the product of a major compaction. It is set
+ // whenever you get a Reader.
+ private AtomicBoolean majorCompaction = null;
+
+ /*
+ * Regex that will work for straight filenames and for reference names.
+ * If reference, then the regex has more than just one group. Group 1 is
+ * this file's id. Group 2 is the referenced region name, etc.
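+ * For example (illustrative): "1234" names a plain store file, while
+ * "1234.otherRegionName" names a reference to file 1234 held by the region
+ * otherRegionName.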
+ */
+ private static final Pattern REF_NAME_PARSER =
+ Pattern.compile("^(\\d+)(?:\\.(.+))?$");
+
+ private volatile HFile.Reader reader;
+
+ // Used making file ids.
+ private final static Random rand = new Random();
+
+ /**
+ * Constructor.
+ * Loads up a Reader (and its indices, etc.).
+ * @param fs Filesystem.
+ * @param p qualified path
+ * @throws IOException
+ */
+ StoreFile(final FileSystem fs, final Path p)
+ throws IOException {
+ this.fs = fs;
+ this.path = p;
+ if (isReference(p)) {
+ this.reference = Reference.read(fs, p);
+ this.referencePath = getReferredToFile(this.path);
+ }
+ this.reader = open();
+ }
+
+ /**
+ * @return Path or null if this StoreFile was made with a Stream.
+ */
+ Path getPath() {
+ return this.path;
+ }
+
+ /**
+ * @return The Store/ColumnFamily this file belongs to.
+ */
+ byte [] getFamily() {
+ return Bytes.toBytes(this.path.getParent().getName());
+ }
+
+ /**
+ * @return True if this is a StoreFile Reference; call after {@link #open()},
+ * otherwise you may get the wrong answer.
+ */
+ boolean isReference() {
+ return this.reference != null;
+ }
+
+ /**
+ * @param p Path to check.
+ * @return True if the path has format of a HStoreFile reference.
+ */
+ public static boolean isReference(final Path p) {
+ return isReference(p, REF_NAME_PARSER.matcher(p.getName()));
+ }
+
+ /**
+ * @param p Path to check.
+ * @param m Matcher to use.
+ * @return True if the path has format of a HStoreFile reference.
+ */
+ public static boolean isReference(final Path p, final Matcher m) {
+ if (m == null || !m.matches()) {
+ LOG.warn("Failed match of store file name " + p.toString());
+ throw new RuntimeException("Failed match of store file name " +
+ p.toString());
+ }
+ return m.groupCount() > 1 && m.group(2) != null;
+ }
+
+ /*
+ * Return path to the file referred to by a Reference. Presumes a directory
+ * hierarchy of <code>${hbase.rootdir}/tablename/regionname/familyname</code>.
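+ * For example (illustrative): a reference named <code>1234.otherRegion</code>
+ * under <code>rootdir/table/thisRegion/family</code> resolves to
+ * <code>rootdir/table/otherRegion/family/1234</code>.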
+ * @param p Path to a Reference file.
+ * @return Calculated path to parent region file.
+ * @throws IOException
+ */
+ static Path getReferredToFile(final Path p) {
+ Matcher m = REF_NAME_PARSER.matcher(p.getName());
+ if (m == null || !m.matches()) {
+ LOG.warn("Failed match of store file name " + p.toString());
+ throw new RuntimeException("Failed match of store file name " +
+ p.toString());
+ }
+ // Other region name is suffix on the passed Reference file name
+ String otherRegion = m.group(2);
+ // Tabledir is up two directories from where Reference was written.
+ Path tableDir = p.getParent().getParent().getParent();
+ String nameStrippedOfSuffix = m.group(1);
+ // Build up new path with the referenced region in place of our current
+ // region in the reference path. Also strip regionname suffix from name.
+ return new Path(new Path(new Path(tableDir, otherRegion),
+ p.getParent().getName()), nameStrippedOfSuffix);
+ }
+
+ /**
+ * @return True if this file was made by a major compaction.
+ */
+ boolean isMajorCompaction() {
+ if (this.majorCompaction == null) {
+ throw new NullPointerException("This has not been set yet");
+ }
+ return this.majorCompaction.get();
+ }
+
+ /**
+ * @return This file's maximum edit sequence id.
+ */
+ public long getMaxSequenceId() {
+ if (this.sequenceid == -1) {
+ throw new IllegalAccessError("Has not been initialized");
+ }
+ return this.sequenceid;
+ }
+
+ /**
+ * Opens reader on this store file. Called by Constructor.
+ * @return Reader for the store file.
+ * @throws IOException
+ * @see #close()
+ */
+ protected HFile.Reader open()
+ throws IOException {
+ if (this.reader != null) {
+ throw new IllegalAccessError("Already open");
+ }
+ if (isReference()) {
+ this.reader = new HalfHFileReader(this.fs, this.referencePath, null,
+ this.reference);
+ } else {
+ this.reader = new StoreFileReader(this.fs, this.path, null);
+ }
+ // Load up indices and fileinfo.
+ Map<byte [], byte []> map = this.reader.loadFileInfo();
+ // Read in our metadata.
+ byte [] b = map.get(MAX_SEQ_ID_KEY);
+ if (b != null) {
+ // By convention, if halfhfile, the top half has a sequence number > the
+ // bottom half. That's why we add one below. It's done in case the two
+ // halves are ever merged back together -- rare. Without it, on open of the
+ // store, since store files are distinguished by sequence id, the one half
+ // would subsume the other.
+ this.sequenceid = Bytes.toLong(b);
+ if (isReference()) {
+ if (Reference.isTopFileRegion(this.reference.getFileRegion())) {
+ this.sequenceid += 1;
+ }
+ }
+
+ }
+ b = map.get(MAJOR_COMPACTION_KEY);
+ if (b != null) {
+ boolean mc = Bytes.toBoolean(b);
+ if (this.majorCompaction == null) {
+ this.majorCompaction = new AtomicBoolean(mc);
+ } else {
+ this.majorCompaction.set(mc);
+ }
+ }
+ return this.reader;
+ }
+
+ /**
+ * Override to add some customization on HFile.Reader
+ */
+ static class StoreFileReader extends HFile.Reader {
+ public StoreFileReader(FileSystem fs, Path path, BlockCache cache)
+ throws IOException {
+ super(fs, path, cache);
+ }
+
+ @Override
+ protected String toStringFirstKey() {
+ return KeyValue.keyToString(getFirstKey());
+ }
+
+ @Override
+ protected String toStringLastKey() {
+ return KeyValue.keyToString(getLastKey());
+ }
+ }
+
+ /**
+ * Override to add some customization on HalfHFileReader.
+ */
+ static class HalfStoreFileReader extends HalfHFileReader {
+ public HalfStoreFileReader(FileSystem fs, Path p, BlockCache c, Reference r)
+ throws IOException {
+ super(fs, p, c, r);
+ }
+
+ @Override
+ public String toString() {
+ return super.toString() + (isTop()? ", half=top": ", half=bottom");
+ }
+
+ @Override
+ protected String toStringFirstKey() {
+ return KeyValue.keyToString(getFirstKey());
+ }
+
+ @Override
+ protected String toStringLastKey() {
+ return KeyValue.keyToString(getLastKey());
+ }
+ }
+
+ /**
+ * @return Current reader. Must call open first.
+ */
+ public HFile.Reader getReader() {
+ if (this.reader == null) {
+ throw new IllegalAccessError("Call open first");
+ }
+ return this.reader;
+ }
+
+ /**
+ * @throws IOException
+ */
+ public synchronized void close() throws IOException {
+ if (this.reader != null) {
+ this.reader.close();
+ this.reader = null;
+ }
+ }
+
+ @Override
+ public String toString() {
+ return this.path.toString() +
+ (isReference()? "-" + this.referencePath + "-" + reference.toString(): "");
+ }
+
+ /**
+ * Delete this file
+ * @throws IOException
+ */
+ public void delete() throws IOException {
+ close();
+ this.fs.delete(getPath(), true);
+ }
+
+ /**
+ * Utility to help with rename.
+ * @param fs
+ * @param src
+ * @param tgt
+ * @return True if succeeded.
+ * @throws IOException
+ */
+ public static Path rename(final FileSystem fs, final Path src,
+ final Path tgt)
+ throws IOException {
+ if (!fs.exists(src)) {
+ throw new FileNotFoundException(src.toString());
+ }
+ if (!fs.rename(src, tgt)) {
+ throw new IOException("Failed rename of " + src + " to " + tgt);
+ }
+ return tgt;
+ }
+
+ /**
+ * Get a store file writer. Client is responsible for closing file when done.
+ * If metadata, add BEFORE closing using
+ * {@link #appendMetadata(org.apache.hadoop.hbase.io.hfile.HFile.Writer, long)}.
+ * @param fs
+ * @param dir Path to family directory. Makes the directory if it doesn't exist.
+ * Creates a file with a unique name in this directory.
+ * @return HFile.Writer
+ * @throws IOException
+ */
+ public static HFile.Writer getWriter(final FileSystem fs, final Path dir)
+ throws IOException {
+ return getWriter(fs, dir, DEFAULT_BLOCKSIZE_SMALL, null, null, false);
+ }
+
+ /**
+ * Get a store file writer. Client is responsible for closing file when done.
+ * If metadata, add BEFORE closing using
+ * {@link #appendMetadata(org.apache.hadoop.hbase.io.hfile.HFile.Writer, long)}.
+ * @param fs
+ * @param dir Path to family directory. Makes the directory if it doesn't exist.
+ * Creates a file with a unique name in this directory.
+ * @param blocksize
+ * @param algorithm Pass null to get default.
+ * @param c Pass null to get default.
+ * @param bloomfilter
+ * @return HFile.Writer
+ * @throws IOException
+ */
+ public static HFile.Writer getWriter(final FileSystem fs, final Path dir,
+ final int blocksize, final Compression.Algorithm algorithm,
+ final KeyValue.KeyComparator c, final boolean bloomfilter)
+ throws IOException {
+ if (!fs.exists(dir)) {
+ fs.mkdirs(dir);
+ }
+ Path path = getUniqueFile(fs, dir);
+ return new HFile.Writer(fs, path, blocksize,
+ algorithm == null? HFile.DEFAULT_COMPRESSION_ALGORITHM: algorithm,
+ c == null? KeyValue.KEY_COMPARATOR: c, bloomfilter);
+ }
+
+ /**
+ * @param fs
+ * @param p
+ * @return random filename inside the passed directory <code>p</code>
+ * @throws IOException
+ */
+ static Path getUniqueFile(final FileSystem fs, final Path p)
+ throws IOException {
+ if (!fs.getFileStatus(p).isDir()) {
+ throw new IOException("Expecting a directory");
+ }
+ return getRandomFilename(fs, p);
+ }
+
+ /**
+ * @param fs
+ * @param dir
+ * @return Path to a file that doesn't exist at time of this invocation.
+ * @throws IOException
+ */
+ static Path getRandomFilename(final FileSystem fs, final Path dir)
+ throws IOException {
+ return getRandomFilename(fs, dir, null);
+ }
+
+ /**
+ * @param fs
+ * @param dir
+ * @param suffix
+ * @return Path to a file that doesn't exist at time of this invocation.
+ * @throws IOException
+ */
+ static Path getRandomFilename(final FileSystem fs, final Path dir,
+ final String suffix)
+ throws IOException {
+ long id = -1;
+ Path p = null;
+ do {
+ id = Math.abs(rand.nextLong());
+ p = new Path(dir, Long.toString(id) +
+ ((suffix == null || suffix.length() <= 0)? "": suffix));
+ } while(fs.exists(p));
+ return p;
+ }
+
+ /**
+ * Write file metadata.
+ * Call before you call close on the passed <code>w</code> since it's written
+ * as metadata to that file.
+ *
+ * @param w hfile writer
+ * @param maxSequenceId Maximum sequence id.
+ * @throws IOException
+ */
+ static void appendMetadata(final HFile.Writer w, final long maxSequenceId)
+ throws IOException {
+ appendMetadata(w, maxSequenceId, false);
+ }
+
+ /**
+ * Writes metadata.
+ * Call before you call close on the passed <code>w</code> since it's written
+ * as metadata to that file.
+ * @param w hfile writer
+ * @param maxSequenceId Maximum sequence id.
+ * @param mc True if this file is product of a major compaction
+ * @throws IOException
+ */
+ static void appendMetadata(final HFile.Writer w, final long maxSequenceId,
+ final boolean mc)
+ throws IOException {
+ w.appendFileInfo(MAX_SEQ_ID_KEY, Bytes.toBytes(maxSequenceId));
+ w.appendFileInfo(MAJOR_COMPACTION_KEY, Bytes.toBytes(mc));
+ }
+
+ /*
+ * Write out a split reference.
+ * @param fs
+ * @param splitDir Presumes path format is actually
+ * <code>SOME_DIRECTORY/REGIONNAME/FAMILY</code>.
+ * @param f File to split.
+ * @param splitRow
+ * @param range
+ * @return Path to created reference.
+ * @throws IOException
+ */
+ static Path split(final FileSystem fs, final Path splitDir,
+ final StoreFile f, final byte [] splitRow, final Reference.Range range)
+ throws IOException {
+ // A reference to the top or bottom half of the store file, depending on range.
+ Reference r = new Reference(splitRow, range);
+ // Add the referred-to regions name as a dot separated suffix.
+ // See REF_NAME_PARSER regex above. The referred-to region's name is
+ // up in the path of the passed in <code>f</code> -- parentdir is family,
+ // then the directory above is the region name.
+ String parentRegionName = f.getPath().getParent().getParent().getName();
+ // Write reference with same file id only with the other region name as
+ // suffix and into the new region location (under same family).
+ Path p = new Path(splitDir, f.getPath().getName() + "." + parentRegionName);
+ return r.write(fs, p);
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/StoreFileScanner.java b/src/java/org/apache/hadoop/hbase/regionserver/StoreFileScanner.java
new file mode 100644
index 0000000..7f44f73
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/regionserver/StoreFileScanner.java
@@ -0,0 +1,326 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.io.hfile.HFileScanner;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * A scanner that iterates through HStore files
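+ * <p>A minimal usage sketch (illustrative; assumes a Store plus query args):
+ * <pre>
+ * StoreFileScanner s = new StoreFileScanner(store, timestamp, columns, firstRow);
+ * List&lt;KeyValue&gt; results = new ArrayList&lt;KeyValue&gt;();
+ * while (s.next(results)) {
+ * // consume results for the returned row, then clear for the next call
+ * results.clear();
+ * }
+ * s.close();
+ * </pre>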
+ */
+class StoreFileScanner extends HAbstractScanner
+implements ChangedReadersObserver {
+ // Keys retrieved from the sources
+ private volatile KeyValue [] keys;
+
+ // Readers we go against.
+ private volatile HFileScanner [] scanners;
+
+ // Store this scanner came out of.
+ private final Store store;
+
+ // Used around replacement of Readers if they change while we're scanning.
+ private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
+
+ private final long now = System.currentTimeMillis();
+
+ /**
+ * @param store
+ * @param timestamp
+ * @param columns
+ * @param firstRow
+ * @throws IOException
+ */
+ public StoreFileScanner(final Store store, final long timestamp,
+ final NavigableSet<byte []> columns, final byte [] firstRow)
+ throws IOException {
+ super(timestamp, columns);
+ this.store = store;
+ this.store.addChangedReaderObserver(this);
+ try {
+ openScanner(firstRow);
+ } catch (Exception ex) {
+ close();
+ IOException e = new IOException("StoreFileScanner failed construction");
+ e.initCause(ex);
+ throw e;
+ }
+ }
+
+ /*
+ * Go open new scanners and cue them at <code>firstRow</code>.
+ * Closes existing Readers if any.
+ * @param firstRow
+ * @throws IOException
+ */
+ private void openScanner(final byte [] firstRow) throws IOException {
+ List<HFileScanner> s =
+ new ArrayList<HFileScanner>(this.store.getStorefiles().size());
+ Map<Long, StoreFile> map = this.store.getStorefiles().descendingMap();
+ for (StoreFile f: map.values()) {
+ s.add(f.getReader().getScanner());
+ }
+ this.scanners = s.toArray(new HFileScanner [] {});
+ this.keys = new KeyValue[this.scanners.length];
+ // Advance the readers to the first pos.
+ KeyValue firstKey = (firstRow != null && firstRow.length > 0)?
+ new KeyValue(firstRow, HConstants.LATEST_TIMESTAMP): null;
+ for (int i = 0; i < this.scanners.length; i++) {
+ if (firstKey != null) {
+ if (seekTo(i, firstKey)) {
+ continue;
+ }
+ }
+ while (getNext(i)) {
+ if (columnMatch(i)) {
+ break;
+ }
+ }
+ }
+ }
+
+ /**
+ * For a particular column i, find all the matchers defined for the column.
+ * Compare the column family and column key using the matchers. The first one
+ * that matches returns true. If no matchers are successful, return false.
+ *
+ * @param i index into the keys array
+ * @return true if any of the matchers for the column match the column family
+ * and the column key.
+ * @throws IOException
+ */
+ boolean columnMatch(int i) throws IOException {
+ return columnMatch(keys[i]);
+ }
+
+ /**
+ * Get the next set of values for this scanner.
+ *
+ * @param results All the results found for the matched row
+ * @return true if a match was found
+ * @throws IOException
+ *
+ * @see org.apache.hadoop.hbase.regionserver.InternalScanner#next(org.apache.hadoop.hbase.HStoreKey, java.util.SortedMap)
+ */
+ @Override
+ public boolean next(List<KeyValue> results)
+ throws IOException {
+ if (this.scannerClosed) {
+ return false;
+ }
+ this.lock.readLock().lock();
+ try {
+ // Find the next viable row label (and timestamp).
+ KeyValue viable = getNextViableRow();
+ if (viable == null) {
+ return false;
+ }
+
+ // Grab all the values that match this row/timestamp
+ boolean addedItem = false;
+ for (int i = 0; i < keys.length; i++) {
+ // Fetch the data
+ while ((keys[i] != null) &&
+ (this.store.comparator.compareRows(this.keys[i], viable) == 0)) {
+ // If we are doing a wild card match or there are multiple matchers
+ // per column, we need to scan all the older versions of this row
+ // to pick up the rest of the family members
+ if(!isWildcardScanner()
+ && !isMultipleMatchScanner()
+ && (keys[i].getTimestamp() != viable.getTimestamp())) {
+ break;
+ }
+ if (columnMatch(i)) {
+ // We only want the first result for any specific family member
+ // TODO: Do we have to keep a running list of column entries in
+ // the results across all of the StoreScanner? Like we do
+ // doing getFull?
+ if (!results.contains(keys[i])) {
+ results.add(keys[i]);
+ addedItem = true;
+ }
+ }
+
+ if (!getNext(i)) {
+ closeSubScanner(i);
+ }
+ }
+ // Advance the current scanner beyond the chosen row, to
+ // a valid timestamp, so we're ready next time.
+ while ((keys[i] != null) &&
+ ((this.store.comparator.compareRows(this.keys[i], viable) <= 0) ||
+ (keys[i].getTimestamp() > this.timestamp) ||
+ !columnMatch(i))) {
+ getNext(i);
+ }
+ }
+ return addedItem;
+ } finally {
+ this.lock.readLock().unlock();
+ }
+ }
+
+ /*
+ * @return The next viable row as a <code>KeyValue</code>, or null if none.
+ * @throws IOException
+ */
+ private KeyValue getNextViableRow() throws IOException {
+ // Find the next viable row label (and timestamp).
+ KeyValue viable = null;
+ long viableTimestamp = -1;
+ long ttl = store.ttl;
+ for (int i = 0; i < keys.length; i++) {
+ // The first key that we find that matches may have a timestamp greater
+ // than the one we're looking for. We have to advance to see if there
+ // is an older version present, since timestamps are sorted descending
+ while (keys[i] != null &&
+ keys[i].getTimestamp() > this.timestamp &&
+ columnMatch(i) &&
+ getNext(i)) {
+ if (columnMatch(i)) {
+ break;
+ }
+ }
+ if((keys[i] != null)
+ // If we get here and keys[i] is not null, we already know that the
+ // column matches and the timestamp of the row is less than or equal
+ // to this.timestamp, so we do not need to test that here
+ && ((viable == null) ||
+ (this.store.comparator.compareRows(this.keys[i], viable) < 0) ||
+ ((this.store.comparator.compareRows(this.keys[i], viable) == 0) &&
+ (keys[i].getTimestamp() > viableTimestamp)))) {
+ if (ttl == HConstants.FOREVER || now < keys[i].getTimestamp() + ttl) {
+ viable = keys[i];
+ } else {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("getNextViableRow :" + keys[i] + ": expired, skipped");
+ }
+ }
+ }
+ }
+ return viable;
+ }
+
+ /*
+ * The user didn't want to start scanning at the first row. This method
+ * seeks to the requested row.
+ *
+ * @param i which scanner to advance
+ * @param firstKey seek to this key
+ * @return true if the scanner is properly primed on a usable key, or true if
+ * the key was not found and this sub-scanner is exhausted; false if the seek
+ * landed on a key that is not usable (e.g. expired).
+ */
+ private boolean seekTo(int i, final KeyValue firstKey)
+ throws IOException {
+ if (firstKey == null) {
+ if (!this.scanners[i].seekTo()) {
+ closeSubScanner(i);
+ return true;
+ }
+ } else {
+ // TODO: sort columns and pass in column as part of key so we get closer.
+ if (!Store.getClosest(this.scanners[i], firstKey)) {
+ closeSubScanner(i);
+ return true;
+ }
+ }
+ this.keys[i] = this.scanners[i].getKeyValue();
+ return isGoodKey(this.keys[i]);
+ }
+
+ /**
+ * Get the next value from the specified reader.
+ *
+ * @param i which reader to fetch next value from
+ * @return true if there is more data available
+ */
+ private boolean getNext(int i) throws IOException {
+ boolean result = false;
+ while (true) {
+ if ((this.scanners[i].isSeeked() && !this.scanners[i].next()) ||
+ (!this.scanners[i].isSeeked() && !this.scanners[i].seekTo())) {
+ closeSubScanner(i);
+ break;
+ }
+ this.keys[i] = this.scanners[i].getKeyValue();
+ if (isGoodKey(this.keys[i])) {
+ result = true;
+ break;
+ }
+ }
+ return result;
+ }
+
+ /*
+ * @param kv
+ * @return True if good key candidate.
+ */
+ private boolean isGoodKey(final KeyValue kv) {
+ return !Store.isExpired(kv, this.store.ttl, this.now);
+ }
+
+ /** Close down the indicated reader. */
+ private void closeSubScanner(int i) {
+ this.scanners[i] = null;
+ this.keys[i] = null;
+ }
+
+ /** Shut it down! */
+ public void close() {
+ if (!this.scannerClosed) {
+ this.store.deleteChangedReaderObserver(this);
+ try {
+ for(int i = 0; i < this.scanners.length; i++) {
+ closeSubScanner(i);
+ }
+ } finally {
+ this.scannerClosed = true;
+ }
+ }
+ }
+
+ // Implementation of ChangedReadersObserver
+
+ public void updateReaders() throws IOException {
+ this.lock.writeLock().lock();
+ try {
+ // The keys are currently lined up at the next row to fetch. Pass in
+ // the current row as 'first' row and readers will be opened and cued
+ // up so a future call to next will start here.
+ KeyValue viable = getNextViableRow();
+ openScanner(viable.getRow());
+ LOG.debug("Replaced Scanner Readers at row " +
+ Bytes.toString(viable.getRow()));
+ } finally {
+ this.lock.writeLock().unlock();
+ }
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java b/src/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
new file mode 100644
index 0000000..5d0bdc4
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
@@ -0,0 +1,314 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.TreeSet;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.filter.RowFilterInterface;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Scanner scans both the memcache and the HStore
+ */
+class StoreScanner implements InternalScanner, ChangedReadersObserver {
+ static final Log LOG = LogFactory.getLog(StoreScanner.class);
+
+ private InternalScanner [] scanners;
+ private List<KeyValue> [] resultSets;
+ private boolean wildcardMatch = false;
+ private boolean multipleMatchers = false;
+ private RowFilterInterface dataFilter;
+ private Store store;
+ private final long timestamp;
+ private final NavigableSet<byte []> columns;
+
+ // Indices for memcache scanner and hstorefile scanner.
+ private static final int MEMS_INDEX = 0;
+ private static final int HSFS_INDEX = MEMS_INDEX + 1;
+
+ // Used around transition from no storefile to the first.
+ private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
+
+ // Used to indicate that the scanner has closed (see HBASE-1107)
+ private final AtomicBoolean closing = new AtomicBoolean(false);
+
+ /** Create a Scanner with a handle on the memcache and HStore files. */
+ @SuppressWarnings("unchecked")
+ StoreScanner(Store store, final NavigableSet<byte []> targetCols,
+ byte [] firstRow, long timestamp, RowFilterInterface filter)
+ throws IOException {
+ this.store = store;
+ this.dataFilter = filter;
+ if (null != dataFilter) {
+ dataFilter.reset();
+ }
+ this.scanners = new InternalScanner[2];
+ this.resultSets = new List[scanners.length];
+ // Save these args in case we need them later when handling a change in
+ // readers; see updateReaders below.
+ this.timestamp = timestamp;
+ this.columns = targetCols;
+ try {
+ scanners[MEMS_INDEX] =
+ store.memcache.getScanner(timestamp, targetCols, firstRow);
+ scanners[HSFS_INDEX] =
+ new StoreFileScanner(store, timestamp, targetCols, firstRow);
+ for (int i = MEMS_INDEX; i < scanners.length; i++) {
+ checkScannerFlags(i);
+ }
+ } catch (IOException e) {
+ doClose();
+ throw e;
+ }
+
+ // Advance to the first key in each scanner.
+ // All results will match the required column-set and scanTime.
+ for (int i = MEMS_INDEX; i < scanners.length; i++) {
+ setupScanner(i);
+ }
+ this.store.addChangedReaderObserver(this);
+ }
+
+ /*
+ * @param i Index.
+ */
+ private void checkScannerFlags(final int i) {
+ if (this.scanners[i].isWildcardScanner()) {
+ this.wildcardMatch = true;
+ }
+ if (this.scanners[i].isMultipleMatchScanner()) {
+ this.multipleMatchers = true;
+ }
+ }
+
+ /*
+ * Do scanner setup.
+ * @param i
+ * @throws IOException
+ */
+ private void setupScanner(final int i) throws IOException {
+ this.resultSets[i] = new ArrayList<KeyValue>();
+ if (this.scanners[i] != null && !this.scanners[i].next(this.resultSets[i])) {
+ closeScanner(i);
+ }
+ }
+
+ /** @return true if the scanner is a wild card scanner */
+ public boolean isWildcardScanner() {
+ return this.wildcardMatch;
+ }
+
+ /** @return true if the scanner is a multiple match scanner */
+ public boolean isMultipleMatchScanner() {
+ return this.multipleMatchers;
+ }
+
+ public boolean next(List<KeyValue> results)
+ throws IOException {
+ this.lock.readLock().lock();
+ try {
+ // Filtered flag is set by filters. If a cell has been 'filtered out'
+ // -- i.e. it is not to be returned to the caller -- the flag is 'true'.
+ boolean filtered = true;
+ boolean moreToFollow = true;
+ while (filtered && moreToFollow) {
+ // Find the lowest-possible key.
+ KeyValue chosen = null;
+ long chosenTimestamp = -1;
+ for (int i = 0; i < this.scanners.length; i++) {
+ KeyValue kv = this.resultSets[i] == null || this.resultSets[i].isEmpty()?
+ null: this.resultSets[i].get(0);
+ if (kv == null) {
+ continue;
+ }
+ if (scanners[i] != null &&
+ (chosen == null ||
+ (this.store.comparator.compareRows(kv, chosen) < 0) ||
+ ((this.store.comparator.compareRows(kv, chosen) == 0) &&
+ (kv.getTimestamp() > chosenTimestamp)))) {
+ chosen = kv;
+ chosenTimestamp = chosen.getTimestamp();
+ }
+ }
+
+ // Filter whole row by row key?
+ filtered = dataFilter == null || chosen == null? false:
+ dataFilter.filterRowKey(chosen.getBuffer(), chosen.getRowOffset(),
+ chosen.getRowLength());
+
+ // Store results for each sub-scanner.
+ if (chosenTimestamp >= 0 && !filtered) {
+ NavigableSet<KeyValue> deletes =
+ new TreeSet<KeyValue>(this.store.comparatorIgnoringType);
+ for (int i = 0; i < scanners.length && !filtered; i++) {
+ if ((scanners[i] != null && !filtered && moreToFollow &&
+ this.resultSets[i] != null && !this.resultSets[i].isEmpty())) {
+ // Test this resultset is for the 'chosen' row.
+ KeyValue firstkv = resultSets[i].get(0);
+ if (!this.store.comparator.matchingRows(firstkv, chosen)) {
+ continue;
+ }
+ // It's for the 'chosen' row; work it.
+ for (KeyValue kv: resultSets[i]) {
+ if (kv.isDeleteType()) {
+ deletes.add(kv);
+ } else if ((deletes.isEmpty() || !deletes.contains(kv)) &&
+ !filtered && moreToFollow && !results.contains(kv)) {
+ if (this.dataFilter != null) {
+ // Filter whole row by column data?
+ int rowlength = kv.getRowLength();
+ int columnoffset = kv.getColumnOffset(rowlength);
+ filtered = dataFilter.filterColumn(kv.getBuffer(),
+ kv.getRowOffset(), rowlength,
+ kv.getBuffer(), columnoffset, kv.getColumnLength(columnoffset),
+ kv.getBuffer(), kv.getValueOffset(), kv.getValueLength());
+ if (filtered) {
+ results.clear();
+ break;
+ }
+ }
+ results.add(kv);
+ /* REMOVING BECAUSE COULD BE BUNCH OF DELETES IN RESULTS
+ AND WE WANT TO INCLUDE THEM -- below short-circuit is
+ probably not wanted.
+ // If we are doing a wild card match or there are multiple
+ // matchers per column, we need to scan all the older versions of
+ // this row to pick up the rest of the family members
+ if (!wildcardMatch && !multipleMatchers &&
+ (kv.getTimestamp() != chosenTimestamp)) {
+ break;
+ }
+ */
+ }
+ }
+ // Move on to next row.
+ resultSets[i].clear();
+ if (!scanners[i].next(resultSets[i])) {
+ closeScanner(i);
+ }
+ }
+ }
+ }
+
+ moreToFollow = chosenTimestamp >= 0;
+ if (dataFilter != null) {
+ if (dataFilter.filterAllRemaining()) {
+ moreToFollow = false;
+ }
+ }
+
+ if (results.isEmpty() && !filtered) {
+ // There were no results found for this row. Mark it as
+ // 'filtered' out; otherwise we will not move on to the next row.
+ filtered = true;
+ }
+ }
+
+ // If we got no results, then there is no more to follow.
+ if (results == null || results.isEmpty()) {
+ moreToFollow = false;
+ }
+
+ // Make sure scanners closed if no more results
+ if (!moreToFollow) {
+ for (int i = 0; i < scanners.length; i++) {
+ if (null != scanners[i]) {
+ closeScanner(i);
+ }
+ }
+ }
+
+ return moreToFollow;
+ } finally {
+ this.lock.readLock().unlock();
+ }
+ }
+
+ /** Shut down a single scanner */
+ void closeScanner(int i) {
+ try {
+ try {
+ scanners[i].close();
+ } catch (IOException e) {
+ LOG.warn(Bytes.toString(store.storeName) + " failed closing scanner " +
+ i, e);
+ }
+ } finally {
+ scanners[i] = null;
+ resultSets[i] = null;
+ }
+ }
+
+ public void close() {
+ this.closing.set(true);
+ this.store.deleteChangedReaderObserver(this);
+ doClose();
+ }
+
+ private void doClose() {
+ for (int i = MEMS_INDEX; i < scanners.length; i++) {
+ if (scanners[i] != null) {
+ closeScanner(i);
+ }
+ }
+ }
+
+ // Implementation of ChangedReadersObserver
+
+ public void updateReaders() throws IOException {
+ if (this.closing.get()) {
+ return;
+ }
+ this.lock.writeLock().lock();
+ try {
+ Map<Long, StoreFile> map = this.store.getStorefiles();
+ if (this.scanners[HSFS_INDEX] == null && map != null && map.size() > 0) {
+ // Presume that we went from no readers to at least one -- need to put
+ // a HStoreScanner in place.
+ try {
+ // I think it's safe getting the key from the memcache at this stage -- it
+ // shouldn't have been flushed yet.
+ // TODO: MAKE SURE WE UPDATE FROM TRUNK.
+ this.scanners[HSFS_INDEX] = new StoreFileScanner(this.store,
+ this.timestamp, this.columns, this.resultSets[MEMS_INDEX].get(0).getRow());
+ checkScannerFlags(HSFS_INDEX);
+ setupScanner(HSFS_INDEX);
+ LOG.debug("Added a StoreFileScanner to outstanding HStoreScanner");
+ } catch (IOException e) {
+ doClose();
+ throw e;
+ }
+ }
+ } finally {
+ this.lock.writeLock().unlock();
+ }
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/WrongRegionException.java b/src/java/org/apache/hadoop/hbase/regionserver/WrongRegionException.java
new file mode 100644
index 0000000..acff9fe
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/regionserver/WrongRegionException.java
@@ -0,0 +1,42 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+
+/**
+ * Thrown when a request contains a key which is not part of this region
+ */
+public class WrongRegionException extends IOException {
+ private static final long serialVersionUID = 993179627856392526L;
+
+ /** constructor */
+ public WrongRegionException() {
+ super();
+ }
+
+ /**
+ * Constructor
+ * @param s message
+ */
+ public WrongRegionException(String s) {
+ super(s);
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/metrics/RegionServerMetrics.java b/src/java/org/apache/hadoop/hbase/regionserver/metrics/RegionServerMetrics.java
new file mode 100644
index 0000000..9a73f55
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/regionserver/metrics/RegionServerMetrics.java
@@ -0,0 +1,163 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.metrics;
+
+import java.lang.management.ManagementFactory;
+import java.lang.management.MemoryUsage;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.metrics.MetricsRate;
+import org.apache.hadoop.hbase.util.Strings;
+import org.apache.hadoop.metrics.MetricsContext;
+import org.apache.hadoop.metrics.MetricsRecord;
+import org.apache.hadoop.metrics.MetricsUtil;
+import org.apache.hadoop.metrics.Updater;
+import org.apache.hadoop.metrics.jvm.JvmMetrics;
+import org.apache.hadoop.metrics.util.MetricsIntValue;
+import org.apache.hadoop.metrics.util.MetricsTimeVaryingRate;
+
+/**
+ * This class is for maintaining the various regionserver statistics
+ * and publishing them through the metrics interfaces.
+ * <p>
+ * This class has a number of metrics variables that are publicly accessible;
+ * these variables (objects) have methods to update their values.
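+ * <p>
+ * A minimal sketch of how the owning regionserver might feed these metrics
+ * (the counter variables here are hypothetical; the periodic push to the
+ * metrics system happens in doUpdates below):
+ * <pre>
+ *   RegionServerMetrics metrics = new RegionServerMetrics();
+ *   metrics.incrementRequests(1);                  // per handled request
+ *   metrics.regions.set(onlineRegionCount);        // refreshed periodically
+ *   metrics.stores.set(storeCount);
+ *   metrics.storefiles.set(storefileCount);
+ *   metrics.memcacheSizeMB.set((int) (memcacheBytes / (1024 * 1024)));
+ * </pre>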
+ */
+public class RegionServerMetrics implements Updater {
+ private final Log LOG = LogFactory.getLog(this.getClass());
+ private final MetricsRecord metricsRecord;
+ private long lastUpdate = System.currentTimeMillis();
+ private static final int MB = 1024*1024;
+
+ public final MetricsTimeVaryingRate atomicIncrementTime =
+ new MetricsTimeVaryingRate("atomicIncrementTime");
+
+ /**
+ * Count of regions carried by this regionserver
+ */
+ public final MetricsIntValue regions = new MetricsIntValue("regions");
+
+ /*
+ * Count of requests to the regionservers since last call to metrics update
+ */
+ private final MetricsRate requests = new MetricsRate("requests");
+
+ /**
+ * Count of stores open on the regionserver.
+ */
+ public final MetricsIntValue stores = new MetricsIntValue("stores");
+
+ /**
+ * Count of storefiles open on the regionserver.
+ */
+ public final MetricsIntValue storefiles = new MetricsIntValue("storefiles");
+
+ /**
+ * Sum of all the storefile index sizes in this regionserver in MB
+ */
+ public final MetricsIntValue storefileIndexSizeMB =
+ new MetricsIntValue("storefileIndexSizeMB");
+
+ /**
+ * Sum of all the memcache sizes in this regionserver in MB
+ */
+ public final MetricsIntValue memcacheSizeMB =
+ new MetricsIntValue("memcacheSizeMB");
+
+ public RegionServerMetrics() {
+ MetricsContext context = MetricsUtil.getContext("hbase");
+ metricsRecord = MetricsUtil.createRecord(context, "regionserver");
+ String name = Thread.currentThread().getName();
+ metricsRecord.setTag("RegionServer", name);
+ context.registerUpdater(this);
+ // Add jvmmetrics.
+ JvmMetrics.init("RegionServer", name);
+ LOG.info("Initialized");
+ }
+
+ public void shutdown() {
+ // nought to do.
+ }
+
+ /**
+ * Since this object is a registered updater, this method will be called
+ * periodically, e.g. every 5 seconds.
+ * @param unused
+ */
+ public void doUpdates(MetricsContext unused) {
+ synchronized (this) {
+ this.stores.pushMetric(this.metricsRecord);
+ this.storefiles.pushMetric(this.metricsRecord);
+ this.storefileIndexSizeMB.pushMetric(this.metricsRecord);
+ this.memcacheSizeMB.pushMetric(this.metricsRecord);
+ this.regions.pushMetric(this.metricsRecord);
+ this.requests.pushMetric(this.metricsRecord);
+ }
+ this.metricsRecord.update();
+ this.lastUpdate = System.currentTimeMillis();
+ }
+
+ public void resetAllMinMax() {
+ // Nothing to do
+ }
+
+ /**
+ * @return Count of requests.
+ */
+ public float getRequests() {
+ return this.requests.getPreviousIntervalValue();
+ }
+
+ /**
+ * @param inc How much to add to requests.
+ */
+ public void incrementRequests(final int inc) {
+ this.requests.inc(inc);
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder();
+ int seconds = (int)((System.currentTimeMillis() - this.lastUpdate)/1000);
+ if (seconds == 0) {
+ seconds = 1;
+ }
+ sb = Strings.appendKeyValue(sb, "request",
+ Float.valueOf(this.requests.getPreviousIntervalValue()));
+ sb = Strings.appendKeyValue(sb, "regions",
+ Integer.valueOf(this.regions.get()));
+ sb = Strings.appendKeyValue(sb, "stores",
+ Integer.valueOf(this.stores.get()));
+ sb = Strings.appendKeyValue(sb, "storefiles",
+ Integer.valueOf(this.storefiles.get()));
+ sb = Strings.appendKeyValue(sb, "storefileIndexSize",
+ Integer.valueOf(this.storefileIndexSizeMB.get()));
+ sb = Strings.appendKeyValue(sb, "memcacheSize",
+ Integer.valueOf(this.memcacheSizeMB.get()));
+ // Duplicated from JvmMetrics because the metrics there are private and
+ // therefore inaccessible from here.
+ MemoryUsage memory =
+ ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
+ sb = Strings.appendKeyValue(sb, "usedHeap",
+ Long.valueOf(memory.getUsed()/MB));
+ sb = Strings.appendKeyValue(sb, "maxHeap",
+ Long.valueOf(memory.getMax()/MB));
+ return sb.toString();
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/tableindexed/IndexedRegion.java b/src/java/org/apache/hadoop/hbase/regionserver/tableindexed/IndexedRegion.java
new file mode 100644
index 0000000..28e2481
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/regionserver/tableindexed/IndexedRegion.java
@@ -0,0 +1,346 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.tableindexed;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.NavigableMap;
+import java.util.NavigableSet;
+import java.util.SortedMap;
+import java.util.TreeMap;
+import java.util.TreeSet;
+import java.util.Map.Entry;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.tableindexed.IndexSpecification;
+import org.apache.hadoop.hbase.client.tableindexed.IndexedTable;
+import org.apache.hadoop.hbase.io.BatchOperation;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.regionserver.FlushRequester;
+import org.apache.hadoop.hbase.regionserver.HLog;
+import org.apache.hadoop.hbase.regionserver.transactional.TransactionalRegion;
+import org.apache.hadoop.hbase.util.Bytes;
+
+class IndexedRegion extends TransactionalRegion {
+
+ private static final Log LOG = LogFactory.getLog(IndexedRegion.class);
+
+ private final HBaseConfiguration conf;
+ private Map<IndexSpecification, HTable> indexSpecToTable = new HashMap<IndexSpecification, HTable>();
+
+ public IndexedRegion(final Path basedir, final HLog log, final FileSystem fs,
+ final HBaseConfiguration conf, final HRegionInfo regionInfo,
+ final FlushRequester flushListener) {
+ super(basedir, log, fs, conf, regionInfo, flushListener);
+ this.conf = conf;
+ }
+
+ private synchronized HTable getIndexTable(IndexSpecification index)
+ throws IOException {
+ HTable indexTable = indexSpecToTable.get(index);
+ if (indexTable == null) {
+ indexTable = new HTable(conf, index.getIndexedTableName(super
+ .getRegionInfo().getTableDesc().getName()));
+ indexSpecToTable.put(index, indexTable);
+ }
+ return indexTable;
+ }
+
+ private Collection<IndexSpecification> getIndexes() {
+ return super.getRegionInfo().getTableDesc().getIndexes();
+ }
+
+ /**
+ * @param batchUpdate
+ * @param lockid
+ * @param writeToWAL if true, then we write this update to the log
+ * @throws IOException
+ */
+ @Override
+ public void batchUpdate(BatchUpdate batchUpdate, Integer lockid, boolean writeToWAL)
+ throws IOException {
+ updateIndexes(batchUpdate); // Do this first because we will want to see the old row
+ super.batchUpdate(batchUpdate, lockid, writeToWAL);
+ }
+
+ private void updateIndexes(BatchUpdate batchUpdate) throws IOException {
+ List<IndexSpecification> indexesToUpdate = new LinkedList<IndexSpecification>();
+
+ // Find the indexes we need to update
+ for (IndexSpecification index : getIndexes()) {
+ if (possiblyAppliesToIndex(index, batchUpdate)) {
+ indexesToUpdate.add(index);
+ }
+ }
+
+ if (indexesToUpdate.size() == 0) {
+ return;
+ }
+
+ NavigableSet<byte[]> neededColumns = getColumnsForIndexes(indexesToUpdate);
+
+ NavigableMap<byte[], byte[]> newColumnValues =
+ getColumnsFromBatchUpdate(batchUpdate);
+ Map<byte[], Cell> oldColumnCells = super.getFull(batchUpdate.getRow(),
+ neededColumns, HConstants.LATEST_TIMESTAMP, 1, null);
+
+ // Handle delete batch updates. Go back and get the next older values
+ for (BatchOperation op : batchUpdate) {
+ if (!op.isPut()) {
+ Cell current = oldColumnCells.get(op.getColumn());
+ if (current != null) {
+ // TODO: Fix this profligacy!!! St.Ack
+ Cell [] older = Cell.createSingleCellArray(super.get(batchUpdate.getRow(),
+ op.getColumn(), current.getTimestamp(), 1));
+ if (older != null && older.length > 0) {
+ newColumnValues.put(op.getColumn(), older[0].getValue());
+ }
+ }
+ }
+ }
+
+ // Add the old values to the new if they are not there
+ for (Entry<byte[], Cell> oldEntry : oldColumnCells.entrySet()) {
+ if (!newColumnValues.containsKey(oldEntry.getKey())) {
+ newColumnValues.put(oldEntry.getKey(), oldEntry.getValue().getValue());
+ }
+ }
+
+
+ Iterator<IndexSpecification> indexIterator = indexesToUpdate.iterator();
+ while (indexIterator.hasNext()) {
+ IndexSpecification indexSpec = indexIterator.next();
+ if (!doesApplyToIndex(indexSpec, newColumnValues)) {
+ indexIterator.remove();
+ }
+ }
+
+ SortedMap<byte[], byte[]> oldColumnValues = convertToValueMap(oldColumnCells);
+
+ for (IndexSpecification indexSpec : indexesToUpdate) {
+ removeOldIndexEntry(indexSpec, batchUpdate.getRow(), oldColumnValues);
+ updateIndex(indexSpec, batchUpdate.getRow(), newColumnValues);
+ }
+ }
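+
+ // Illustrative walk-through of the flow above (hypothetical row and column
+ // names): a base-table update to row "r1" that puts "info:name" = "bob",
+ // where the old value was "alice", first deletes the index row built by the
+ // index's KeyGenerator from the *old* values, then writes a new index row
+ // built from the merged old+new values, carrying
+ // IndexedTable.INDEX_BASE_ROW_COLUMN = "r1" plus the indexed columns.
+ // Note that possiblyAppliesToIndex() only asks whether the update touches an
+ // indexed column, while doesApplyToIndex() additionally requires every
+ // indexed column to have a value before the new index row is written.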
+
+ /** Return the columns needed for the update. */
+ private NavigableSet<byte[]> getColumnsForIndexes(Collection<IndexSpecification> indexes) {
+ NavigableSet<byte[]> neededColumns = new TreeSet<byte[]>(Bytes.BYTES_COMPARATOR);
+ for (IndexSpecification indexSpec : indexes) {
+ for (byte[] col : indexSpec.getAllColumns()) {
+ neededColumns.add(col);
+ }
+ }
+ return neededColumns;
+ }
+
+ private void removeOldIndexEntry(IndexSpecification indexSpec, byte[] row,
+ SortedMap<byte[], byte[]> oldColumnValues) throws IOException {
+ for (byte[] indexedCol : indexSpec.getIndexedColumns()) {
+ if (!oldColumnValues.containsKey(indexedCol)) {
+ LOG.debug("Index [" + indexSpec.getIndexId()
+ + "] not trying to remove old entry for row ["
+ + Bytes.toString(row) + "] because col ["
+ + Bytes.toString(indexedCol) + "] is missing");
+ return;
+ }
+ }
+
+ byte[] oldIndexRow = indexSpec.getKeyGenerator().createIndexKey(row,
+ oldColumnValues);
+ LOG.debug("Index [" + indexSpec.getIndexId() + "] removing old entry ["
+ + Bytes.toString(oldIndexRow) + "]");
+ getIndexTable(indexSpec).deleteAll(oldIndexRow);
+ }
+
+ private NavigableMap<byte[], byte[]> getColumnsFromBatchUpdate(BatchUpdate b) {
+ NavigableMap<byte[], byte[]> columnValues = new TreeMap<byte[], byte[]>(
+ Bytes.BYTES_COMPARATOR);
+ for (BatchOperation op : b) {
+ if (op.isPut()) {
+ columnValues.put(op.getColumn(), op.getValue());
+ }
+ }
+ return columnValues;
+ }
+
+ /** Ask if this update *could* apply to the index. It may not actually apply
+ * if some of the needed columns are missing.
+ *
+ * @param indexSpec
+ * @param b
+ * @return true if the update could possibly apply to the index.
+ */
+ private boolean possiblyAppliesToIndex(IndexSpecification indexSpec, BatchUpdate b) {
+ for (BatchOperation op : b) {
+ if (indexSpec.containsColumn(op.getColumn())) {
+ return true;
+ }
+ }
+ return false;
+ }
+
+ /** Ask if this update does apply to the index.
+ *
+ * @param indexSpec
+ * @param columnValues
+ * @return true if the update applies to the index.
+ */
+ private boolean doesApplyToIndex(IndexSpecification indexSpec, SortedMap<byte[], byte[]> columnValues) {
+
+ for (byte [] neededCol : indexSpec.getIndexedColumns()) {
+ if (! columnValues.containsKey(neededCol)) {
+ LOG.debug("Index [" + indexSpec.getIndexId() + "] can't be updated because ["
+ + Bytes.toString(neededCol) + "] is missing");
+ return false;
+ }
+ }
+ return true;
+ }
+
+ private void updateIndex(IndexSpecification indexSpec, byte[] row,
+ SortedMap<byte[], byte[]> columnValues) throws IOException {
+ BatchUpdate indexUpdate = createIndexUpdate(indexSpec, row, columnValues);
+ getIndexTable(indexSpec).commit(indexUpdate);
+ LOG.debug("Index [" + indexSpec.getIndexId() + "] adding new entry ["
+ + Bytes.toString(indexUpdate.getRow()) + "] for row ["
+ + Bytes.toString(row) + "]");
+
+ }
+
+ private BatchUpdate createIndexUpdate(IndexSpecification indexSpec,
+ byte[] row, SortedMap<byte[], byte[]> columnValues) {
+ byte[] indexRow = indexSpec.getKeyGenerator().createIndexKey(row,
+ columnValues);
+ BatchUpdate update = new BatchUpdate(indexRow);
+
+ update.put(IndexedTable.INDEX_BASE_ROW_COLUMN, row);
+
+ for (byte[] col : indexSpec.getIndexedColumns()) {
+ byte[] val = columnValues.get(col);
+ if (val == null) {
+ throw new RuntimeException("Unexpected missing column value. ["+Bytes.toString(col)+"]");
+ }
+ update.put(col, val);
+ }
+
+ for (byte [] col : indexSpec.getAdditionalColumns()) {
+ byte[] val = columnValues.get(col);
+ if (val != null) {
+ update.put(col, val);
+ }
+ }
+
+ return update;
+ }
+
+ @Override
+ public void deleteAll(final byte[] row, final long ts, final Integer lockid)
+ throws IOException {
+
+ if (getIndexes().size() != 0) {
+
+ // Need all columns
+ NavigableSet<byte[]> neededColumns = getColumnsForIndexes(getIndexes());
+
+ Map<byte[], Cell> oldColumnCells = super.getFull(row,
+ neededColumns, HConstants.LATEST_TIMESTAMP, 1, null);
+ SortedMap<byte[], byte[]> oldColumnValues = convertToValueMap(oldColumnCells);
+
+ for (IndexSpecification indexSpec : getIndexes()) {
+ removeOldIndexEntry(indexSpec, row, oldColumnValues);
+ }
+
+ // Handle if there is still a version visible.
+ if (ts != HConstants.LATEST_TIMESTAMP) {
+ Map<byte[], Cell> currentColumnCells = super.getFull(row,
+ neededColumns, ts, 1, null);
+ SortedMap<byte[], byte[]> currentColumnValues = convertToValueMap(currentColumnCells);
+
+ for (IndexSpecification indexSpec : getIndexes()) {
+ if (doesApplyToIndex(indexSpec, currentColumnValues)) {
+ updateIndex(indexSpec, row, currentColumnValues);
+ }
+ }
+ }
+ }
+ super.deleteAll(row, ts, lockid);
+ }
+
+ private SortedMap<byte[], byte[]> convertToValueMap(
+ Map<byte[], Cell> cellMap) {
+ SortedMap<byte[], byte[]> currentColumnValues = new TreeMap<byte[], byte[]>(Bytes.BYTES_COMPARATOR);
+ for(Entry<byte[], Cell> entry : cellMap.entrySet()) {
+ currentColumnValues.put(entry.getKey(), entry.getValue().getValue());
+ }
+ return currentColumnValues;
+ }
+
+ @Override
+ public void deleteAll(final byte[] row, byte[] column, final long ts,
+ final Integer lockid) throws IOException {
+ List<IndexSpecification> indexesToUpdate = new LinkedList<IndexSpecification>();
+
+ for(IndexSpecification indexSpec : getIndexes()) {
+ if (indexSpec.containsColumn(column)) {
+ indexesToUpdate.add(indexSpec);
+ }
+ }
+
+ NavigableSet<byte[]> neededColumns = getColumnsForIndexes(indexesToUpdate);
+ Map<byte[], Cell> oldColumnCells = super.getFull(row,
+ neededColumns, HConstants.LATEST_TIMESTAMP, 1, null);
+ SortedMap<byte [], byte[]> oldColumnValues = convertToValueMap(oldColumnCells);
+
+ for (IndexSpecification indexSpec : indexesToUpdate) {
+ removeOldIndexEntry(indexSpec, row, oldColumnValues);
+ }
+
+ // Handle if there is still a version visible.
+ if (ts != HConstants.LATEST_TIMESTAMP) {
+ Map<byte[], Cell> currentColumnCells = super.getFull(row,
+ neededColumns, ts, 1, null);
+ SortedMap<byte[], byte[]> currentColumnValues = convertToValueMap(currentColumnCells);
+
+ for (IndexSpecification indexSpec : getIndexes()) {
+ if (doesApplyToIndex(indexSpec, currentColumnValues)) {
+ updateIndex(indexSpec, row, currentColumnValues);
+ }
+ }
+ }
+
+ super.deleteAll(row, column, ts, lockid);
+ }
+
+}
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/tableindexed/IndexedRegionServer.java b/src/java/org/apache/hadoop/hbase/regionserver/tableindexed/IndexedRegionServer.java
new file mode 100644
index 0000000..71fc1dd
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/regionserver/tableindexed/IndexedRegionServer.java
@@ -0,0 +1,74 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.tableindexed;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.ipc.IndexedRegionInterface;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.transactional.TransactionalRegionServer;
+import org.apache.hadoop.util.Progressable;
+import org.apache.hadoop.hbase.ipc.HBaseRPCProtocolVersion;
+
+/**
+ * RegionServer which maintains secondary indexes.
+ *
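+ * <p>
+ * Sketch of how this implementation is typically selected in hbase-site.xml
+ * (the property names are assumptions based on how the regionserver class is
+ * normally plugged in; verify them against the deployed HConstants):
+ * <pre>
+ *   hbase.regionserver.class = org.apache.hadoop.hbase.ipc.IndexedRegionInterface
+ *   hbase.regionserver.impl  = org.apache.hadoop.hbase.regionserver.tableindexed.IndexedRegionServer
+ * </pre>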
+ **/
+public class IndexedRegionServer extends TransactionalRegionServer implements
+ IndexedRegionInterface {
+
+ public IndexedRegionServer(HBaseConfiguration conf) throws IOException {
+ this(new HServerAddress(conf.get(REGIONSERVER_ADDRESS,
+ DEFAULT_REGIONSERVER_ADDRESS)), conf);
+ }
+
+ public IndexedRegionServer(HServerAddress serverAddress,
+ HBaseConfiguration conf) throws IOException {
+ super(serverAddress, conf);
+ }
+
+ @Override
+ public long getProtocolVersion(final String protocol, final long clientVersion)
+ throws IOException {
+ if (protocol.equals(IndexedRegionInterface.class.getName())) {
+ return HBaseRPCProtocolVersion.versionID;
+ }
+ return super.getProtocolVersion(protocol, clientVersion);
+ }
+
+ @Override
+ protected HRegion instantiateRegion(final HRegionInfo regionInfo)
+ throws IOException {
+ HRegion r = new IndexedRegion(HTableDescriptor.getTableDir(super
+ .getRootDir(), regionInfo.getTableDesc().getName()), super.log, super
+ .getFileSystem(), super.conf, regionInfo, super.getFlushRequester());
+ r.initialize(null, new Progressable() {
+ public void progress() {
+ addProcessingMessage(regionInfo);
+ }
+ });
+ return r;
+ }
+
+}
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/transactional/CleanOldTransactionsChore.java b/src/java/org/apache/hadoop/hbase/regionserver/transactional/CleanOldTransactionsChore.java
new file mode 100644
index 0000000..a2a522a
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/regionserver/transactional/CleanOldTransactionsChore.java
@@ -0,0 +1,57 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.transactional;
+
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.hadoop.hbase.Chore;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+
+/**
+ * Cleans up committed transactions when they are no longer needed to verify
+ * pending transactions.
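+ * <p>
+ * The wake interval comes from the configuration; for example, a deployment
+ * could shorten it from the one-minute default to ten seconds by setting
+ * <pre>
+ *   hbase.transaction.clean.sleep = 10000
+ * </pre>
+ * in hbase-site.xml.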
+ */
+class CleanOldTransactionsChore extends Chore {
+
+ private static final String SLEEP_CONF = "hbase.transaction.clean.sleep";
+ private static final int DEFAULT_SLEEP = 60 * 1000;
+
+ private final TransactionalRegionServer regionServer;
+
+ /**
+ * @param regionServer
+ * @param stopRequest
+ */
+ public CleanOldTransactionsChore(
+ final TransactionalRegionServer regionServer,
+ final AtomicBoolean stopRequest) {
+ super(regionServer.getConfiguration().getInt(SLEEP_CONF, DEFAULT_SLEEP),
+ stopRequest);
+ this.regionServer = regionServer;
+ }
+
+ @Override
+ protected void chore() {
+ for (HRegion region : regionServer.getOnlineRegions()) {
+ ((TransactionalRegion) region).removeUnNeededCommitedTransactions();
+ }
+ }
+
+}
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/transactional/TransactionState.java b/src/java/org/apache/hadoop/hbase/regionserver/transactional/TransactionState.java
new file mode 100644
index 0000000..28c0a56
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/regionserver/transactional/TransactionState.java
@@ -0,0 +1,362 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.transactional;
+
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedSet;
+import java.util.TreeMap;
+import java.util.TreeSet;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.filter.RowFilterInterface;
+import org.apache.hadoop.hbase.filter.RowFilterSet;
+import org.apache.hadoop.hbase.filter.StopRowFilter;
+import org.apache.hadoop.hbase.filter.WhileMatchRowFilter;
+import org.apache.hadoop.hbase.io.BatchOperation;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Holds the state of a transaction.
+ */
+class TransactionState {
+
+ private static final Log LOG = LogFactory.getLog(TransactionState.class);
+
+ /** Current status. */
+ public enum Status {
+ /** Initial status, still performing operations. */
+ PENDING,
+ /**
+ * Checked if we can commit, and said yes. Still need to determine the
+ * global decision.
+ */
+ COMMIT_PENDING,
+ /** Committed. */
+ COMMITED,
+ /** Aborted. */
+ ABORTED
+ }
+
+ /**
+ * Simple container of the range of the scanners we've opened. Used to check
+ * for conflicting writes.
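+ * <p>
+ * For example, a ScanRange(null, null) contains every row; with startRow "a"
+ * and endRow "m", row "b" is contained but row "z" is not, and a null endRow
+ * leaves the range unbounded above. (Row literals here stand in for byte
+ * arrays.)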
+ */
+ private static class ScanRange {
+ protected byte[] startRow;
+ protected byte[] endRow;
+
+ public ScanRange(byte[] startRow, byte[] endRow) {
+ this.startRow = startRow;
+ this.endRow = endRow;
+ }
+
+ /**
+ * Check if this scan range contains the given key.
+ *
+ * @param rowKey
+ * @return boolean
+ */
+ public boolean contains(byte[] rowKey) {
+ if (startRow != null && Bytes.compareTo(rowKey, startRow) < 0) {
+ return false;
+ }
+ if (endRow != null && Bytes.compareTo(endRow, rowKey) < 0) {
+ return false;
+ }
+ return true;
+ }
+ }
+
+ private final HRegionInfo regionInfo;
+ private final long hLogStartSequenceId;
+ private final long transactionId;
+ private Status status;
+ private SortedSet<byte[]> readSet = new TreeSet<byte[]>(
+ Bytes.BYTES_COMPARATOR);
+ private List<BatchUpdate> writeSet = new LinkedList<BatchUpdate>();
+ private List<ScanRange> scanSet = new LinkedList<ScanRange>();
+ private Set<TransactionState> transactionsToCheck = new HashSet<TransactionState>();
+ private int startSequenceNumber;
+ private Integer sequenceNumber;
+
+ TransactionState(final long transactionId, final long rLogStartSequenceId,
+ HRegionInfo regionInfo) {
+ this.transactionId = transactionId;
+ this.hLogStartSequenceId = rLogStartSequenceId;
+ this.regionInfo = regionInfo;
+ this.status = Status.PENDING;
+ }
+
+ void addRead(final byte[] rowKey) {
+ readSet.add(rowKey);
+ }
+
+ Set<byte[]> getReadSet() {
+ return readSet;
+ }
+
+ void addWrite(final BatchUpdate write) {
+ writeSet.add(write);
+ }
+
+ List<BatchUpdate> getWriteSet() {
+ return writeSet;
+ }
+
+ /**
+ * GetFull from the writeSet.
+ *
+ * @param row
+ * @param columns
+ * @param timestamp
+ * @return Map of column to Cell built from this transaction's write set, or null if nothing matches
+ */
+ Map<byte[], Cell> localGetFull(final byte[] row, final Set<byte[]> columns,
+ final long timestamp) {
+ Map<byte[], Cell> results = new TreeMap<byte[], Cell>(
+ Bytes.BYTES_COMPARATOR); // Must use the Bytes comparator because the keys are byte arrays.
+ for (BatchUpdate b : writeSet) {
+ if (!Bytes.equals(row, b.getRow())) {
+ continue;
+ }
+ if (b.getTimestamp() > timestamp) {
+ continue;
+ }
+ for (BatchOperation op : b) {
+ if (!op.isPut()
+ || (columns != null && !columns.contains(op.getColumn()))) {
+ continue;
+ }
+ results.put(op.getColumn(), new Cell(op.getValue(), b.getTimestamp()));
+ }
+ }
+ return results.size() == 0 ? null : results;
+ }
+
+ /**
+ * Get from the writeSet.
+ *
+ * @param row
+ * @param column
+ * @param timestamp
+ * @return Matching Cells from this transaction's write set, newest first, or null if nothing matches
+ */
+ Cell[] localGet(final byte[] row, final byte[] column, final long timestamp) {
+ ArrayList<Cell> results = new ArrayList<Cell>();
+
+ // Go in reverse order to put newest updates first in list
+ for (int i = writeSet.size() - 1; i >= 0; i--) {
+ BatchUpdate b = writeSet.get(i);
+
+ if (!Bytes.equals(row, b.getRow())) {
+ continue;
+ }
+ if (b.getTimestamp() > timestamp) {
+ continue;
+ }
+ for (BatchOperation op : b) {
+ if (!op.isPut() || !Bytes.equals(column, op.getColumn())) {
+ continue;
+ }
+ results.add(new Cell(op.getValue(), b.getTimestamp()));
+ }
+ }
+ return results.size() == 0 ? null : results
+ .toArray(new Cell[results.size()]);
+ }
+
+ void addTransactionToCheck(final TransactionState transaction) {
+ transactionsToCheck.add(transaction);
+ }
+
+ boolean hasConflict() {
+ for (TransactionState transactionState : transactionsToCheck) {
+ if (hasConflict(transactionState)) {
+ return true;
+ }
+ }
+ return false;
+ }
+
+ private boolean hasConflict(final TransactionState checkAgainst) {
+ if (checkAgainst.getStatus().equals(TransactionState.Status.ABORTED)) {
+ return false; // Cannot conflict with aborted transactions
+ }
+
+ for (BatchUpdate otherUpdate : checkAgainst.getWriteSet()) {
+ if (this.getReadSet().contains(otherUpdate.getRow())) {
+ LOG.debug("Transaction [" + this.toString()
+ + "] has read which conflicts with [" + checkAgainst.toString()
+ + "]: region [" + regionInfo.getRegionNameAsString() + "], row["
+ + Bytes.toString(otherUpdate.getRow()) + "]");
+ return true;
+ }
+ for (ScanRange scanRange : this.scanSet) {
+ if (scanRange.contains(otherUpdate.getRow())) {
+ LOG.debug("Transaction [" + this.toString()
+ + "] has scan which conflicts with [" + checkAgainst.toString()
+ + "]: region [" + regionInfo.getRegionNameAsString() + "], row["
+ + Bytes.toString(otherUpdate.getRow()) + "]");
+ return true;
+ }
+ }
+ }
+ return false;
+ }
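+
+ // Illustrative conflict check (hypothetical ids and rows): if this
+ // transaction read row "r5", or opened a scan whose ScanRange covers "r5",
+ // and a transaction T2 that committed during our lifetime wrote "r5", then
+ // hasConflict(T2) returns true and this transaction cannot commit. If T2
+ // only wrote rows outside our read and scan sets, the check passes.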
+
+ /**
+ * Get the status.
+ *
+ * @return Return the status.
+ */
+ Status getStatus() {
+ return status;
+ }
+
+ /**
+ * Set the status.
+ *
+ * @param status The status to set.
+ */
+ void setStatus(final Status status) {
+ this.status = status;
+ }
+
+ /**
+ * Get the startSequenceNumber.
+ *
+ * @return Return the startSequenceNumber.
+ */
+ int getStartSequenceNumber() {
+ return startSequenceNumber;
+ }
+
+ /**
+ * Set the startSequenceNumber.
+ *
+ * @param startSequenceNumber
+ */
+ void setStartSequenceNumber(final int startSequenceNumber) {
+ this.startSequenceNumber = startSequenceNumber;
+ }
+
+ /**
+ * Get the sequenceNumber.
+ *
+ * @return Return the sequenceNumber.
+ */
+ Integer getSequenceNumber() {
+ return sequenceNumber;
+ }
+
+ /**
+ * Set the sequenceNumber.
+ *
+ * @param sequenceNumber The sequenceNumber to set.
+ */
+ void setSequenceNumber(final Integer sequenceNumber) {
+ this.sequenceNumber = sequenceNumber;
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder result = new StringBuilder();
+ result.append("[transactionId: ");
+ result.append(transactionId);
+ result.append(" status: ");
+ result.append(status.name());
+ result.append(" read Size: ");
+ result.append(readSet.size());
+ result.append(" scan Size: ");
+ result.append(scanSet.size());
+ result.append(" write Size: ");
+ result.append(writeSet.size());
+ result.append(" startSQ: ");
+ result.append(startSequenceNumber);
+ if (sequenceNumber != null) {
+ result.append(" commitedSQ:");
+ result.append(sequenceNumber);
+ }
+ result.append("]");
+
+ return result.toString();
+ }
+
+ /**
+ * Get the transactionId.
+ *
+ * @return Return the transactionId.
+ */
+ long getTransactionId() {
+ return transactionId;
+ }
+
+ /**
+ * Get the hLogStartSequenceId.
+ *
+ * @return Return the hLogStartSequenceId.
+ */
+ long getHLogStartSequenceId() {
+ return hLogStartSequenceId;
+ }
+
+ void addScan(byte[] firstRow, RowFilterInterface filter) {
+ ScanRange scanRange = new ScanRange(firstRow, getEndRow(filter));
+ LOG.trace(String.format(
+ "Adding scan for transaction [%s], from [%s] to [%s]", transactionId,
+ scanRange.startRow == null ? "null" : Bytes
+ .toString(scanRange.startRow), scanRange.endRow == null ? "null"
+ : Bytes.toString(scanRange.endRow)));
+ scanSet.add(scanRange);
+ }
+
+ private byte[] getEndRow(RowFilterInterface filter) {
+ if (filter instanceof WhileMatchRowFilter) {
+ WhileMatchRowFilter wmrFilter = (WhileMatchRowFilter) filter;
+ if (wmrFilter.getInternalFilter() instanceof StopRowFilter) {
+ StopRowFilter stopFilter = (StopRowFilter) wmrFilter
+ .getInternalFilter();
+ return stopFilter.getStopRowKey();
+ }
+ } else if (filter instanceof RowFilterSet) {
+ RowFilterSet rowFilterSet = (RowFilterSet) filter;
+ if (rowFilterSet.getOperator()
+ .equals(RowFilterSet.Operator.MUST_PASS_ALL)) {
+ for (RowFilterInterface subFilter : rowFilterSet.getFilters()) {
+ byte[] endRow = getEndRow(subFilter);
+ if (endRow != null) {
+ return endRow;
+ }
+ }
+ }
+ }
+ return null;
+ }
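+
+ // Example of the end-row derivation above (hypothetical filters): a scan
+ // opened with new WhileMatchRowFilter(new StopRowFilter(Bytes.toBytes("row-z")))
+ // yields "row-z" as the end row, so addScan records the range
+ // [firstRow, "row-z"]. A filter type this method does not recognize yields
+ // null, i.e. an unbounded ScanRange for conflict checking.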
+
+}
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/transactional/TransactionalHLogManager.java b/src/java/org/apache/hadoop/hbase/regionserver/transactional/TransactionalHLogManager.java
new file mode 100644
index 0000000..ab6668f
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/regionserver/transactional/TransactionalHLogManager.java
@@ -0,0 +1,307 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.transactional;
+
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.io.BatchOperation;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.regionserver.HLog;
+import org.apache.hadoop.hbase.regionserver.HLogKey;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.SequenceFile;
+import org.apache.hadoop.util.Progressable;
+
+/**
+ * Responsible for writing and reading (recovering) transactional information
+ * to/from the HLog.
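+ * <p>
+ * Intended write sequence for a single transaction (a sketch built from the
+ * methods below; the HLogEdit plumbing is currently commented out pending the
+ * move to the KeyValue-based log format):
+ * <pre>
+ *   logManager.writeStartToLog(txId);
+ *   logManager.writeUpdateToLog(txId, batchUpdate); // once per BatchUpdate
+ *   logManager.writeCommitToLog(txId);              // or writeAbortToLog(txId)
+ * </pre>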
+ */
+class TransactionalHLogManager {
+ /** If this is a transactional log entry, these are the op codes. */
+ // TODO: Make these into types on the KeyValue!!! -- St.Ack
+ public enum TransactionalOperation {
+ /** start transaction */
+ START,
+ /** Equivalent to append in non-transactional environment */
+ WRITE,
+ /** Transaction commit entry */
+ COMMIT,
+ /** Abort transaction entry */
+ ABORT
+ }
+
+ private static final Log LOG = LogFactory
+ .getLog(TransactionalHLogManager.class);
+
+ private final HLog hlog;
+ private final FileSystem fileSystem;
+ private final HRegionInfo regionInfo;
+ private final HBaseConfiguration conf;
+
+ /**
+ * @param region
+ */
+ public TransactionalHLogManager(final TransactionalRegion region) {
+ this.hlog = region.getLog();
+ this.fileSystem = region.getFilesystem();
+ this.regionInfo = region.getRegionInfo();
+ this.conf = region.getConf();
+ }
+
+ // For Testing
+ TransactionalHLogManager(final HLog hlog, final FileSystem fileSystem,
+ final HRegionInfo regionInfo, final HBaseConfiguration conf) {
+ this.hlog = hlog;
+ this.fileSystem = fileSystem;
+ this.regionInfo = regionInfo;
+ this.conf = conf;
+ }
+
+ /**
+ * @param transactionId
+ * @throws IOException
+ */
+ public void writeStartToLog(final long transactionId) throws IOException {
+ /*
+ HLogEdit logEdit;
+ logEdit = new HLogEdit(transactionId, TransactionalOperation.START);
+*/
+ hlog.append(regionInfo, null/*logEdit*/);
+ }
+
+ /**
+ * @param transactionId
+ * @param update
+ * @throws IOException
+ */
+ public void writeUpdateToLog(final long transactionId,
+ final BatchUpdate update) throws IOException {
+
+ long commitTime = update.getTimestamp() == HConstants.LATEST_TIMESTAMP ? System
+ .currentTimeMillis()
+ : update.getTimestamp();
+
+ for (BatchOperation op : update) {
+ // COMMENTED OUT HLogEdit logEdit = new HLogEdit(transactionId, update.getRow(), op, commitTime);
+ hlog.append(regionInfo, update.getRow(), null /*logEdit*/);
+ }
+ }
+
+ /**
+ * @param transactionId
+ * @throws IOException
+ */
+ public void writeCommitToLog(final long transactionId) throws IOException {
+ /*HLogEdit logEdit;
+ logEdit = new HLogEdit(transactionId,
+ HLogEdit.TransactionalOperation.COMMIT);
+*/
+ hlog.append(regionInfo, null /*logEdit*/);
+ }
+
+ /**
+ * @param transactionId
+ * @throws IOException
+ */
+ public void writeAbortToLog(final long transactionId) throws IOException {
+ /*HLogEdit logEdit;
+ logEdit = new HLogEdit(transactionId, HLogEdit.TransactionalOperation.ABORT);
+*/
+ hlog.append(regionInfo, null /*logEdit*/);
+ }
+
+ /**
+ * @param reconstructionLog
+ * @param maxSeqID
+ * @param reporter
+ * @return map of batch updates
+ * @throws UnsupportedEncodingException
+ * @throws IOException
+ */
+ public Map<Long, List<BatchUpdate>> getCommitsFromLog(
+ final Path reconstructionLog, final long maxSeqID,
+ final Progressable reporter) throws UnsupportedEncodingException,
+ IOException {
+ if (reconstructionLog == null || !fileSystem.exists(reconstructionLog)) {
+ // Nothing to do.
+ return null;
+ }
+ // Check it's not empty.
+ FileStatus[] stats = fileSystem.listStatus(reconstructionLog);
+ if (stats == null || stats.length == 0) {
+ LOG.warn("Passed reconstruction log " + reconstructionLog
+ + " is zero-length");
+ return null;
+ }
+
+ SortedMap<Long, List<BatchUpdate>> pendingTransactionsById = new TreeMap<Long, List<BatchUpdate>>();
+ SortedMap<Long, List<BatchUpdate>> commitedTransactionsById = new TreeMap<Long, List<BatchUpdate>>();
+ Set<Long> abortedTransactions = new HashSet<Long>();
+
+ SequenceFile.Reader logReader = new SequenceFile.Reader(fileSystem,
+ reconstructionLog, conf);
+ /*
+ try {
+ HLogKey key = new HLogKey();
+ KeyValue val = new KeyValue();
+ long skippedEdits = 0;
+ long totalEdits = 0;
+ long startCount = 0;
+ long writeCount = 0;
+ long abortCount = 0;
+ long commitCount = 0;
+ // How many edits to apply before we send a progress report.
+ int reportInterval = conf.getInt("hbase.hstore.report.interval.edits",
+ 2000);
+
+ while (logReader.next(key, val)) {
+ LOG.debug("Processing edit: key: " + key.toString() + " val: "
+ + val.toString());
+ if (key.getLogSeqNum() < maxSeqID) {
+ skippedEdits++;
+ continue;
+ }
+ // TODO: Change all below so we are not doing a getRow and getColumn
+ // against a KeyValue. Each invocation creates a new instance. St.Ack.
+
+ // Check this edit is for me.
+
+ byte[] column = val.getKeyValue().getColumn();
+ Long transactionId = val.getTransactionId();
+ if (!val.isTransactionEntry() || HLog.isMetaColumn(column)
+ || !Bytes.equals(key.getRegionName(), regionInfo.getRegionName())) {
+ continue;
+ }
+
+ List<BatchUpdate> updates = pendingTransactionsById.get(transactionId);
+ switch (val.getOperation()) {
+
+ case START:
+ if (updates != null || abortedTransactions.contains(transactionId)
+ || commitedTransactionsById.containsKey(transactionId)) {
+ LOG.error("Processing start for transaction: " + transactionId
+ + ", but have already seen start message");
+ throw new IOException("Corrupted transaction log");
+ }
+ updates = new LinkedList<BatchUpdate>();
+ pendingTransactionsById.put(transactionId, updates);
+ startCount++;
+ break;
+
+ case WRITE:
+ if (updates == null) {
+ LOG.error("Processing edit for transaction: " + transactionId
+ + ", but have not seen start message");
+ throw new IOException("Corrupted transaction log");
+ }
+
+ BatchUpdate tranUpdate = new BatchUpdate(val.getKeyValue().getRow());
+ if (val.getKeyValue().getValue() != null) {
+ tranUpdate.put(val.getKeyValue().getColumn(),
+ val.getKeyValue().getValue());
+ } else {
+ tranUpdate.delete(val.getKeyValue().getColumn());
+ }
+ updates.add(tranUpdate);
+ writeCount++;
+ break;
+
+ case ABORT:
+ if (updates == null) {
+ LOG.error("Processing abort for transaction: " + transactionId
+ + ", but have not seen start message");
+ throw new IOException("Corrupted transaction log");
+ }
+ abortedTransactions.add(transactionId);
+ pendingTransactionsById.remove(transactionId);
+ abortCount++;
+ break;
+
+ case COMMIT:
+ if (updates == null) {
+ LOG.error("Processing commit for transaction: " + transactionId
+ + ", but have not seen start message");
+ throw new IOException("Corrupted transaction log");
+ }
+ if (abortedTransactions.contains(transactionId)) {
+ LOG.error("Processing commit for transaction: " + transactionId
+ + ", but also have abort message");
+ throw new IOException("Corrupted transaction log");
+ }
+ if (updates.size() == 0) {
+ LOG
+ .warn("Transaction " + transactionId
+ + " has no writes in log. ");
+ }
+ if (commitedTransactionsById.containsKey(transactionId)) {
+ LOG.error("Processing commit for transaction: " + transactionId
+ + ", but have already committed a transaction with that id");
+ throw new IOException("Corrupted transaction log");
+ }
+ pendingTransactionsById.remove(transactionId);
+ commitedTransactionsById.put(transactionId, updates);
+ commitCount++;
+
+ }
+ totalEdits++;
+
+ if (reporter != null && (totalEdits % reportInterval) == 0) {
+ reporter.progress();
+ }
+ }
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Read " + totalEdits + " transactional operations (skipped "
+ + skippedEdits + " because sequence id <= " + maxSeqID + "): "
+ + startCount + " starts, " + writeCount + " writes, " + abortCount
+ + " aborts, and " + commitCount + " commits.");
+ }
+ } finally {
+ logReader.close();
+ }
+
+ if (pendingTransactionsById.size() > 0) {
+ LOG
+ .info("Region log has "
+ + pendingTransactionsById.size()
+ + " unfinished transactions. Going to the transaction log to resolve");
+ throw new RuntimeException("Transaction log not yet implemented");
+ }
+ */
+
+ return commitedTransactionsById;
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/transactional/TransactionalRegion.java b/src/java/org/apache/hadoop/hbase/regionserver/transactional/TransactionalRegion.java
new file mode 100644
index 0000000..589a151
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/regionserver/transactional/TransactionalRegion.java
@@ -0,0 +1,718 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.transactional;
+
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+import java.util.Map.Entry;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.LeaseException;
+import org.apache.hadoop.hbase.LeaseListener;
+import org.apache.hadoop.hbase.Leases;
+import org.apache.hadoop.hbase.Leases.LeaseStillHeldException;
+import org.apache.hadoop.hbase.client.transactional.UnknownTransactionException;
+import org.apache.hadoop.hbase.filter.RowFilterInterface;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.regionserver.FlushRequester;
+import org.apache.hadoop.hbase.regionserver.HLog;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.InternalScanner;
+import org.apache.hadoop.hbase.regionserver.Store;
+import org.apache.hadoop.hbase.regionserver.transactional.TransactionState.Status;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.util.Progressable;
+
+/**
+ * Region which provides transactional support for atomic transactions.
+ * This is achieved with optimistic concurrency control (see
+ * http://www.seas.upenn.edu/~zives/cis650/papers/opt-cc.pdf). We keep track of
+ * the read and write sets for each transaction, and hold off on processing the
+ * writes. To decide whether a transaction can commit, we check its read and
+ * scan sets for overlaps with the write sets of all transactions that
+ * committed while it was running.
+ * <p>
+ * Because transactions can span multiple regions, all regions must agree to
+ * commit a transaction. The client side of this commit protocol is encoded in
+ * org.apache.hadoop.hbase.client.transactional.TransactionManager.
+ * <p>
+ * In the event of a failure of the client mid-commit (after we voted yes), we
+ * will have to consult the transaction log to determine the final decision of
+ * the transaction. This is not yet implemented.
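+ * <p>
+ * Illustrative conflict scenario (hypothetical ids and rows):
+ * <pre>
+ *   T1 begins and reads row "x"
+ *   T2 begins, writes row "x", and commits while T1 is still running
+ *   T1 asks to commit: its read set {"x"} overlaps T2's write set, so the
+ *   region votes no and T1 must abort and retry
+ * </pre>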
+ */
+public class TransactionalRegion extends HRegion {
+
+ private static final String LEASE_TIME = "hbase.transaction.leaseTime";
+ private static final int DEFAULT_LEASE_TIME = 60 * 1000;
+ private static final int LEASE_CHECK_FREQUENCY = 1000;
+
+ private static final String OLD_TRANSACTION_FLUSH = "hbase.transaction.flush";
+ private static final int DEFAULT_OLD_TRANSACTION_FLUSH = 100; // Do a flush if we have this many old transactions..
+
+
+ static final Log LOG = LogFactory.getLog(TransactionalRegion.class);
+
+ // Collection of active transactions (PENDING) keyed by id.
+ protected Map<String, TransactionState> transactionsById = new HashMap<String, TransactionState>();
+
+ // Map of recent transactions that are COMMIT_PENDING or COMMITED keyed by
+ // their sequence number
+ private SortedMap<Integer, TransactionState> commitedTransactionsBySequenceNumber = Collections
+ .synchronizedSortedMap(new TreeMap<Integer, TransactionState>());
+
+ // Collection of transactions that are COMMIT_PENDING
+ private Set<TransactionState> commitPendingTransactions = Collections
+ .synchronizedSet(new HashSet<TransactionState>());
+
+ private final Leases transactionLeases;
+ private AtomicInteger nextSequenceId = new AtomicInteger(0);
+ private Object commitCheckLock = new Object();
+ private TransactionalHLogManager logManager;
+ private final int oldTransactionFlushTrigger;
+
+ /**
+ * @param basedir
+ * @param log
+ * @param fs
+ * @param conf
+ * @param regionInfo
+ * @param flushListener
+ */
+ public TransactionalRegion(final Path basedir, final HLog log,
+ final FileSystem fs, final HBaseConfiguration conf,
+ final HRegionInfo regionInfo, final FlushRequester flushListener) {
+ super(basedir, log, fs, conf, regionInfo, flushListener);
+ transactionLeases = new Leases(conf.getInt(LEASE_TIME, DEFAULT_LEASE_TIME),
+ LEASE_CHECK_FREQUENCY);
+ logManager = new TransactionalHLogManager(this);
+ oldTransactionFlushTrigger = conf.getInt(OLD_TRANSACTION_FLUSH, DEFAULT_OLD_TRANSACTION_FLUSH);
+ }
+
+ @Override
+ protected void doReconstructionLog(final Path oldLogFile,
+ final long minSeqId, final long maxSeqId, final Progressable reporter)
+ throws UnsupportedEncodingException, IOException {
+ super.doReconstructionLog(oldLogFile, minSeqId, maxSeqId, reporter);
+
+ Map<Long, List<BatchUpdate>> commitedTransactionsById = logManager
+ .getCommitsFromLog(oldLogFile, minSeqId, reporter);
+
+ if (commitedTransactionsById != null && commitedTransactionsById.size() > 0) {
+ LOG.debug("found " + commitedTransactionsById.size()
+ + " COMMITED transactions");
+
+ for (Entry<Long, List<BatchUpdate>> entry : commitedTransactionsById
+ .entrySet()) {
+ LOG.debug("Writing " + entry.getValue().size()
+ + " updates for transaction " + entry.getKey());
+ for (BatchUpdate b : entry.getValue()) {
+ super.batchUpdate(b, true); // These are written to the WAL so they live forever
+ }
+ }
+
+ // LOG.debug("Flushing cache"); // We must trigger a cache flush here,
+ // otherwise we would ignore the log on a subsequent failure.
+ // if (!super.flushcache()) {
+ //   LOG.warn("Did not flush cache");
+ // }
+ }
+ }
+
+ /**
+ * We need to make sure that we don't complete a cache flush between running
+ * transactions. If we did, then we would not find all log messages needed to
+ * restore the transaction, as some of them would be before the last
+ * "complete" flush id.
+ */
+ @Override
+ protected long getCompleteCacheFlushSequenceId(final long currentSequenceId) {
+ long minPendingStartSequenceId = currentSequenceId;
+ for (TransactionState transactionState : transactionsById.values()) {
+ minPendingStartSequenceId = Math.min(minPendingStartSequenceId,
+ transactionState.getHLogStartSequenceId());
+ }
+ return minPendingStartSequenceId;
+ }
+
+ /**
+ * @param transactionId
+ * @throws IOException
+ */
+ public void beginTransaction(final long transactionId) throws IOException {
+ String key = String.valueOf(transactionId);
+ if (transactionsById.get(key) != null) {
+ TransactionState alias = getTransactionState(transactionId);
+ if (alias != null) {
+ alias.setStatus(Status.ABORTED);
+ retireTransaction(alias);
+ }
+ LOG.error("Existing trasaction with id ["+key+"] in region ["+super.getRegionInfo().getRegionNameAsString()+"]");
+ throw new IOException("Already exiting transaction id: " + key);
+ }
+
+ TransactionState state = new TransactionState(transactionId, super.getLog()
+ .getSequenceNumber(), super.getRegionInfo());
+
+ // Order is important here ...
+ List<TransactionState> commitPendingCopy = new LinkedList<TransactionState>(commitPendingTransactions);
+ for (TransactionState commitPending : commitPendingCopy) {
+ state.addTransactionToCheck(commitPending);
+ }
+ state.setStartSequenceNumber(nextSequenceId.get());
+
+ transactionsById.put(key, state);
+ try {
+ transactionLeases.createLease(key, new TransactionLeaseListener(key));
+ } catch (LeaseStillHeldException e) {
+ LOG.error("Lease still held for ["+key+"] in region ["+super.getRegionInfo().getRegionNameAsString()+"]");
+ throw new RuntimeException(e);
+ }
+ LOG.debug("Begining transaction " + key + " in region "
+ + super.getRegionInfo().getRegionNameAsString());
+ logManager.writeStartToLog(transactionId);
+
+ maybeTriggerOldTransactionFlush();
+ }
+
+ /**
+ * Fetch a single data item.
+ *
+ * @param transactionId
+ * @param row
+ * @param column
+ * @return column value
+ * @throws IOException
+ */
+ public Cell get(final long transactionId, final byte[] row,
+ final byte[] column) throws IOException {
+ Cell[] results = get(transactionId, row, column, 1);
+ return (results == null || results.length == 0) ? null : results[0];
+ }
+
+ /**
+ * Fetch multiple versions of a single data item
+ *
+ * @param transactionId
+ * @param row
+ * @param column
+ * @param numVersions
+ * @return array of values one element per version
+ * @throws IOException
+ */
+ public Cell[] get(final long transactionId, final byte[] row,
+ final byte[] column, final int numVersions) throws IOException {
+ return get(transactionId, row, column, Long.MAX_VALUE, numVersions);
+ }
+
+ /**
+ * Fetch multiple versions of a single data item, with timestamp.
+ *
+ * @param transactionId
+ * @param row
+ * @param column
+ * @param timestamp
+ * @param numVersions
+ * @return array of values one element per version that matches the timestamp
+ * @throws IOException
+ */
+ public Cell[] get(final long transactionId, final byte[] row,
+ final byte[] column, final long timestamp, final int numVersions)
+ throws IOException {
+ TransactionState state = getTransactionState(transactionId);
+
+ state.addRead(row);
+
+ Cell[] localCells = state.localGet(row, column, timestamp);
+
+ if (localCells != null && localCells.length > 0) {
+ LOG.trace("Transactional get of something we've written in the same transaction "
+     + transactionId);
+ LOG.trace("row: " + Bytes.toString(row));
+ LOG.trace("col: " + Bytes.toString(column));
+ LOG.trace("numVersions: " + numVersions);
+ for (Cell cell : localCells) {
+ LOG.trace("cell: " + Bytes.toString(cell.getValue()));
+ }
+
+ if (numVersions > 1) {
+ // FIXME: avoid this wasteful conversion of the result of get().
+ Cell[] globalCells = Cell.createSingleCellArray(get(row, column, timestamp, numVersions - 1));
+ Cell[] result = new Cell[globalCells.length + localCells.length];
+ System.arraycopy(localCells, 0, result, 0, localCells.length);
+ System.arraycopy(globalCells, 0, result, localCells.length,
+ globalCells.length);
+ return result;
+ }
+ return localCells;
+ }
+
+ return Cell.createSingleCellArray(get(row, column, timestamp, numVersions));
+ }
+
+ /**
+ * Fetch all the columns for the indicated row at a specified timestamp.
+ * Returns a TreeMap that maps column names to values.
+ *
+ * @param transactionId
+ * @param row
+ * @param columns Array of columns you'd like to retrieve. When null, get all.
+ * @param ts
+ * @return Map<columnName, Cell> values
+ * @throws IOException
+ */
+ public Map<byte[], Cell> getFull(final long transactionId, final byte[] row,
+ final NavigableSet<byte[]> columns, final long ts) throws IOException {
+ TransactionState state = getTransactionState(transactionId);
+
+ state.addRead(row);
+
+ Map<byte[], Cell> localCells = state.localGetFull(row, columns, ts);
+
+ if (localCells != null && localCells.size() > 0) {
+ LOG.trace("Transactional get of something we've written in the same transaction "
+     + transactionId);
+ LOG.trace("row: " + Bytes.toString(row));
+ for (Entry<byte[], Cell> entry : localCells.entrySet()) {
+ LOG.trace("col: " + Bytes.toString(entry.getKey()));
+ LOG.trace("cell: " + Bytes.toString(entry.getValue().getValue()));
+ }
+
+ Map<byte[], Cell> internalResults = getFull(row, columns, ts, 1, null);
+ internalResults.putAll(localCells);
+ return internalResults;
+ }
+
+ return getFull(row, columns, ts, 1, null);
+ }
+
+ /**
+ * Return an iterator that scans over the HRegion, returning the indicated
+ * columns for only the rows that match the data filter. This Iterator must be
+ * closed by the caller.
+ *
+ * @param transactionId
+ * @param cols columns to scan. If column name is a column family, all columns
+ * of the specified column family are returned. It's also possible to pass a
+ * regex in the column qualifier. A column qualifier is judged to be a regex
+ * if it contains at least one of the following characters:
+ * <code>\+|^&*$[]]}{)(</code>.
+ * @param firstRow row which is the starting point of the scan
+ * @param timestamp only return rows whose timestamp is <= this value
+ * @param filter row filter
+ * @return InternalScanner
+ * @throws IOException
+ */
+ public InternalScanner getScanner(final long transactionId,
+ final byte[][] cols, final byte[] firstRow, final long timestamp,
+ final RowFilterInterface filter) throws IOException {
+ TransactionState state = getTransactionState(transactionId);
+ state.addScan(firstRow, filter);
+ return new ScannerWrapper(transactionId, super.getScanner(cols, firstRow,
+ timestamp, filter));
+ }
+
+ /**
+ * Add a write to the transaction. Does not get applied until commit process.
+ *
+ * @param transactionId
+ * @param b
+ * @throws IOException
+ */
+ public void batchUpdate(final long transactionId, final BatchUpdate b)
+ throws IOException {
+ TransactionState state = getTransactionState(transactionId);
+ state.addWrite(b);
+ logManager.writeUpdateToLog(transactionId, b);
+ }
+
+ /**
+ * Add a delete to the transaction. Does not get applied until commit process.
+ * FIXME, not sure about this approach
+ *
+ * @param transactionId
+ * @param row
+ * @param timestamp
+ * @throws IOException
+ */
+ public void deleteAll(final long transactionId, final byte[] row,
+ final long timestamp) throws IOException {
+ TransactionState state = getTransactionState(transactionId);
+ long now = System.currentTimeMillis();
+
+ for (Store store : super.stores.values()) {
+ List<KeyValue> keyvalues = new ArrayList<KeyValue>();
+ store.getFull(new KeyValue(row, timestamp),
+ null, null, ALL_VERSIONS, null, keyvalues, now);
+ BatchUpdate deleteUpdate = new BatchUpdate(row, timestamp);
+
+ for (KeyValue key : keyvalues) {
+ deleteUpdate.delete(key.getColumn());
+ }
+
+ state.addWrite(deleteUpdate);
+ logManager.writeUpdateToLog(transactionId, deleteUpdate);
+
+ }
+
+ }
+
+ /**
+ * @param transactionId
+ * @return true if commit is successful
+ * @throws IOException
+ */
+ public boolean commitRequest(final long transactionId) throws IOException {
+ synchronized (commitCheckLock) {
+ TransactionState state = getTransactionState(transactionId);
+ if (state == null) {
+ return false;
+ }
+
+ if (hasConflict(state)) {
+ state.setStatus(Status.ABORTED);
+ retireTransaction(state);
+ return false;
+ }
+
+ // No conflicts, we can commit.
+ LOG.trace("No conflicts for transaction " + transactionId
+ + " found in region " + super.getRegionInfo().getRegionNameAsString()
+ + ". Voting for commit");
+ state.setStatus(Status.COMMIT_PENDING);
+
+ // If there are writes we must keep a record of the transaction
+ if (state.getWriteSet().size() > 0) {
+ // Order is important
+ commitPendingTransactions.add(state);
+ state.setSequenceNumber(nextSequenceId.getAndIncrement());
+ commitedTransactionsBySequenceNumber.put(state.getSequenceNumber(),
+ state);
+ }
+
+ return true;
+ }
+ }
+
+ private boolean hasConflict(final TransactionState state) {
+ // Check transactions that were committed while we were running
+ for (int i = state.getStartSequenceNumber(); i < nextSequenceId.get(); i++) {
+ TransactionState other = commitedTransactionsBySequenceNumber.get(i);
+ if (other == null) {
+ continue;
+ }
+ state.addTransactionToCheck(other);
+ }
+
+ return state.hasConflict();
+ }
+
+ /**
+ * Commit the transaction.
+ *
+ * @param transactionId
+ * @throws IOException
+ */
+ public void commit(final long transactionId) throws IOException {
+ TransactionState state;
+ try {
+ state = getTransactionState(transactionId);
+ } catch (UnknownTransactionException e) {
+ LOG.fatal("Asked to commit unknown transaction: " + transactionId
+ + " in region " + super.getRegionInfo().getRegionNameAsString());
+ // FIXME Write to the transaction log that this transaction was corrupted
+ throw e;
+ }
+
+ if (!state.getStatus().equals(Status.COMMIT_PENDING)) {
+ LOG.fatal("Asked to commit a non pending transaction");
+ // FIXME Write to the transaction log that this transaction was corrupted
+ throw new IOException("commit failure");
+ }
+
+ commit(state);
+ }
+
+ /**
+ * Abort the transaction.
+ *
+ * @param transactionId
+ * @throws IOException
+ */
+ public void abort(final long transactionId) throws IOException {
+ TransactionState state;
+ try {
+ state = getTransactionState(transactionId);
+ } catch (UnknownTransactionException e) {
+ LOG.error("Asked to abort unknown transaction: " + transactionId);
+ return;
+ }
+
+ state.setStatus(Status.ABORTED);
+
+ if (state.getWriteSet().size() > 0) {
+ logManager.writeAbortToLog(state.getTransactionId());
+ }
+
+ // The following removes are only needed if we have already voted
+ if (state.getSequenceNumber() != null) {
+ commitedTransactionsBySequenceNumber.remove(state.getSequenceNumber());
+ }
+ commitPendingTransactions.remove(state);
+
+ retireTransaction(state);
+ }
+
+ private void commit(final TransactionState state) throws IOException {
+
+ LOG.debug("Commiting transaction: " + state.toString() + " to "
+ + super.getRegionInfo().getRegionNameAsString());
+
+ if (state.getWriteSet().size() > 0) {
+ logManager.writeCommitToLog(state.getTransactionId());
+ }
+
+ for (BatchUpdate update : state.getWriteSet()) {
+ this.batchUpdate(update, false); // Don't need to WAL these
+ // FIXME: maybe these should be written to the WAL so we don't need to look so far back.
+ }
+
+ state.setStatus(Status.COMMITED);
+ if (state.getWriteSet().size() > 0
+ && !commitPendingTransactions.remove(state)) {
+ LOG.fatal("Committing a non-query transaction that is not in commitPendingTransactions");
+ throw new IOException("commit failure"); // FIXME, how to handle?
+ }
+ retireTransaction(state);
+ }
+
+ // Cancel the lease and remove the transaction from the by-id lookup. The
+ // transaction may still live in commitedTransactionsBySequenceNumber and
+ // commitPendingTransactions.
+ private void retireTransaction(final TransactionState state) {
+ String key = String.valueOf(state.getTransactionId());
+ try {
+ transactionLeases.cancelLease(key);
+ } catch (LeaseException e) {
+ // Ignore
+ }
+
+ transactionsById.remove(key);
+ }
+
+ protected TransactionState getTransactionState(final long transactionId)
+ throws UnknownTransactionException {
+ String key = String.valueOf(transactionId);
+ TransactionState state = transactionsById.get(key);
+
+ if (state == null) {
+ LOG.trace("Unknown transaction: " + key);
+ throw new UnknownTransactionException(key);
+ }
+
+ try {
+ transactionLeases.renewLease(key);
+ } catch (LeaseException e) {
+ throw new RuntimeException(e);
+ }
+
+ return state;
+ }
+
+ private void maybeTriggerOldTransactionFlush() {
+ if (commitedTransactionsBySequenceNumber.size() > oldTransactionFlushTrigger) {
+ removeUnNeededCommitedTransactions();
+ }
+ }
+
+ /**
+ * Cleanup references to committed transactions that are no longer needed.
+ *
+ */
+ synchronized void removeUnNeededCommitedTransactions() {
+ Integer minStartSeqNumber = getMinStartSequenceNumber();
+ if (minStartSeqNumber == null) {
+ minStartSeqNumber = Integer.MAX_VALUE; // Remove all
+ }
+
+ int numRemoved = 0;
+ // Copy the entries to avoid a ConcurrentModificationException
+ for (Entry<Integer, TransactionState> entry : new LinkedList<Entry<Integer, TransactionState>>(
+ commitedTransactionsBySequenceNumber.entrySet())) {
+ if (entry.getKey() >= minStartSeqNumber) {
+ break;
+ }
+ numRemoved = numRemoved
+     + (commitedTransactionsBySequenceNumber.remove(entry.getKey()) == null ? 0
+         : 1);
+ }
+
+ if (LOG.isDebugEnabled()) {
+ StringBuilder debugMessage = new StringBuilder();
+ if (numRemoved > 0) {
+ debugMessage.append("Removed ").append(numRemoved).append(
+ " commited transactions");
+
+ if (minStartSeqNumber == Integer.MAX_VALUE) {
+ debugMessage.append("with any sequence number");
+ } else {
+ debugMessage.append("with sequence lower than ").append(
+ minStartSeqNumber).append(".");
+ }
+ if (!commitedTransactionsBySequenceNumber.isEmpty()) {
+ debugMessage.append(" Still have ").append(
+ commitedTransactionsBySequenceNumber.size()).append(" left.");
+ } else {
+ debugMessage.append("None left.");
+ }
+ LOG.debug(debugMessage.toString());
+ } else if (commitedTransactionsBySequenceNumber.size() > 0) {
+ debugMessage.append(
+ "Could not remove any transactions, and still have ").append(
+ commitedTransactionsBySequenceNumber.size()).append(" left");
+ LOG.debug(debugMessage.toString());
+ }
+ }
+ }
+
+ private Integer getMinStartSequenceNumber() {
+ Integer min = null;
+ for (TransactionState transactionState : transactionsById.values()) {
+ if (min == null || transactionState.getStartSequenceNumber() < min) {
+ min = transactionState.getStartSequenceNumber();
+ }
+ }
+ return min;
+ }
+
+ // TODO, resolve from the global transaction log
+ protected void resolveTransactionFromLog() {
+ throw new RuntimeException("Globaql transaction log is not Implemented");
+ }
+
+ private class TransactionLeaseListener implements LeaseListener {
+ private final String transactionName;
+
+ TransactionLeaseListener(final String n) {
+ this.transactionName = n;
+ }
+
+ public void leaseExpired() {
+ LOG.info("Transaction " + this.transactionName + " lease expired");
+ TransactionState s = null;
+ synchronized (transactionsById) {
+ s = transactionsById.remove(transactionName);
+ }
+ if (s == null) {
+ LOG.warn("Unknown transaction expired " + this.transactionName);
+ return;
+ }
+
+ switch (s.getStatus()) {
+ case PENDING:
+ s.setStatus(Status.ABORTED); // Other transactions may have a ref
+ break;
+ case COMMIT_PENDING:
+ LOG.info("Transaction " + s.getTransactionId()
+ + " expired in COMMIT_PENDING state");
+ LOG.info("Checking transaction status in transaction log");
+ resolveTransactionFromLog();
+ break;
+ default:
+ LOG.warn("Unexpected status on expired lease");
+ }
+ }
+ }
+
+ /** Wrapper which keeps track of rows returned by scanner. */
+ private class ScannerWrapper implements InternalScanner {
+ private long transactionId;
+ private InternalScanner scanner;
+
+ /**
+ * @param transactionId
+ * @param scanner
+ * @throws UnknownTransactionException
+ */
+ public ScannerWrapper(final long transactionId,
+ final InternalScanner scanner) throws UnknownTransactionException {
+
+ this.transactionId = transactionId;
+ this.scanner = scanner;
+ }
+
+ public void close() throws IOException {
+ scanner.close();
+ }
+
+ public boolean isMultipleMatchScanner() {
+ return scanner.isMultipleMatchScanner();
+ }
+
+ public boolean isWildcardScanner() {
+ return scanner.isWildcardScanner();
+ }
+
+ public boolean next(List<KeyValue> results) throws IOException {
+ boolean result = scanner.next(results);
+ TransactionState state = getTransactionState(transactionId);
+
+ if (result) {
+ // TODO: Is this right???? St.Ack
+ byte [] row = results.get(0).getRow();
+ Map<byte[], Cell> localWrites = state.localGetFull(row, null,
+ Integer.MAX_VALUE);
+ if (localWrites != null) {
+ LOG.info("Scanning over row that has been writen to " + transactionId);
+ for (Entry<byte[], Cell> entry : localWrites.entrySet()) {
+ // TODO: Is this right???
+ results.add(new KeyValue(row, entry.getKey(),
+ entry.getValue().getTimestamp(), entry.getValue().getValue()));
+ }
+ }
+ }
+
+ return result;
+ }
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/transactional/TransactionalRegionServer.java b/src/java/org/apache/hadoop/hbase/regionserver/transactional/TransactionalRegionServer.java
new file mode 100644
index 0000000..9ac5a3f
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/regionserver/transactional/TransactionalRegionServer.java
@@ -0,0 +1,304 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.transactional;
+
+import java.io.IOException;
+import java.lang.Thread.UncaughtExceptionHandler;
+import java.util.Arrays;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.TreeSet;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.NotServingRegionException;
+import org.apache.hadoop.hbase.RemoteExceptionHandler;
+import org.apache.hadoop.hbase.filter.RowFilterInterface;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.HbaseMapWritable;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.ipc.HBaseRPCProtocolVersion;
+import org.apache.hadoop.hbase.ipc.TransactionalRegionInterface;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.regionserver.InternalScanner;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Threads;
+import org.apache.hadoop.io.MapWritable;
+import org.apache.hadoop.util.Progressable;
+
+/**
+ * RegionServer with support for transactions. Transactional logic is at the
+ * region level, so we mostly just delegate to the appropriate
+ * TransactionalRegion.
+ */
+public class TransactionalRegionServer extends HRegionServer implements
+ TransactionalRegionInterface {
+ static final Log LOG = LogFactory.getLog(TransactionalRegionServer.class);
+
+ private final CleanOldTransactionsChore cleanOldTransactionsThread;
+
+ /**
+ * @param conf
+ * @throws IOException
+ */
+ public TransactionalRegionServer(final HBaseConfiguration conf)
+ throws IOException {
+ this(new HServerAddress(conf.get(REGIONSERVER_ADDRESS,
+ DEFAULT_REGIONSERVER_ADDRESS)), conf);
+ }
+
+ /**
+ * @param address
+ * @param conf
+ * @throws IOException
+ */
+ public TransactionalRegionServer(final HServerAddress address,
+ final HBaseConfiguration conf) throws IOException {
+ super(address, conf);
+ cleanOldTransactionsThread = new CleanOldTransactionsChore(this,
+ super.stopRequested);
+ }
+
+ @Override
+ public long getProtocolVersion(final String protocol, final long clientVersion)
+ throws IOException {
+ if (protocol.equals(TransactionalRegionInterface.class.getName())) {
+ return HBaseRPCProtocolVersion.versionID;
+ }
+ return super.getProtocolVersion(protocol, clientVersion);
+ }
+
+ @Override
+ protected void init(final MapWritable c) throws IOException {
+ super.init(c);
+ String n = Thread.currentThread().getName();
+ UncaughtExceptionHandler handler = new UncaughtExceptionHandler() {
+ public void uncaughtException(final Thread t, final Throwable e) {
+ abort();
+ LOG.fatal("Set stop flag in " + t.getName(), e);
+ }
+ };
+ Threads.setDaemonThreadRunning(this.cleanOldTransactionsThread, n
+ + ".oldTransactionCleaner", handler);
+
+ }
+
+ @Override
+ protected HRegion instantiateRegion(final HRegionInfo regionInfo)
+ throws IOException {
+ HRegion r = new TransactionalRegion(HTableDescriptor.getTableDir(super
+ .getRootDir(), regionInfo.getTableDesc().getName()), super.log, super
+ .getFileSystem(), super.conf, regionInfo, super.getFlushRequester());
+ r.initialize(null, new Progressable() {
+ public void progress() {
+ addProcessingMessage(regionInfo);
+ }
+ });
+ return r;
+ }
+
+ protected TransactionalRegion getTransactionalRegion(final byte[] regionName)
+ throws NotServingRegionException {
+ return (TransactionalRegion) super.getRegion(regionName);
+ }
+
+ public void abort(final byte[] regionName, final long transactionId)
+ throws IOException {
+ checkOpen();
+ super.getRequestCount().incrementAndGet();
+ try {
+ getTransactionalRegion(regionName).abort(transactionId);
+ } catch (IOException e) {
+ checkFileSystem();
+ throw e;
+ }
+ }
+
+ public void batchUpdate(final long transactionId, final byte[] regionName,
+ final BatchUpdate b) throws IOException {
+ checkOpen();
+ super.getRequestCount().incrementAndGet();
+ try {
+ getTransactionalRegion(regionName).batchUpdate(transactionId, b);
+ } catch (IOException e) {
+ checkFileSystem();
+ throw e;
+ }
+ }
+
+ public void commit(final byte[] regionName, final long transactionId)
+ throws IOException {
+ checkOpen();
+ super.getRequestCount().incrementAndGet();
+ try {
+ getTransactionalRegion(regionName).commit(transactionId);
+ } catch (IOException e) {
+ checkFileSystem();
+ throw e;
+ }
+ }
+
+ public boolean commitRequest(final byte[] regionName, final long transactionId)
+ throws IOException {
+ checkOpen();
+ super.getRequestCount().incrementAndGet();
+ try {
+ return getTransactionalRegion(regionName).commitRequest(transactionId);
+ } catch (IOException e) {
+ checkFileSystem();
+ throw e;
+ }
+ }
+
+ public Cell get(final long transactionId, final byte[] regionName,
+ final byte[] row, final byte[] column) throws IOException {
+ checkOpen();
+ super.getRequestCount().incrementAndGet();
+ try {
+ return getTransactionalRegion(regionName).get(transactionId, row, column);
+ } catch (IOException e) {
+ checkFileSystem();
+ throw e;
+ }
+ }
+
+ public Cell[] get(final long transactionId, final byte[] regionName,
+ final byte[] row, final byte[] column, final int numVersions)
+ throws IOException {
+ checkOpen();
+ super.getRequestCount().incrementAndGet();
+ try {
+ return getTransactionalRegion(regionName).get(transactionId, row, column,
+ numVersions);
+ } catch (IOException e) {
+ checkFileSystem();
+ throw e;
+ }
+ }
+
+ public Cell[] get(final long transactionId, final byte[] regionName,
+ final byte[] row, final byte[] column, final long timestamp,
+ final int numVersions) throws IOException {
+ checkOpen();
+ super.getRequestCount().incrementAndGet();
+ try {
+ return getTransactionalRegion(regionName).get(transactionId, row, column,
+ timestamp, numVersions);
+ } catch (IOException e) {
+ checkFileSystem();
+ throw e;
+ }
+ }
+
+ public RowResult getRow(final long transactionId, final byte[] regionName,
+ final byte[] row, final long ts) throws IOException {
+ return getRow(transactionId, regionName, row, null, ts);
+ }
+
+ public RowResult getRow(final long transactionId, final byte[] regionName,
+ final byte[] row, final byte[][] columns) throws IOException {
+ return getRow(transactionId, regionName, row, columns,
+ HConstants.LATEST_TIMESTAMP);
+ }
+
+ public RowResult getRow(final long transactionId, final byte[] regionName,
+ final byte[] row, final byte[][] columns, final long ts)
+ throws IOException {
+ checkOpen();
+ super.getRequestCount().incrementAndGet();
+ try {
+ // convert the columns array into a set so it's easy to check later.
+ NavigableSet<byte[]> columnSet = null;
+ if (columns != null) {
+ columnSet = new TreeSet<byte[]>(Bytes.BYTES_COMPARATOR);
+ columnSet.addAll(Arrays.asList(columns));
+ }
+
+ TransactionalRegion region = getTransactionalRegion(regionName);
+ Map<byte[], Cell> map = region.getFull(transactionId, row, columnSet, ts);
+ HbaseMapWritable<byte[], Cell> result = new HbaseMapWritable<byte[], Cell>();
+ result.putAll(map);
+ return new RowResult(row, result);
+ } catch (IOException e) {
+ checkFileSystem();
+ throw e;
+ }
+
+ }
+
+ public void deleteAll(final long transactionId, final byte[] regionName,
+ final byte[] row, final long timestamp) throws IOException {
+ checkOpen();
+ super.getRequestCount().incrementAndGet();
+ try {
+ TransactionalRegion region = getTransactionalRegion(regionName);
+ region.deleteAll(transactionId, row, timestamp);
+ } catch (IOException e) {
+ checkFileSystem();
+ throw e;
+ }
+ }
+
+ public long openScanner(final long transactionId, final byte[] regionName,
+ final byte[][] cols, final byte[] firstRow, final long timestamp,
+ final RowFilterInterface filter) throws IOException {
+ checkOpen();
+ NullPointerException npe = null;
+ if (regionName == null) {
+ npe = new NullPointerException("regionName is null");
+ } else if (cols == null) {
+ npe = new NullPointerException("columns to scan is null");
+ } else if (firstRow == null) {
+ npe = new NullPointerException("firstRow for scanner is null");
+ }
+ if (npe != null) {
+ IOException io = new IOException("Invalid arguments to openScanner");
+ io.initCause(npe);
+ throw io;
+ }
+ super.getRequestCount().incrementAndGet();
+ try {
+ TransactionalRegion r = getTransactionalRegion(regionName);
+ long scannerId = -1L;
+ InternalScanner s = r.getScanner(transactionId, cols, firstRow,
+ timestamp, filter);
+ scannerId = super.addScanner(s);
+ return scannerId;
+ } catch (IOException e) {
+ LOG.error("Error opening scanner (fsOk: " + this.fsOk + ")",
+ RemoteExceptionHandler.checkIOException(e));
+ checkFileSystem();
+ throw e;
+ }
+ }
+
+ public void beginTransaction(final long transactionId, final byte[] regionName)
+ throws IOException {
+ getTransactionalRegion(regionName).beginTransaction(transactionId);
+ }
+
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/AbstractController.java b/src/java/org/apache/hadoop/hbase/rest/AbstractController.java
new file mode 100644
index 0000000..689f284
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/AbstractController.java
@@ -0,0 +1,68 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest;
+
+import java.util.Map;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+import org.apache.hadoop.hbase.rest.parser.IHBaseRestParser;
+import org.apache.hadoop.hbase.util.Bytes;
+
+public abstract class AbstractController implements RESTConstants {
+ protected Configuration conf;
+ protected AbstractModel model;
+
+ public void initialize(HBaseConfiguration conf, HBaseAdmin admin) {
+ this.conf = conf;
+ this.model = generateModel(conf, admin);
+ }
+
+ public abstract void get(Status s, byte[][] pathSegments,
+ Map<String, String[]> queryMap) throws HBaseRestException;
+
+ public abstract void post(Status s, byte[][] pathSegments,
+ Map<String, String[]> queryMap, byte[] input, IHBaseRestParser parser)
+ throws HBaseRestException;
+
+ public abstract void put(Status s, byte[][] pathSegments,
+ Map<String, String[]> queryMap, byte[] input, IHBaseRestParser parser)
+ throws HBaseRestException;
+
+ public abstract void delete(Status s, byte[][] pathSegments,
+ Map<String, String[]> queryMap) throws HBaseRestException;
+
+ protected abstract AbstractModel generateModel(HBaseConfiguration conf,
+ HBaseAdmin a);
+
+ protected byte[][] getColumnsFromQueryMap(Map<String, String[]> queryMap) {
+ byte[][] columns = null;
+ String[] columnArray = queryMap.get(RESTConstants.COLUMN);
+ if (columnArray != null) {
+ columns = new byte[columnArray.length][];
+ for (int i = 0; i < columnArray.length; i++) {
+ columns[i] = Bytes.toBytes(columnArray[i]);
+ }
+ }
+ return columns;
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/AbstractModel.java b/src/java/org/apache/hadoop/hbase/rest/AbstractModel.java
new file mode 100644
index 0000000..33ac736
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/AbstractModel.java
@@ -0,0 +1,99 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest;
+
+import java.io.IOException;
+import java.util.Collection;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+import org.apache.hadoop.hbase.util.Base64;
+import org.apache.hadoop.hbase.util.Bytes;
+
+public abstract class AbstractModel {
+
+ @SuppressWarnings("unused")
+ private Log LOG = LogFactory.getLog(AbstractModel.class);
+ protected HBaseConfiguration conf;
+ protected HBaseAdmin admin;
+
+ protected static class Encodings {
+
+ protected interface Encoding {
+
+ String encode(byte[] b) throws HBaseRestException;
+ }
+
+ public static Encoding EBase64 = new Encoding() {
+
+ public String encode(byte[] b) throws HBaseRestException {
+ return Base64.encodeBytes(b);
+ }
+ };
+ public static Encoding EUTF8 = new Encoding() {
+
+ public String encode(byte[] b) throws HBaseRestException {
+ return Bytes.toString(b); // decode as UTF-8, matching this encoding's name
+ }
+ };
+ }
+
+ protected static final Encodings.Encoding encoding = Encodings.EUTF8;
+
+ public void initialize(HBaseConfiguration conf, HBaseAdmin admin) {
+ this.conf = conf;
+ this.admin = admin;
+ }
+
+ protected byte[][] getColumns(byte[] tableName) throws HBaseRestException {
+ try {
+ HTable h = new HTable(tableName);
+ Collection<HColumnDescriptor> columns = h.getTableDescriptor()
+ .getFamilies();
+ byte[][] resultant = new byte[columns.size()][];
+ int count = 0;
+
+ for (HColumnDescriptor c : columns) {
+ resultant[count++] = c.getNameWithColon();
+ }
+
+ return resultant;
+ } catch (IOException e) {
+ throw new HBaseRestException(e);
+ }
+ }
+
+ protected static final byte COLON = Bytes.toBytes(":")[0];
+
+ protected boolean isColumnFamily(byte[] columnName) {
+ for (int i = 0; i < columnName.length; i++) {
+ if (columnName[i] == COLON) {
+ return true;
+ }
+ }
+
+ return false;
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/DatabaseController.java b/src/java/org/apache/hadoop/hbase/rest/DatabaseController.java
new file mode 100644
index 0000000..c9ea02f
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/DatabaseController.java
@@ -0,0 +1,83 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest;
+
+import java.util.Map;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+import org.apache.hadoop.hbase.rest.parser.IHBaseRestParser;
+
+public class DatabaseController extends AbstractController {
+
+ @SuppressWarnings("unused")
+ private Log LOG = LogFactory.getLog(DatabaseController.class);
+
+ protected DatabaseModel getModel() {
+ return (DatabaseModel) model;
+ }
+
+ @Override
+ protected AbstractModel generateModel(HBaseConfiguration conf,
+ HBaseAdmin admin) {
+ return new DatabaseModel(conf, admin);
+ }
+
+ @Override
+ public void get(Status s, byte[][] pathSegments,
+ Map<String, String[]> queryMap) throws HBaseRestException {
+ s.setNoQueryResults();
+ DatabaseModel innerModel = getModel();
+
+ if (queryMap.size() == 0) {
+ s.setOK(innerModel.getDatabaseMetadata());
+ } else {
+ s.setBadRequest("Unknown query.");
+ }
+ s.respond();
+ }
+
+ @Override
+ public void post(Status s, byte[][] pathSegments,
+ Map<String, String[]> queryMap, byte[] input, IHBaseRestParser parser)
+ throws HBaseRestException {
+ s.setMethodNotImplemented();
+ s.respond();
+
+ }
+
+ @Override
+ public void put(Status s, byte[][] pathSegments,
+ Map<String, String[]> queryMap, byte[] input, IHBaseRestParser parser)
+ throws HBaseRestException {
+ s.setMethodNotImplemented();
+ s.respond();
+ }
+
+ @Override
+ public void delete(Status s, byte[][] pathSegments,
+ Map<String, String[]> queryMap) throws HBaseRestException {
+ s.setMethodNotImplemented();
+ s.respond();
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/DatabaseModel.java b/src/java/org/apache/hadoop/hbase/rest/DatabaseModel.java
new file mode 100644
index 0000000..1c7a4e8
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/DatabaseModel.java
@@ -0,0 +1,85 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+import org.apache.hadoop.hbase.rest.serializer.IRestSerializer;
+import org.apache.hadoop.hbase.rest.serializer.ISerializable;
+
+import agilejson.TOJSON;
+
+public class DatabaseModel extends AbstractModel {
+
+ @SuppressWarnings("unused")
+ private Log LOG = LogFactory.getLog(DatabaseModel.class);
+
+ public DatabaseModel(HBaseConfiguration conf, HBaseAdmin admin) {
+ super.initialize(conf, admin);
+ }
+
+ public static class DatabaseMetadata implements ISerializable {
+ protected boolean master_running;
+ protected HTableDescriptor[] tables;
+
+ public DatabaseMetadata(HBaseAdmin a) throws IOException {
+ master_running = a.isMasterRunning();
+ tables = a.listTables();
+ }
+
+ @TOJSON(prefixLength = 2)
+ public boolean isMasterRunning() {
+ return master_running;
+ }
+
+ @TOJSON
+ public HTableDescriptor[] getTables() {
+ return tables;
+ }
+
+ public void restSerialize(IRestSerializer serializer)
+ throws HBaseRestException {
+ serializer.serializeDatabaseMetadata(this);
+ }
+ }
+
+ // Serialize admin ourselves to json object
+ // rather than returning the admin object for obvious reasons
+ public DatabaseMetadata getMetadata() throws HBaseRestException {
+ return getDatabaseMetadata();
+ }
+
+ protected DatabaseMetadata getDatabaseMetadata() throws HBaseRestException {
+ DatabaseMetadata databaseMetadata = null;
+ try {
+ databaseMetadata = new DatabaseMetadata(this.admin);
+ } catch (IOException e) {
+ throw new HBaseRestException(e);
+ }
+
+ return databaseMetadata;
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/Dispatcher.java b/src/java/org/apache/hadoop/hbase/rest/Dispatcher.java
new file mode 100644
index 0000000..649607a
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/Dispatcher.java
@@ -0,0 +1,495 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+import org.apache.hadoop.hbase.rest.parser.HBaseRestParserFactory;
+import org.apache.hadoop.hbase.rest.parser.IHBaseRestParser;
+import org.apache.hadoop.hbase.rest.serializer.RestSerializerFactory;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.InfoServer;
+import org.apache.hadoop.mapred.StatusHttpServer;
+import org.mortbay.http.NCSARequestLog;
+import org.mortbay.http.SocketListener;
+import org.mortbay.jetty.servlet.WebApplicationContext;
+
+import javax.servlet.ServletException;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.Map;
+
+/**
+ * Servlet implementation class for the hbase REST interface. Presumes the
+ * container ensures a single thread through here at any one time (usually the
+ * default configuration). In other words, the code is not written to be
+ * thread-safe.
+ * <p>
+ * This servlet has explicit dependency on Jetty server; it uses the jetty
+ * implementation of MultipartResponse.
+ *
+ * <p>
+ * TODO:
+ * <ul>
+ * <li>multipart/related response is not correct; the servlet setContentType is
+ * broken. I am unable to add parameters such as boundary or start to
+ * multipart/related. They get stripped.</li>
+ * <li>Currently, creating a scanner requires specifying a column. Make the
+ * HTable instance keep the current table's metadata to hand so it is easy to
+ * find the list of all column families and build the list of columns when
+ * none are specified.</li>
+ * <li>Minor items: we decode URLs in places where that has probably already
+ * been done, and we need a way to time out scanners that sit in the scanner
+ * list.</li>
+ * </ul>
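+ * <p>
+ * A rough sketch of how request paths map to controllers (the paths are
+ * hypothetical examples; the actual routing is in doGet/doPost/doDelete):
+ * <pre>
+ *   GET  /api                               -> DatabaseController (instance metadata)
+ *   GET  /api/mytable                       -> TableController
+ *   GET  /api/mytable/row/myrow             -> RowController
+ *   GET  /api/mytable/row/myrow/1234567890  -> TimestampController
+ *   POST /api/mytable/scanner               -> ScannerController
+ * </pre>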
+ *
+ * @see <a href="http://wiki.apache.org/lucene-hadoop/Hbase/HbaseRest">Hbase
+ * REST Specification</a>
+ */
+public class Dispatcher extends javax.servlet.http.HttpServlet {
+
+ /**
+ *
+ */
+ private static final long serialVersionUID = -8075335435797071569L;
+ private static final Log LOG = LogFactory.getLog(Dispatcher.class);
+ protected DatabaseController dbController;
+ protected TableController tableController;
+ protected RowController rowController;
+ protected ScannerController scannercontroller;
+ protected TimestampController tsController;
+
+ public enum ContentType {
+ XML("text/xml"), JSON("application/json"), PLAIN("text/plain"), MIME(
+ "multipart/related"), NOT_ACCEPTABLE("");
+
+ private final String type;
+
+ private ContentType(final String t) {
+ this.type = t;
+ }
+
+ @Override
+ public String toString() {
+ return this.type;
+ }
+
+ /**
+ * Utility method used when examining Accept header content.
+ *
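+ * <p>
+ * A few hypothetical examples of the matching behaviour:
+ * <pre>
+ *   getContentType("application/json")  // JSON
+ *   getContentType(null)                // XML (the default)
+ *   getContentType("image/png")         // NOT_ACCEPTABLE
+ * </pre>
+ *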
+ * @param t
+ * The content type to examine.
+ * @return The enum that matches the prefix of <code>t</code>, or the default
+ * enum if <code>t</code> is null or a wildcard. For an unsupported type we
+ * return NOT_ACCEPTABLE.
+ */
+ public static ContentType getContentType(final String t) {
+ // Default to XML. Curl sends */*.
+ if (t == null || t.equals("*/*")) {
+ return ContentType.XML;
+ }
+ String lowerCased = t.toLowerCase();
+ ContentType[] values = ContentType.values();
+ ContentType result = null;
+ for (int i = 0; i < values.length; i++) {
+ if (lowerCased.startsWith(values[i].type)) {
+ result = values[i];
+ break;
+ }
+ }
+ return result == null ? NOT_ACCEPTABLE : result;
+ }
+ }
+
+ /**
+ * Default constructor
+ */
+ public Dispatcher() {
+ super();
+ }
+
+ @Override
+ public void init() throws ServletException {
+ super.init();
+
+ HBaseConfiguration conf = new HBaseConfiguration();
+ HBaseAdmin admin = null;
+
+ try {
+ admin = new HBaseAdmin(conf);
+ createControllers();
+
+ dbController.initialize(conf, admin);
+ tableController.initialize(conf, admin);
+ rowController.initialize(conf, admin);
+ tsController.initialize(conf, admin);
+ scannercontroller.initialize(conf, admin);
+
+ LOG.debug("no errors in init.");
+ } catch (Exception e) {
+ LOG.error("Failed to initialize REST servlet", e);
+ throw new ServletException(e);
+ }
+ }
+
+ protected void createControllers() {
+ dbController = new DatabaseController();
+ tableController = new TableController();
+ rowController = new RowController();
+ tsController = new TimestampController();
+ scannercontroller = new ScannerController();
+ }
+
+ @SuppressWarnings("unchecked")
+ @Override
+ protected void doGet(HttpServletRequest request, HttpServletResponse response)
+ throws IOException, ServletException {
+ try {
+ Status s = this.createStatus(request, response);
+ byte[][] pathSegments = getPathSegments(request);
+ Map<String, String[]> queryMap = request.getParameterMap();
+
+ if (pathSegments.length == 0 || pathSegments[0].length <= 0) {
+ // if it was a root request, then get some metadata about
+ // the entire instance.
+ dbController.get(s, pathSegments, queryMap);
+ } else {
+ if (pathSegments.length >= 2
+ && pathSegments.length <= 3
+ && pathSegments[0].length > 0
+ && Bytes.toString(pathSegments[1]).toLowerCase().equals(
+ RESTConstants.ROW)) {
+ // if it has table name and row path segments
+ rowController.get(s, pathSegments, queryMap);
+ } else if (pathSegments.length == 4
+ && Bytes.toString(pathSegments[1]).toLowerCase().equals(
+ RESTConstants.ROW)) {
+ tsController.get(s, pathSegments, queryMap);
+ } else {
+ // otherwise, it must be a GET request suitable for the
+ // table handler.
+ tableController.get(s, pathSegments, queryMap);
+ }
+ }
+ LOG.debug("GET - No Error");
+ } catch (HBaseRestException e) {
+ LOG.debug("GET - Error: " + e.toString());
+ try {
+ Status sError = createStatus(request, response);
+ sError.setInternalError(e);
+ sError.respond();
+ } catch (HBaseRestException f) {
+ response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
+ }
+ }
+ }
+
+ @SuppressWarnings("unchecked")
+ @Override
+ protected void doPost(HttpServletRequest request, HttpServletResponse response)
+ throws IOException, ServletException {
+ try {
+
+ Status s = createStatus(request, response);
+ byte[][] pathSegments = getPathSegments(request);
+ Map<String, String[]> queryMap = request.getParameterMap();
+ byte[] input = readInputBuffer(request);
+ IHBaseRestParser parser = this.getParser(request);
+
+ if (pathSegments.length <= 1
+ || Bytes.toString(pathSegments[1]).toLowerCase().equals(
+ RESTConstants.ENABLE)
+ || Bytes.toString(pathSegments[1]).toLowerCase().equals(
+ RESTConstants.DISABLE)) {
+ // this is a table request
+ tableController.post(s, pathSegments, queryMap, input, parser);
+ } else {
+ // there should be at least two path segments (table name and row or
+ // scanner)
+ if (pathSegments.length >= 2 && pathSegments[0].length > 0) {
+ if (Bytes.toString(pathSegments[1]).toLowerCase().equals(
+ RESTConstants.SCANNER)) {
+ scannercontroller.post(s, pathSegments, queryMap, input, parser);
+ return;
+ } else if (Bytes.toString(pathSegments[1]).toLowerCase().equals(
+ RESTConstants.ROW)
+ && pathSegments.length >= 3) {
+ rowController.post(s, pathSegments, queryMap, input, parser);
+ return;
+ }
+ }
+ }
+ } catch (HBaseRestException e) {
+ LOG.debug("POST - Error: " + e.toString());
+ try {
+ Status s_error = createStatus(request, response);
+ s_error.setInternalError(e);
+ s_error.respond();
+ } catch (HBaseRestException f) {
+ response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
+ }
+ }
+ }
+
+ @SuppressWarnings("unchecked")
+ @Override
+ protected void doPut(HttpServletRequest request, HttpServletResponse response)
+ throws ServletException, IOException {
+ try {
+ byte[][] pathSegments = getPathSegments(request);
+ if(pathSegments.length == 0) {
+ throw new HBaseRestException("method not supported");
+ } else if (pathSegments.length == 1 && pathSegments[0].length > 0) {
+ // if it has only table name
+ Status s = createStatus(request, response);
+ Map<String, String[]> queryMap = request.getParameterMap();
+ IHBaseRestParser parser = this.getParser(request);
+ byte[] input = readInputBuffer(request);
+ tableController.put(s, pathSegments, queryMap, input, parser);
+ } else {
+ // Equate PUT with a POST.
+ doPost(request, response);
+ }
+ } catch (HBaseRestException e) {
+ try {
+ Status s_error = createStatus(request, response);
+ s_error.setInternalError(e);
+ s_error.respond();
+ } catch (HBaseRestException f) {
+ response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
+ }
+ }
+ }
+
+ @SuppressWarnings("unchecked")
+ @Override
+ protected void doDelete(HttpServletRequest request,
+ HttpServletResponse response) throws IOException, ServletException {
+ try {
+ Status s = createStatus(request, response);
+ byte[][] pathSegments = getPathSegments(request);
+ Map<String, String[]> queryMap = request.getParameterMap();
+
+ if(pathSegments.length == 0) {
+ throw new HBaseRestException("method not supported");
+ } else if (pathSegments.length == 1 && pathSegments[0].length > 0) {
+ // if it has only the table name
+ tableController.delete(s, pathSegments, queryMap);
+ return;
+ } else if (pathSegments.length >= 3 && pathSegments[0].length > 0) {
+ // there must be at least three path segments (table name, then row or scanner)
+ if (Bytes.toString(pathSegments[1]).toLowerCase().equals(
+ RESTConstants.SCANNER)
+ && pathSegments.length == 3 && pathSegments[2].length > 0) {
+ // DELETE to a scanner requires at least three path segments
+ scannercontroller.delete(s, pathSegments, queryMap);
+ return;
+ } else if (Bytes.toString(pathSegments[1]).toLowerCase().equals(
+ RESTConstants.ROW)
+ && pathSegments.length >= 3) {
+ rowController.delete(s, pathSegments, queryMap);
+ return;
+ } else if (pathSegments.length == 4) {
+ tsController.delete(s, pathSegments, queryMap);
+ }
+ }
+ } catch (HBaseRestException e) {
+ LOG.debug("POST - Error: " + e.toString());
+ try {
+ Status s_error = createStatus(request, response);
+ s_error.setInternalError(e);
+ s_error.respond();
+ } catch (HBaseRestException f) {
+ response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
+ }
+ }
+ }
+
+ /**
+ * This method will get the path segments from the HttpServletRequest. Please
+ * note that if the first segment of the path is /api this is removed from the
+ * returning byte array.
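+ * <p>
+ * A hypothetical example, assuming an empty servlet context path:
+ * <pre>
+ *   // request URI: /api/mytable/row/myrow
+ *   // returned segments: { "mytable", "row", "myrow" }   ("api" is stripped)
+ * </pre>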
+ *
+ * @param request
+ *
+ * @return the request path info split on '/', ignoring the leading '/' so
+ * that the first element of the returned array is not the empty string.
+ */
+ protected byte[][] getPathSegments(final HttpServletRequest request) {
+ int context_len = request.getContextPath().length() + 1;
+
+ byte[][] pathSegments = Bytes.toByteArrays(request.getRequestURI().substring(context_len)
+ .split("/"));
+ byte[] apiAsBytes = "api".getBytes();
+ if (Arrays.equals(apiAsBytes, pathSegments[0])) {
+ byte[][] newPathSegments = new byte[pathSegments.length - 1][];
+ for(int i = 0; i < newPathSegments.length; i++) {
+ newPathSegments[i] = pathSegments[i + 1];
+ }
+ pathSegments = newPathSegments;
+ }
+ return pathSegments;
+ }
+
+ protected byte[] readInputBuffer(HttpServletRequest request)
+ throws HBaseRestException {
+ try {
+ String resultant = "";
+ BufferedReader r = request.getReader();
+
+ int maxLength = 5000; // tie to conf
+ int bufferLength = 640;
+
+ // TODO make maxLength and bufferLength configurable
+ if (!r.ready()) {
+ Thread.sleep(1000); // If r is not ready wait 1 second
+ if (!r.ready()) { // If r still is not ready something is wrong, return
+ // blank.
+ return new byte[0];
+ }
+ }
+ char[] c; // read buffer of bufferLength characters
+ while (r.ready()) {
+ c = new char[bufferLength];
+ int n = r.read(c, 0, bufferLength);
+ resultant += new String(c);
+ if (n != bufferLength) {
+ break;
+ } else if (resultant.length() > maxLength) {
+ resultant = resultant.substring(0, maxLength);
+ break;
+ }
+ }
+ return Bytes.toBytes(resultant.trim());
+ } catch (Exception e) {
+ throw new HBaseRestException(e);
+ }
+ }
+
+ protected IHBaseRestParser getParser(HttpServletRequest request) {
+ return HBaseRestParserFactory.getParser(ContentType.getContentType(request
+ .getHeader("content-type")));
+ }
+
+ protected Status createStatus(HttpServletRequest request,
+ HttpServletResponse response) throws HBaseRestException {
+ return new Status(response, RestSerializerFactory.getSerializer(request,
+ response), this.getPathSegments(request));
+ }
+
+ //
+ // Main program and support routines
+ //
+ protected static void printUsageAndExit() {
+ printUsageAndExit(null);
+ }
+
+ protected static void printUsageAndExit(final String message) {
+ if (message != null) {
+ System.err.println(message);
+ }
+ System.out.println("Usage: java org.apache.hadoop.hbase.rest.Dispatcher "
+ + "--help | [--port=PORT] [--bind=ADDR] start");
+ System.out.println("Arguments:");
+ System.out.println(" start Start REST server");
+ System.out.println(" stop Stop REST server");
+ System.out.println("Options:");
+ System.out.println(" port Port to listen on. Default: 60050.");
+ System.out.println(" bind Address to bind on. Default: 0.0.0.0.");
+ System.out.println(" max-num-threads The maximum number of threads for Jetty to run. Defaults to 256.");
+ System.out.println(" help Print this message and exit.");
+
+ System.exit(message == null ? 0 : 1);
+ }
+
+ /*
+ * Start up the REST servlet in standalone mode.
+ *
+ * @param args
+ */
+ protected static void doMain(final String[] args) throws Exception {
+ if (args.length < 1) {
+ printUsageAndExit();
+ }
+
+ int port = 60050;
+ String bindAddress = "0.0.0.0";
+ int numThreads = 256;
+
+ // Process command-line args. TODO: Better cmd-line processing
+ // (but hopefully something not as painful as cli options).
+ final String addressArgKey = "--bind=";
+ final String portArgKey = "--port=";
+ final String numThreadsKey = "--max-num-threads=";
+ for (String cmd : args) {
+ if (cmd.startsWith(addressArgKey)) {
+ bindAddress = cmd.substring(addressArgKey.length());
+ continue;
+ } else if (cmd.startsWith(portArgKey)) {
+ port = Integer.parseInt(cmd.substring(portArgKey.length()));
+ continue;
+ } else if (cmd.equals("--help") || cmd.equals("-h")) {
+ printUsageAndExit();
+ } else if (cmd.equals("start")) {
+ continue;
+ } else if (cmd.equals("stop")) {
+ printUsageAndExit("To shutdown the REST server run "
+ + "bin/hbase-daemon.sh stop rest or send a kill signal to "
+ + "the REST server pid");
+ } else if (cmd.startsWith(numThreadsKey)) {
+ numThreads = Integer.parseInt(cmd.substring(numThreadsKey.length()));
+ continue;
+ }
+
+ // Print out usage if we get to here.
+ printUsageAndExit();
+ }
+ org.mortbay.jetty.Server webServer = new org.mortbay.jetty.Server();
+ SocketListener listener = new SocketListener();
+ listener.setPort(port);
+ listener.setHost(bindAddress);
+ listener.setMaxThreads(numThreads);
+ webServer.addListener(listener);
+ NCSARequestLog ncsa = new NCSARequestLog();
+ ncsa.setLogLatency(true);
+ webServer.setRequestLog(ncsa);
+ WebApplicationContext context =
+ webServer.addWebApplication("/api", InfoServer.getWebAppDir("rest"));
+ context.addServlet("stacks", "/stacks",
+ StatusHttpServer.StackServlet.class.getName());
+ context.addServlet("logLevel", "/logLevel",
+ org.apache.hadoop.log.LogLevel.Servlet.class.getName());
+ webServer.start();
+ }
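+
+ // Example invocation (values are the defaults shown above, chosen for
+ // illustration):
+ //   java org.apache.hadoop.hbase.rest.Dispatcher \
+ //       --port=60050 --bind=0.0.0.0 --max-num-threads=256 start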
+
+ /**
+ * @param args
+ * @throws Exception
+ */
+ public static void main(String[] args) throws Exception {
+ System.out.println("Starting restServer");
+ doMain(args);
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/RESTConstants.java b/src/java/org/apache/hadoop/hbase/rest/RESTConstants.java
new file mode 100644
index 0000000..42b0d19
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/RESTConstants.java
@@ -0,0 +1,110 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest;
+
+import org.apache.hadoop.hbase.rest.filter.RowFilterSetFactory;
+import org.apache.hadoop.hbase.rest.filter.StopRowFilterFactory;
+import org.apache.hadoop.hbase.rest.filter.WhileMatchRowFilterFactory;
+import org.apache.hadoop.hbase.rest.filter.PageRowFilterFactory;
+import org.apache.hadoop.hbase.rest.filter.ColumnValueFilterFactory;
+import org.apache.hadoop.hbase.rest.filter.RegExpRowFilterFactory;
+import org.apache.hadoop.hbase.rest.filter.InclusiveStopRowFilterFactory;
+import java.util.HashMap;
+import org.apache.hadoop.hbase.rest.filter.FilterFactory;
+
+public interface RESTConstants {
+ final static String TRUE = "true";
+ final static String FALSE = "false";
+ // Query parameter used to select specific columns; data is returned in the order the columns are given.
+ final static String COLUMNS = "columns";
+ final static String COLUMN = "column";
+ // Used with TableExists
+ final static String EXISTS = "exists";
+ // Maps to Transaction ID
+ final static String TRANSACTION = "transaction";
+ // Transaction Operation Key.
+ final static String TRANSACTION_OPERATION = "transaction_op";
+ // Transaction Operation Values
+ final static String TRANSACTION_OPERATION_COMMIT = "commit";
+ final static String TRANSACTION_OPERATION_CREATE = "create";
+ final static String TRANSACTION_OPERATION_ABORT = "abort";
+ // Filter Key
+ final static String FILTER = "filter";
+ final static String FILTER_TYPE = "type";
+ final static String FILTER_VALUE = "value";
+ final static String FILTER_RANK = "rank";
+ // Scanner Key
+ final static String SCANNER = "scanner";
+ // The amount of rows to return at one time.
+ final static String SCANNER_RESULT_SIZE = "result_size";
+ final static String SCANNER_START_ROW = "start_row";
+ final static String SCANNER_STOP_ROW = "stop_row";
+ final static String SCANNER_FILTER = "filter";
+ final static String SCANNER_TIMESTAMP = "timestamp";
+ final static String NUM_VERSIONS = "num_versions";
+ final static String SCANNER_COLUMN = "column";
+ // static items used on the path
+ static final String DISABLE = "disable";
+ static final String ENABLE = "enable";
+ static final String REGIONS = "regions";
+ static final String ROW = "row";
+ static final String TIME_STAMPS = "timestamps";
+ static final String METADATA = "metadata";
+
+ static final String NAME = "name";
+ static final String VALUE = "value";
+ static final String ROWS = "rows";
+
+ static final FactoryMap filterFactories = FactoryMap.getFactoryMap();
+ static final String LIMIT = "limit";
+
+ static class FactoryMap {
+
+ static boolean created = false;
+ protected HashMap<String, FilterFactory> map = new HashMap<String, FilterFactory>();
+
+ protected FactoryMap() {
+ }
+
+ // Lazily-created singleton; subsequent calls return the same instance.
+ private static FactoryMap instance = null;
+
+ public static synchronized FactoryMap getFactoryMap() {
+ if (instance == null) {
+ created = true;
+ instance = new FactoryMap();
+ instance.initialize();
+ }
+ return instance;
+ }
+
+ public FilterFactory get(String c) {
+ return map.get(c);
+ }
+
+ protected void initialize() {
+ map.put("ColumnValueFilter", new ColumnValueFilterFactory());
+ map.put("InclusiveStopRowFilter", new InclusiveStopRowFilterFactory());
+ map.put("PageRowFilter", new PageRowFilterFactory());
+ map.put("RegExpRowFilter", new RegExpRowFilterFactory());
+ map.put("RowFilterSet", new RowFilterSetFactory());
+ map.put("StopRowFilter", new StopRowFilterFactory());
+ map.put("WhileMatchRowFilter", new WhileMatchRowFilterFactory());
+ }
+ }
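+
+ // Usage sketch: callers look a factory up by the filter's class name, for
+ // example RESTConstants.filterFactories.get("RowFilterSet"), as
+ // ScannerController.unionFilters() does. Keys must match the names
+ // registered in initialize() above.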
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/RowController.java b/src/java/org/apache/hadoop/hbase/rest/RowController.java
new file mode 100644
index 0000000..35d3c9a
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/RowController.java
@@ -0,0 +1,135 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest;
+
+import java.util.Map;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.rest.descriptors.RowUpdateDescriptor;
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+import org.apache.hadoop.hbase.rest.parser.IHBaseRestParser;
+import org.apache.hadoop.hbase.util.Bytes;
+
+public class RowController extends AbstractController {
+
+ @SuppressWarnings("unused")
+ private Log LOG = LogFactory.getLog(RowController.class);
+
+ protected RowModel getModel() {
+ return (RowModel) model;
+ }
+
+ @Override
+ protected AbstractModel generateModel(HBaseConfiguration conf,
+ HBaseAdmin admin) {
+ return new RowModel(conf, admin);
+ }
+
+ @Override
+ public void get(Status s, byte[][] pathSegments,
+ Map<String, String[]> queryMap) throws HBaseRestException {
+ RowModel innerModel = getModel();
+ s.setNoQueryResults();
+
+ byte[] tableName;
+ byte[] rowName;
+
+ tableName = pathSegments[0];
+ rowName = pathSegments[2];
+ RowResult row = null;
+
+ if (queryMap.size() == 0 && pathSegments.length <= 3) {
+ row = innerModel.get(tableName, rowName);
+ } else if (pathSegments.length == 4
+ && Bytes.toString(pathSegments[3]).toLowerCase().equals(
+ RESTConstants.TIME_STAMPS)) {
+ innerModel.getTimestamps(tableName, rowName);
+ } else {
+ row = innerModel.get(tableName, rowName, this.getColumnsFromQueryMap(queryMap));
+ }
+ if(row == null) {
+ throw new HBaseRestException("row not found");
+ }
+ s.setOK(row);
+ s.respond();
+ }
+
+ @Override
+ public void post(Status s, byte[][] pathSegments,
+ Map<String, String[]> queryMap, byte[] input, IHBaseRestParser parser)
+ throws HBaseRestException {
+ RowModel innerModel = getModel();
+
+ if (input.length == 0) {
+ s.setUnsupportedMediaType("no data sent with post request");
+ s.respond();
+ return;
+ }
+
+ BatchUpdate b;
+ RowUpdateDescriptor rud = parser
+ .getRowUpdateDescriptor(input, pathSegments);
+
+ b = new BatchUpdate(rud.getRowName());
+
+ for (byte[] key : rud.getColVals().keySet()) {
+ b.put(key, rud.getColVals().get(key));
+ }
+
+ try {
+ innerModel.post(rud.getTableName().getBytes(), b);
+ s.setOK();
+ } catch (HBaseRestException e) {
+ s.setUnsupportedMediaType(e.getMessage());
+ }
+ s.respond();
+ }
+
+ @Override
+ public void put(Status s, byte[][] pathSegments,
+ Map<String, String[]> queryMap, byte[] input, IHBaseRestParser parser)
+ throws HBaseRestException {
+ s.setMethodNotImplemented();
+ s.respond();
+ }
+
+ @Override
+ public void delete(Status s, byte[][] pathSegments,
+ Map<String, String[]> queryMap) throws HBaseRestException {
+ RowModel innerModel = getModel();
+ byte[] tableName;
+ byte[] rowName;
+
+ tableName = pathSegments[0];
+ rowName = pathSegments[2];
+ if(queryMap.size() == 0) {
+ innerModel.delete(tableName, rowName);
+ } else {
+ innerModel.delete(tableName, rowName, this.getColumnsFromQueryMap(queryMap));
+ }
+ s.setOK();
+ s.respond();
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/RowModel.java b/src/java/org/apache/hadoop/hbase/rest/RowModel.java
new file mode 100644
index 0000000..1b8ce8c
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/RowModel.java
@@ -0,0 +1,140 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest;
+
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.rest.descriptors.TimestampsDescriptor;
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+
+public class RowModel extends AbstractModel {
+
+ @SuppressWarnings("unused")
+ private Log LOG = LogFactory.getLog(RowModel.class);
+
+ public RowModel(HBaseConfiguration conf, HBaseAdmin admin) {
+ super.initialize(conf, admin);
+ }
+
+ public RowResult get(byte[] tableName, byte[] rowName)
+ throws HBaseRestException {
+ try {
+ HTable table = new HTable(tableName);
+ return table.getRow(rowName);
+ } catch (IOException e) {
+ throw new HBaseRestException(e);
+ }
+ }
+
+ public RowResult get(byte[] tableName, byte[] rowName, byte[][] columns)
+ throws HBaseRestException {
+ try {
+ HTable table = new HTable(tableName);
+ return table.getRow(rowName, columns);
+ } catch (IOException e) {
+ throw new HBaseRestException(e);
+ }
+ }
+
+ public RowResult get(byte[] tableName, byte[] rowName, byte[][] columns,
+ long timestamp) throws HBaseRestException {
+ try {
+ HTable table = new HTable(tableName);
+ return table.getRow(rowName, columns, timestamp);
+ } catch (IOException e) {
+ throw new HBaseRestException(e);
+ }
+ }
+
+ public RowResult get(byte[] tableName, byte[] rowName, long timestamp)
+ throws HBaseRestException {
+ try {
+ HTable table = new HTable(tableName);
+ return table.getRow(rowName, timestamp);
+ } catch (IOException e) {
+ throw new HBaseRestException(e);
+ }
+ }
+
+ public TimestampsDescriptor getTimestamps(
+ @SuppressWarnings("unused") byte[] tableName,
+ @SuppressWarnings("unused") byte[] rowName) throws HBaseRestException {
+ // try {
+ // TimestampsDescriptor tsd = new TimestampsDescriptor();
+ // HTable table = new HTable(tableName);
+ // RowResult row = table.getRow(rowName);
+
+ throw new HBaseRestException("operation currently unsupported");
+
+ // } catch (IOException e) {
+ // throw new HBaseRestException("Error finding timestamps for row: "
+ // + Bytes.toString(rowName), e);
+ // }
+
+ }
+
+ public void post(byte[] tableName, BatchUpdate b) throws HBaseRestException {
+ try {
+ HTable table = new HTable(tableName);
+ table.commit(b);
+ } catch (IOException e) {
+ throw new HBaseRestException(e);
+ }
+ }
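+
+ // Minimal usage sketch (hypothetical table, column and values), assuming
+ // the standard BatchUpdate API used by RowController:
+ //   BatchUpdate update = new BatchUpdate(Bytes.toBytes("row1"));
+ //   update.put(Bytes.toBytes("info:name"), Bytes.toBytes("value"));
+ //   new RowModel(conf, admin).post(Bytes.toBytes("mytable"), update);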
+
+ public void post(byte[] tableName, List<BatchUpdate> b)
+ throws HBaseRestException {
+ try {
+ HTable table = new HTable(tableName);
+ table.commit(b);
+ } catch (IOException e) {
+ throw new HBaseRestException(e);
+ }
+ }
+
+ public void delete(byte[] tableName, byte[] rowName)
+ throws HBaseRestException {
+ try {
+ HTable table = new HTable(tableName);
+ table.deleteAll(rowName);
+ } catch (IOException e) {
+ throw new HBaseRestException(e);
+ }
+ }
+
+ public void delete(byte[] tableName, byte[] rowName, byte[][] columns) throws HBaseRestException {
+ try {
+ HTable table = new HTable(tableName);
+ for (byte[] column : columns) {
+ table.deleteAll(rowName, column);
+ }
+ } catch (IOException e) {
+ throw new HBaseRestException(e);
+ }
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/ScannerController.java b/src/java/org/apache/hadoop/hbase/rest/ScannerController.java
new file mode 100644
index 0000000..d8f17fc
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/ScannerController.java
@@ -0,0 +1,358 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest;
+
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Set;
+import java.util.regex.Pattern;
+
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.filter.RowFilterInterface;
+import org.apache.hadoop.hbase.filter.RowFilterSet;
+import org.apache.hadoop.hbase.filter.StopRowFilter;
+import org.apache.hadoop.hbase.filter.WhileMatchRowFilter;
+import org.apache.hadoop.hbase.rest.descriptors.ScannerDescriptor;
+import org.apache.hadoop.hbase.rest.descriptors.ScannerIdentifier;
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+import org.apache.hadoop.hbase.rest.filter.FilterFactory;
+import org.apache.hadoop.hbase.rest.parser.IHBaseRestParser;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ *
+ */
+public class ScannerController extends AbstractController {
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.AbstractController#delete(org.apache.hadoop
+ * .hbase.rest.Status, byte[][], java.util.Map)
+ */
+ @Override
+ public void delete(Status s, byte[][] pathSegments,
+ Map<String, String[]> queryMap) throws HBaseRestException {
+ ScannerModel innerModel = this.getModel();
+ if (pathSegments.length == 3
+ && Bytes.toString(pathSegments[1]).toLowerCase().equals(
+ RESTConstants.SCANNER)) {
+ // get the scannerId
+ Integer scannerId = null;
+ String scannerIdString = new String(pathSegments[2]);
+ if (!Pattern.matches("^\\d+$", scannerIdString)) {
+ throw new HBaseRestException(
+ "the scannerid in the path and must be an integer");
+ }
+ scannerId = Integer.parseInt(scannerIdString);
+
+ try {
+ innerModel.scannerClose(scannerId);
+ s.setOK();
+ } catch (HBaseRestException e) {
+ s.setNotFound();
+ }
+ } else {
+ s.setBadRequest("invalid query");
+ }
+ s.respond();
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.AbstractController#generateModel(org.apache
+ * .hadoop.hbase.HBaseConfiguration,
+ * org.apache.hadoop.hbase.client.HBaseAdmin)
+ */
+ @Override
+ protected AbstractModel generateModel(HBaseConfiguration conf, HBaseAdmin a) {
+ return new ScannerModel(conf, a);
+ }
+
+ protected ScannerModel getModel() {
+ return (ScannerModel) model;
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.AbstractController#get(org.apache.hadoop.hbase
+ * .rest.Status, byte[][], java.util.Map)
+ */
+ @Override
+ public void get(Status s, byte[][] pathSegments,
+ Map<String, String[]> queryMap) throws HBaseRestException {
+
+ s.setBadRequest("invalid query");
+ s.respond();
+
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.AbstractController#post(org.apache.hadoop.
+ * hbase.rest.Status, byte[][], java.util.Map, byte[],
+ * org.apache.hadoop.hbase.rest.parser.IHBaseRestParser)
+ */
+ @Override
+ public void post(Status s, byte[][] pathSegments,
+ Map<String, String[]> queryMap, byte[] input, IHBaseRestParser parser)
+ throws HBaseRestException {
+ ScannerModel innerModel = this.getModel();
+ byte[] tableName;
+ tableName = pathSegments[0];
+
+ // Dispatch on the path: /table/scanner creates a new scanner; /table/scanner/<id> reads from an existing one.
+ if (pathSegments.length == 2
+ && Bytes.toString(pathSegments[1]).toLowerCase().equals(
+ RESTConstants.SCANNER)) { // new scanner request
+ ScannerDescriptor sd = this.getScannerDescriptor(queryMap);
+ s.setScannerCreated(createScanner(innerModel, tableName, sd));
+ } else if (pathSegments.length == 3
+ && Bytes.toString(pathSegments[1]).toLowerCase().equals(
+ RESTConstants.SCANNER)) { // open scanner request
+ // first see if the limit variable is present
+ Long numRows = 1L;
+ String[] numRowsString = queryMap.get(RESTConstants.LIMIT);
+ if (numRowsString != null && Pattern.matches("^\\d+$", numRowsString[0])) {
+ numRows = Long.parseLong(numRowsString[0]);
+ }
+ // get the scannerId
+ Integer scannerId = null;
+ String scannerIdString = new String(pathSegments[2]);
+ if (!Pattern.matches("^\\d+$", scannerIdString)) {
+ throw new HBaseRestException(
+ "the scannerid in the path and must be an integer");
+ }
+ scannerId = Integer.parseInt(scannerIdString);
+
+ try {
+ s.setOK(innerModel.scannerGet(scannerId, numRows));
+ } catch (HBaseRestException e) {
+ s.setNotFound();
+ }
+ } else {
+ s.setBadRequest("Unknown Query.");
+ }
+ s.respond();
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.AbstractController#put(org.apache.hadoop.hbase
+ * .rest.Status, byte[][], java.util.Map, byte[],
+ * org.apache.hadoop.hbase.rest.parser.IHBaseRestParser)
+ */
+ @Override
+ public void put(Status s, byte[][] pathSegments,
+ Map<String, String[]> queryMap, byte[] input, IHBaseRestParser parser)
+ throws HBaseRestException {
+
+ s.setBadRequest("invalid query");
+ s.respond();
+
+ }
+
+ private ScannerDescriptor getScannerDescriptor(Map<String, String[]> queryMap) {
+ long timestamp = 0L;
+ byte[] startRow = null;
+ byte[] stopRow = null;
+ String filters = null;
+
+ String[] timeStampString = queryMap.get(RESTConstants.SCANNER_TIMESTAMP);
+ if (timeStampString != null && timeStampString.length == 1) {
+ timestamp = Long.parseLong(timeStampString[0]);
+ }
+
+ String[] startRowString = queryMap.get(RESTConstants.SCANNER_START_ROW);
+ if (startRowString != null && startRowString.length == 1) {
+ startRow = Bytes.toBytes(startRowString[0]);
+ }
+
+ String[] stopRowString = queryMap.get(RESTConstants.SCANNER_STOP_ROW);
+ if (stopRowString != null && stopRowString.length == 1) {
+ stopRow = Bytes.toBytes(stopRowString[0]);
+ }
+
+ String[] filtersStrings = queryMap.get(RESTConstants.SCANNER_FILTER);
+ if (filtersStrings != null && filtersStrings.length > 0) {
+ filters = "";
+ for (@SuppressWarnings("unused")
+ String filter : filtersStrings) {
+ // TODO filters are not hooked up yet... And the String should probably
+ // be changed to a set
+ }
+ }
+ return new ScannerDescriptor(this.getColumnsFromQueryMap(queryMap),
+ timestamp, startRow, stopRow, filters);
+ }
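+
+ // Illustrative request (hypothetical values) built from the query keys read
+ // above:
+ //   POST /api/mytable/scanner?start_row=row1&stop_row=row9&timestamp=1
+ // yields a ScannerDescriptor with those bounds; columns come from the
+ // parameters handled by getColumnsFromQueryMap().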
+
+ protected ScannerIdentifier createScanner(ScannerModel innerModel,
+ byte[] tableName, ScannerDescriptor scannerDescriptor)
+ throws HBaseRestException {
+
+ RowFilterInterface filterSet = null;
+
+ // Encode which scanner parameters were supplied as a bitmask (columns = bit
+ // 0, timestamp = bit 1, start row = bit 2, stop row = bit 3, filters = bit
+ // 4) so the combinations can be dispatched with a single switch statement.
+ int switchInt = 0;
+ if (scannerDescriptor.getColumns() != null
+ && scannerDescriptor.getColumns().length > 0) {
+ switchInt += 1;
+ }
+ switchInt += (scannerDescriptor.getTimestamp() != 0L) ? (1 << 1) : 0;
+ switchInt += (scannerDescriptor.getStartRow().length > 0) ? (1 << 2) : 0;
+ switchInt += (scannerDescriptor.getStopRow().length > 0) ? (1 << 3) : 0;
+ if (scannerDescriptor.getFilters() != null
+ && !scannerDescriptor.getFilters().equals("")) {
+ switchInt += 1 << 4; // filter string already checked non-empty above
+ filterSet = unionFilters(scannerDescriptor.getFilters());
+ }
+
+ return scannerSwitch(switchInt, innerModel, tableName, scannerDescriptor
+ .getColumns(), scannerDescriptor.getTimestamp(), scannerDescriptor
+ .getStartRow(), scannerDescriptor.getStopRow(), filterSet);
+ }
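+
+ // Worked example of the bitmask: a request supplying only columns (bit 0)
+ // and a start row (bit 2) gives switchInt = 1 + 4 = 5, which dispatches to
+ // case 5: scannerOpen(tableName, columns, startRow).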
+
+ public ScannerIdentifier scannerSwitch(int switchInt,
+ ScannerModel innerModel, byte[] tableName, byte[][] columns,
+ long timestamp, byte[] startRow, byte[] stopRow,
+ RowFilterInterface filterSet) throws HBaseRestException {
+ switch (switchInt) {
+ case 0:
+ return innerModel.scannerOpen(tableName);
+ case 1:
+ return innerModel.scannerOpen(tableName, columns);
+ case 2:
+ return innerModel.scannerOpen(tableName, timestamp);
+ case 3:
+ return innerModel.scannerOpen(tableName, columns, timestamp);
+ case 4:
+ return innerModel.scannerOpen(tableName, startRow);
+ case 5:
+ return innerModel.scannerOpen(tableName, columns, startRow);
+ case 6:
+ return innerModel.scannerOpen(tableName, startRow, timestamp);
+ case 7:
+ return innerModel.scannerOpen(tableName, columns, startRow, timestamp);
+ case 8:
+ return innerModel.scannerOpen(tableName, getStopRow(stopRow));
+ case 9:
+ return innerModel.scannerOpen(tableName, columns, getStopRow(stopRow));
+ case 10:
+ return innerModel.scannerOpen(tableName, timestamp, getStopRow(stopRow));
+ case 11:
+ return innerModel.scannerOpen(tableName, columns, timestamp,
+ getStopRow(stopRow));
+ case 12:
+ return innerModel.scannerOpen(tableName, startRow, getStopRow(stopRow));
+ case 13:
+ return innerModel.scannerOpen(tableName, columns, startRow,
+ getStopRow(stopRow));
+ case 14:
+ return innerModel.scannerOpen(tableName, startRow, timestamp,
+ getStopRow(stopRow));
+ case 15:
+ return innerModel.scannerOpen(tableName, columns, startRow, timestamp,
+ getStopRow(stopRow));
+ case 16:
+ return innerModel.scannerOpen(tableName, filterSet);
+ case 17:
+ return innerModel.scannerOpen(tableName, columns, filterSet);
+ case 18:
+ return innerModel.scannerOpen(tableName, timestamp, filterSet);
+ case 19:
+ return innerModel.scannerOpen(tableName, columns, timestamp, filterSet);
+ case 20:
+ return innerModel.scannerOpen(tableName, startRow, filterSet);
+ case 21:
+ return innerModel.scannerOpen(tableName, columns, startRow, filterSet);
+ case 22:
+ return innerModel.scannerOpen(tableName, startRow, timestamp, filterSet);
+ case 23:
+ return innerModel.scannerOpen(tableName, columns, startRow, timestamp,
+ filterSet);
+ case 24:
+ return innerModel.scannerOpen(tableName, getStopRowUnionFilter(stopRow,
+ filterSet));
+ case 25:
+ return innerModel.scannerOpen(tableName, columns, getStopRowUnionFilter(
+ stopRow, filterSet));
+ case 26:
+ return innerModel.scannerOpen(tableName, timestamp,
+ getStopRowUnionFilter(stopRow, filterSet));
+ case 27:
+ return innerModel.scannerOpen(tableName, columns, timestamp,
+ getStopRowUnionFilter(stopRow, filterSet));
+ case 28:
+ return innerModel.scannerOpen(tableName, startRow, getStopRowUnionFilter(
+ stopRow, filterSet));
+ case 29:
+ return innerModel.scannerOpen(tableName, columns, startRow,
+ getStopRowUnionFilter(stopRow, filterSet));
+ case 30:
+ return innerModel.scannerOpen(tableName, startRow, timestamp,
+ getStopRowUnionFilter(stopRow, filterSet));
+ case 31:
+ return innerModel.scannerOpen(tableName, columns, startRow, timestamp,
+ getStopRowUnionFilter(stopRow, filterSet));
+ default:
+ return null;
+ }
+ }
+
+ protected RowFilterInterface getStopRow(byte[] stopRow) {
+ return new WhileMatchRowFilter(new StopRowFilter(stopRow));
+ }
+
+ protected RowFilterInterface getStopRowUnionFilter(byte[] stopRow,
+ RowFilterInterface filter) {
+ Set<RowFilterInterface> filterSet = new HashSet<RowFilterInterface>();
+ filterSet.add(getStopRow(stopRow));
+ filterSet.add(filter);
+ return new RowFilterSet(filterSet);
+ }
+
+ /**
+ * Given a list of filters in JSON string form, returns a RowSetFilter that
+ * returns true if all input filters return true on a Row (aka an AND
+ * statement).
+ *
+ * @param filters
+ * array of input filters in a JSON String
+ * @return RowSetFilter with all input filters in an AND Statement
+ * @throws org.apache.hadoop.hbase.rest.exception.HBaseRestException
+ */
+ protected RowFilterInterface unionFilters(String filters)
+ throws HBaseRestException {
+ FilterFactory f = RESTConstants.filterFactories.get("RowFilterSet");
+ return f.getFilterFromJSON(filters);
+ }
+
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/ScannerModel.java b/src/java/org/apache/hadoop/hbase/rest/ScannerModel.java
new file mode 100644
index 0000000..c951430
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/ScannerModel.java
@@ -0,0 +1,277 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Scanner;
+import org.apache.hadoop.hbase.filter.RowFilterInterface;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.rest.descriptors.ScannerIdentifier;
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+
+/**
+ *
+ */
+public class ScannerModel extends AbstractModel {
+
+ public ScannerModel(HBaseConfiguration config, HBaseAdmin admin) {
+ super.initialize(config, admin);
+ }
+
+ //
+ // Normal Scanner
+ //
+ protected static class ScannerMaster {
+
+ protected static final Map<Integer, Scanner> scannerMap = new ConcurrentHashMap<Integer, Scanner>();
+ protected static final AtomicInteger nextScannerId = new AtomicInteger(1);
+
+ public Integer addScanner(Scanner scanner) {
+ Integer i = Integer.valueOf(nextScannerId.getAndIncrement());
+ scannerMap.put(i, scanner);
+ return i;
+ }
+
+ public Scanner getScanner(Integer id) {
+ return scannerMap.get(id);
+ }
+
+ public Scanner removeScanner(Integer id) {
+ return scannerMap.remove(id);
+ }
+
+ /**
+ * @param id
+ * id of scanner to close
+ */
+ public void scannerClose(Integer id) {
+ Scanner s = scannerMap.remove(id);
+ if (s != null) {
+ s.close();
+ }
+ }
+ }
+
+ protected static final ScannerMaster scannerMaster = new ScannerMaster();
+
+ /**
+ * Returns the next numRows RowResults from the Scanner mapped to Integer
+ * id. If the end of the table is reached, the scanner is closed and all
+ * successfully retrieved rows are returned.
+ *
+ * @param id
+ * id target scanner is mapped to.
+ * @param numRows
+ * number of results to return.
+ * @return all successfully retrieved rows.
+ * @throws org.apache.hadoop.hbase.rest.exception.HBaseRestException
+ */
+ public RowResult[] scannerGet(Integer id, Long numRows)
+ throws HBaseRestException {
+ try {
+ ArrayList<RowResult> a;
+ Scanner s;
+ RowResult r;
+
+ a = new ArrayList<RowResult>();
+ s = scannerMaster.getScanner(id);
+
+ if (s == null) {
+ throw new HBaseRestException("ScannerId: " + id
+ + " is unavailable. Please create a new scanner");
+ }
+
+ for (int i = 0; i < numRows; i++) {
+ if ((r = s.next()) != null) {
+ a.add(r);
+ } else {
+ scannerMaster.scannerClose(id);
+ break;
+ }
+ }
+
+ return a.toArray(new RowResult[0]);
+ } catch (IOException e) {
+ throw new HBaseRestException(e);
+ }
+ }
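+
+ // Usage sketch (hypothetical table name; assumes ScannerIdentifier.getId()
+ // returns the integer key used here): open a scanner over the whole table,
+ // then fetch a page of rows; the scanner is closed automatically once the
+ // end of the table is reached.
+ //   ScannerModel sm = new ScannerModel(conf, admin);
+ //   Integer id = sm.scannerOpen(Bytes.toBytes("mytable")).getId();
+ //   RowResult[] page = sm.scannerGet(id, 10L);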
+
+ /**
+ * Returns all rows between the scanner's current position and the end of
+ * the table.
+ *
+ * @param id
+ * id of scanner to use
+ * @return all rows till end of table
+ * @throws org.apache.hadoop.hbase.rest.exception.HBaseRestException
+ */
+ public RowResult[] scannerGet(Integer id) throws HBaseRestException {
+ try {
+ ArrayList<RowResult> a;
+ Scanner s;
+ RowResult r;
+
+ a = new ArrayList<RowResult>();
+ s = scannerMaster.getScanner(id);
+
+ while ((r = s.next()) != null) {
+ a.add(r);
+ }
+
+ scannerMaster.scannerClose(id);
+
+ return a.toArray(new RowResult[0]);
+ } catch (IOException e) {
+ throw new HBaseRestException(e);
+ }
+ }
+
+ public boolean scannerClose(Integer id) throws HBaseRestException {
+ Scanner s = scannerMaster.removeScanner(id);
+
+ if (s == null) {
+ throw new HBaseRestException("Scanner id: " + id + " does not exist");
+ }
+ return true;
+ }
+
+ // Scanner Open Methods
+ // No Columns
+ public ScannerIdentifier scannerOpen(byte[] tableName)
+ throws HBaseRestException {
+ return scannerOpen(tableName, getColumns(tableName));
+ }
+
+ public ScannerIdentifier scannerOpen(byte[] tableName, long timestamp)
+ throws HBaseRestException {
+ return scannerOpen(tableName, getColumns(tableName), timestamp);
+ }
+
+ public ScannerIdentifier scannerOpen(byte[] tableName, byte[] startRow)
+ throws HBaseRestException {
+ return scannerOpen(tableName, getColumns(tableName), startRow);
+ }
+
+ public ScannerIdentifier scannerOpen(byte[] tableName, byte[] startRow,
+ long timestamp) throws HBaseRestException {
+ return scannerOpen(tableName, getColumns(tableName), startRow, timestamp);
+ }
+
+ public ScannerIdentifier scannerOpen(byte[] tableName,
+ RowFilterInterface filter) throws HBaseRestException {
+ return scannerOpen(tableName, getColumns(tableName), filter);
+ }
+
+ public ScannerIdentifier scannerOpen(byte[] tableName, long timestamp,
+ RowFilterInterface filter) throws HBaseRestException {
+ return scannerOpen(tableName, getColumns(tableName), timestamp, filter);
+ }
+
+ public ScannerIdentifier scannerOpen(byte[] tableName, byte[] startRow,
+ RowFilterInterface filter) throws HBaseRestException {
+ return scannerOpen(tableName, getColumns(tableName), startRow, filter);
+ }
+
+ public ScannerIdentifier scannerOpen(byte[] tableName, byte[] startRow,
+ long timestamp, RowFilterInterface filter) throws HBaseRestException {
+ return scannerOpen(tableName, getColumns(tableName), startRow, timestamp,
+ filter);
+ }
+
+ // With Columns
+ public ScannerIdentifier scannerOpen(byte[] tableName, byte[][] columns,
+ long timestamp) throws HBaseRestException {
+ try {
+ HTable table;
+ table = new HTable(tableName);
+ return new ScannerIdentifier(scannerMaster.addScanner(table.getScanner(
+ columns, HConstants.EMPTY_START_ROW, timestamp)));
+ } catch (IOException e) {
+ throw new HBaseRestException(e);
+ }
+ }
+
+ public ScannerIdentifier scannerOpen(byte[] tableName, byte[][] columns)
+ throws HBaseRestException {
+ return scannerOpen(tableName, columns, HConstants.LATEST_TIMESTAMP);
+ }
+
+ public ScannerIdentifier scannerOpen(byte[] tableName, byte[][] columns,
+ byte[] startRow, long timestamp) throws HBaseRestException {
+ try {
+ HTable table;
+ table = new HTable(tableName);
+ return new ScannerIdentifier(scannerMaster.addScanner(table.getScanner(
+ columns, startRow, timestamp)));
+ } catch (IOException e) {
+ throw new HBaseRestException(e);
+ }
+ }
+
+ public ScannerIdentifier scannerOpen(byte[] tableName, byte[][] columns,
+ byte[] startRow) throws HBaseRestException {
+ return scannerOpen(tableName, columns, startRow,
+ HConstants.LATEST_TIMESTAMP);
+ }
+
+ public ScannerIdentifier scannerOpen(byte[] tableName, byte[][] columns,
+ long timestamp, RowFilterInterface filter) throws HBaseRestException {
+ try {
+ HTable table;
+ table = new HTable(tableName);
+ return new ScannerIdentifier(scannerMaster.addScanner(table.getScanner(
+ columns, HConstants.EMPTY_START_ROW, timestamp, filter)));
+ } catch (IOException e) {
+ throw new HBaseRestException(e);
+ }
+ }
+
+ public ScannerIdentifier scannerOpen(byte[] tableName, byte[][] columns,
+ RowFilterInterface filter) throws HBaseRestException {
+ return scannerOpen(tableName, columns, HConstants.LATEST_TIMESTAMP, filter);
+ }
+
+ public ScannerIdentifier scannerOpen(byte[] tableName, byte[][] columns,
+ byte[] startRow, long timestamp, RowFilterInterface filter)
+ throws HBaseRestException {
+ try {
+ HTable table;
+ table = new HTable(tableName);
+ return new ScannerIdentifier(scannerMaster.addScanner(table.getScanner(
+ columns, startRow, timestamp, filter)));
+ } catch (IOException e) {
+ throw new HBaseRestException(e);
+ }
+ }
+
+ public ScannerIdentifier scannerOpen(byte[] tableName, byte[][] columns,
+ byte[] startRow, RowFilterInterface filter) throws HBaseRestException {
+ return scannerOpen(tableName, columns, startRow,
+ HConstants.LATEST_TIMESTAMP, filter);
+ }
+
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/Status.java b/src/java/org/apache/hadoop/hbase/rest/Status.java
new file mode 100644
index 0000000..9cc5e85
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/Status.java
@@ -0,0 +1,256 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest;
+
+import java.util.HashMap;
+
+import javax.servlet.http.HttpServletResponse;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.rest.descriptors.ScannerIdentifier;
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+import org.apache.hadoop.hbase.rest.serializer.IRestSerializer;
+import org.apache.hadoop.hbase.rest.serializer.ISerializable;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import agilejson.TOJSON;
+
+public class Status {
+
+ @SuppressWarnings("unused")
+ private Log LOG = LogFactory.getLog(Status.class);
+
+ public static final HashMap<Integer, String> statNames = new HashMap<Integer, String>();
+
+ static {
+ statNames.put(HttpServletResponse.SC_CONTINUE, "continue");
+ statNames.put(HttpServletResponse.SC_SWITCHING_PROTOCOLS,
+ "switching protocols");
+ statNames.put(HttpServletResponse.SC_OK, "ok");
+ statNames.put(HttpServletResponse.SC_CREATED, "created");
+ statNames.put(HttpServletResponse.SC_ACCEPTED, "accepted");
+ statNames.put(HttpServletResponse.SC_NON_AUTHORITATIVE_INFORMATION,
+ "non-authoritative information");
+ statNames.put(HttpServletResponse.SC_NO_CONTENT, "no content");
+ statNames.put(HttpServletResponse.SC_RESET_CONTENT, "reset content");
+ statNames.put(HttpServletResponse.SC_PARTIAL_CONTENT, "partial content");
+ statNames.put(HttpServletResponse.SC_MULTIPLE_CHOICES, "multiple choices");
+ statNames
+ .put(HttpServletResponse.SC_MOVED_PERMANENTLY, "moved permanently");
+ statNames
+ .put(HttpServletResponse.SC_MOVED_TEMPORARILY, "moved temporarily");
+ statNames.put(HttpServletResponse.SC_FOUND, "found");
+ statNames.put(HttpServletResponse.SC_SEE_OTHER, "see other");
+ statNames.put(HttpServletResponse.SC_NOT_MODIFIED, "not modified");
+ statNames.put(HttpServletResponse.SC_USE_PROXY, "use proxy");
+ statNames.put(HttpServletResponse.SC_TEMPORARY_REDIRECT,
+ "temporary redirect");
+ statNames.put(HttpServletResponse.SC_BAD_REQUEST, "bad request");
+ statNames.put(HttpServletResponse.SC_UNAUTHORIZED, "unauthorized");
+ statNames.put(HttpServletResponse.SC_FORBIDDEN, "forbidden");
+ statNames.put(HttpServletResponse.SC_NOT_FOUND, "not found");
+ statNames.put(HttpServletResponse.SC_METHOD_NOT_ALLOWED,
+ "method not allowed");
+ statNames.put(HttpServletResponse.SC_NOT_ACCEPTABLE, "not acceptable");
+ statNames.put(HttpServletResponse.SC_PROXY_AUTHENTICATION_REQUIRED,
+ "proxy authentication required");
+ statNames.put(HttpServletResponse.SC_REQUEST_TIMEOUT, "request timeout");
+ statNames.put(HttpServletResponse.SC_CONFLICT, "conflict");
+ statNames.put(HttpServletResponse.SC_GONE, "gone");
+ statNames.put(HttpServletResponse.SC_LENGTH_REQUIRED, "length required");
+ statNames.put(HttpServletResponse.SC_PRECONDITION_FAILED,
+ "precondition failed");
+ statNames.put(HttpServletResponse.SC_REQUEST_ENTITY_TOO_LARGE,
+ "request entity too large");
+ statNames.put(HttpServletResponse.SC_REQUEST_URI_TOO_LONG,
+ "request uri too long");
+ statNames.put(HttpServletResponse.SC_UNSUPPORTED_MEDIA_TYPE,
+ "unsupported media type");
+ statNames.put(HttpServletResponse.SC_REQUESTED_RANGE_NOT_SATISFIABLE,
+ "requested range not satisfiable");
+ statNames.put(HttpServletResponse.SC_EXPECTATION_FAILED,
+ "expectation failed");
+ statNames.put(HttpServletResponse.SC_INTERNAL_SERVER_ERROR,
+ "internal server error");
+ statNames.put(HttpServletResponse.SC_NOT_IMPLEMENTED, "not implemented");
+ statNames.put(HttpServletResponse.SC_BAD_GATEWAY, "bad gateway");
+ statNames.put(HttpServletResponse.SC_SERVICE_UNAVAILABLE,
+ "service unavailable");
+ statNames.put(HttpServletResponse.SC_GATEWAY_TIMEOUT, "gateway timeout");
+ statNames.put(HttpServletResponse.SC_HTTP_VERSION_NOT_SUPPORTED,
+ "http version not supported");
+ }
+ protected int statusCode;
+ protected HttpServletResponse response;
+ protected Object message;
+ protected IRestSerializer serializer;
+ protected byte[][] pathSegments;
+
+ public int getStatusCode() {
+ return statusCode;
+ }
+
+ @TOJSON
+ public Object getMessage() {
+ return message;
+ }
+
+ public static class StatusMessage implements ISerializable {
+ int statusCode;
+ boolean error;
+ Object reason;
+
+ public StatusMessage(int statusCode, boolean error, Object o) {
+ this.statusCode = statusCode;
+ this.error = error;
+ reason = o;
+ }
+
+ @TOJSON
+ public int getStatusCode() {
+ return statusCode;
+ }
+
+ @TOJSON
+ public boolean getError() {
+ return error;
+ }
+
+ @TOJSON
+ public Object getMessage() {
+ return reason;
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.xml.IOutputXML#toXML(org.apache.hadoop.hbase
+ * .rest.serializer.IRestSerializer)
+ */
+ public void restSerialize(IRestSerializer serializer)
+ throws HBaseRestException {
+ serializer.serializeStatusMessage(this);
+ }
+ }
+
+ public Status(HttpServletResponse r, IRestSerializer serializer, byte[][] bs) {
+ this.setOK();
+ this.response = r;
+ this.serializer = serializer;
+ this.pathSegments = bs;
+ }
+
+ // Good Messages
+ public void setOK() {
+ this.statusCode = HttpServletResponse.SC_OK;
+ this.message = new StatusMessage(HttpServletResponse.SC_OK, false, "success");
+ }
+
+ public void setOK(Object message) {
+ this.statusCode = HttpServletResponse.SC_OK;
+ this.message = message;
+ }
+
+ public void setAccepted() {
+ this.statusCode = HttpServletResponse.SC_ACCEPTED;
+ this.message = new StatusMessage(HttpServletResponse.SC_ACCEPTED, false, "success");
+ }
+
+ public void setExists(boolean error) {
+ this.statusCode = HttpServletResponse.SC_CONFLICT;
+ this.message = new StatusMessage(statusCode, error, "table already exists");
+ }
+
+ public void setCreated() {
+ this.statusCode = HttpServletResponse.SC_CREATED;
+ this.setOK();
+ }
+
+ public void setScannerCreated(ScannerIdentifier scannerIdentifier) {
+ this.statusCode = HttpServletResponse.SC_OK;
+ this.message = scannerIdentifier;
+ response.addHeader("Location", "/" + Bytes.toString(pathSegments[0])
+ + "/scanner/" + scannerIdentifier.getId());
+ }
+ // Bad Messages
+
+ public void setInternalError(Exception e) {
+ this.statusCode = HttpServletResponse.SC_INTERNAL_SERVER_ERROR;
+ this.message = new StatusMessage(statusCode, true, e);
+ }
+
+ public void setNoQueryResults() {
+ this.statusCode = HttpServletResponse.SC_NOT_FOUND;
+ this.message = new StatusMessage(statusCode, true, "no query results");
+ }
+
+ public void setConflict(Object message) {
+ this.statusCode = HttpServletResponse.SC_CONFLICT;
+ this.message = new StatusMessage(statusCode, true, message);
+ }
+
+ public void setNotFound(Object message) {
+ this.statusCode = HttpServletResponse.SC_NOT_FOUND;
+ this.message = new StatusMessage(statusCode, true, message);
+ }
+
+ public void setBadRequest(Object message) {
+ this.statusCode = HttpServletResponse.SC_BAD_REQUEST;
+ this.message = new StatusMessage(statusCode, true, message);
+ }
+
+ public void setNotFound() {
+ setNotFound("Unable to find requested URI");
+ }
+
+ public void setMethodNotImplemented() {
+ this.statusCode = HttpServletResponse.SC_METHOD_NOT_ALLOWED;
+ this.message = new StatusMessage(statusCode, true, "method not implemented");
+ }
+
+ public void setInvalidURI() {
+ setInvalidURI("Invalid URI");
+ }
+
+ public void setInvalidURI(Object message) {
+ this.statusCode = HttpServletResponse.SC_BAD_REQUEST;
+ this.message = new StatusMessage(statusCode, true, message);
+ }
+
+ public void setUnsupportedMediaType(Object message) {
+ this.statusCode = HttpServletResponse.SC_UNSUPPORTED_MEDIA_TYPE;
+ this.message = new StatusMessage(statusCode, true, message);
+ }
+
+ public void setGone() {
+ this.statusCode = HttpServletResponse.SC_GONE;
+ this.message = new StatusMessage(statusCode, true, "item no longer available");
+ }
+
+
+ // Utility
+ public void respond() throws HBaseRestException {
+ response.setStatus(this.statusCode);
+ this.serializer.writeOutput(this.message);
+ }
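+
+ // Typical lifecycle, as used by the REST controllers (a sketch, not an
+ // additional API): build a Status, record exactly one outcome, then call
+ // respond() once.
+ //   Status s = new Status(response, serializer, pathSegments);
+ //   s.setOK(result); // or setNotFound(), setBadRequest(...), etc.
+ //   s.respond();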
+
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/TableController.java b/src/java/org/apache/hadoop/hbase/rest/TableController.java
new file mode 100644
index 0000000..a022041
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/TableController.java
@@ -0,0 +1,170 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest;
+
+import java.util.ArrayList;
+import java.util.Map;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+import org.apache.hadoop.hbase.rest.parser.IHBaseRestParser;
+import org.apache.hadoop.hbase.util.Bytes;
+
+public class TableController extends AbstractController {
+
+ @SuppressWarnings("unused")
+ private Log LOG = LogFactory.getLog(TableController.class);
+
+ protected TableModel getModel() {
+ return (TableModel) model;
+ }
+
+ @Override
+ protected AbstractModel generateModel(
+ HBaseConfiguration conf, HBaseAdmin admin) {
+ return new TableModel(conf, admin);
+ }
+
+ @Override
+ public void get(Status s, byte[][] pathSegments,
+ Map<String, String[]> queryMap) throws HBaseRestException {
+ TableModel innerModel = getModel();
+
+ byte[] tableName;
+
+ tableName = pathSegments[0];
+ if (pathSegments.length < 2) {
+ s.setOK(innerModel.getTableMetadata(Bytes.toString(tableName)));
+ } else {
+ if (Bytes.toString(pathSegments[1]).toLowerCase().equals(
+ RESTConstants.REGIONS)) {
+ s.setOK(innerModel.getTableRegions(Bytes.toString(tableName)));
+ } else {
+ s.setBadRequest("unknown query.");
+ }
+ }
+ s.respond();
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @param input column descriptor JSON. Should be of the form: <pre>
+ * {"column_families":[ { "name":STRING, "bloomfilter":BOOLEAN,
+ * "max_versions":INTEGER, "compression_type":STRING, "in_memory":BOOLEAN,
+ * "block_cache_enabled":BOOLEAN, "max_value_length":INTEGER,
+ * "time_to_live":INTEGER } ]} </pre> If any of the JSON object fields (except
+ * name) are omitted, the default values are used instead. The defaults
+ * are: <pre> bloomfilter => false, max_versions => 3,
+ * compression_type => NONE, in_memory => false, block_cache_enabled => false,
+ * max_value_length => 2147483647, time_to_live => Integer.MAX_VALUE </pre>
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.AbstractController#post(org.apache.hadoop.
+ * hbase.rest.Status, byte[][], java.util.Map, byte[],
+ * org.apache.hadoop.hbase.rest.parser.IHBaseRestParser)
+ */
+ @Override
+ public void post(Status s, byte[][] pathSegments,
+ Map<String, String[]> queryMap, byte[] input, IHBaseRestParser parser)
+ throws HBaseRestException {
+ TableModel innerModel = getModel();
+
+ byte[] tableName;
+
+ if (pathSegments.length == 0) {
+ // If no input, we don't know columnfamily schema, so send
+ // no data
+ if (input.length == 0) {
+ s.setBadRequest("no data send with post request");
+ } else {
+ HTableDescriptor htd = parser.getTableDescriptor(input);
+ // Send to innerModel. If iM returns false, means the
+ // table already exists so return conflict.
+ if (!innerModel.post(htd.getName(), htd)) {
+ s.setConflict("table already exists");
+ } else {
+ // Otherwise successfully created table. Return "created":true
+ s.setCreated();
+ }
+ }
+ } else if (Bytes.toString(pathSegments[1]).toLowerCase().equals(
+ RESTConstants.ENABLE)) {
+ tableName = pathSegments[0];
+ innerModel.enableTable(tableName);
+ s.setAccepted();
+ } else if (Bytes.toString(pathSegments[1]).toLowerCase().equals(
+ RESTConstants.DISABLE)) {
+ tableName = pathSegments[0];
+ innerModel.disableTable(tableName);
+ s.setAccepted();
+ } else {
+ s.setBadRequest("Unknown Query.");
+ }
+ s.respond();
+ }
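+
+ // Illustrative create-table body matching the schema documented above.
+ // Field values are hypothetical, and the exact envelope (including how the
+ // table name is supplied) depends on the configured IHBaseRestParser:
+ //   {"column_families":[{"name":"info:", "max_versions":3,
+ //    "compression_type":"NONE", "in_memory":false}]}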
+
+ @Override
+ public void put(Status s, byte[][] pathSegments,
+ Map<String, String[]> queryMap, byte[] input, IHBaseRestParser parser)
+ throws HBaseRestException {
+ if (pathSegments.length != 1) {
+ s.setBadRequest("must specify the name of the table");
+ s.respond();
+ } else if (queryMap.size() > 0) {
+ s.setBadRequest("no query string should be specified when updating a table");
+ s.respond();
+ } else {
+ ArrayList<HColumnDescriptor> newColumns = parser
+ .getColumnDescriptors(input);
+ byte[] tableName = pathSegments[0];
+ getModel().updateTable(Bytes.toString(tableName), newColumns);
+ s.setOK();
+ s.respond();
+ }
+ }
+
+ @Override
+ public void delete(Status s, byte[][] pathSegments,
+ Map<String, String[]> queryMap) throws HBaseRestException {
+ TableModel innerModel = getModel();
+
+ byte[] tableName;
+
+ tableName = pathSegments[0];
+
+ if (pathSegments.length == 1) {
+ if (!innerModel.delete(tableName)) {
+ s.setBadRequest("table does not exist");
+ } else {
+ s.setAccepted();
+ }
+ s.respond();
+ } else {
+ s.setBadRequest("Unknown Query.");
+ s.respond();
+ }
+ }
+
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/rest/TableModel.java b/src/java/org/apache/hadoop/hbase/rest/TableModel.java
new file mode 100644
index 0000000..9e1524a
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/TableModel.java
@@ -0,0 +1,279 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest;
+
+import java.io.IOException;
+import java.util.ArrayList;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Scanner;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+import org.apache.hadoop.hbase.rest.serializer.IRestSerializer;
+import org.apache.hadoop.hbase.rest.serializer.ISerializable;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import agilejson.TOJSON;
+
+public class TableModel extends AbstractModel {
+
+ @SuppressWarnings("unused")
+ private Log LOG = LogFactory.getLog(TableModel.class);
+
+ public TableModel(HBaseConfiguration config, HBaseAdmin admin) {
+ super.initialize(config, admin);
+ }
+
+ // Get Methods
+ public RowResult[] get(byte[] tableName) throws HBaseRestException {
+ return get(tableName, getColumns(tableName));
+ }
+
+ /**
+ * Returns all cells from all rows from the given table in the given columns.
+ * The output is in the order that the columns are given.
+ *
+ * @param tableName
+ * table name
+ * @param columnNames
+ * column names
+ * @return resultant rows
+ * @throws org.apache.hadoop.hbase.rest.exception.HBaseRestException
+ */
+ public RowResult[] get(byte[] tableName, byte[][] columnNames)
+ throws HBaseRestException {
+ try {
+ ArrayList<RowResult> a = new ArrayList<RowResult>();
+ HTable table = new HTable(tableName);
+
+ Scanner s = table.getScanner(columnNames);
+ RowResult r;
+
+ while ((r = s.next()) != null) {
+ a.add(r);
+ }
+
+ return a.toArray(new RowResult[0]);
+ } catch (Exception e) {
+ throw new HBaseRestException(e);
+ }
+ }
+
+ protected boolean doesTableExist(byte[] tableName) throws HBaseRestException {
+ try {
+ return this.admin.tableExists(tableName);
+ } catch (IOException e) {
+ throw new HBaseRestException(e);
+ }
+ }
+
+ protected void disableTable(byte[] tableName) throws HBaseRestException {
+ try {
+ this.admin.disableTable(tableName);
+ } catch (IOException e) {
+ throw new HBaseRestException("IOException disabling table", e);
+ }
+ }
+
+ protected void enableTable(byte[] tableName) throws HBaseRestException {
+ try {
+ this.admin.enableTable(tableName);
+ } catch (IOException e) {
+ throw new HBaseRestException("IOException enabiling table", e);
+ }
+ }
+
+ public boolean updateTable(String tableName,
+ ArrayList<HColumnDescriptor> columns) throws HBaseRestException {
+ HTableDescriptor htc = null;
+ try {
+ htc = this.admin.getTableDescriptor(tableName);
+ } catch (IOException e) {
+ throw new HBaseRestException("Table does not exist");
+ }
+
+ for (HColumnDescriptor column : columns) {
+ if (htc.hasFamily(Bytes.toBytes(column.getNameAsString()))) {
+ try {
+ this.admin.disableTable(tableName);
+ this.admin.modifyColumn(tableName, column.getNameAsString(), column);
+ this.admin.enableTable(tableName);
+ } catch (IOException e) {
+ throw new HBaseRestException("unable to modify column "
+ + column.getNameAsString(), e);
+ }
+ } else {
+ try {
+ this.admin.disableTable(tableName);
+ this.admin.addColumn(tableName, column);
+ this.admin.enableTable(tableName);
+ } catch (IOException e) {
+ throw new HBaseRestException("unable to add column "
+ + column.getNameAsString(), e);
+ }
+ }
+ }
+
+ return true;
+
+ }
+
+ /**
+ * Get table metadata.
+ *
+ * @param tableName
+ * @return HTableDescriptor
+ * @throws HBaseRestException
+ */
+ public HTableDescriptor getTableMetadata(final String tableName)
+ throws HBaseRestException {
+ try {
+ HTableDescriptor[] tables = this.admin.listTables();
+ for (int i = 0; i < tables.length; i++) {
+ if (Bytes.toString(tables[i].getName()).equals(tableName)) {
+ return tables[i];
+ }
+ }
+ } catch (IOException e) {
+ throw new HBaseRestException("error processing request", e);
+ }
+ // No table with the given name was found.
+ return null;
+ }
+
+ /**
+ * Return the start keys of the table's regions.
+ * @param tableName
+ * @return Regions
+ * @throws HBaseRestException
+ */
+ public Regions getTableRegions(final String tableName)
+ throws HBaseRestException {
+ try {
+ HTable table = new HTable(this.conf, tableName);
+ // The table's start keys mark the boundaries of its regions.
+ return new Regions(table.getStartKeys());
+ } catch (IOException e) {
+ throw new HBaseRestException("Unable to get regions from table");
+ }
+ }
+
+ // Post Methods
+ /**
+ * Creates the table described by the given descriptor.
+ *
+ * @param tableName
+ * table name
+ * @param htd
+ * HTableDescriptor for the table to be created
+ *
+ * @return true if the table did not already exist and was created;
+ * false if a table with the given name already exists.
+ * @throws org.apache.hadoop.hbase.rest.exception.HBaseRestException
+ */
+ public boolean post(byte[] tableName, HTableDescriptor htd)
+ throws HBaseRestException {
+ try {
+ if (!this.admin.tableExists(tableName)) {
+ this.admin.createTable(htd);
+ return true;
+ }
+ } catch (IOException e) {
+ throw new HBaseRestException(e);
+ }
+ return false;
+ }
+
+ /**
+ * Deletes table tableName
+ *
+ * @param tableName
+ * name of the table.
+ * @return true if the table existed and was deleted; false if it did not exist.
+ * @throws org.apache.hadoop.hbase.rest.exception.HBaseRestException
+ */
+ public boolean delete(byte[] tableName) throws HBaseRestException {
+ try {
+ if (this.admin.tableExists(tableName)) {
+ this.admin.disableTable(tableName);
+ this.admin.deleteTable(tableName);
+ return true;
+ }
+ return false;
+ } catch (Exception e) {
+ throw new HBaseRestException(e);
+ }
+ }
+
+ public static class Regions implements ISerializable {
+ byte[][] regionKey;
+
+ public Regions(byte[][] bs) {
+ super();
+ this.regionKey = bs;
+ }
+
+ @SuppressWarnings("unused")
+ private Regions() {
+ }
+
+ /**
+ * @return the regionKey
+ */
+ @TOJSON(fieldName = "region")
+ public byte[][] getRegionKey() {
+ return regionKey;
+ }
+
+ /**
+ * @param regionKey
+ * the regionKey to set
+ */
+ public void setRegionKey(byte[][] regionKey) {
+ this.regionKey = regionKey;
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see org.apache.hadoop.hbase.rest.xml.IOutputXML#toXML()
+ */
+ public void restSerialize(IRestSerializer serializer)
+ throws HBaseRestException {
+ serializer.serializeRegionData(this);
+ }
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/TimestampController.java b/src/java/org/apache/hadoop/hbase/rest/TimestampController.java
new file mode 100644
index 0000000..da6e26e
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/TimestampController.java
@@ -0,0 +1,139 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+import org.apache.hadoop.hbase.rest.parser.IHBaseRestParser;
+import org.apache.hadoop.hbase.util.Bytes;
+
+public class TimestampController extends AbstractController {
+
+ @SuppressWarnings("unused")
+ private Log LOG = LogFactory.getLog(TimestampController.class);
+
+ protected TimestampModel getModel() {
+ return (TimestampModel) model;
+ }
+
+ @Override
+ protected AbstractModel generateModel(
+ HBaseConfiguration conf, HBaseAdmin admin) {
+ return new TimestampModel(conf, admin);
+ }
+
+ @Override
+ public void get(Status s, byte[][] pathSegments,
+ Map<String, String[]> queryMap) throws HBaseRestException {
+ TimestampModel innerModel = getModel();
+
+ byte[] tableName;
+ byte[] rowName;
+ long timestamp;
+
+ tableName = pathSegments[0];
+ rowName = pathSegments[2];
+ timestamp = Bytes.toLong(pathSegments[3]);
+
+ if (queryMap.size() == 0) {
+ s.setOK(innerModel.get(tableName, rowName, timestamp));
+ } else {
+ // get the column names if any were passed in
+ String[] column_params = queryMap.get(RESTConstants.COLUMN);
+ byte[][] columns = null;
+
+ if (column_params != null && column_params.length > 0) {
+ List<String> available_columns = new ArrayList<String>();
+ for (String column_param : column_params) {
+ available_columns.add(column_param);
+ }
+ columns = Bytes.toByteArrays(available_columns.toArray(new String[0]));
+ }
+ s.setOK(innerModel.get(tableName, rowName, columns, timestamp));
+ }
+ s.respond();
+ }
+
+ @Override
+ public void post(Status s, byte[][] pathSegments,
+ Map<String, String[]> queryMap, byte[] input, IHBaseRestParser parser)
+ throws HBaseRestException {
+ TimestampModel innerModel = getModel();
+
+ byte[] tableName;
+ byte[] rowName;
+ byte[] columnName;
+ long timestamp;
+
+ tableName = pathSegments[0];
+ rowName = pathSegments[1];
+ columnName = pathSegments[2];
+ timestamp = Bytes.toLong(pathSegments[3]);
+
+ try {
+ if (queryMap.size() == 0) {
+ innerModel.post(tableName, rowName, columnName, timestamp, input);
+ s.setOK();
+ } else {
+ s.setUnsupportedMediaType("Unknown Query.");
+ }
+ } catch (HBaseRestException e) {
+ s.setUnsupportedMediaType(e.getMessage());
+ }
+ s.respond();
+ }
+
+ @Override
+ public void put(Status s, byte[][] pathSegments,
+ Map<String, String[]> queryMap, byte[] input, IHBaseRestParser parser)
+ throws HBaseRestException {
+ throw new UnsupportedOperationException("Not supported yet.");
+ }
+
+ @Override
+ public void delete(Status s, byte[][] pathSegments,
+ Map<String, String[]> queryMap) throws HBaseRestException {
+ TimestampModel innerModel = getModel();
+
+ byte[] tableName;
+ byte[] rowName;
+ long timestamp;
+
+ tableName = pathSegments[0];
+ rowName = pathSegments[2];
+ timestamp = Bytes.toLong(pathSegments[3]);
+
+ if (queryMap.size() == 0) {
+ innerModel.delete(tableName, rowName, timestamp);
+ } else {
+ innerModel.delete(tableName, rowName, this
+ .getColumnsFromQueryMap(queryMap), timestamp);
+ }
+ s.setAccepted();
+ s.respond();
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/TimestampModel.java b/src/java/org/apache/hadoop/hbase/rest/TimestampModel.java
new file mode 100644
index 0000000..0e876c5
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/TimestampModel.java
@@ -0,0 +1,126 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest;
+
+import java.io.IOException;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+
+public class TimestampModel extends AbstractModel {
+
+ @SuppressWarnings("unused")
+ private Log LOG = LogFactory.getLog(TimestampModel.class);
+
+ public TimestampModel(HBaseConfiguration conf, HBaseAdmin admin) {
+ super.initialize(conf, admin);
+ }
+
+ public void delete(byte[] tableName, byte[] rowName, long timestamp)
+ throws HBaseRestException {
+ try {
+ HTable table = new HTable(tableName);
+ table.deleteAll(rowName, timestamp);
+ } catch (IOException e) {
+ throw new HBaseRestException(e);
+ }
+ }
+
+ public void delete(byte[] tableName, byte[] rowName, byte[][] columns,
+ long timestamp) throws HBaseRestException {
+ try {
+ HTable table = new HTable(tableName);
+ for (byte[] column : columns) {
+ table.deleteAll(rowName, column, timestamp);
+ }
+ } catch (IOException e) {
+ throw new HBaseRestException(e);
+ }
+ }
+
+ public Cell get(byte[] tableName, byte[] rowName, byte[] columnName,
+ long timestamp) throws HBaseRestException {
+ try {
+ HTable table = new HTable(tableName);
+ // Guard against an empty result; indexing it directly would throw an
+ // unchecked exception rather than a meaningful REST error.
+ Cell[] result = table.get(rowName, columnName, timestamp, 1);
+ if (result == null || result.length == 0) {
+ throw new HBaseRestException("No cell found at the given coordinates");
+ }
+ return result[0];
+ } catch (IOException e) {
+ throw new HBaseRestException(e);
+ }
+ }
+
+ public Cell[] get(byte[] tableName, byte[] rowName, byte[] columnName,
+ long timestamp, int numVersions) throws HBaseRestException {
+ try {
+ HTable table = new HTable(tableName);
+ return table.get(rowName, columnName, timestamp, numVersions);
+ } catch (IOException e) {
+ throw new HBaseRestException(e);
+ }
+ }
+
+ public RowResult get(byte[] tableName, byte[] rowName, byte[][] columns,
+ long timestamp) throws HBaseRestException {
+ try {
+ HTable table = new HTable(tableName);
+ return table.getRow(rowName, columns, timestamp);
+ } catch (IOException e) {
+ throw new HBaseRestException(e);
+ }
+ }
+
+ /**
+ * @param tableName
+ * @param rowName
+ * @param timestamp
+ * @return RowResult
+ * @throws HBaseRestException
+ */
+ public RowResult get(byte[] tableName, byte[] rowName, long timestamp)
+ throws HBaseRestException {
+ try {
+ HTable table = new HTable(tableName);
+ return table.getRow(rowName, timestamp);
+ } catch (IOException e) {
+ throw new HBaseRestException(e);
+ }
+ }
+
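+ /**
+ * Stores the given value in the named column of the given row at the
+ * supplied timestamp, committing it as a single BatchUpdate.
+ *
+ * @param tableName
+ * @param rowName
+ * @param columnName
+ * @param timestamp
+ * @param value
+ * @throws HBaseRestException
+ */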
+ public void post(byte[] tableName, byte[] rowName, byte[] columnName,
+ long timestamp, byte[] value) throws HBaseRestException {
+ try {
+ HTable table;
+ BatchUpdate b;
+
+ table = new HTable(tableName);
+ b = new BatchUpdate(rowName, timestamp);
+
+ b.put(columnName, value);
+ table.commit(b);
+ } catch (IOException e) {
+ throw new HBaseRestException(e);
+ }
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/descriptors/RestCell.java b/src/java/org/apache/hadoop/hbase/rest/descriptors/RestCell.java
new file mode 100644
index 0000000..e480ce1
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/descriptors/RestCell.java
@@ -0,0 +1,104 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest.descriptors;
+
+import org.apache.hadoop.hbase.io.Cell;
+
+import agilejson.TOJSON;
+
+/**
+ *
+ */
+public class RestCell extends Cell {
+
+ byte[] name;
+
+ /**
+ *
+ */
+ public RestCell() {
+ super();
+ // TODO Auto-generated constructor stub
+ }
+
+ /**
+ * @param name
+ * @param cell
+ */
+ public RestCell(byte[] name, Cell cell) {
+ super(cell.getValue(), cell.getTimestamp());
+ this.name = name;
+ }
+
+ /**
+ * @param value
+ * @param timestamp
+ */
+ public RestCell(byte[] value, long timestamp) {
+ super(value, timestamp);
+ // TODO Auto-generated constructor stub
+ }
+
+ /**
+ * @param vals
+ * @param ts
+ */
+ public RestCell(byte[][] vals, long[] ts) {
+ super(vals, ts);
+ // TODO Auto-generated constructor stub
+ }
+
+ /**
+ * @param value
+ * @param timestamp
+ */
+ public RestCell(String value, long timestamp) {
+ super(value, timestamp);
+ // TODO Auto-generated constructor stub
+ }
+
+ /**
+ * @param vals
+ * @param ts
+ */
+ public RestCell(String[] vals, long[] ts) {
+ super(vals, ts);
+ // TODO Auto-generated constructor stub
+ }
+
+ /**
+ * @return the name
+ */
+ @TOJSON(base64=true)
+ public byte[] getName() {
+ return name;
+ }
+
+ /**
+ * @param name the name to set
+ */
+ public void setName(byte[] name) {
+ this.name = name;
+ }
+
+
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/descriptors/RowUpdateDescriptor.java b/src/java/org/apache/hadoop/hbase/rest/descriptors/RowUpdateDescriptor.java
new file mode 100644
index 0000000..4401055
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/descriptors/RowUpdateDescriptor.java
@@ -0,0 +1,74 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest.descriptors;
+
+import java.util.HashMap;
+import java.util.Map;
+
+/**
+ *
+ */
+public class RowUpdateDescriptor {
+ private String tableName;
+ private String rowName;
+ private Map<byte[], byte[]> colvals = new HashMap<byte[], byte[]>();
+
+ public RowUpdateDescriptor(String tableName, String rowName) {
+ this.tableName = tableName;
+ this.rowName = rowName;
+ }
+
+ public RowUpdateDescriptor() {}
+
+ /**
+ * @return the tableName
+ */
+ public String getTableName() {
+ return tableName;
+ }
+
+ /**
+ * @param tableName the tableName to set
+ */
+ public void setTableName(String tableName) {
+ this.tableName = tableName;
+ }
+
+ /**
+ * @return the rowName
+ */
+ public String getRowName() {
+ return rowName;
+ }
+
+ /**
+ * @param rowName the rowName to set
+ */
+ public void setRowName(String rowName) {
+ this.rowName = rowName;
+ }
+
+ /**
+ * @return the map of column names to values for this update
+ */
+ public Map<byte[], byte[]> getColVals() {
+ return colvals;
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/descriptors/ScannerDescriptor.java b/src/java/org/apache/hadoop/hbase/rest/descriptors/ScannerDescriptor.java
new file mode 100644
index 0000000..2cddabe
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/descriptors/ScannerDescriptor.java
@@ -0,0 +1,130 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest.descriptors;
+
+/**
+ *
+ */
+public class ScannerDescriptor {
+ byte[][] columns;
+ long timestamp;
+ byte[] startRow;
+ byte[] stopRow;
+ String filters;
+
+ /**
+ * @param columns
+ * @param timestamp
+ * @param startRow
+ * @param stopRow
+ * @param filters
+ */
+ public ScannerDescriptor(byte[][] columns, long timestamp, byte[] startRow,
+ byte[] stopRow, String filters) {
+ super();
+ this.columns = columns;
+ this.timestamp = timestamp;
+ this.startRow = startRow;
+ this.stopRow = stopRow;
+ this.filters = filters;
+
+ if(this.startRow == null) {
+ this.startRow = new byte[0];
+ }
+ if(this.stopRow == null) {
+ this.stopRow = new byte[0];
+ }
+ }
+
+ /**
+ * @return the columns
+ */
+ public byte[][] getColumns() {
+ return columns;
+ }
+
+ /**
+ * @param columns
+ * the columns to set
+ */
+ public void setColumns(byte[][] columns) {
+ this.columns = columns;
+ }
+
+ /**
+ * @return the timestamp
+ */
+ public long getTimestamp() {
+ return timestamp;
+ }
+
+ /**
+ * @param timestamp
+ * the timestamp to set
+ */
+ public void setTimestamp(long timestamp) {
+ this.timestamp = timestamp;
+ }
+
+ /**
+ * @return the startRow
+ */
+ public byte[] getStartRow() {
+ return startRow;
+ }
+
+ /**
+ * @param startRow
+ * the startRow to set
+ */
+ public void setStartRow(byte[] startRow) {
+ this.startRow = startRow;
+ }
+
+ /**
+ * @return the stopRow
+ */
+ public byte[] getStopRow() {
+ return stopRow;
+ }
+
+ /**
+ * @param stopRow
+ * the stopRow to set
+ */
+ public void setStopRow(byte[] stopRow) {
+ this.stopRow = stopRow;
+ }
+
+ /**
+ * @return the filters
+ */
+ public String getFilters() {
+ return filters;
+ }
+
+ /**
+ * @param filters
+ * the filters to set
+ */
+ public void setFilters(String filters) {
+ this.filters = filters;
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/rest/descriptors/ScannerIdentifier.java b/src/java/org/apache/hadoop/hbase/rest/descriptors/ScannerIdentifier.java
new file mode 100644
index 0000000..168472a
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/descriptors/ScannerIdentifier.java
@@ -0,0 +1,96 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest.descriptors;
+
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+import org.apache.hadoop.hbase.rest.serializer.IRestSerializer;
+import org.apache.hadoop.hbase.rest.serializer.ISerializable;
+
+import agilejson.TOJSON;
+
+/**
+ *
+ */
+public class ScannerIdentifier implements ISerializable {
+ Integer id;
+ Long numRows;
+
+ /**
+ * @param id
+ */
+ public ScannerIdentifier(Integer id) {
+ super();
+ this.id = id;
+ }
+
+ /**
+ * @param id
+ * @param numRows
+ */
+ public ScannerIdentifier(Integer id, Long numRows) {
+ super();
+ this.id = id;
+ this.numRows = numRows;
+ }
+
+ /**
+ * @return the id
+ */
+ @TOJSON
+ public Integer getId() {
+ return id;
+ }
+
+ /**
+ * @param id
+ * the id to set
+ */
+ public void setId(Integer id) {
+ this.id = id;
+ }
+
+ /**
+ * @return the numRows
+ */
+ public Long getNumRows() {
+ return numRows;
+ }
+
+ /**
+ * @param numRows
+ * the numRows to set
+ */
+ public void setNumRows(Long numRows) {
+ this.numRows = numRows;
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.xml.IOutputXML#toXML(org.apache.hadoop.hbase
+ * .rest.serializer.IRestSerializer)
+ */
+ public void restSerialize(IRestSerializer serializer)
+ throws HBaseRestException {
+ serializer.serializeScannerIdentifier(this);
+ }
+
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/descriptors/TimestampsDescriptor.java b/src/java/org/apache/hadoop/hbase/rest/descriptors/TimestampsDescriptor.java
new file mode 100644
index 0000000..9125c80
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/descriptors/TimestampsDescriptor.java
@@ -0,0 +1,67 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest.descriptors;
+
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+import org.apache.hadoop.hbase.rest.serializer.IRestSerializer;
+import org.apache.hadoop.hbase.rest.serializer.ISerializable;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ *
+ */
+public class TimestampsDescriptor implements ISerializable {
+ Map<Long, String> timestamps = new HashMap<Long, String>();
+
+ public void add(long timestamp, byte[] tableName, byte[] rowName) {
+ StringBuilder sb = new StringBuilder();
+ sb.append('/');
+ sb.append(Bytes.toString(tableName));
+ sb.append("/row/");
+ sb.append(Bytes.toString(rowName));
+ sb.append('/');
+ sb.append(timestamp);
+
+ timestamps.put(timestamp, sb.toString());
+ }
+
+ /**
+ * @return the timestamps
+ */
+ public Map<Long, String> getTimestamps() {
+ return timestamps;
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.serializer.ISerializable#restSerialize(org
+ * .apache.hadoop.hbase.rest.serializer.IRestSerializer)
+ */
+ public void restSerialize(IRestSerializer serializer)
+ throws HBaseRestException {
+ serializer.serializeTimestamps(this);
+ }
+
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/exception/HBaseRestException.java b/src/java/org/apache/hadoop/hbase/rest/exception/HBaseRestException.java
new file mode 100644
index 0000000..a938534
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/exception/HBaseRestException.java
@@ -0,0 +1,86 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest.exception;
+
+import agilejson.TOJSON;
+
+public class HBaseRestException extends Exception {
+
+ /**
+ *
+ */
+ private static final long serialVersionUID = 8481585437124298646L;
+ private Exception innerException;
+ private String innerClass;
+ private String innerMessage;
+
+ public HBaseRestException() {
+
+ }
+
+ public HBaseRestException(Exception e) throws HBaseRestException {
+ if (HBaseRestException.class.isAssignableFrom(e.getClass())) {
+ throw ((HBaseRestException) e);
+ }
+ setInnerException(e);
+ innerClass = e.getClass().toString();
+ innerMessage = e.getMessage();
+ }
+
+ /**
+ * @param message
+ */
+ public HBaseRestException(String message) {
+ super(message);
+ innerMessage = message;
+ }
+
+ public HBaseRestException(String message, Exception exception) {
+ super(message, exception);
+ setInnerException(exception);
+ innerClass = exception.getClass().toString();
+ innerMessage = message;
+ }
+
+ @TOJSON
+ public String getInnerClass() {
+ return this.innerClass;
+ }
+
+ @TOJSON
+ public String getInnerMessage() {
+ return this.innerMessage;
+ }
+
+ /**
+ * @param innerException
+ * the innerException to set
+ */
+ public void setInnerException(Exception innerException) {
+ this.innerException = innerException;
+ }
+
+ /**
+ * @return the innerException
+ */
+ public Exception getInnerException() {
+ return innerException;
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/filter/ColumnValueFilterFactory.java b/src/java/org/apache/hadoop/hbase/rest/filter/ColumnValueFilterFactory.java
new file mode 100644
index 0000000..7af652d
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/filter/ColumnValueFilterFactory.java
@@ -0,0 +1,66 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest.filter;
+
+import org.apache.hadoop.hbase.filter.ColumnValueFilter;
+import org.apache.hadoop.hbase.filter.RowFilterInterface;
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+import org.json.JSONException;
+import org.json.JSONObject;
+
+/**
+ * FilterFactory that constructs a ColumnValueFilter from a JSON arg String.
+ * Expects a Stringified JSON argument with the following form:
+ *
+ * { "column_name" : "MY_COLUMN_NAME", "compare_op" : "INSERT_COMPARE_OP_HERE",
+ * "value" : "MY_COMPARE_VALUE" }
+ *
+ * The current valid compare ops are: equal, greater, greater_or_equal, less,
+ * less_or_equal, not_equal
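+ *
+ * For example, a sketch of a valid argument (the column and value are
+ * illustrative; the compare_op string is handed to
+ * ColumnValueFilter.CompareOp.valueOf, so it should name one of that
+ * enum's constants):
+ *
+ * { "column_name" : "info:age", "compare_op" : "GREATER", "value" : "21" }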
+ */
+public class ColumnValueFilterFactory implements FilterFactory {
+
+ public RowFilterInterface getFilterFromJSON(String args)
+ throws HBaseRestException {
+ JSONObject innerJSON;
+ String columnName;
+ String compareOp;
+ String value;
+
+ try {
+ innerJSON = new JSONObject(args);
+ } catch (JSONException e) {
+ throw new HBaseRestException(e);
+ }
+
+ // optString(key) returns "" rather than null for a missing key, so pass
+ // an explicit null default to detect absent fields.
+ if ((columnName = innerJSON.optString(COLUMN_NAME, null)) == null) {
+ throw new MalformedFilterException();
+ }
+ if ((compareOp = innerJSON.optString(COMPARE_OP, null)) == null) {
+ throw new MalformedFilterException();
+ }
+ if ((value = innerJSON.optString(VALUE, null)) == null) {
+ throw new MalformedFilterException();
+ }
+
+ return new ColumnValueFilter(columnName.getBytes(),
+ ColumnValueFilter.CompareOp.valueOf(compareOp), value.getBytes());
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/filter/FilterFactory.java b/src/java/org/apache/hadoop/hbase/rest/filter/FilterFactory.java
new file mode 100644
index 0000000..00803c1
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/filter/FilterFactory.java
@@ -0,0 +1,71 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest.filter;
+
+import org.apache.hadoop.hbase.filter.RowFilterInterface;
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+
+/**
+ * Constructs Filters from JSON. Filters are defined
+ * as JSON Objects of the form:
+ * {
+ * "type" : "FILTER_CLASS_NAME",
+ * "args" : "FILTER_ARGUMENTS"
+ * }
+ *
+ * For filters like WhileMatchRowFilter, nested filters are supported.
+ * Just serialize the inner filter in the same form; for instance, to use
+ * WhileMatchRowFilter with a StopRowFilter:
+ *
+ * {
+ * "type" : "WhileMatchRowFilter",
+ * "args" : {
+ * "type" : "StopRowFilter",
+ * "args" : "ROW_KEY_TO_STOP_ON"
+ * }
+ * }
+ *
+ * For filters like RowFilterSet, nested filters and arrays of filters
+ * are supported. For instance, to combine a RegExpRowFilter with a
+ * WhileMatchRowFilter(StopRowFilter), it would look like this:
+ *
+ * {
+ * "type" : "RowFilterSet",
+ * "args" : [
+ * {
+ * "type" : "RegExpRowFilter",
+ * "args" : "MY_REGULAR_EXPRESSION"
+ * },
+ * {
+ * "type" : "WhileMatchRowFilter"
+ * "args" : {
+ * "type" : "StopRowFilter"
+ * "args" : "MY_STOP_ROW_EXPRESSION"
+ * }
+ * }
+ * ]
+ * }
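+ *
+ * A rough usage sketch of how a factory is looked up and applied (the
+ * filter type and argument strings are illustrative):
+ * <pre>
+ * FilterFactory f = RESTConstants.filterFactories.get("StopRowFilter");
+ * RowFilterInterface filter = f.getFilterFromJSON("ROW_KEY_TO_STOP_ON");
+ * </pre>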
+ */
+public interface FilterFactory extends FilterFactoryConstants {
+ public RowFilterInterface getFilterFromJSON(String args)
+ throws HBaseRestException;
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/filter/FilterFactoryConstants.java b/src/java/org/apache/hadoop/hbase/rest/filter/FilterFactoryConstants.java
new file mode 100644
index 0000000..1a2bd48
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/filter/FilterFactoryConstants.java
@@ -0,0 +1,42 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest.filter;
+
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+
+public interface FilterFactoryConstants {
+ static String TYPE = "type";
+ static String ARGUMENTS = "args";
+ static String COLUMN_NAME = "column_name";
+ static String COMPARE_OP = "compare_op";
+ static String VALUE = "value";
+
+ static class MalformedFilterException extends HBaseRestException {
+ private static final long serialVersionUID = 1L;
+
+ public MalformedFilterException() {
+ }
+
+ @Override
+ public String toString() {
+ return "malformed filter expression";
+ }
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/filter/InclusiveStopRowFilterFactory.java b/src/java/org/apache/hadoop/hbase/rest/filter/InclusiveStopRowFilterFactory.java
new file mode 100644
index 0000000..65392135
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/filter/InclusiveStopRowFilterFactory.java
@@ -0,0 +1,37 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest.filter;
+
+import org.apache.hadoop.hbase.filter.InclusiveStopRowFilter;
+import org.apache.hadoop.hbase.filter.RowFilterInterface;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * FilterFactory that constructs an InclusiveStopRowFilter
+ * from a JSON argument String.
+ *
+ * It expects that the whole input string consists of only
+ * the rowKey that you wish to stop on.
+ */
+public class InclusiveStopRowFilterFactory implements FilterFactory {
+ public RowFilterInterface getFilterFromJSON(String args) {
+ return new InclusiveStopRowFilter(Bytes.toBytes(args));
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/filter/PageRowFilterFactory.java b/src/java/org/apache/hadoop/hbase/rest/filter/PageRowFilterFactory.java
new file mode 100644
index 0000000..35b8a4d
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/filter/PageRowFilterFactory.java
@@ -0,0 +1,34 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest.filter;
+
+import org.apache.hadoop.hbase.filter.PageRowFilter;
+import org.apache.hadoop.hbase.filter.RowFilterInterface;
+
+/**
+ * Constructs a PageRowFilter from a JSON argument String.
+ * Expects the entire JSON argument string to consist of the
+ * long value giving the desired page size.
+ */
+public class PageRowFilterFactory implements FilterFactory {
+ public RowFilterInterface getFilterFromJSON(String args) {
+ return new PageRowFilter(Long.parseLong(args));
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/filter/RegExpRowFilterFactory.java b/src/java/org/apache/hadoop/hbase/rest/filter/RegExpRowFilterFactory.java
new file mode 100644
index 0000000..df72f30
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/filter/RegExpRowFilterFactory.java
@@ -0,0 +1,34 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest.filter;
+
+import org.apache.hadoop.hbase.filter.RegExpRowFilter;
+import org.apache.hadoop.hbase.filter.RowFilterInterface;
+
+/**
+ * Constructs a RegExpRowFilter from a JSON argument string.
+ * Expects the entire JSON arg string to consist of the
+ * entire regular expression to be used.
+ */
+public class RegExpRowFilterFactory implements FilterFactory {
+ public RowFilterInterface getFilterFromJSON(String args) {
+ return new RegExpRowFilter(args);
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/filter/RowFilterSetFactory.java b/src/java/org/apache/hadoop/hbase/rest/filter/RowFilterSetFactory.java
new file mode 100644
index 0000000..edcf4b9
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/filter/RowFilterSetFactory.java
@@ -0,0 +1,114 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest.filter;
+
+import java.util.HashSet;
+import java.util.Set;
+import org.apache.hadoop.hbase.filter.RowFilterInterface;
+import org.apache.hadoop.hbase.filter.RowFilterSet;
+import org.apache.hadoop.hbase.rest.RESTConstants;
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+import org.json.JSONArray;
+import org.json.JSONException;
+import org.json.JSONObject;
+
+/**
+ * Constructs a RowFilterSet from a JSON argument String.
+ *
+ * Assumes that the input is a JSONArray of JSON objects, one per filter,
+ * which are combined with AND semantics.
+ *
+ * The syntax of each individual inner filter is defined by its respective
+ * FilterFactory. If no filter factory exists for a given filter type, a
+ * MalformedFilterException is thrown.
+ *
+ * OR combinations are not currently supported, though they could be added
+ * in a later iteration.
+ */
+public class RowFilterSetFactory implements FilterFactory {
+
+ public RowFilterInterface getFilterFromJSON(String args)
+ throws HBaseRestException {
+ JSONArray filterArray;
+ Set<RowFilterInterface> set;
+ JSONObject filter;
+
+ try {
+ filterArray = new JSONArray(args);
+ } catch (JSONException e) {
+ throw new HBaseRestException(e);
+ }
+
+ // If there is only one filter, just return it directly.
+ if (filterArray.length() == 1) {
+ return getRowFilter(filterArray.optJSONObject(0));
+ }
+
+ // Otherwise continue
+ set = new HashSet<RowFilterInterface>();
+
+ for (int i = 0; i < filterArray.length(); i++) {
+
+ // Get the filter object
+ if ((filter = filterArray.optJSONObject(i)) == null) {
+ throw new MalformedFilterException();
+ }
+
+ // Add the newly constructed filter to the filter set
+ set.add(getRowFilter(filter));
+ }
+
+ // Put set into a RowFilterSet and return.
+ return new RowFilterSet(set);
+ }
+
+ /**
+ * Creates a RowFilter from a JSONObject of the form:
+ * { "type" : "MY_TYPE", "args" : MY_ARGS }
+ *
+ * @param filter
+ * @return RowFilter
+ * @throws org.apache.hadoop.hbase.rest.exception.HBaseRestException
+ */
+ protected RowFilterInterface getRowFilter(JSONObject filter)
+ throws HBaseRestException {
+ FilterFactory f;
+ String filterType;
+ String filterArgs;
+
+ // Get the filter's type. Use an explicit null default because
+ // optString(key) returns "" rather than null for a missing key.
+ if ((filterType = filter.optString(FilterFactoryConstants.TYPE, null)) == null) {
+ throw new MalformedFilterException();
+ }
+
+ // Get the filter's arguments
+ if ((filterArgs = filter.optString(FilterFactoryConstants.ARGUMENTS, null)) == null) {
+ throw new MalformedFilterException();
+ }
+
+ // Get Filter Factory for given Filter Type
+ if ((f = RESTConstants.filterFactories.get(filterType)) == null) {
+ throw new MalformedFilterException();
+ }
+
+ return f.getFilterFromJSON(filterArgs);
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/filter/StopRowFilterFactory.java b/src/java/org/apache/hadoop/hbase/rest/filter/StopRowFilterFactory.java
new file mode 100644
index 0000000..28caaf6
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/filter/StopRowFilterFactory.java
@@ -0,0 +1,37 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest.filter;
+
+import org.apache.hadoop.hbase.filter.RowFilterInterface;
+import org.apache.hadoop.hbase.filter.StopRowFilter;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * FilterFactory that constructs a StopRowFilter
+ * from an argument String.
+ *
+ * It expects that the whole input string consists of only
+ * the rowKey that you wish to stop on.
+ */
+public class StopRowFilterFactory implements FilterFactory {
+ public RowFilterInterface getFilterFromJSON(String args) {
+ return new StopRowFilter(Bytes.toBytes(args));
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/filter/WhileMatchRowFilterFactory.java b/src/java/org/apache/hadoop/hbase/rest/filter/WhileMatchRowFilterFactory.java
new file mode 100644
index 0000000..bdb2a25
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/filter/WhileMatchRowFilterFactory.java
@@ -0,0 +1,61 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest.filter;
+
+import org.apache.hadoop.hbase.filter.RowFilterInterface;
+import org.apache.hadoop.hbase.filter.WhileMatchRowFilter;
+import org.apache.hadoop.hbase.rest.RESTConstants;
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+import org.json.JSONException;
+import org.json.JSONObject;
+
+/**
+ * Factory that produces a WhileMatchRowFilter from JSON.
+ * Expects as an argument a valid JSON object, in String form,
+ * describing another RowFilterInterface to wrap.
+ */
+public class WhileMatchRowFilterFactory implements FilterFactory {
+ public RowFilterInterface getFilterFromJSON(String args)
+ throws HBaseRestException {
+ JSONObject innerFilterJSON;
+ FilterFactory f;
+ String innerFilterType;
+ String innerFilterArgs;
+
+ try {
+ innerFilterJSON = new JSONObject(args);
+ } catch (JSONException e) {
+ throw new HBaseRestException(e);
+ }
+
+ // Check that the inner filter is well formed. Use an explicit null
+ // default because optString(key) returns "" for a missing key.
+ if ((innerFilterType = innerFilterJSON.optString(TYPE, null)) == null) {
+ throw new MalformedFilterException();
+ }
+ if ((innerFilterArgs = innerFilterJSON.optString(ARGUMENTS, null)) == null) {
+ throw new MalformedFilterException();
+ }
+
+ if ((f = RESTConstants.filterFactories.get(innerFilterType)) == null) {
+ throw new MalformedFilterException();
+ }
+
+ RowFilterInterface innerFilter = f.getFilterFromJSON(innerFilterArgs);
+
+ return new WhileMatchRowFilter(innerFilter);
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/package.html b/src/java/org/apache/hadoop/hbase/rest/package.html
new file mode 100644
index 0000000..b63a7a4
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/package.html
@@ -0,0 +1,112 @@
+<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML//EN">
+<html>
+
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<head/>
+
+<body bgcolor="white">
+Provides an HBase
+<a href="http://en.wikipedia.org/wiki/Representational_State_Transfer">
+REST</a> service.
+
+This package contains a REST service implementation layered over the HBase
+RPC service.
+
+<h2><a name="description">Description</a></h2>
+<p>
+By default, an instance of the REST servlet runs in the master UI; just browse
+to http://MASTER_HOST:MASTER_PORT/api/ (results are returned as XML by
+default, so you may have to view the page source to see them).
+
+If you intend to use the HBase REST API heavily, run an instance of the REST
+server outside of the master by doing the following:
+ <pre>
+cd $HBASE_HOME
+bin/hbase rest start
+ </pre>
+The default port is 60050.
+</p>
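+<p>
+For example, with a standalone REST server running locally on the default
+port, the list of tables can be fetched with a plain HTTP GET (the host and
+port here are illustrative):
+</p>
+<pre>
+curl http://localhost:60050/
+</pre>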
+
+<h2><a name="uri">URI</a></h2>
+<h3><a name="uri#meta">System Operation</a></h3>
+<ul>
+ <li>GET / : Retrieve a list of all the tables in HBase.</li>
+</ul>
+
+<h3><a name="uri#table">Table Operation</a></h3>
+<ul>
+ <li>POST / : Create a table</li>
+
+ <li>GET /[table_name] : Retrieve metadata about the table</li>
+
+ <li>PUT /[table_name] : Update the table schema</li>
+
+ <li>DELETE /[table_name] : Delete the table</li>
+
+ <li>POST /[table_name]/disable : Disable the table</li>
+
+ <li>POST /[table_name]/enable : Enable the table</li>
+
+ <li>GET /[table_name]/regions : Retrieve a list of the regions for this table
+ so that you can efficiently split up the work</li>
+</ul>
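+<p>
+As a sketch of the JSON form accepted when creating or altering a table, a
+request body might look like the following (the table and column family names
+are illustrative, and every column family attribute shown is optional):
+</p>
+<pre>
+{
+  "name" : "mytable",
+  "column_families" : [ {
+    "name" : "info:",
+    "max_versions" : 3,
+    "compression_type" : "NONE",
+    "in_memory" : false,
+    "block_cache_enabled" : false,
+    "max_value_length" : 2147483647,
+    "time_to_live" : 2147483647,
+    "bloomfilter" : false
+  } ]
+}
+</pre>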
+
+<h3><a name="uri#row">Row Operation</a></h3>
+<ul>
+ <li>GET /[table_name]/row/[row_key]/timestamps : Retrieve a list of all the
+ timestamps available for this row key (Not supported by native hbase yet)</li>
+
+ <li>GET /[table_name]/row/[row_key] : Retrieve data from a
+ row. If column not specified, return all columns</li>
+
+ <li>GET /[table_name]/row/[row_key]/[timestamp] : Retrieve
+ data from a row, constrained by the timestamp value. If column not specified,
+ return all columns</li>
+
+ <li>POST/PUT /[table_name]/row/[row_key] : Set the value of one or more
+ columns for a given row key</li>
+
+ <li>POST/PUT /[table_name]/row/[row_key]/[timestamp] : Set the value of one
+ or more columns for a given row key with an optional timestamp</li>
+
+ <li>DELETE /[table_name]/row/[row_key]/ : Delete the specified columns from
+ the row. If there are no columns specified, then it will delete ALL columns</li>
+
+ <li>DELETE /[table_name]/row/[row_key]/[timestamp] : Delete the specified
+ columns from the row constrained by the timestamp. If there are no columns
+ specified, then it will delete ALL columns. Not supported yet.</li>
+</ul>
+
+<h3><a name="uri#scanner">Scanner Operation</a></h3>
+<ul>
+ <li>POST/PUT /[table_name]/scanner : Request that a scanner be created with
+ the specified options. Returns a scanner ID that can be used to iterate over
+ the results of the scanner</li>
+
+ <li>POST /[table_name]/scanner/[scanner_id] : Return the current item in the
+ scanner and advance to the next one. Think of it as a queue dequeue operation</li>
+
+ <li>DELETE /[table_name]/scanner/[scanner_id] : Close a scanner</li>
+</ul>
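+<p>
+Scanner creation may include a filter definition. Filters are specified as
+JSON objects in the form understood by the filter factories in
+org.apache.hadoop.hbase.rest.filter; for example, a sketch of a
+WhileMatchRowFilter wrapping a StopRowFilter (the row key is illustrative):
+</p>
+<pre>
+{
+  "type" : "WhileMatchRowFilter",
+  "args" : {
+    "type" : "StopRowFilter",
+    "args" : "myStopRow"
+  }
+}
+</pre>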
+<p>
+For examples and more details, please visit
+<a href="http://wiki.apache.org/hadoop/Hbase/HbaseRest">HBaseRest Wiki</a> page.
+</p>
+</body>
+</html>
diff --git a/src/java/org/apache/hadoop/hbase/rest/parser/HBaseRestParserFactory.java b/src/java/org/apache/hadoop/hbase/rest/parser/HBaseRestParserFactory.java
new file mode 100644
index 0000000..8247127
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/parser/HBaseRestParserFactory.java
@@ -0,0 +1,56 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest.parser;
+
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.hadoop.hbase.rest.Dispatcher.ContentType;
+
+/**
+ *
+ */
+public class HBaseRestParserFactory {
+
+ private static final Map<ContentType, Class<?>> parserMap =
+ new HashMap<ContentType, Class<?>>();
+
+ static {
+ parserMap.put(ContentType.XML, XMLRestParser.class);
+ parserMap.put(ContentType.JSON, JsonRestParser.class);
+ }
+
+ public static IHBaseRestParser getParser(ContentType ct) {
+ IHBaseRestParser parser = null;
+
+ Class<?> clazz = parserMap.get(ct);
+ try {
+ parser = (IHBaseRestParser) clazz.newInstance();
+ } catch (InstantiationException e) {
+ // Not expected: every registered parser class has a public no-arg constructor.
+ e.printStackTrace();
+ } catch (IllegalAccessException e) {
+ e.printStackTrace();
+ }
+
+ return parser;
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/parser/IHBaseRestParser.java b/src/java/org/apache/hadoop/hbase/rest/parser/IHBaseRestParser.java
new file mode 100644
index 0000000..b87313c
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/parser/IHBaseRestParser.java
@@ -0,0 +1,52 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest.parser;
+
+import java.util.ArrayList;
+
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.rest.descriptors.RowUpdateDescriptor;
+import org.apache.hadoop.hbase.rest.descriptors.ScannerDescriptor;
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+
+/**
+ *
+ */
+public interface IHBaseRestParser {
+ /**
+ * Parses an HTableDescriptor from the given input bytes.
+ *
+ * @param input
+ * @return HTableDescriptor
+ * @throws HBaseRestException
+ */
+ public HTableDescriptor getTableDescriptor(byte[] input)
+ throws HBaseRestException;
+
+ public ArrayList<HColumnDescriptor> getColumnDescriptors(byte[] input)
+ throws HBaseRestException;
+
+ public ScannerDescriptor getScannerDescriptor(byte[] input)
+ throws HBaseRestException;
+
+ public RowUpdateDescriptor getRowUpdateDescriptor(byte[] input,
+ byte[][] pathSegments) throws HBaseRestException;
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/parser/JsonRestParser.java b/src/java/org/apache/hadoop/hbase/rest/parser/JsonRestParser.java
new file mode 100644
index 0000000..f715728
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/parser/JsonRestParser.java
@@ -0,0 +1,234 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest.parser;
+
+import java.util.ArrayList;
+
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.rest.RESTConstants;
+import org.apache.hadoop.hbase.rest.descriptors.RowUpdateDescriptor;
+import org.apache.hadoop.hbase.rest.descriptors.ScannerDescriptor;
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.json.JSONArray;
+import org.json.JSONException;
+import org.json.JSONObject;
+
+/**
+ * IHBaseRestParser implementation for the JSON content type. Uses org.json
+ * to pull table, column family, scanner, and row update descriptions out of
+ * request bodies.
+ */
+public class JsonRestParser implements IHBaseRestParser {
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.parser.IHBaseRestParser#getTableDescriptor
+ * (byte[])
+ */
+ public HTableDescriptor getTableDescriptor(byte[] input)
+ throws HBaseRestException {
+ try {
+ JSONObject o;
+ HTableDescriptor h;
+ JSONArray columnDescriptorArray;
+ o = new JSONObject(new String(input));
+ columnDescriptorArray = o.getJSONArray("column_families");
+ h = new HTableDescriptor(o.getString("name"));
+
+ for (int i = 0; i < columnDescriptorArray.length(); i++) {
+ JSONObject json_columnDescriptor = columnDescriptorArray
+ .getJSONObject(i);
+ h.addFamily(this.getColumnDescriptor(json_columnDescriptor));
+ }
+ return h;
+ } catch (Exception e) {
+ throw new HBaseRestException(e);
+ }
+ }
+
+ private HColumnDescriptor getColumnDescriptor(JSONObject jsonObject)
+ throws JSONException {
+ String strTemp;
+ strTemp = jsonObject.getString("name");
+ if (strTemp.charAt(strTemp.length() - 1) != ':') {
+ strTemp += ":";
+ }
+
+ byte[] name = Bytes.toBytes(strTemp);
+
+ int maxVersions;
+ String cType;
+ boolean inMemory;
+ boolean blockCacheEnabled;
+ int maxValueLength;
+ int timeToLive;
+ boolean bloomfilter;
+
+ try {
+ bloomfilter = jsonObject.getBoolean("bloomfilter");
+ } catch (JSONException e) {
+ bloomfilter = false;
+ }
+
+ try {
+ maxVersions = jsonObject.getInt("max_versions");
+ } catch (JSONException e) {
+ maxVersions = 3;
+ }
+
+ try {
+ cType = jsonObject.getString("compression_type").toUpperCase();
+ } catch (JSONException e) {
+ cType = HColumnDescriptor.DEFAULT_COMPRESSION;
+ }
+
+ try {
+ inMemory = jsonObject.getBoolean("in_memory");
+ } catch (JSONException e) {
+ inMemory = false;
+ }
+
+ try {
+ blockCacheEnabled = jsonObject.getBoolean("block_cache_enabled");
+ } catch (JSONException e) {
+ blockCacheEnabled = false;
+ }
+
+ try {
+ maxValueLength = jsonObject.getInt("max_value_length");
+ } catch (JSONException e) {
+ maxValueLength = Integer.MAX_VALUE;
+ }
+
+ try {
+ timeToLive = jsonObject.getInt("time_to_live");
+ } catch (JSONException e) {
+ timeToLive = Integer.MAX_VALUE;
+ }
+
+ return new HColumnDescriptor(name, maxVersions, cType, inMemory,
+ blockCacheEnabled, maxValueLength, timeToLive, bloomfilter);
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.parser.IHBaseRestParser#getColumnDescriptors
+ * (byte[])
+ */
+ public ArrayList<HColumnDescriptor> getColumnDescriptors(byte[] input)
+ throws HBaseRestException {
+ ArrayList<HColumnDescriptor> columns = new ArrayList<HColumnDescriptor>();
+ try {
+ JSONObject o;
+ JSONArray columnDescriptorArray;
+ o = new JSONObject(new String(input));
+ columnDescriptorArray = o.getJSONArray("column_families");
+
+ for (int i = 0; i < columnDescriptorArray.length(); i++) {
+ JSONObject json_columnDescriptor = columnDescriptorArray
+ .getJSONObject(i);
+ columns.add(this.getColumnDescriptor(json_columnDescriptor));
+ }
+ } catch (JSONException e) {
+ throw new HBaseRestException("Error Parsing json input", e);
+ }
+
+ return columns;
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.parser.IHBaseRestParser#getScannerDescriptor
+ * (byte[])
+ */
+ public ScannerDescriptor getScannerDescriptor(byte[] input)
+ throws HBaseRestException {
+ JSONObject scannerDescriptor;
+ JSONArray columnArray;
+
+ byte[][] columns = null;
+ long timestamp;
+ byte[] startRow;
+ byte[] stopRow;
+ String filters;
+
+ try {
+ scannerDescriptor = new JSONObject(new String(input));
+
+ columnArray = scannerDescriptor.optJSONArray(RESTConstants.COLUMNS);
+ timestamp = scannerDescriptor.optLong(RESTConstants.SCANNER_TIMESTAMP);
+ startRow = Bytes.toBytes(scannerDescriptor.optString(
+ RESTConstants.SCANNER_START_ROW, ""));
+ stopRow = Bytes.toBytes(scannerDescriptor.optString(
+ RESTConstants.SCANNER_STOP_ROW, ""));
+ filters = scannerDescriptor.optString(RESTConstants.SCANNER_FILTER);
+
+ if (columnArray != null) {
+ columns = new byte[columnArray.length()][];
+ for (int i = 0; i < columnArray.length(); i++) {
+ columns[i] = Bytes.toBytes(columnArray.optString(i));
+ }
+ }
+
+ return new ScannerDescriptor(columns, timestamp, startRow, stopRow,
+ filters);
+ } catch (JSONException e) {
+ throw new HBaseRestException("error parsing json string", e);
+ }
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.parser.IHBaseRestParser#getRowUpdateDescriptor
+ * (byte[], byte[][])
+ */
+ public RowUpdateDescriptor getRowUpdateDescriptor(byte[] input,
+ byte[][] pathSegments) throws HBaseRestException {
+
+ RowUpdateDescriptor rud = new RowUpdateDescriptor();
+ JSONArray a;
+
+ rud.setTableName(Bytes.toString(pathSegments[0]));
+ rud.setRowName(Bytes.toString(pathSegments[2]));
+
+ try {
+ JSONObject updateObject = new JSONObject(new String(input));
+ a = updateObject.getJSONArray(RESTConstants.COLUMNS);
+ for (int i = 0; i < a.length(); i++) {
+ rud.getColVals().put(
+ Bytes.toBytes(a.getJSONObject(i).getString(RESTConstants.NAME)),
+ org.apache.hadoop.hbase.util.Base64.decode(a.getJSONObject(i)
+ .getString(RESTConstants.VALUE)));
+ }
+ } catch (JSONException e) {
+ throw new HBaseRestException("Error parsing row update json", e);
+ }
+ return rud;
+ }
+
+}
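As a rough illustration of the JSON shape getTableDescriptor() above accepts, pieced together from the keys the parser reads ("name", "column_families", and the optional per-family settings); the table and family names are made up:

    package org.apache.hadoop.hbase.rest.parser;

    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
    import org.apache.hadoop.hbase.util.Bytes;

    public class JsonParserSketch {
      public static void main(String[] args) throws HBaseRestException {
        String json =
            "{\"name\": \"example_table\","
          + " \"column_families\": ["
          + "   {\"name\": \"info\","               // ':' is appended if missing
          + "    \"max_versions\": 5,"              // defaults to 3 when absent
          + "    \"compression_type\": \"none\","   // upper-cased by the parser
          + "    \"in_memory\": false,"
          + "    \"block_cache_enabled\": true,"
          + "    \"bloomfilter\": false}"           // other settings fall back to defaults
          + " ]}";
        HTableDescriptor htd =
            new JsonRestParser().getTableDescriptor(Bytes.toBytes(json));
        System.out.println(htd.getNameAsString()); // prints "example_table"
      }
    }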
diff --git a/src/java/org/apache/hadoop/hbase/rest/parser/XMLRestParser.java b/src/java/org/apache/hadoop/hbase/rest/parser/XMLRestParser.java
new file mode 100644
index 0000000..abab643
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/parser/XMLRestParser.java
@@ -0,0 +1,289 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest.parser;
+
+import java.io.ByteArrayInputStream;
+import java.util.ArrayList;
+
+import javax.xml.parsers.DocumentBuilder;
+import javax.xml.parsers.DocumentBuilderFactory;
+
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.rest.RESTConstants;
+import org.apache.hadoop.hbase.rest.descriptors.RowUpdateDescriptor;
+import org.apache.hadoop.hbase.rest.descriptors.ScannerDescriptor;
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.w3c.dom.Document;
+import org.w3c.dom.Element;
+import org.w3c.dom.Node;
+import org.w3c.dom.NodeList;
+
+/**
+ * IHBaseRestParser implementation for the XML content type. Uses the JAXP
+ * DOM APIs to pull table, column family, and row update descriptions out of
+ * request bodies.
+ */
+public class XMLRestParser implements IHBaseRestParser {
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.parser.IHBaseRestParser#getTableDescriptor
+ * (byte[])
+ */
+ public HTableDescriptor getTableDescriptor(byte[] input)
+ throws HBaseRestException {
+ DocumentBuilderFactory docBuilderFactory = DocumentBuilderFactory
+ .newInstance();
+ docBuilderFactory.setIgnoringComments(true);
+
+ DocumentBuilder builder = null;
+ Document doc = null;
+ HTableDescriptor htd = null;
+
+ try {
+ builder = docBuilderFactory.newDocumentBuilder();
+ ByteArrayInputStream is = new ByteArrayInputStream(input);
+ doc = builder.parse(is);
+ } catch (Exception e) {
+ throw new HBaseRestException(e);
+ }
+
+ try {
+ Node name_node = doc.getElementsByTagName("name").item(0);
+ String table_name = name_node.getFirstChild().getNodeValue();
+
+ htd = new HTableDescriptor(table_name);
+ NodeList columnfamily_nodes = doc.getElementsByTagName("columnfamily");
+ for (int i = 0; i < columnfamily_nodes.getLength(); i++) {
+ Element columnfamily = (Element) columnfamily_nodes.item(i);
+ htd.addFamily(this.getColumnDescriptor(columnfamily));
+ }
+ } catch (Exception e) {
+ throw new HBaseRestException(e);
+ }
+ return htd;
+ }
+
+ public HColumnDescriptor getColumnDescriptor(Element columnfamily) {
+ return this.getColumnDescriptor(columnfamily, null);
+ }
+
+ private HColumnDescriptor getColumnDescriptor(Element columnfamily,
+ HTableDescriptor currentTDesp) {
+ Node name_node = columnfamily.getElementsByTagName("name").item(0);
+ String colname = makeColumnName(name_node.getFirstChild().getNodeValue());
+
+ int max_versions = HColumnDescriptor.DEFAULT_VERSIONS;
+ String compression = HColumnDescriptor.DEFAULT_COMPRESSION;
+ boolean in_memory = HColumnDescriptor.DEFAULT_IN_MEMORY;
+ boolean block_cache = HColumnDescriptor.DEFAULT_BLOCKCACHE;
+ int max_cell_size = HColumnDescriptor.DEFAULT_LENGTH;
+ int ttl = HColumnDescriptor.DEFAULT_TTL;
+ boolean bloomfilter = HColumnDescriptor.DEFAULT_BLOOMFILTER;
+
+ if (currentTDesp != null) {
+ HColumnDescriptor currentCDesp = currentTDesp.getFamily(Bytes
+ .toBytes(colname));
+ if (currentCDesp != null) {
+ max_versions = currentCDesp.getMaxVersions();
+ // compression = currentCDesp.getCompression();
+ in_memory = currentCDesp.isInMemory();
+ block_cache = currentCDesp.isBlockCacheEnabled();
+ max_cell_size = currentCDesp.getMaxValueLength();
+ ttl = currentCDesp.getTimeToLive();
+ bloomfilter = currentCDesp.isBloomfilter();
+ }
+ }
+
+ NodeList max_versions_list = columnfamily
+ .getElementsByTagName("max-versions");
+ if (max_versions_list.getLength() > 0) {
+ max_versions = Integer.parseInt(max_versions_list.item(0).getFirstChild()
+ .getNodeValue());
+ }
+
+ NodeList compression_list = columnfamily
+ .getElementsByTagName("compression");
+ if (compression_list.getLength() > 0) {
+ compression = compression_list.item(0)
+ .getFirstChild().getNodeValue().toUpperCase();
+ }
+
+ NodeList in_memory_list = columnfamily.getElementsByTagName("in-memory");
+ if (in_memory_list.getLength() > 0) {
+ in_memory = Boolean.valueOf(in_memory_list.item(0).getFirstChild()
+ .getNodeValue());
+ }
+
+ NodeList block_cache_list = columnfamily
+ .getElementsByTagName("block-cache");
+ if (block_cache_list.getLength() > 0) {
+ block_cache = Boolean.valueOf(block_cache_list.item(0).getFirstChild()
+ .getNodeValue());
+ }
+
+ NodeList max_cell_size_list = columnfamily
+ .getElementsByTagName("max-cell-size");
+ if (max_cell_size_list.getLength() > 0) {
+ max_cell_size = Integer.valueOf(max_cell_size_list.item(0)
+ .getFirstChild().getNodeValue());
+ }
+
+ NodeList ttl_list = columnfamily.getElementsByTagName("time-to-live");
+ if (ttl_list.getLength() > 0) {
+ ttl = Integer.valueOf(ttl_list.item(0).getFirstChild().getNodeValue());
+ }
+
+ NodeList bloomfilter_list = columnfamily
+ .getElementsByTagName("bloomfilter");
+ if (bloomfilter_list.getLength() > 0) {
+ bloomfilter = Boolean.valueOf(bloomfilter_list.item(0).getFirstChild()
+ .getNodeValue());
+ }
+
+ HColumnDescriptor hcd = new HColumnDescriptor(Bytes.toBytes(colname),
+ max_versions, compression, in_memory, block_cache,
+ max_cell_size, ttl, bloomfilter);
+
+ NodeList metadataList = columnfamily.getElementsByTagName("metadata");
+ for (int i = 0; i < metadataList.getLength(); i++) {
+ Element metadataColumn = (Element) metadataList.item(i);
+ // extract the name and value children
+ Node mname_node = metadataColumn.getElementsByTagName("name").item(0);
+ String mname = mname_node.getFirstChild().getNodeValue();
+ Node mvalue_node = metadataColumn.getElementsByTagName("value").item(0);
+ String mvalue = mvalue_node.getFirstChild().getNodeValue();
+ hcd.setValue(mname, mvalue);
+ }
+
+ return hcd;
+ }
+
+ protected String makeColumnName(String column) {
+ String returnColumn = column;
+ if (column.indexOf(':') == -1)
+ returnColumn += ':';
+ return returnColumn;
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.parser.IHBaseRestParser#getColumnDescriptors
+ * (byte[])
+ */
+ public ArrayList<HColumnDescriptor> getColumnDescriptors(byte[] input)
+ throws HBaseRestException {
+ DocumentBuilderFactory docBuilderFactory = DocumentBuilderFactory
+ .newInstance();
+ docBuilderFactory.setIgnoringComments(true);
+
+ DocumentBuilder builder = null;
+ Document doc = null;
+ ArrayList<HColumnDescriptor> columns = new ArrayList<HColumnDescriptor>();
+
+ try {
+ builder = docBuilderFactory.newDocumentBuilder();
+ ByteArrayInputStream is = new ByteArrayInputStream(input);
+ doc = builder.parse(is);
+ } catch (Exception e) {
+ throw new HBaseRestException(e);
+ }
+
+ NodeList columnfamily_nodes = doc.getElementsByTagName("columnfamily");
+ for (int i = 0; i < columnfamily_nodes.getLength(); i++) {
+ Element columnfamily = (Element) columnfamily_nodes.item(i);
+ columns.add(this.getColumnDescriptor(columnfamily));
+ }
+
+ return columns;
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.parser.IHBaseRestParser#getScannerDescriptor
+ * (byte[])
+ */
+ public ScannerDescriptor getScannerDescriptor(byte[] input)
+ throws HBaseRestException {
+ // Scanner descriptors are not yet supported for XML request bodies
+ return null;
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.parser.IHBaseRestParser#getRowUpdateDescriptor
+ * (byte[], byte[][])
+ */
+ public RowUpdateDescriptor getRowUpdateDescriptor(byte[] input,
+ byte[][] pathSegments) throws HBaseRestException {
+ RowUpdateDescriptor rud = new RowUpdateDescriptor();
+
+ rud.setTableName(Bytes.toString(pathSegments[0]));
+ rud.setRowName(Bytes.toString(pathSegments[2]));
+
+ DocumentBuilderFactory docBuilderFactory = DocumentBuilderFactory
+ .newInstance();
+ docBuilderFactory.setIgnoringComments(true);
+
+ DocumentBuilder builder = null;
+ Document doc = null;
+
+ try {
+ builder = docBuilderFactory.newDocumentBuilder();
+ ByteArrayInputStream is = new ByteArrayInputStream(input);
+ doc = builder.parse(is);
+ } catch (Exception e) {
+ throw new HBaseRestException(e.getMessage(), e);
+ }
+
+ NodeList cell_nodes = doc.getElementsByTagName(RESTConstants.COLUMN);
+ for (int i = 0; i < cell_nodes.getLength(); i++) {
+ String columnName = null;
+ byte[] value = null;
+
+ Element cell = (Element) cell_nodes.item(i);
+
+ NodeList item = cell.getElementsByTagName(RESTConstants.NAME);
+ if (item.getLength() > 0) {
+ columnName = item.item(0).getFirstChild().getNodeValue();
+ }
+
+ NodeList item1 = cell.getElementsByTagName(RESTConstants.VALUE);
+ if (item1.getLength() > 0) {
+ value = org.apache.hadoop.hbase.util.Base64.decode(item1
+ .item(0).getFirstChild().getNodeValue());
+ }
+
+ if (columnName != null && value != null) {
+ rud.getColVals().put(columnName.getBytes(), value);
+ }
+ }
+ return rud;
+ }
+}
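Similarly, a rough illustration of the XML shape getTableDescriptor() above accepts, pieced together from the element names the parser looks up; the root element, table name, and family name are made up (the parser fetches tags by name, so the enclosing element does not matter):

    package org.apache.hadoop.hbase.rest.parser;

    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
    import org.apache.hadoop.hbase.util.Bytes;

    public class XmlParserSketch {
      public static void main(String[] args) throws HBaseRestException {
        String xml =
            "<table>"
          + "<name>example_table</name>"
          + "<columnfamily>"
          + "  <name>info</name>"                   // ':' appended by makeColumnName()
          + "  <max-versions>5</max-versions>"
          + "  <compression>NONE</compression>"
          + "  <in-memory>false</in-memory>"
          + "  <block-cache>true</block-cache>"
          + "  <time-to-live>86400</time-to-live>"
          + "  <bloomfilter>false</bloomfilter>"    // unspecified settings keep their defaults
          + "</columnfamily>"
          + "</table>";
        HTableDescriptor htd =
            new XMLRestParser().getTableDescriptor(Bytes.toBytes(xml));
        System.out.println(htd.getNameAsString()); // prints "example_table"
      }
    }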
diff --git a/src/java/org/apache/hadoop/hbase/rest/serializer/AbstractRestSerializer.java b/src/java/org/apache/hadoop/hbase/rest/serializer/AbstractRestSerializer.java
new file mode 100644
index 0000000..bcbe1c7
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/serializer/AbstractRestSerializer.java
@@ -0,0 +1,59 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest.serializer;
+
+import javax.servlet.http.HttpServletResponse;
+
+/**
+ *
+ * Abstract base class for all serializers in the REST based interface.
+ */
+public abstract class AbstractRestSerializer implements IRestSerializer {
+
+ // keep the response object to write back to the stream
+ protected final HttpServletResponse response;
+ // Used to denote if pretty printing of the output should be used
+ protected final boolean prettyPrint;
+
+ /**
+ * Default constructor, marked private so it is never used.
+ */
+ @SuppressWarnings("unused")
+ private AbstractRestSerializer() {
+ response = null;
+ prettyPrint = false;
+ }
+
+ /**
+ * Public constructor for AbstractRestSerializer. This is the constructor that
+ * should be called whenever creating a RestSerializer object.
+ *
+ * @param response
+ * @param prettyPrint
+ */
+ public AbstractRestSerializer(HttpServletResponse response,
+ boolean prettyPrint) {
+ super();
+ this.response = response;
+ this.prettyPrint = prettyPrint;
+ }
+
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/serializer/IRestSerializer.java b/src/java/org/apache/hadoop/hbase/rest/serializer/IRestSerializer.java
new file mode 100644
index 0000000..e91db35
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/serializer/IRestSerializer.java
@@ -0,0 +1,173 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest.serializer;
+
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.rest.DatabaseModel.DatabaseMetadata;
+import org.apache.hadoop.hbase.rest.Status.StatusMessage;
+import org.apache.hadoop.hbase.rest.TableModel.Regions;
+import org.apache.hadoop.hbase.rest.descriptors.ScannerIdentifier;
+import org.apache.hadoop.hbase.rest.descriptors.TimestampsDescriptor;
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+
+/**
+ *
+ * Interface implemented by serializers that write objects back to
+ * the output stream.
+ */
+public interface IRestSerializer {
+ /**
+ * Serializes an object into the appropriate format and writes it to the
+ * output stream.
+ *
+ * This is the main point of entry for an object to be serialized to the
+ * output stream.
+ *
+ * @param o
+ * @throws HBaseRestException
+ */
+ public void writeOutput(Object o) throws HBaseRestException;
+
+ /**
+ * serialize the database metadata
+ *
+ * Implementation of this method is optional, IF all the work is done in the
+ * writeOutput(Object o) method
+ *
+ * @param databaseMetadata
+ * @throws HBaseRestException
+ */
+ public void serializeDatabaseMetadata(DatabaseMetadata databaseMetadata)
+ throws HBaseRestException;
+
+ /**
+ * serialize the HTableDescriptor object
+ *
+ * Implementation of this method is optional, IF all the work is done in the
+ * writeOutput(Object o) method
+ *
+ * @param tableDescriptor
+ * @throws HBaseRestException
+ */
+ public void serializeTableDescriptor(HTableDescriptor tableDescriptor)
+ throws HBaseRestException;
+
+ /**
+ * serialize an HColumnDescriptor to the output stream.
+ *
+ * Implementation of this method is optional, IF all the work is done in the
+ * writeOutput(Object o) method
+ *
+ * @param column
+ * @throws HBaseRestException
+ */
+ public void serializeColumnDescriptor(HColumnDescriptor column)
+ throws HBaseRestException;
+
+ /**
+ * serialize the region data for a table to the output stream
+ *
+ * Implementation of this method is optional, IF all the work is done in the
+ * writeOutput(Object o) method
+ *
+ * @param regions
+ * @throws HBaseRestException
+ */
+ public void serializeRegionData(Regions regions) throws HBaseRestException;
+
+ /**
+ * serialize the status message object to the output stream
+ *
+ * Implementation of this method is optional, IF all the work is done in the
+ * writeOutput(Object o) method
+ *
+ * @param message
+ * @throws HBaseRestException
+ */
+ public void serializeStatusMessage(StatusMessage message)
+ throws HBaseRestException;
+
+ /**
+ * serialize the ScannerIdentifier object to the output stream
+ *
+ * Implementation of this method is optional, IF all the work is done in the
+ * writeOutput(Object o) method
+ *
+ * @param scannerIdentifier
+ * @throws HBaseRestException
+ */
+ public void serializeScannerIdentifier(ScannerIdentifier scannerIdentifier)
+ throws HBaseRestException;
+
+ /**
+ * serialize a RowResult object to the output stream
+ *
+ * Implementation of this method is optional, IF all the work is done in the
+ * writeOutput(Object o) method
+ *
+ * @param rowResult
+ * @throws HBaseRestException
+ */
+ public void serializeRowResult(RowResult rowResult) throws HBaseRestException;
+
+ /**
+ * serialize a RowResult array to the output stream
+ *
+ * Implementation of this method is optional, IF all the work is done in the
+ * writeOutput(Object o) method
+ *
+ * @param rows
+ * @throws HBaseRestException
+ */
+ public void serializeRowResultArray(RowResult[] rows)
+ throws HBaseRestException;
+
+ /**
+ * serialize a cell object to the output stream
+ *
+ * Implementation of this method is optional, IF all the work is done in the
+ * writeOutput(Object o) method
+ *
+ * @param cell
+ * @throws HBaseRestException
+ */
+ public void serializeCell(Cell cell) throws HBaseRestException;
+
+ /**
+ * serialize a Cell array to the output stream
+ *
+ * @param cells
+ * @throws HBaseRestException
+ */
+ public void serializeCellArray(Cell[] cells) throws HBaseRestException;
+
+
+ /**
+ * serialize a description of the timestamps available for a row
+ * to the output stream.
+ *
+ * @param timestampsDescriptor
+ * @throws HBaseRestException
+ */
+ public void serializeTimestamps(TimestampsDescriptor timestampsDescriptor) throws HBaseRestException;
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/serializer/ISerializable.java b/src/java/org/apache/hadoop/hbase/rest/serializer/ISerializable.java
new file mode 100644
index 0000000..d482854
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/serializer/ISerializable.java
@@ -0,0 +1,42 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest.serializer;
+
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+
+/**
+ *
+ * Interface for objects that wish to write back to the REST based
+ * interface output stream. Objects should implement this interface,
+ * then use the IRestSerializer passed to it to call the appropriate
+ * serialization method.
+ */
+public interface ISerializable {
+ /**
+ * visitor pattern method where the object implementing this interface will
+ * call back on the IRestSerializer with the correct method to run to
+ * serialize the output of the object to the stream.
+ *
+ * @param serializer
+ * @throws HBaseRestException
+ */
+ public void restSerialize(IRestSerializer serializer)
+ throws HBaseRestException;
+}
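Elsewhere in this patch the XML serializer calls cell.restSerialize(this), row.restSerialize(this), and so on, so Cell, RowResult, HTableDescriptor, and HColumnDescriptor evidently implement this interface. A minimal sketch of the double-dispatch callback such an implementation makes (the wrapper class below is hypothetical; the real classes call back with this):

    package org.apache.hadoop.hbase.rest.serializer;

    import org.apache.hadoop.hbase.io.Cell;
    import org.apache.hadoop.hbase.rest.exception.HBaseRestException;

    /** Hypothetical wrapper showing the shape of a restSerialize() callback. */
    public class CellSerializableSketch implements ISerializable {
      private final Cell cell;

      public CellSerializableSketch(Cell cell) {
        this.cell = cell;
      }

      public void restSerialize(IRestSerializer serializer)
          throws HBaseRestException {
        // Double dispatch: hand the wrapped Cell to the overload that renders it.
        serializer.serializeCell(cell);
      }
    }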
diff --git a/src/java/org/apache/hadoop/hbase/rest/serializer/JSONSerializer.java b/src/java/org/apache/hadoop/hbase/rest/serializer/JSONSerializer.java
new file mode 100644
index 0000000..d54df8d
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/serializer/JSONSerializer.java
@@ -0,0 +1,213 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest.serializer;
+
+import javax.servlet.http.HttpServletResponse;
+
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.rest.DatabaseModel.DatabaseMetadata;
+import org.apache.hadoop.hbase.rest.Status.StatusMessage;
+import org.apache.hadoop.hbase.rest.TableModel.Regions;
+import org.apache.hadoop.hbase.rest.descriptors.ScannerIdentifier;
+import org.apache.hadoop.hbase.rest.descriptors.TimestampsDescriptor;
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+
+import agilejson.JSON;
+
+/**
+ *
+ * Serializes objects into JSON strings and prints them back out on the output
+ * stream. Note that this JSON implementation relies on annotations on the
+ * objects to be serialized.
+ *
+ * Since these annotations describe how the objects are serialized, the only
+ * method that is implemented is writeOutput(Object o). The other
+ * methods in the interface do not need to be implemented.
+ */
+public class JSONSerializer extends AbstractRestSerializer {
+
+ /**
+ * @param response
+ */
+ public JSONSerializer(HttpServletResponse response) {
+ super(response, false);
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.serializer.IRestSerializer#writeOutput(java
+ * .lang.Object, javax.servlet.http.HttpServletResponse)
+ */
+ public void writeOutput(Object o) throws HBaseRestException {
+ response.setContentType("application/json");
+
+ try {
+ // LOG.debug("At top of send data");
+ String data = JSON.toJSON(o);
+ response.setContentLength(data.length());
+ response.getWriter().println(data);
+ } catch (Exception e) {
+ // LOG.debug("Error sending data: " + e.toString());
+ throw new HBaseRestException(e);
+ }
+
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see org.apache.hadoop.hbase.rest.serializer.IRestSerializer#
+ * serializeColumnDescriptor(org.apache.hadoop.hbase.HColumnDescriptor)
+ */
+ public void serializeColumnDescriptor(HColumnDescriptor column)
+ throws HBaseRestException {
+ // No implementation needed for the JSON serializer
+
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see org.apache.hadoop.hbase.rest.serializer.IRestSerializer#
+ * serializeDatabaseMetadata
+ * (org.apache.hadoop.hbase.rest.DatabaseModel.DatabaseMetadata)
+ */
+ public void serializeDatabaseMetadata(DatabaseMetadata databaseMetadata)
+ throws HBaseRestException {
+ // No implementation needed for the JSON serializer
+
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.serializer.IRestSerializer#serializeRegionData
+ * (org.apache.hadoop.hbase.rest.TableModel.Regions)
+ */
+ public void serializeRegionData(Regions regions) throws HBaseRestException {
+ // No implementation needed for the JSON serializer
+
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see org.apache.hadoop.hbase.rest.serializer.IRestSerializer#
+ * serializeTableDescriptor(org.apache.hadoop.hbase.HTableDescriptor)
+ */
+ public void serializeTableDescriptor(HTableDescriptor tableDescriptor)
+ throws HBaseRestException {
+ // No implementation needed for the JSON serializer
+
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.serializer.IRestSerializer#serializeStatusMessage
+ * (org.apache.hadoop.hbase.rest.Status.StatusMessage)
+ */
+ public void serializeStatusMessage(StatusMessage message)
+ throws HBaseRestException {
+ // No implementation needed for the JSON serializer
+
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see org.apache.hadoop.hbase.rest.serializer.IRestSerializer#
+ * serializeScannerIdentifier(org.apache.hadoop.hbase.rest.ScannerIdentifier)
+ */
+ public void serializeScannerIdentifier(ScannerIdentifier scannerIdentifier)
+ throws HBaseRestException {
+ // No implementation needed for the JSON serializer
+
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.serializer.IRestSerializer#serializeRowResult
+ * (org.apache.hadoop.hbase.io.RowResult)
+ */
+ public void serializeRowResult(RowResult rowResult) throws HBaseRestException {
+ // No implementation needed for the JSON serializer
+
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.serializer.IRestSerializer#serializeRowResultArray
+ * (org.apache.hadoop.hbase.io.RowResult[])
+ */
+ public void serializeRowResultArray(RowResult[] rows)
+ throws HBaseRestException {
+ // No implementation needed for the JSON serializer
+
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.serializer.IRestSerializer#serializeCell(org
+ * .apache.hadoop.hbase.io.Cell)
+ */
+ public void serializeCell(Cell cell) throws HBaseRestException {
+ // No implementation needed for the JSON serializer
+
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.serializer.IRestSerializer#serializeCellArray
+ * (org.apache.hadoop.hbase.io.Cell[])
+ */
+ public void serializeCellArray(Cell[] cells) throws HBaseRestException {
+ // No implementation needed for the JSON serializer
+
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.serializer.IRestSerializer#serializeTimestamps
+ * (org.apache.hadoop.hbase.rest.RowModel.TimestampsDescriptor)
+ */
+ public void serializeTimestamps(TimestampsDescriptor timestampsDescriptor)
+ throws HBaseRestException {
+ // No implementation needed for the JSON serializer
+ }
+
+}
diff --git a/src/java/org/apache/hadoop/hbase/rest/serializer/RestSerializerFactory.java b/src/java/org/apache/hadoop/hbase/rest/serializer/RestSerializerFactory.java
new file mode 100644
index 0000000..9284da0
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/serializer/RestSerializerFactory.java
@@ -0,0 +1,56 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest.serializer;
+
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+
+import org.apache.hadoop.hbase.rest.Dispatcher.ContentType;
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+
+/**
+ *
+ * Factory that returns a REST serializer tailored to the Accept header
+ * of the HTTP request.
+ *
+ */
+public class RestSerializerFactory {
+
+ public static AbstractRestSerializer getSerializer(
+ HttpServletRequest request, HttpServletResponse response)
+ throws HBaseRestException {
+ ContentType ct = ContentType.getContentType(request.getHeader("accept"));
+ AbstractRestSerializer serializer = null;
+
+ // TODO refactor this so it uses reflection to create the new objects.
+ switch (ct) {
+ case XML:
+ serializer = new SimpleXMLSerializer(response);
+ break;
+ case JSON:
+ serializer = new JSONSerializer(response);
+ break;
+ default:
+ serializer = new SimpleXMLSerializer(response);
+ break;
+ }
+ return serializer;
+ }
+}
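A minimal sketch of the call path the factory is built for: pick a serializer off the request's Accept header, then let it render the response object. The servlet below is a stand-in for the real Dispatcher (not part of this hunk), and it assumes the patched HTableDescriptor implements ISerializable, as the XML serializer's use of table.restSerialize(this) suggests:

    package org.apache.hadoop.hbase.rest.serializer;

    import java.io.IOException;

    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.rest.exception.HBaseRestException;

    public class SerializerUsageSketch extends HttpServlet {
      protected void doGet(HttpServletRequest request, HttpServletResponse response)
          throws IOException {
        try {
          // Stand-in payload; a real handler would fetch this from HBase.
          HTableDescriptor table = new HTableDescriptor("example_table");
          // XML or JSON serializer, depending on the Accept header.
          IRestSerializer serializer =
              RestSerializerFactory.getSerializer(request, response);
          serializer.writeOutput(table);
        } catch (HBaseRestException e) {
          response.sendError(
              HttpServletResponse.SC_INTERNAL_SERVER_ERROR, e.getMessage());
        }
      }
    }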
diff --git a/src/java/org/apache/hadoop/hbase/rest/serializer/SimpleXMLSerializer.java b/src/java/org/apache/hadoop/hbase/rest/serializer/SimpleXMLSerializer.java
new file mode 100644
index 0000000..12b30a8
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/rest/serializer/SimpleXMLSerializer.java
@@ -0,0 +1,464 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest.serializer;
+
+import java.io.IOException;
+import java.io.PrintWriter;
+
+import javax.servlet.http.HttpServletResponse;
+
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.rest.DatabaseModel.DatabaseMetadata;
+import org.apache.hadoop.hbase.rest.Status.StatusMessage;
+import org.apache.hadoop.hbase.rest.TableModel.Regions;
+import org.apache.hadoop.hbase.rest.descriptors.RestCell;
+import org.apache.hadoop.hbase.rest.descriptors.ScannerIdentifier;
+import org.apache.hadoop.hbase.rest.descriptors.TimestampsDescriptor;
+import org.apache.hadoop.hbase.rest.exception.HBaseRestException;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ *
+ * Basic first pass at implementing an XML serializer for the REST interface.
+ * This should probably be refactored into something better.
+ *
+ */
+public class SimpleXMLSerializer extends AbstractRestSerializer {
+
+ private final AbstractPrinter printer;
+
+ /**
+ * @param response
+ * @throws HBaseRestException
+ */
+ @SuppressWarnings("synthetic-access")
+ public SimpleXMLSerializer(HttpServletResponse response)
+ throws HBaseRestException {
+ super(response, false);
+ printer = new SimplePrinter(response);
+ }
+
+ @SuppressWarnings("synthetic-access")
+ public SimpleXMLSerializer(HttpServletResponse response, boolean prettyPrint)
+ throws HBaseRestException {
+ super(response, prettyPrint);
+ if (prettyPrint) {
+ printer = new PrettyPrinter(response);
+ } else {
+ printer = new SimplePrinter(response);
+ }
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.serializer.IRestSerializer#writeOutput(java
+ * .lang.Object, java.io.OutputStream)
+ */
+ public void writeOutput(Object o) throws HBaseRestException {
+ response.setContentType("text/xml");
+ response.setCharacterEncoding(HConstants.UTF8_ENCODING);
+
+ if (o instanceof ISerializable) {
+ ((ISerializable) o).restSerialize(this);
+ } else if (o.getClass().isArray()
+ && o.getClass().getComponentType() == RowResult.class) {
+ this.serializeRowResultArray((RowResult[]) o);
+ } else if (o.getClass().isArray()
+ && o.getClass().getComponentType() == Cell.class) {
+ this.serializeCellArray((Cell[]) o);
+ } else {
+ throw new HBaseRestException(
+ "Object does not conform to the ISerializable "
+ + "interface. Unable to generate xml output.");
+ }
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see org.apache.hadoop.hbase.rest.serializer.IRestSerializer#
+ * serializeDatabaseMetadata
+ * (org.apache.hadoop.hbase.rest.DatabaseModel.DatabaseMetadata)
+ */
+ public void serializeDatabaseMetadata(DatabaseMetadata databaseMetadata)
+ throws HBaseRestException {
+ printer.print("<tables>");
+ for (HTableDescriptor table : databaseMetadata.getTables()) {
+ table.restSerialize(this);
+ }
+ printer.print("</tables>");
+ printer.flush();
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see org.apache.hadoop.hbase.rest.serializer.IRestSerializer#
+ * serializeTableDescriptor(org.apache.hadoop.hbase.HTableDescriptor)
+ */
+ public void serializeTableDescriptor(HTableDescriptor tableDescriptor)
+ throws HBaseRestException {
+ printer.print("<table>");
+ // name element
+ printer.print("<name>");
+ printer.print(tableDescriptor.getNameAsString());
+ printer.print("</name>");
+ // column families
+ printer.print("<columnfamilies>");
+ for (HColumnDescriptor column : tableDescriptor.getColumnFamilies()) {
+ column.restSerialize(this);
+ }
+ printer.print("</columnfamilies>");
+ printer.print("</table>");
+ printer.flush();
+
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see org.apache.hadoop.hbase.rest.serializer.IRestSerializer#
+ * serializeColumnDescriptor(org.apache.hadoop.hbase.HColumnDescriptor)
+ */
+ public void serializeColumnDescriptor(HColumnDescriptor column)
+ throws HBaseRestException {
+
+ printer.print("<columnfamily>");
+ // name
+ printer.print("<name>");
+ printer.print(org.apache.hadoop.hbase.util.Base64.encodeBytes(column.getName()));
+ printer.print("</name>");
+ // compression
+ printer.print("<compression>");
+ printer.print(column.getCompression().toString());
+ printer.print("</compression>");
+ // bloomfilter
+ printer.print("<bloomfilter>");
+ printer.print(column.isBloomfilter());
+ printer.print("</bloomfilter>");
+ // max-versions
+ printer.print("<max-versions>");
+ printer.print(column.getMaxVersions());
+ printer.print("</max-versions>");
+ // max-length
+ printer.print("<max-length>");
+ printer.print(column.getMaxValueLength());
+ printer.print("</max-length>");
+ printer.print("</columnfamily>");
+ printer.flush();
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.serializer.IRestSerializer#serializeRegionData
+ * (org.apache.hadoop.hbase.rest.TableModel.Regions)
+ */
+ public void serializeRegionData(Regions regions) throws HBaseRestException {
+
+ printer.print("<regions>");
+ for (byte[] region : regions.getRegionKey()) {
+ printer.print("<region>");
+ printer.print(Bytes.toString(region));
+ printer.print("</region>");
+ }
+ printer.print("</regions>");
+ printer.flush();
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.serializer.IRestSerializer#serializeStatusMessage
+ * (org.apache.hadoop.hbase.rest.Status.StatusMessage)
+ */
+ public void serializeStatusMessage(StatusMessage message)
+ throws HBaseRestException {
+
+ printer.print("<status>");
+ printer.print("<code>");
+ printer.print(message.getStatusCode());
+ printer.print("</code>");
+ printer.print("<message>");
+ printer.print(message.getMessage().toString());
+ printer.print("</message>");
+ printer.print("<error>");
+ printer.print(message.getError());
+ printer.print("</error>");
+ printer.print("</status>");
+ printer.flush();
+
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see org.apache.hadoop.hbase.rest.serializer.IRestSerializer#
+ * serializeScannerIdentifier(org.apache.hadoop.hbase.rest.ScannerIdentifier)
+ */
+ public void serializeScannerIdentifier(ScannerIdentifier scannerIdentifier)
+ throws HBaseRestException {
+
+ printer.print("<scanner>");
+ printer.print("<id>");
+ printer.print(scannerIdentifier.getId());
+ printer.print("</id>");
+ printer.print("</scanner>");
+ printer.flush();
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.serializer.IRestSerializer#serializeRowResult
+ * (org.apache.hadoop.hbase.io.RowResult)
+ */
+ public void serializeRowResult(RowResult rowResult) throws HBaseRestException {
+
+ printer.print("<row>");
+ printer.print("<name>");
+ printer.print(org.apache.hadoop.hbase.util.Base64.encodeBytes(rowResult
+ .getRow()));
+ printer.print("</name>");
+ printer.print("<columns>");
+ for (RestCell cell : rowResult.getCells()) {
+ printer.print("<column>");
+ printer.print("<name>");
+ printer.print(org.apache.hadoop.hbase.util.Base64.encodeBytes(cell
+ .getName()));
+ printer.print("</name>");
+ printer.print("<timestamp>");
+ printer.print(cell.getTimestamp());
+ printer.print("</timestamp>");
+ printer.print("<value>");
+ printer.print(org.apache.hadoop.hbase.util.Base64.encodeBytes(cell
+ .getValue()));
+ printer.print("</value>");
+ printer.print("</column>");
+ printer.flush();
+ }
+ printer.print("</columns>");
+ printer.print("</row>");
+ printer.flush();
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.serializer.IRestSerializer#serializeRowResultArray
+ * (org.apache.hadoop.hbase.io.RowResult[])
+ */
+ public void serializeRowResultArray(RowResult[] rows)
+ throws HBaseRestException {
+ printer.print("<rows>");
+ for (RowResult row : rows) {
+ row.restSerialize(this);
+ }
+ printer.print("</rows>");
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.serializer.IRestSerializer#serializeCell(org
+ * .apache.hadoop.hbase.io.Cell)
+ */
+ public void serializeCell(Cell cell) throws HBaseRestException {
+ printer.print("<cell>");
+ printer.print("<value>");
+ printer.print(org.apache.hadoop.hbase.util.Base64.encodeBytes(cell
+ .getValue()));
+ printer.print("</value>");
+ printer.print("<timestamp>");
+ printer.print(cell.getTimestamp());
+ printer.print("</timestamp>");
+ printer.print("</cell>");
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.serializer.IRestSerializer#serializeCellArray
+ * (org.apache.hadoop.hbase.io.Cell[])
+ */
+ public void serializeCellArray(Cell[] cells) throws HBaseRestException {
+ printer.print("<cells>");
+ for (Cell cell : cells) {
+ cell.restSerialize(this);
+ }
+ printer.print("</cells>");
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.serializer.IRestSerializer#serializeTimestamps
+ * (org.apache.hadoop.hbase.rest.RowModel.TimestampsDescriptor)
+ */
+ public void serializeTimestamps(TimestampsDescriptor timestampsDescriptor)
+ throws HBaseRestException {
+ // TODO Auto-generated method stub
+
+ }
+
+ // Private classes used for printing the output
+
+ private interface IPrinter {
+ public void print(String output);
+
+ public void print(int output);
+
+ public void print(long output);
+
+ public void print(boolean output);
+
+ public void flush();
+ }
+
+ private abstract class AbstractPrinter implements IPrinter {
+ protected final PrintWriter writer;
+
+ @SuppressWarnings("unused")
+ private AbstractPrinter() {
+ writer = null;
+ }
+
+ public AbstractPrinter(HttpServletResponse response)
+ throws HBaseRestException {
+ try {
+ writer = response.getWriter();
+ } catch (IOException e) {
+ throw new HBaseRestException(e.getMessage(), e);
+ }
+ }
+
+ public void flush() {
+ writer.flush();
+ }
+ }
+
+ private class SimplePrinter extends AbstractPrinter {
+ private SimplePrinter(HttpServletResponse response)
+ throws HBaseRestException {
+ super(response);
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.serializer.SimpleXMLSerializer.Printer#print
+ * (java.io.PrintWriter, java.lang.String)
+ */
+ public void print(final String output) {
+ writer.print(output);
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.serializer.SimpleXMLSerializer.IPrinter#
+ * print(int)
+ */
+ public void print(int output) {
+ writer.print(output);
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.serializer.SimpleXMLSerializer.IPrinter#
+ * print(long)
+ */
+ public void print(long output) {
+ writer.print(output);
+ }
+
+ /* (non-Javadoc)
+ * @see org.apache.hadoop.hbase.rest.serializer.SimpleXMLSerializer.IPrinter#print(boolean)
+ */
+ public void print(boolean output) {
+ writer.print(output);
+ }
+ }
+
+ private class PrettyPrinter extends AbstractPrinter {
+ private PrettyPrinter(HttpServletResponse response)
+ throws HBaseRestException {
+ super(response);
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.serializer.SimpleXMLSerializer.Printer#print
+ * (java.io.PrintWriter, java.lang.String)
+ */
+ public void print(String output) {
+ writer.println(output);
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.serializer.SimpleXMLSerializer.IPrinter#
+ * print(int)
+ */
+ public void print(int output) {
+ writer.println(output);
+
+ }
+
+ /*
+ * (non-Javadoc)
+ *
+ * @see
+ * org.apache.hadoop.hbase.rest.serializer.SimpleXMLSerializer.IPrinter#
+ * print(long)
+ */
+ public void print(long output) {
+ writer.println(output);
+ }
+
+ /* (non-Javadoc)
+ * @see org.apache.hadoop.hbase.rest.serializer.SimpleXMLSerializer.IPrinter#print(boolean)
+ */
+ public void print(boolean output) {
+ writer.println(output);
+ }
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/thrift/Hbase.thrift b/src/java/org/apache/hadoop/hbase/thrift/Hbase.thrift
new file mode 100644
index 0000000..da7fc93
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/thrift/Hbase.thrift
@@ -0,0 +1,549 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+// ----------------------------------------------------------------
+// HBase.thrift -
+//
+// This is a Thrift interface definition file for the Hbase service.
+// Target language libraries for C++, Java, Ruby, PHP, (and more) are
+// generated by running this file through the Thrift compiler with the
+// appropriate flags. The Thrift compiler binary and runtime
+// libraries for various languages is currently available from
+// Facebook (http://developers.facebook.com/thrift/). The intent is
+// for the Thrift project to migrate to Apache Incubator.
+//
+// See the package.html file for information on the version of Thrift
+// used to generate the *.java files checked into the Hbase project.
+// ----------------------------------------------------------------
+
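+// ----------------------------------------------------------------
+// Illustrative only: a rough sketch of how a generated Java client is
+// typically driven (exception handling omitted). Class and package names
+// assume the org.apache.thrift Java runtime and the generated
+// org.apache.hadoop.hbase.thrift.generated.Hbase.Client; adjust to the
+// Thrift build actually in use.
+//
+//   TTransport transport = new TSocket("localhost", 9090); // Thrift server host/port
+//   TProtocol protocol = new TBinaryProtocol(transport);
+//   Hbase.Client client = new Hbase.Client(protocol);
+//   transport.open();
+//   for (byte[] name : client.getTableNames()) {  // Text maps to byte[] here
+//     System.out.println(new String(name, "UTF-8"));
+//   }
+//   transport.close();
+// ----------------------------------------------------------------
+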
+namespace java org.apache.hadoop.hbase.thrift.generated
+namespace cpp apache.hadoop.hbase.thrift
+namespace rb Apache.Hadoop.Hbase.Thrift
+namespace py hbase
+namespace perl Hbase
+
+// note: other language namespaces tbd...
+
+//
+// Types
+//
+
+// NOTE: all variables with the Text type are assumed to be correctly
+// formatted UTF-8 strings. This is a programming language and locale
+// dependent property that the client application is repsonsible for
+// maintaining. If strings with an invalid encoding are sent, an
+// IOError will be thrown.
+
+typedef binary Text
+typedef binary Bytes
+typedef i32 ScannerID
+
+/**
+ * TCell - Used to transport a cell value (byte[]) and the timestamp it was
+ * stored with together as a result for get and getRow methods. This promotes
+ * the timestamp of a cell to a first-class value, making it easy to take
+ * note of temporal data. Cell is used all the way from HStore up to HTable.
+ */
+struct TCell{
+ 1:Bytes value,
+ 2:i64 timestamp
+}
+
+/**
+ * An HColumnDescriptor contains information about a column family
+ * such as the number of versions, compression settings, etc. It is
+ * used as input when creating a table or adding a column.
+ */
+struct ColumnDescriptor {
+ 1:Text name,
+ 2:i32 maxVersions = 3,
+ 3:string compression = "NONE",
+ 4:bool inMemory = 0,
+ 5:i32 maxValueLength = 2147483647,
+ 6:string bloomFilterType = "NONE",
+ 7:i32 bloomFilterVectorSize = 0,
+ 8:i32 bloomFilterNbHashes = 0,
+ 9:bool blockCacheEnabled = 0,
+ 10:i32 timeToLive = -1
+}
+
+/**
+ * A TRegionInfo contains information about an HTable region.
+ */
+struct TRegionInfo {
+ 1:Text startKey,
+ 2:Text endKey,
+ 3:i64 id,
+ 4:Text name,
+ 5:byte version
+}
+
+/**
+ * A Mutation object is used to either update or delete a column-value.
+ */
+struct Mutation {
+ 1:bool isDelete = 0,
+ 2:Text column,
+ 3:Text value
+}
+
+
+/**
+ * A BatchMutation object is used to apply a number of Mutations to a single row.
+ */
+struct BatchMutation {
+ 1:Text row,
+ 2:list<Mutation> mutations
+}
+
+
+/**
+ * Holds row name and then a map of columns to cells.
+ */
+struct TRowResult {
+ 1:Text row,
+ 2:map<Text, TCell> columns
+}
+
+//
+// Exceptions
+//
+/**
+ * An IOError exception signals that an error occurred communicating
+ * to the Hbase master or an Hbase region server. Also used to return
+ * more general Hbase error conditions.
+ */
+exception IOError {
+ 1:string message
+}
+
+/**
+ * An IllegalArgument exception indicates an illegal or invalid
+ * argument was passed into a procedure.
+ */
+exception IllegalArgument {
+ 1:string message
+}
+
+/**
+ * An AlreadyExists exception signals that a table with the specified
+ * name already exists.
+ */
+exception AlreadyExists {
+ 1:string message
+}
+
+//
+// Service
+//
+
+service Hbase {
+ /**
+ * Brings a table on-line (enables it)
+ * @param tableName name of the table
+ */
+ void enableTable(1:Bytes tableName)
+ throws (1:IOError io)
+
+ /**
+ * Disables a table (takes it off-line). If it is being served, the master
+ * will tell the servers to stop serving it.
+ * @param tableName name of the table
+ */
+ void disableTable(1:Bytes tableName)
+ throws (1:IOError io)
+
+ /**
+ * @param tableName name of table to check
+ * @return true if table is on-line
+ */
+ bool isTableEnabled(1:Bytes tableName)
+ throws (1:IOError io)
+
+ void compact(1:Bytes tableNameOrRegionName)
+ throws (1:IOError io)
+
+ void majorCompact(1:Bytes tableNameOrRegionName)
+ throws (1:IOError io)
+
+ /**
+ * List all the userspace tables.
+ * @return a list of table names
+ */
+ list<Text> getTableNames()
+ throws (1:IOError io)
+
+ /**
+ * List all the column families associated with a table.
+ * @param tableName table name
+ * @return list of column family descriptors
+ */
+ map<Text,ColumnDescriptor> getColumnDescriptors (1:Text tableName)
+ throws (1:IOError io)
+
+ /**
+ * List the regions associated with a table.
+ * @param tableName table name
+ * @return list of region descriptors
+ */
+ list<TRegionInfo> getTableRegions(1:Text tableName)
+ throws (1:IOError io)
+
+ /**
+ * Create a table with the specified column families. The name
+ * field for each ColumnDescriptor must be set and must end in a
+ * colon (:). All other fields are optional and will get default
+ * values if not explicitly specified.
+ *
+ * @param tableName name of table to create
+ * @param columnFamilies list of column family descriptors
+ *
+ * @throws IllegalArgument if an input parameter is invalid
+ * @throws AlreadyExists if the table name already exists
+ */
+ void createTable(1:Text tableName, 2:list<ColumnDescriptor> columnFamilies)
+ throws (1:IOError io, 2:IllegalArgument ia, 3:AlreadyExists exist)
+
+ /**
+ * Deletes a table
+ * @param tableName name of table to delete
+ * @throws IOError if table doesn't exist on server or there was some other
+ * problem
+ */
+ void deleteTable(1:Text tableName)
+ throws (1:IOError io)
+
+ /**
+ * Get a single TCell for the specified table, row, and column at the
+ * latest timestamp. Returns an empty list if no such value exists.
+ *
+ * @param tableName name of table
+ * @param row row key
+ * @param column column name
+ * @return value for specified row/column
+ */
+ list<TCell> get(1:Text tableName, 2:Text row, 3:Text column)
+ throws (1:IOError io)
+
+ /**
+ * Get the specified number of versions for the specified table,
+ * row, and column.
+ *
+ * @param tableName name of table
+ * @param row row key
+ * @param column column name
+ * @param numVersions number of versions to retrieve
+ * @return list of cells for specified row/column
+ */
+ list<TCell> getVer(1:Text tableName, 2:Text row, 3:Text column,
+ 4:i32 numVersions)
+ throws (1:IOError io)
+
+ /**
+ * Get the specified number of versions for the specified table,
+ * row, and column. Only versions less than or equal to the specified
+ * timestamp will be returned.
+ *
+ * @param tableName name of table
+ * @param row row key
+ * @param column column name
+ * @param timestamp timestamp
+ * @param numVersions number of versions to retrieve
+ * @return list of cells for specified row/column
+ */
+ list<TCell> getVerTs(1:Text tableName, 2:Text row, 3:Text column,
+ 4:i64 timestamp, 5:i32 numVersions)
+ throws (1:IOError io)
+
+ /**
+ * Get all the data for the specified table and row at the latest
+ * timestamp. Returns an empty list if the row does not exist.
+ *
+ * @param tableName name of table
+ * @param row row key
+ * @return TRowResult containing the row and map of columns to TCells
+ */
+ list<TRowResult> getRow(1:Text tableName, 2:Text row)
+ throws (1:IOError io)
+
+ /**
+ * Get the specified columns for the specified table and row at the latest
+ * timestamp. Returns an empty list if the row does not exist.
+ *
+ * @param tableName name of table
+ * @param row row key
+ * @param columns List of columns to return, null for all columns
+ * @return TRowResult containing the row and map of columns to TCells
+ */
+ list<TRowResult> getRowWithColumns(1:Text tableName, 2:Text row,
+ 3:list<Text> columns)
+ throws (1:IOError io)
+
+ /**
+ * Get all the data for the specified table and row at the specified
+ * timestamp. Returns an empty list if the row does not exist.
+ *
+ * @param tableName name of table
+ * @param row row key
+ * @param timestamp timestamp
+ * @return TRowResult containing the row and map of columns to TCells
+ */
+ list<TRowResult> getRowTs(1:Text tableName, 2:Text row, 3:i64 timestamp)
+ throws (1:IOError io)
+
+ /**
+ * Get the specified columns for the specified table and row at the specified
+ * timestamp. Returns an empty list if the row does not exist.
+ *
+ * @param tableName name of table
+ * @param row row key
+ * @param columns List of columns to return, null for all columns
+ * @param timestamp timestamp
+ * @return TRowResult containing the row and map of columns to TCells
+ */
+ list<TRowResult> getRowWithColumnsTs(1:Text tableName, 2:Text row,
+ 3:list<Text> columns, 4:i64 timestamp)
+ throws (1:IOError io)
+
+ /**
+ * Apply a series of mutations (updates/deletes) to a row in a
+ * single transaction. If an exception is thrown, then the
+ * transaction is aborted. Default current timestamp is used, and
+ * all entries will have an identical timestamp.
+ *
+ * @param tableName name of table
+ * @param row row key
+ * @param mutations list of mutation commands
+ */
+ void mutateRow(1:Text tableName, 2:Text row, 3:list<Mutation> mutations)
+ throws (1:IOError io, 2:IllegalArgument ia)
+
+ /**
+ * Apply a series of mutations (updates/deletes) to a row in a
+ * single transaction. If an exception is thrown, then the
+ * transaction is aborted. The specified timestamp is used, and
+ * all entries will have an identical timestamp.
+ *
+ * @param tableName name of table
+ * @param row row key
+ * @param mutations list of mutation commands
+ * @param timestamp timestamp
+ */
+ void mutateRowTs(1:Text tableName, 2:Text row, 3:list<Mutation> mutations, 4:i64 timestamp)
+ throws (1:IOError io, 2:IllegalArgument ia)
+
+ /**
+ * Apply a series of batches (each a series of mutations on a single row)
+ * in a single transaction. If an exception is thrown, then the
+ * transaction is aborted. Default current timestamp is used, and
+ * all entries will have an identical timestamp.
+ *
+ * @param tableName name of table
+ * @param rowBatches list of row batches
+ */
+ void mutateRows(1:Text tableName, 2:list<BatchMutation> rowBatches)
+ throws (1:IOError io, 2:IllegalArgument ia)
+
+ /**
+ * Apply a series of batches (each a series of mutations on a single row)
+ * in a single transaction. If an exception is thrown, then the
+ * transaction is aborted. The specified timestamp is used, and
+ * all entries will have an identical timestamp.
+ *
+ * @param tableName name of table
+ * @param rowBatches list of row batches
+ * @param timestamp timestamp
+ */
+ void mutateRowsTs(1:Text tableName, 2:list<BatchMutation> rowBatches, 3:i64 timestamp)
+ throws (1:IOError io, 2:IllegalArgument ia)
+
+ /**
+ * Atomically increment the column value specified. Returns the next value post increment.
+ * @param tableName name of table
+ * @param row row to increment
+ * @param column name of column
+ * @param value amount to increment by
+ * @return the new value of the column after the increment
+ */
+ i64 atomicIncrement(1:Text tableName, 2:Text row, 3:Text column, 4:i64 value)
+ throws (1:IOError io, 2:IllegalArgument ia)
+
+ /**
+ * Delete all cells that match the passed row and column.
+ *
+ * @param tableName name of table
+ * @param row Row to update
+ * @param column name of column whose value is to be deleted
+ */
+ void deleteAll(1:Text tableName, 2:Text row, 3:Text column)
+ throws (1:IOError io)
+
+ /**
+ * Delete all cells that match the passed row and column and whose
+ * timestamp is equal-to or older than the passed timestamp.
+ *
+ * @param tableName name of table
+ * @param row Row to update
+ * @param column name of column whose value is to be deleted
+ * @param timestamp timestamp
+ */
+ void deleteAllTs(1:Text tableName, 2:Text row, 3:Text column, 4:i64 timestamp)
+ throws (1:IOError io)
+
+ /**
+ * Completely delete the row's cells.
+ *
+ * @param tableName name of table
+ * @param row key of the row to be completely deleted.
+ */
+ void deleteAllRow(1:Text tableName, 2:Text row)
+ throws (1:IOError io)
+
+ /**
+ * Completely delete the row's cells marked with a timestamp
+ * equal-to or older than the passed timestamp.
+ *
+ * @param tableName name of table
+ * @param row key of the row to be completely deleted.
+ * @param timestamp timestamp
+ */
+ void deleteAllRowTs(1:Text tableName, 2:Text row, 3:i64 timestamp)
+ throws (1:IOError io)
+
+ /**
+ * Get a scanner on the current table starting at the specified row and
+ * ending at the last row in the table. Return the specified columns.
+ *
+ * @param columns columns to scan. If column name is a column family, all
+ * columns of the specified column family are returned. It's also possible
+ * to pass a regex in the column qualifier.
+ * @param tableName name of table
+ * @param startRow starting row in table to scan. send "" (empty string) to
+ * start at the first row.
+ *
+ * @return scanner id to be used with other scanner procedures
+ */
+ ScannerID scannerOpen(1:Text tableName,
+ 2:Text startRow,
+ 3:list<Text> columns)
+ throws (1:IOError io)
+
+ /**
+ * Get a scanner on the current table starting and stopping at the
+ * specified rows. Return the specified columns.
+ *
+ * @param columns columns to scan. If column name is a column family, all
+ * columns of the specified column family are returned. It's also possible
+ * to pass a regex in the column qualifier.
+ * @param tableName name of table
+ * @param startRow starting row in table to scan. send "" (empty string) to
+ * start at the first row.
+ * @param stopRow row to stop scanning on. This row is *not* included in the
+ * scanner's results
+ *
+ * @return scanner id to be used with other scanner procedures
+ */
+ ScannerID scannerOpenWithStop(1:Text tableName,
+ 2:Text startRow,
+ 3:Text stopRow,
+ 4:list<Text> columns)
+ throws (1:IOError io)
+
+ /**
+ * Get a scanner on the current table starting at the specified row and
+ * ending at the last row in the table. Return the specified columns.
+ * Only values with the specified timestamp are returned.
+ *
+ * @param columns columns to scan. If column name is a column family, all
+ * columns of the specified column family are returned. It's also possible
+ * to pass a regex in the column qualifier.
+ * @param tableName name of table
+ * @param startRow starting row in table to scan. send "" (empty string) to
+ * start at the first row.
+ * @param timestamp timestamp
+ *
+ * @return scanner id to be used with other scanner procedures
+ */
+ ScannerID scannerOpenTs(1:Text tableName,
+ 2:Text startRow,
+ 3:list<Text> columns,
+ 4:i64 timestamp)
+ throws (1:IOError io)
+
+ /**
+ * Get a scanner on the current table starting and stopping at the
+ * specified rows. Return the specified columns. Only values with the
+ * specified timestamp are returned.
+ *
+ * @param columns columns to scan. If column name is a column family, all
+ * columns of the specified column family are returned. It's also possible
+ * to pass a regex in the column qualifier.
+ * @param tableName name of table
+ * @param startRow starting row in table to scan. send "" (empty string) to
+ * start at the first row.
+ * @param stopRow row to stop scanning on. This row is *not* included
+ * in the scanner's results
+ * @param timestamp timestamp
+ *
+ * @return scanner id to be used with other scanner procedures
+ */
+ ScannerID scannerOpenWithStopTs(1:Text tableName,
+ 2:Text startRow,
+ 3:Text stopRow,
+ 4:list<Text> columns,
+ 5:i64 timestamp)
+ throws (1:IOError io)
+
+ /**
+ * Returns the scanner's current row value and advances to the next
+ * row in the table. When there are no more rows in the table, or a key
+ * greater-than-or-equal-to the scanner's specified stopRow is reached,
+ * an empty list is returned.
+ *
+ * @param id id of a scanner returned by scannerOpen
+ * @return a TRowResult containing the current row and a map of the columns to TCells.
+ * @throws IllegalArgument if ScannerID is invalid
+ */
+ list<TRowResult> scannerGet(1:ScannerID id)
+ throws (1:IOError io, 2:IllegalArgument ia)
+
+ /**
+ * Returns up to nbRows rows starting at the scanner's current row and
+ * advances the scanner past the rows returned. When there are no more
+ * rows in the table, or a key greater-than-or-equal-to the scanner's
+ * specified stopRow is reached, an empty list is returned.
+ *
+ * @param id id of a scanner returned by scannerOpen
+ * @param nbRows number of results to return
+ * @return a list of TRowResults, each containing a row and a map of the columns to TCells.
+ * @throws IllegalArgument if ScannerID is invalid
+ */
+ list<TRowResult> scannerGetList(1:ScannerID id, 2:i32 nbRows)
+ throws (1:IOError io, 2:IllegalArgument ia)
+
+ /**
+ * Closes the server-state associated with an open scanner.
+ *
+ * @param id id of a scanner returned by scannerOpen
+ * @throws IllegalArgument if ScannerID is invalid
+ */
+ void scannerClose(1:ScannerID id)
+ throws (1:IOError io, 2:IllegalArgument ia)
+}
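+
+// A minimal Java client sketch (illustrative only; host and port are
+// placeholders, 9090 being the ThriftServer default). The Thrift compiler
+// generates an Hbase.Client for this service, driven over libthrift's
+// TSocket transport and TBinaryProtocol:
+//
+//   TTransport transport = new TSocket("localhost", 9090);
+//   transport.open();
+//   Hbase.Client client = new Hbase.Client(new TBinaryProtocol(transport));
+//   for (byte[] name : client.getTableNames()) {
+//     System.out.println(new String(name));
+//   }
+//   transport.close();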
diff --git a/src/java/org/apache/hadoop/hbase/thrift/ThriftServer.java b/src/java/org/apache/hadoop/hbase/thrift/ThriftServer.java
new file mode 100644
index 0000000..2074e53
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/thrift/ThriftServer.java
@@ -0,0 +1,629 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.thrift;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.MasterNotRunningException;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Scanner;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.thrift.generated.AlreadyExists;
+import org.apache.hadoop.hbase.thrift.generated.BatchMutation;
+import org.apache.hadoop.hbase.thrift.generated.ColumnDescriptor;
+import org.apache.hadoop.hbase.thrift.generated.Hbase;
+import org.apache.hadoop.hbase.thrift.generated.IOError;
+import org.apache.hadoop.hbase.thrift.generated.IllegalArgument;
+import org.apache.hadoop.hbase.thrift.generated.Mutation;
+import org.apache.hadoop.hbase.thrift.generated.TRegionInfo;
+import org.apache.hadoop.hbase.thrift.generated.TCell;
+import org.apache.hadoop.hbase.thrift.generated.TRowResult;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.thrift.TException;
+import org.apache.thrift.protocol.TBinaryProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.apache.thrift.server.TServer;
+import org.apache.thrift.server.TThreadPoolServer;
+import org.apache.thrift.transport.TServerSocket;
+import org.apache.thrift.transport.TServerTransport;
+
+/**
+ * ThriftServer - this class starts up a Thrift server which implements the
+ * Hbase API specified in the Hbase.thrift IDL file.
+ */
+public class ThriftServer {
+
+ /**
+ * The HBaseHandler is a glue object that connects Thrift RPC calls to the
+ * HBase client API primarily defined in the HBaseAdmin and HTable objects.
+ */
+ public static class HBaseHandler implements Hbase.Iface {
+ protected HBaseConfiguration conf = new HBaseConfiguration();
+ protected HBaseAdmin admin = null;
+ protected final Log LOG = LogFactory.getLog(this.getClass().getName());
+
+ // nextScannerId and scannerMap are used to manage scanner state
+ protected int nextScannerId = 0;
+ protected HashMap<Integer, Scanner> scannerMap = null;
+
+ /**
+ * Returns all the column families of the given HTable.
+ *
+ * @param table HTable to inspect
+ * @return array of column family names, each including the trailing colon
+ * @throws IOException
+ */
+ byte[][] getAllColumns(HTable table) throws IOException {
+ HColumnDescriptor[] cds = table.getTableDescriptor().getColumnFamilies();
+ byte[][] columns = new byte[cds.length][];
+ for (int i = 0; i < cds.length; i++) {
+ columns[i] = cds[i].getNameWithColon();
+ }
+ return columns;
+ }
+
+ /**
+ * Creates and returns an HTable instance from a given table name.
+ *
+ * @param tableName
+ * name of table
+ * @return HTable object
+ * @throws IOException
+ * @throws IOError
+ */
+ protected HTable getTable(final byte[] tableName) throws IOError,
+ IOException {
+ return new HTable(this.conf, tableName);
+ }
+
+ /**
+ * Assigns a unique ID to the scanner and adds the mapping to an internal
+ * hash-map.
+ *
+ * @param scanner
+ * @return integer scanner id
+ */
+ protected synchronized int addScanner(Scanner scanner) {
+ int id = nextScannerId++;
+ scannerMap.put(id, scanner);
+ return id;
+ }
+
+ /**
+ * Returns the scanner associated with the specified ID.
+ *
+ * @param id
+ * @return a Scanner, or null if ID was invalid.
+ */
+ protected synchronized Scanner getScanner(int id) {
+ return scannerMap.get(id);
+ }
+
+ /**
+ * Removes the scanner associated with the specified ID from the internal
+ * id->scanner hash-map.
+ *
+ * @param id
+ * @return the removed Scanner, or null if ID was invalid.
+ */
+ protected synchronized Scanner removeScanner(int id) {
+ return scannerMap.remove(id);
+ }
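+
+ // Scanner bookkeeping used by the scanner* RPCs below: the scannerOpen*
+ // variants register a client-side Scanner via addScanner and hand its
+ // integer id back to the Thrift caller; scannerGet/scannerGetList look the
+ // Scanner up by id, and scannerClose closes it and removes the mapping.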
+
+ /**
+ * Constructs an HBaseHandler object.
+ *
+ * @throws MasterNotRunningException
+ */
+ HBaseHandler() throws MasterNotRunningException {
+ conf = new HBaseConfiguration();
+ admin = new HBaseAdmin(conf);
+ scannerMap = new HashMap<Integer, Scanner>();
+ }
+
+ public void enableTable(final byte[] tableName) throws IOError {
+ try{
+ admin.enableTable(tableName);
+ } catch (IOException e) {
+ throw new IOError(e.getMessage());
+ }
+ }
+
+ public void disableTable(final byte[] tableName) throws IOError{
+ try{
+ admin.disableTable(tableName);
+ } catch (IOException e) {
+ throw new IOError(e.getMessage());
+ }
+ }
+
+ public boolean isTableEnabled(final byte[] tableName) throws IOError {
+ try {
+ return HTable.isTableEnabled(tableName);
+ } catch (IOException e) {
+ throw new IOError(e.getMessage());
+ }
+ }
+
+ public void compact(byte[] tableNameOrRegionName) throws IOError {
+ try{
+ admin.compact(tableNameOrRegionName);
+ } catch (IOException e) {
+ throw new IOError(e.getMessage());
+ }
+ }
+
+ public void majorCompact(byte[] tableNameOrRegionName) throws IOError {
+ try{
+ admin.majorCompact(tableNameOrRegionName);
+ } catch (IOException e) {
+ throw new IOError(e.getMessage());
+ }
+ }
+
+ public List<byte[]> getTableNames() throws IOError {
+ try {
+ HTableDescriptor[] tables = this.admin.listTables();
+ ArrayList<byte[]> list = new ArrayList<byte[]>(tables.length);
+ for (int i = 0; i < tables.length; i++) {
+ list.add(tables[i].getName());
+ }
+ return list;
+ } catch (IOException e) {
+ throw new IOError(e.getMessage());
+ }
+ }
+
+ public List<TRegionInfo> getTableRegions(byte[] tableName)
+ throws IOError {
+ try{
+ HTable table = getTable(tableName);
+ Map<HRegionInfo, HServerAddress> regionsInfo = table.getRegionsInfo();
+ List<TRegionInfo> regions = new ArrayList<TRegionInfo>();
+
+ for (HRegionInfo regionInfo : regionsInfo.keySet()){
+ TRegionInfo region = new TRegionInfo();
+ region.startKey = regionInfo.getStartKey();
+ region.endKey = regionInfo.getEndKey();
+ region.id = regionInfo.getRegionId();
+ region.name = regionInfo.getRegionName();
+ region.version = regionInfo.getVersion();
+ regions.add(region);
+ }
+ return regions;
+ } catch (IOException e){
+ throw new IOError(e.getMessage());
+ }
+ }
+
+ public List<TCell> get(byte[] tableName, byte[] row, byte[] column)
+ throws IOError {
+ try {
+ HTable table = getTable(tableName);
+ Cell cell = table.get(row, column);
+ return ThriftUtilities.cellFromHBase(cell);
+ } catch (IOException e) {
+ throw new IOError(e.getMessage());
+ }
+ }
+
+ public List<TCell> getVer(byte[] tableName, byte[] row,
+ byte[] column, int numVersions) throws IOError {
+ try {
+ HTable table = getTable(tableName);
+ Cell[] cells =
+ table.get(row, column, numVersions);
+ return ThriftUtilities.cellFromHBase(cells);
+ } catch (IOException e) {
+ throw new IOError(e.getMessage());
+ }
+ }
+
+ public List<TCell> getVerTs(byte[] tableName, byte[] row,
+ byte[] column, long timestamp, int numVersions) throws IOError {
+ try {
+ HTable table = getTable(tableName);
+ Cell[] cells = table.get(row, column, timestamp, numVersions);
+ return ThriftUtilities.cellFromHBase(cells);
+ } catch (IOException e) {
+ throw new IOError(e.getMessage());
+ }
+ }
+
+ public List<TRowResult> getRow(byte[] tableName, byte[] row)
+ throws IOError {
+ return getRowWithColumnsTs(tableName, row, null,
+ HConstants.LATEST_TIMESTAMP);
+ }
+
+ public List<TRowResult> getRowWithColumns(byte[] tableName, byte[] row,
+ List<byte[]> columns) throws IOError {
+ return getRowWithColumnsTs(tableName, row, columns,
+ HConstants.LATEST_TIMESTAMP);
+ }
+
+ public List<TRowResult> getRowTs(byte[] tableName, byte[] row,
+ long timestamp) throws IOError {
+ return getRowWithColumnsTs(tableName, row, null,
+ timestamp);
+ }
+
+ public List<TRowResult> getRowWithColumnsTs(byte[] tableName, byte[] row,
+ List<byte[]> columns, long timestamp) throws IOError {
+ try {
+ HTable table = getTable(tableName);
+ if (columns == null) {
+ return ThriftUtilities.rowResultFromHBase(table.getRow(row,
+ timestamp));
+ }
+ byte[][] columnArr = columns.toArray(new byte[columns.size()][]);
+ return ThriftUtilities.rowResultFromHBase(table.getRow(row,
+ columnArr, timestamp));
+ } catch (IOException e) {
+ throw new IOError(e.getMessage());
+ }
+ }
+
+ public void deleteAll(byte[] tableName, byte[] row, byte[] column)
+ throws IOError {
+ deleteAllTs(tableName, row, column, HConstants.LATEST_TIMESTAMP);
+ }
+
+ public void deleteAllTs(byte[] tableName, byte[] row, byte[] column,
+ long timestamp) throws IOError {
+ try {
+ HTable table = getTable(tableName);
+ table.deleteAll(row, column, timestamp);
+ } catch (IOException e) {
+ throw new IOError(e.getMessage());
+ }
+ }
+
+ public void deleteAllRow(byte[] tableName, byte[] row) throws IOError {
+ deleteAllRowTs(tableName, row, HConstants.LATEST_TIMESTAMP);
+ }
+
+ public void deleteAllRowTs(byte[] tableName, byte[] row, long timestamp)
+ throws IOError {
+ try {
+ HTable table = getTable(tableName);
+ table.deleteAll(row, timestamp);
+ } catch (IOException e) {
+ throw new IOError(e.getMessage());
+ }
+ }
+
+ public void createTable(byte[] tableName,
+ List<ColumnDescriptor> columnFamilies) throws IOError,
+ IllegalArgument, AlreadyExists {
+ try {
+ if (admin.tableExists(tableName)) {
+ throw new AlreadyExists("table name already in use");
+ }
+ HTableDescriptor desc = new HTableDescriptor(tableName);
+ for (ColumnDescriptor col : columnFamilies) {
+ HColumnDescriptor colDesc = ThriftUtilities.colDescFromThrift(col);
+ desc.addFamily(colDesc);
+ }
+ admin.createTable(desc);
+ } catch (IOException e) {
+ throw new IOError(e.getMessage());
+ } catch (IllegalArgumentException e) {
+ throw new IllegalArgument(e.getMessage());
+ }
+ }
+
+ public void deleteTable(byte[] tableName) throws IOError {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("deleteTable: table=" + new String(tableName));
+ }
+ try {
+ if (!admin.tableExists(tableName)) {
+ throw new IOError("table does not exist");
+ }
+ admin.deleteTable(tableName);
+ } catch (IOException e) {
+ throw new IOError(e.getMessage());
+ }
+ }
+
+ public void mutateRow(byte[] tableName, byte[] row,
+ List<Mutation> mutations) throws IOError, IllegalArgument {
+ mutateRowTs(tableName, row, mutations, HConstants.LATEST_TIMESTAMP);
+ }
+
+ public void mutateRowTs(byte[] tableName, byte[] row,
+ List<Mutation> mutations, long timestamp) throws IOError, IllegalArgument {
+ HTable table = null;
+ try {
+ table = getTable(tableName);
+ BatchUpdate batchUpdate = new BatchUpdate(row, timestamp);
+ for (Mutation m : mutations) {
+ if (m.isDelete) {
+ batchUpdate.delete(m.column);
+ } else {
+ batchUpdate.put(m.column, m.value);
+ }
+ }
+ table.commit(batchUpdate);
+ } catch (IOException e) {
+ throw new IOError(e.getMessage());
+ } catch (IllegalArgumentException e) {
+ throw new IllegalArgument(e.getMessage());
+ }
+ }
+
+ public void mutateRows(byte[] tableName, List<BatchMutation> rowBatches)
+ throws IOError, IllegalArgument, TException {
+ mutateRowsTs(tableName, rowBatches, HConstants.LATEST_TIMESTAMP);
+ }
+
+ public void mutateRowsTs(byte[] tableName, List<BatchMutation> rowBatches, long timestamp)
+ throws IOError, IllegalArgument, TException {
+ List<BatchUpdate> batchUpdates = new ArrayList<BatchUpdate>();
+
+ for (BatchMutation batch : rowBatches) {
+ byte[] row = batch.row;
+ List<Mutation> mutations = batch.mutations;
+ BatchUpdate batchUpdate = new BatchUpdate(row, timestamp);
+ for (Mutation m : mutations) {
+ if (m.isDelete) {
+ batchUpdate.delete(m.column);
+ } else {
+ batchUpdate.put(m.column, m.value);
+ }
+ }
+ batchUpdates.add(batchUpdate);
+ }
+
+ HTable table = null;
+ try {
+ table = getTable(tableName);
+ table.commit(batchUpdates);
+ } catch (IOException e) {
+ throw new IOError(e.getMessage());
+ } catch (IllegalArgumentException e) {
+ throw new IllegalArgument(e.getMessage());
+ }
+ }
+
+ public long atomicIncrement(byte[] tableName, byte[] row, byte[] column, long amount) throws IOError, IllegalArgument, TException {
+ HTable table;
+ try {
+ table = getTable(tableName);
+ return table.incrementColumnValue(row, column, amount);
+ } catch (IOException e) {
+ throw new IOError(e.getMessage());
+ }
+ }
+
+ public void scannerClose(int id) throws IOError, IllegalArgument {
+ LOG.debug("scannerClose: id=" + id);
+ Scanner scanner = getScanner(id);
+ if (scanner == null) {
+ throw new IllegalArgument("scanner ID is invalid");
+ }
+ scanner.close();
+ removeScanner(id);
+ }
+
+ public List<TRowResult> scannerGetList(int id, int nbRows) throws IllegalArgument, IOError {
+ LOG.debug("scannerGetList: id=" + id);
+ Scanner scanner = getScanner(id);
+ if (null == scanner) {
+ throw new IllegalArgument("scanner ID is invalid");
+ }
+
+ RowResult [] results = null;
+ try {
+ results = scanner.next(nbRows);
+ if (null == results) {
+ return new ArrayList<TRowResult>();
+ }
+ } catch (IOException e) {
+ throw new IOError(e.getMessage());
+ }
+ return ThriftUtilities.rowResultFromHBase(results);
+ }
+
+ public List<TRowResult> scannerGet(int id) throws IllegalArgument, IOError {
+ return scannerGetList(id, 1);
+ }
+
+ public int scannerOpen(byte[] tableName, byte[] startRow,
+ List<byte[]> columns) throws IOError {
+ try {
+ HTable table = getTable(tableName);
+ byte[][] columnsArray = null;
+ if ((columns == null) || (columns.size() == 0)) {
+ columnsArray = getAllColumns(table);
+ } else {
+ columnsArray = columns.toArray(new byte[0][]);
+ }
+ return addScanner(table.getScanner(columnsArray, startRow));
+ } catch (IOException e) {
+ throw new IOError(e.getMessage());
+ }
+ }
+
+ public int scannerOpenWithStop(byte[] tableName, byte[] startRow,
+ byte[] stopRow, List<byte[]> columns) throws IOError, TException {
+ try {
+ HTable table = getTable(tableName);
+ byte[][] columnsArray = null;
+ if ((columns == null) || (columns.size() == 0)) {
+ columnsArray = getAllColumns(table);
+ } else {
+ columnsArray = columns.toArray(new byte[0][]);
+ }
+ return addScanner(table.getScanner(columnsArray, startRow, stopRow));
+ } catch (IOException e) {
+ throw new IOError(e.getMessage());
+ }
+ }
+
+ public int scannerOpenTs(byte[] tableName, byte[] startRow,
+ List<byte[]> columns, long timestamp) throws IOError, TException {
+ try {
+ HTable table = getTable(tableName);
+ byte[][] columnsArray = null;
+ if ((columns == null) || (columns.size() == 0)) {
+ columnsArray = getAllColumns(table);
+ } else {
+ columnsArray = columns.toArray(new byte[0][]);
+ }
+ return addScanner(table.getScanner(columnsArray, startRow, timestamp));
+ } catch (IOException e) {
+ throw new IOError(e.getMessage());
+ }
+ }
+
+ public int scannerOpenWithStopTs(byte[] tableName, byte[] startRow,
+ byte[] stopRow, List<byte[]> columns, long timestamp)
+ throws IOError, TException {
+ try {
+ HTable table = getTable(tableName);
+ byte[][] columnsArray = null;
+ if ((columns == null) || (columns.size() == 0)) {
+ columnsArray = getAllColumns(table);
+ } else {
+ columnsArray = columns.toArray(new byte[0][]);
+ }
+ return addScanner(table.getScanner(columnsArray, startRow, stopRow,
+ timestamp));
+ } catch (IOException e) {
+ throw new IOError(e.getMessage());
+ }
+ }
+
+ public Map<byte[], ColumnDescriptor> getColumnDescriptors(
+ byte[] tableName) throws IOError, TException {
+ try {
+ TreeMap<byte[], ColumnDescriptor> columns =
+ new TreeMap<byte[], ColumnDescriptor>(Bytes.BYTES_COMPARATOR);
+
+ HTable table = getTable(tableName);
+ HTableDescriptor desc = table.getTableDescriptor();
+
+ for (HColumnDescriptor e : desc.getFamilies()) {
+ ColumnDescriptor col = ThriftUtilities.colDescFromHbase(e);
+ columns.put(col.name, col);
+ }
+ return columns;
+ } catch (IOException e) {
+ throw new IOError(e.getMessage());
+ }
+ }
+ }
+
+ //
+ // Main program and support routines
+ //
+
+ private static void printUsageAndExit() {
+ printUsageAndExit(null);
+ }
+
+ private static void printUsageAndExit(final String message) {
+ if (message != null) {
+ System.err.println(message);
+ }
+ System.out.println("Usage: java org.apache.hadoop.hbase.thrift.ThriftServer " +
+ "--help | [--port=PORT] start");
+ System.out.println("Arguments:");
+ System.out.println(" start Start thrift server");
+ System.out.println(" stop Stop thrift server");
+ System.out.println("Options:");
+ System.out.println(" port Port to listen on. Default: 9090");
+ // System.out.println(" bind Address to bind on. Default: 0.0.0.0.");
+ System.out.println(" help Print this message and exit");
+ System.exit(0);
+ }
+
+ /*
+ * Start up the Thrift server.
+ * @param args
+ */
+ protected static void doMain(final String [] args) throws Exception {
+ if (args.length < 1) {
+ printUsageAndExit();
+ }
+
+ int port = 9090;
+ // String bindAddress = "0.0.0.0";
+
+ // Process command-line args. TODO: Better cmd-line processing
+ // (but hopefully something not as painful as cli options).
+// final String addressArgKey = "--bind=";
+ final String portArgKey = "--port=";
+ for (String cmd: args) {
+// if (cmd.startsWith(addressArgKey)) {
+// bindAddress = cmd.substring(addressArgKey.length());
+// continue;
+// } else
+ if (cmd.startsWith(portArgKey)) {
+ port = Integer.parseInt(cmd.substring(portArgKey.length()));
+ continue;
+ } else if (cmd.equals("--help") || cmd.equals("-h")) {
+ printUsageAndExit();
+ } else if (cmd.equals("start")) {
+ continue;
+ } else if (cmd.equals("stop")) {
+ printUsageAndExit("To shutdown the thrift server run " +
+ "bin/hbase-daemon.sh stop thrift or send a kill signal to " +
+ "the thrift server pid");
+ }
+
+ // Print out usage if we get to here.
+ printUsageAndExit();
+ }
+ Log LOG = LogFactory.getLog("ThriftServer");
+ LOG.info("starting HBase Thrift server on port " +
+ Integer.toString(port));
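+ // Wire the handler into the Thrift stack: the generated Hbase.Processor
+ // dispatches incoming calls to HBaseHandler, served over a blocking
+ // TThreadPoolServer speaking the binary protocol on a TServerSocket.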
+ HBaseHandler handler = new HBaseHandler();
+ Hbase.Processor processor = new Hbase.Processor(handler);
+ TServerTransport serverTransport = new TServerSocket(port);
+ TProtocolFactory protFactory = new TBinaryProtocol.Factory(true, true);
+ TServer server = new TThreadPoolServer(processor, serverTransport,
+ protFactory);
+ server.serve();
+ }
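+
+ // Typical invocations (a sketch; the daemon start form mirrors the
+ // "bin/hbase-daemon.sh stop thrift" command referenced in printUsageAndExit
+ // and is assumed rather than shown elsewhere in this class):
+ //
+ //   bin/hbase-daemon.sh start thrift
+ //   java org.apache.hadoop.hbase.thrift.ThriftServer --port=9090 start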
+
+ /**
+ * @param args
+ * @throws Exception
+ */
+ public static void main(String [] args) throws Exception {
+ doMain(args);
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/thrift/ThriftUtilities.java b/src/java/org/apache/hadoop/hbase/thrift/ThriftUtilities.java
new file mode 100644
index 0000000..f06e867
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/thrift/ThriftUtilities.java
@@ -0,0 +1,154 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.thrift;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.io.hfile.Compression;
+import org.apache.hadoop.hbase.thrift.generated.ColumnDescriptor;
+import org.apache.hadoop.hbase.thrift.generated.IllegalArgument;
+import org.apache.hadoop.hbase.thrift.generated.TCell;
+import org.apache.hadoop.hbase.thrift.generated.TRowResult;
+import org.apache.hadoop.hbase.util.Bytes;
+
+public class ThriftUtilities {
+
+ /**
+ * This utility method creates a new Hbase HColumnDescriptor object based on a
+ * Thrift ColumnDescriptor "struct".
+ *
+ * @param in
+ * Thrift ColumnDescriptor object
+ * @return HColumnDescriptor
+ * @throws IllegalArgument
+ */
+ static public HColumnDescriptor colDescFromThrift(ColumnDescriptor in)
+ throws IllegalArgument {
+ Compression.Algorithm comp =
+ Compression.getCompressionAlgorithmByName(in.compression.toLowerCase());
+ boolean bloom = false;
+ if (in.bloomFilterType.compareTo("NONE") != 0) {
+ bloom = true;
+ }
+
+ if (in.name == null || in.name.length <= 0) {
+ throw new IllegalArgument("column name is empty");
+ }
+ HColumnDescriptor col = new HColumnDescriptor(in.name,
+ in.maxVersions, comp.getName(), in.inMemory, in.blockCacheEnabled,
+ in.maxValueLength, in.timeToLive, bloom);
+ return col;
+ }
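+
+ // Example (illustrative values): a Thrift ColumnDescriptor with name "info:",
+ // maxVersions 3, compression "NONE" and bloomFilterType "NONE" maps to an
+ // HColumnDescriptor keeping three versions of the family, with no compression
+ // and the bloom filter disabled.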
+
+ /**
+ * This utility method creates a new Thrift ColumnDescriptor "struct" based on
+ * an Hbase HColumnDescriptor object.
+ *
+ * @param in
+ * Hbase HColumnDescriptor object
+ * @return Thrift ColumnDescriptor
+ */
+ static public ColumnDescriptor colDescFromHbase(HColumnDescriptor in) {
+ ColumnDescriptor col = new ColumnDescriptor();
+ col.name = in.getName();
+ col.maxVersions = in.getMaxVersions();
+ col.compression = in.getCompression().toString();
+ col.inMemory = in.isInMemory();
+ col.blockCacheEnabled = in.isBlockCacheEnabled();
+ col.maxValueLength = in.getMaxValueLength();
+ col.bloomFilterType = Boolean.toString(in.isBloomfilter());
+ return col;
+ }
+
+ /**
+ * This utility method creates a list of Thrift TCell "structs" based on
+ * an Hbase Cell object. An empty list is returned if the input is null.
+ *
+ * @param in
+ * Hbase Cell object
+ * @return Thrift TCell array
+ */
+ static public List<TCell> cellFromHBase(Cell in) {
+ List<TCell> list = new ArrayList<TCell>(1);
+ if (in != null) {
+ list.add(new TCell(in.getValue(), in.getTimestamp()));
+ }
+ return list;
+ }
+
+ /**
+ * This utility method creates a list of Thrift TCell "structs" based on
+ * an Hbase Cell array. An empty list is returned if the input is null.
+ * @param in Hbase Cell array
+ * @return Thrift TCell array
+ */
+ static public List<TCell> cellFromHBase(Cell[] in) {
+ List<TCell> list = null;
+ if (in != null) {
+ list = new ArrayList<TCell>(in.length);
+ for (int i = 0; i < in.length; i++) {
+ list.add(new TCell(in[i].getValue(), in[i].getTimestamp()));
+ }
+ } else {
+ list = new ArrayList<TCell>(0);
+ }
+ return list;
+ }
+
+ /**
+ * This utility method creates a list of Thrift TRowResult "structs" based on
+ * an Hbase RowResult array. Null entries in the input array are skipped.
+ *
+ * @param in
+ * Hbase RowResult array
+ * @return Thrift TRowResult array
+ */
+ static public List<TRowResult> rowResultFromHBase(RowResult[] in) {
+ List<TRowResult> results = new ArrayList<TRowResult>();
+ for ( RowResult result_ : in) {
+ if(null == result_) {
+ continue;
+ }
+ TRowResult result = new TRowResult();
+ result.row = result_.getRow();
+ result.columns = new TreeMap<byte[], TCell>(Bytes.BYTES_COMPARATOR);
+ for (Map.Entry<byte[], Cell> entry : result_.entrySet()){
+ Cell cell = entry.getValue();
+ result.columns.put(entry.getKey(),
+ new TCell(cell.getValue(), cell.getTimestamp()));
+
+ }
+ results.add(result);
+ }
+ return results;
+ }
+ static public List<TRowResult> rowResultFromHBase(RowResult in) {
+ RowResult [] result = { in };
+ return rowResultFromHBase(result);
+ }
+
+}
+
diff --git a/src/java/org/apache/hadoop/hbase/thrift/generated/AlreadyExists.java b/src/java/org/apache/hadoop/hbase/thrift/generated/AlreadyExists.java
new file mode 100644
index 0000000..1b2f644
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/thrift/generated/AlreadyExists.java
@@ -0,0 +1,239 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+/**
+ * Autogenerated by Thrift
+ *
+ * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+ */
+package org.apache.hadoop.hbase.thrift.generated;
+
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.Collections;
+
+import org.apache.thrift.*;
+import org.apache.thrift.meta_data.*;
+import org.apache.thrift.protocol.*;
+
+/**
+ * An AlreadyExists exception signals that a table with the specified
+ * name already exists
+ */
+public class AlreadyExists extends Exception implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("AlreadyExists");
+ private static final TField MESSAGE_FIELD_DESC = new TField("message", TType.STRING, (short)1);
+
+ public String message;
+ public static final int MESSAGE = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(MESSAGE, new FieldMetaData("message", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(AlreadyExists.class, metaDataMap);
+ }
+
+ public AlreadyExists() {
+ }
+
+ public AlreadyExists(
+ String message)
+ {
+ this();
+ this.message = message;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public AlreadyExists(AlreadyExists other) {
+ if (other.isSetMessage()) {
+ this.message = other.message;
+ }
+ }
+
+ @Override
+ public AlreadyExists clone() {
+ return new AlreadyExists(this);
+ }
+
+ public String getMessage() {
+ return this.message;
+ }
+
+ public void setMessage(String message) {
+ this.message = message;
+ }
+
+ public void unsetMessage() {
+ this.message = null;
+ }
+
+ // Returns true if field message is set (has been assigned a value) and false otherwise
+ public boolean isSetMessage() {
+ return this.message != null;
+ }
+
+ public void setMessageIsSet(boolean value) {
+ if (!value) {
+ this.message = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case MESSAGE:
+ if (value == null) {
+ unsetMessage();
+ } else {
+ setMessage((String)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case MESSAGE:
+ return getMessage();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case MESSAGE:
+ return isSetMessage();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof AlreadyExists)
+ return this.equals((AlreadyExists)that);
+ return false;
+ }
+
+ public boolean equals(AlreadyExists that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_message = true && this.isSetMessage();
+ boolean that_present_message = true && that.isSetMessage();
+ if (this_present_message || that_present_message) {
+ if (!(this_present_message && that_present_message))
+ return false;
+ if (!this.message.equals(that.message))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case MESSAGE:
+ if (field.type == TType.STRING) {
+ this.message = iprot.readString();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.message != null) {
+ oprot.writeFieldBegin(MESSAGE_FIELD_DESC);
+ oprot.writeString(this.message);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("AlreadyExists(");
+ boolean first = true;
+
+ sb.append("message:");
+ if (this.message == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.message);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+}
+
diff --git a/src/java/org/apache/hadoop/hbase/thrift/generated/BatchMutation.java b/src/java/org/apache/hadoop/hbase/thrift/generated/BatchMutation.java
new file mode 100644
index 0000000..289741c
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/thrift/generated/BatchMutation.java
@@ -0,0 +1,350 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+/**
+ * Autogenerated by Thrift
+ *
+ * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+ */
+package org.apache.hadoop.hbase.thrift.generated;
+
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.Collections;
+
+import org.apache.thrift.*;
+import org.apache.thrift.meta_data.*;
+import org.apache.thrift.protocol.*;
+
+/**
+ * A BatchMutation object is used to apply a number of Mutations to a single row.
+ */
+public class BatchMutation implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("BatchMutation");
+ private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)1);
+ private static final TField MUTATIONS_FIELD_DESC = new TField("mutations", TType.LIST, (short)2);
+
+ public byte[] row;
+ public static final int ROW = 1;
+ public List<Mutation> mutations;
+ public static final int MUTATIONS = 2;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(MUTATIONS, new FieldMetaData("mutations", TFieldRequirementType.DEFAULT,
+ new ListMetaData(TType.LIST,
+ new StructMetaData(TType.STRUCT, Mutation.class))));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(BatchMutation.class, metaDataMap);
+ }
+
+ public BatchMutation() {
+ }
+
+ public BatchMutation(
+ byte[] row,
+ List<Mutation> mutations)
+ {
+ this();
+ this.row = row;
+ this.mutations = mutations;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public BatchMutation(BatchMutation other) {
+ if (other.isSetRow()) {
+ this.row = other.row;
+ }
+ if (other.isSetMutations()) {
+ List<Mutation> __this__mutations = new ArrayList<Mutation>();
+ for (Mutation other_element : other.mutations) {
+ __this__mutations.add(new Mutation(other_element));
+ }
+ this.mutations = __this__mutations;
+ }
+ }
+
+ @Override
+ public BatchMutation clone() {
+ return new BatchMutation(this);
+ }
+
+ public byte[] getRow() {
+ return this.row;
+ }
+
+ public void setRow(byte[] row) {
+ this.row = row;
+ }
+
+ public void unsetRow() {
+ this.row = null;
+ }
+
+ // Returns true if field row is set (has been assigned a value) and false otherwise
+ public boolean isSetRow() {
+ return this.row != null;
+ }
+
+ public void setRowIsSet(boolean value) {
+ if (!value) {
+ this.row = null;
+ }
+ }
+
+ public int getMutationsSize() {
+ return (this.mutations == null) ? 0 : this.mutations.size();
+ }
+
+ public java.util.Iterator<Mutation> getMutationsIterator() {
+ return (this.mutations == null) ? null : this.mutations.iterator();
+ }
+
+ public void addToMutations(Mutation elem) {
+ if (this.mutations == null) {
+ this.mutations = new ArrayList<Mutation>();
+ }
+ this.mutations.add(elem);
+ }
+
+ public List<Mutation> getMutations() {
+ return this.mutations;
+ }
+
+ public void setMutations(List<Mutation> mutations) {
+ this.mutations = mutations;
+ }
+
+ public void unsetMutations() {
+ this.mutations = null;
+ }
+
+ // Returns true if field mutations is set (has been assigned a value) and false otherwise
+ public boolean isSetMutations() {
+ return this.mutations != null;
+ }
+
+ public void setMutationsIsSet(boolean value) {
+ if (!value) {
+ this.mutations = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case ROW:
+ if (value == null) {
+ unsetRow();
+ } else {
+ setRow((byte[])value);
+ }
+ break;
+
+ case MUTATIONS:
+ if (value == null) {
+ unsetMutations();
+ } else {
+ setMutations((List<Mutation>)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case ROW:
+ return getRow();
+
+ case MUTATIONS:
+ return getMutations();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case ROW:
+ return isSetRow();
+ case MUTATIONS:
+ return isSetMutations();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof BatchMutation)
+ return this.equals((BatchMutation)that);
+ return false;
+ }
+
+ public boolean equals(BatchMutation that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_row = true && this.isSetRow();
+ boolean that_present_row = true && that.isSetRow();
+ if (this_present_row || that_present_row) {
+ if (!(this_present_row && that_present_row))
+ return false;
+ if (!java.util.Arrays.equals(this.row, that.row))
+ return false;
+ }
+
+ boolean this_present_mutations = true && this.isSetMutations();
+ boolean that_present_mutations = true && that.isSetMutations();
+ if (this_present_mutations || that_present_mutations) {
+ if (!(this_present_mutations && that_present_mutations))
+ return false;
+ if (!this.mutations.equals(that.mutations))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case ROW:
+ if (field.type == TType.STRING) {
+ this.row = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case MUTATIONS:
+ if (field.type == TType.LIST) {
+ {
+ TList _list0 = iprot.readListBegin();
+ this.mutations = new ArrayList<Mutation>(_list0.size);
+ for (int _i1 = 0; _i1 < _list0.size; ++_i1)
+ {
+ Mutation _elem2;
+ _elem2 = new Mutation();
+ _elem2.read(iprot);
+ this.mutations.add(_elem2);
+ }
+ iprot.readListEnd();
+ }
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.row != null) {
+ oprot.writeFieldBegin(ROW_FIELD_DESC);
+ oprot.writeBinary(this.row);
+ oprot.writeFieldEnd();
+ }
+ if (this.mutations != null) {
+ oprot.writeFieldBegin(MUTATIONS_FIELD_DESC);
+ {
+ oprot.writeListBegin(new TList(TType.STRUCT, this.mutations.size()));
+ for (Mutation _iter3 : this.mutations) {
+ _iter3.write(oprot);
+ }
+ oprot.writeListEnd();
+ }
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("BatchMutation(");
+ boolean first = true;
+
+ sb.append("row:");
+ if (this.row == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.row);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("mutations:");
+ if (this.mutations == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.mutations);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+}
+
diff --git a/src/java/org/apache/hadoop/hbase/thrift/generated/ColumnDescriptor.java b/src/java/org/apache/hadoop/hbase/thrift/generated/ColumnDescriptor.java
new file mode 100644
index 0000000..fc0ba7b
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/thrift/generated/ColumnDescriptor.java
@@ -0,0 +1,898 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+/**
+ * Autogenerated by Thrift
+ *
+ * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+ */
+package org.apache.hadoop.hbase.thrift.generated;
+
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.Collections;
+
+import org.apache.thrift.*;
+import org.apache.thrift.meta_data.*;
+import org.apache.thrift.protocol.*;
+
+/**
+ * An HColumnDescriptor contains information about a column family
+ * such as the number of versions, compression settings, etc. It is
+ * used as input when creating a table or adding a column.
+ */
+public class ColumnDescriptor implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("ColumnDescriptor");
+ private static final TField NAME_FIELD_DESC = new TField("name", TType.STRING, (short)1);
+ private static final TField MAX_VERSIONS_FIELD_DESC = new TField("maxVersions", TType.I32, (short)2);
+ private static final TField COMPRESSION_FIELD_DESC = new TField("compression", TType.STRING, (short)3);
+ private static final TField IN_MEMORY_FIELD_DESC = new TField("inMemory", TType.BOOL, (short)4);
+ private static final TField MAX_VALUE_LENGTH_FIELD_DESC = new TField("maxValueLength", TType.I32, (short)5);
+ private static final TField BLOOM_FILTER_TYPE_FIELD_DESC = new TField("bloomFilterType", TType.STRING, (short)6);
+ private static final TField BLOOM_FILTER_VECTOR_SIZE_FIELD_DESC = new TField("bloomFilterVectorSize", TType.I32, (short)7);
+ private static final TField BLOOM_FILTER_NB_HASHES_FIELD_DESC = new TField("bloomFilterNbHashes", TType.I32, (short)8);
+ private static final TField BLOCK_CACHE_ENABLED_FIELD_DESC = new TField("blockCacheEnabled", TType.BOOL, (short)9);
+ private static final TField TIME_TO_LIVE_FIELD_DESC = new TField("timeToLive", TType.I32, (short)10);
+
+ public byte[] name;
+ public static final int NAME = 1;
+ public int maxVersions;
+ public static final int MAXVERSIONS = 2;
+ public String compression;
+ public static final int COMPRESSION = 3;
+ public boolean inMemory;
+ public static final int INMEMORY = 4;
+ public int maxValueLength;
+ public static final int MAXVALUELENGTH = 5;
+ public String bloomFilterType;
+ public static final int BLOOMFILTERTYPE = 6;
+ public int bloomFilterVectorSize;
+ public static final int BLOOMFILTERVECTORSIZE = 7;
+ public int bloomFilterNbHashes;
+ public static final int BLOOMFILTERNBHASHES = 8;
+ public boolean blockCacheEnabled;
+ public static final int BLOCKCACHEENABLED = 9;
+ public int timeToLive;
+ public static final int TIMETOLIVE = 10;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ public boolean maxVersions = false;
+ public boolean inMemory = false;
+ public boolean maxValueLength = false;
+ public boolean bloomFilterVectorSize = false;
+ public boolean bloomFilterNbHashes = false;
+ public boolean blockCacheEnabled = false;
+ public boolean timeToLive = false;
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(NAME, new FieldMetaData("name", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(MAXVERSIONS, new FieldMetaData("maxVersions", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.I32)));
+ put(COMPRESSION, new FieldMetaData("compression", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(INMEMORY, new FieldMetaData("inMemory", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.BOOL)));
+ put(MAXVALUELENGTH, new FieldMetaData("maxValueLength", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.I32)));
+ put(BLOOMFILTERTYPE, new FieldMetaData("bloomFilterType", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(BLOOMFILTERVECTORSIZE, new FieldMetaData("bloomFilterVectorSize", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.I32)));
+ put(BLOOMFILTERNBHASHES, new FieldMetaData("bloomFilterNbHashes", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.I32)));
+ put(BLOCKCACHEENABLED, new FieldMetaData("blockCacheEnabled", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.BOOL)));
+ put(TIMETOLIVE, new FieldMetaData("timeToLive", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.I32)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(ColumnDescriptor.class, metaDataMap);
+ }
+
+ public ColumnDescriptor() {
+ this.maxVersions = 3;
+
+ this.compression = "NONE";
+
+ this.inMemory = false;
+
+ this.maxValueLength = 2147483647;
+
+ this.bloomFilterType = "NONE";
+
+ this.bloomFilterVectorSize = 0;
+
+ this.bloomFilterNbHashes = 0;
+
+ this.blockCacheEnabled = false;
+
+ this.timeToLive = -1;
+
+ }
+
+ public ColumnDescriptor(
+ byte[] name,
+ int maxVersions,
+ String compression,
+ boolean inMemory,
+ int maxValueLength,
+ String bloomFilterType,
+ int bloomFilterVectorSize,
+ int bloomFilterNbHashes,
+ boolean blockCacheEnabled,
+ int timeToLive)
+ {
+ this();
+ this.name = name;
+ this.maxVersions = maxVersions;
+ this.__isset.maxVersions = true;
+ this.compression = compression;
+ this.inMemory = inMemory;
+ this.__isset.inMemory = true;
+ this.maxValueLength = maxValueLength;
+ this.__isset.maxValueLength = true;
+ this.bloomFilterType = bloomFilterType;
+ this.bloomFilterVectorSize = bloomFilterVectorSize;
+ this.__isset.bloomFilterVectorSize = true;
+ this.bloomFilterNbHashes = bloomFilterNbHashes;
+ this.__isset.bloomFilterNbHashes = true;
+ this.blockCacheEnabled = blockCacheEnabled;
+ this.__isset.blockCacheEnabled = true;
+ this.timeToLive = timeToLive;
+ this.__isset.timeToLive = true;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public ColumnDescriptor(ColumnDescriptor other) {
+ if (other.isSetName()) {
+ this.name = other.name;
+ }
+ __isset.maxVersions = other.__isset.maxVersions;
+ this.maxVersions = other.maxVersions;
+ if (other.isSetCompression()) {
+ this.compression = other.compression;
+ }
+ __isset.inMemory = other.__isset.inMemory;
+ this.inMemory = other.inMemory;
+ __isset.maxValueLength = other.__isset.maxValueLength;
+ this.maxValueLength = other.maxValueLength;
+ if (other.isSetBloomFilterType()) {
+ this.bloomFilterType = other.bloomFilterType;
+ }
+ __isset.bloomFilterVectorSize = other.__isset.bloomFilterVectorSize;
+ this.bloomFilterVectorSize = other.bloomFilterVectorSize;
+ __isset.bloomFilterNbHashes = other.__isset.bloomFilterNbHashes;
+ this.bloomFilterNbHashes = other.bloomFilterNbHashes;
+ __isset.blockCacheEnabled = other.__isset.blockCacheEnabled;
+ this.blockCacheEnabled = other.blockCacheEnabled;
+ __isset.timeToLive = other.__isset.timeToLive;
+ this.timeToLive = other.timeToLive;
+ }
+
+ @Override
+ public ColumnDescriptor clone() {
+ return new ColumnDescriptor(this);
+ }
+
+ public byte[] getName() {
+ return this.name;
+ }
+
+ public void setName(byte[] name) {
+ this.name = name;
+ }
+
+ public void unsetName() {
+ this.name = null;
+ }
+
+ // Returns true if field name is set (has been assigned a value) and false otherwise
+ public boolean isSetName() {
+ return this.name != null;
+ }
+
+ public void setNameIsSet(boolean value) {
+ if (!value) {
+ this.name = null;
+ }
+ }
+
+ public int getMaxVersions() {
+ return this.maxVersions;
+ }
+
+ public void setMaxVersions(int maxVersions) {
+ this.maxVersions = maxVersions;
+ this.__isset.maxVersions = true;
+ }
+
+ public void unsetMaxVersions() {
+ this.__isset.maxVersions = false;
+ }
+
+ // Returns true if field maxVersions is set (has been assigned a value) and false otherwise
+ public boolean isSetMaxVersions() {
+ return this.__isset.maxVersions;
+ }
+
+ public void setMaxVersionsIsSet(boolean value) {
+ this.__isset.maxVersions = value;
+ }
+
+ public String getCompression() {
+ return this.compression;
+ }
+
+ public void setCompression(String compression) {
+ this.compression = compression;
+ }
+
+ public void unsetCompression() {
+ this.compression = null;
+ }
+
+ // Returns true if field compression is set (has been assigned a value) and false otherwise
+ public boolean isSetCompression() {
+ return this.compression != null;
+ }
+
+ public void setCompressionIsSet(boolean value) {
+ if (!value) {
+ this.compression = null;
+ }
+ }
+
+ public boolean isInMemory() {
+ return this.inMemory;
+ }
+
+ public void setInMemory(boolean inMemory) {
+ this.inMemory = inMemory;
+ this.__isset.inMemory = true;
+ }
+
+ public void unsetInMemory() {
+ this.__isset.inMemory = false;
+ }
+
+ // Returns true if field inMemory is set (has been assigned a value) and false otherwise
+ public boolean isSetInMemory() {
+ return this.__isset.inMemory;
+ }
+
+ public void setInMemoryIsSet(boolean value) {
+ this.__isset.inMemory = value;
+ }
+
+ public int getMaxValueLength() {
+ return this.maxValueLength;
+ }
+
+ public void setMaxValueLength(int maxValueLength) {
+ this.maxValueLength = maxValueLength;
+ this.__isset.maxValueLength = true;
+ }
+
+ public void unsetMaxValueLength() {
+ this.__isset.maxValueLength = false;
+ }
+
+ // Returns true if field maxValueLength is set (has been assigned a value) and false otherwise
+ public boolean isSetMaxValueLength() {
+ return this.__isset.maxValueLength;
+ }
+
+ public void setMaxValueLengthIsSet(boolean value) {
+ this.__isset.maxValueLength = value;
+ }
+
+ public String getBloomFilterType() {
+ return this.bloomFilterType;
+ }
+
+ public void setBloomFilterType(String bloomFilterType) {
+ this.bloomFilterType = bloomFilterType;
+ }
+
+ public void unsetBloomFilterType() {
+ this.bloomFilterType = null;
+ }
+
+ // Returns true if field bloomFilterType is set (has been assigned a value) and false otherwise
+ public boolean isSetBloomFilterType() {
+ return this.bloomFilterType != null;
+ }
+
+ public void setBloomFilterTypeIsSet(boolean value) {
+ if (!value) {
+ this.bloomFilterType = null;
+ }
+ }
+
+ public int getBloomFilterVectorSize() {
+ return this.bloomFilterVectorSize;
+ }
+
+ public void setBloomFilterVectorSize(int bloomFilterVectorSize) {
+ this.bloomFilterVectorSize = bloomFilterVectorSize;
+ this.__isset.bloomFilterVectorSize = true;
+ }
+
+ public void unsetBloomFilterVectorSize() {
+ this.__isset.bloomFilterVectorSize = false;
+ }
+
+ // Returns true if field bloomFilterVectorSize is set (has been assigned a value) and false otherwise
+ public boolean isSetBloomFilterVectorSize() {
+ return this.__isset.bloomFilterVectorSize;
+ }
+
+ public void setBloomFilterVectorSizeIsSet(boolean value) {
+ this.__isset.bloomFilterVectorSize = value;
+ }
+
+ public int getBloomFilterNbHashes() {
+ return this.bloomFilterNbHashes;
+ }
+
+ public void setBloomFilterNbHashes(int bloomFilterNbHashes) {
+ this.bloomFilterNbHashes = bloomFilterNbHashes;
+ this.__isset.bloomFilterNbHashes = true;
+ }
+
+ public void unsetBloomFilterNbHashes() {
+ this.__isset.bloomFilterNbHashes = false;
+ }
+
+ // Returns true if field bloomFilterNbHashes is set (has been assigned a value) and false otherwise
+ public boolean isSetBloomFilterNbHashes() {
+ return this.__isset.bloomFilterNbHashes;
+ }
+
+ public void setBloomFilterNbHashesIsSet(boolean value) {
+ this.__isset.bloomFilterNbHashes = value;
+ }
+
+ public boolean isBlockCacheEnabled() {
+ return this.blockCacheEnabled;
+ }
+
+ public void setBlockCacheEnabled(boolean blockCacheEnabled) {
+ this.blockCacheEnabled = blockCacheEnabled;
+ this.__isset.blockCacheEnabled = true;
+ }
+
+ public void unsetBlockCacheEnabled() {
+ this.__isset.blockCacheEnabled = false;
+ }
+
+ // Returns true if field blockCacheEnabled is set (has been assigned a value) and false otherwise
+ public boolean isSetBlockCacheEnabled() {
+ return this.__isset.blockCacheEnabled;
+ }
+
+ public void setBlockCacheEnabledIsSet(boolean value) {
+ this.__isset.blockCacheEnabled = value;
+ }
+
+ public int getTimeToLive() {
+ return this.timeToLive;
+ }
+
+ public void setTimeToLive(int timeToLive) {
+ this.timeToLive = timeToLive;
+ this.__isset.timeToLive = true;
+ }
+
+ public void unsetTimeToLive() {
+ this.__isset.timeToLive = false;
+ }
+
+ // Returns true if field timeToLive is set (has been assigned a value) and false otherwise
+ public boolean isSetTimeToLive() {
+ return this.__isset.timeToLive;
+ }
+
+ public void setTimeToLiveIsSet(boolean value) {
+ this.__isset.timeToLive = value;
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case NAME:
+ if (value == null) {
+ unsetName();
+ } else {
+ setName((byte[])value);
+ }
+ break;
+
+ case MAXVERSIONS:
+ if (value == null) {
+ unsetMaxVersions();
+ } else {
+ setMaxVersions((Integer)value);
+ }
+ break;
+
+ case COMPRESSION:
+ if (value == null) {
+ unsetCompression();
+ } else {
+ setCompression((String)value);
+ }
+ break;
+
+ case INMEMORY:
+ if (value == null) {
+ unsetInMemory();
+ } else {
+ setInMemory((Boolean)value);
+ }
+ break;
+
+ case MAXVALUELENGTH:
+ if (value == null) {
+ unsetMaxValueLength();
+ } else {
+ setMaxValueLength((Integer)value);
+ }
+ break;
+
+ case BLOOMFILTERTYPE:
+ if (value == null) {
+ unsetBloomFilterType();
+ } else {
+ setBloomFilterType((String)value);
+ }
+ break;
+
+ case BLOOMFILTERVECTORSIZE:
+ if (value == null) {
+ unsetBloomFilterVectorSize();
+ } else {
+ setBloomFilterVectorSize((Integer)value);
+ }
+ break;
+
+ case BLOOMFILTERNBHASHES:
+ if (value == null) {
+ unsetBloomFilterNbHashes();
+ } else {
+ setBloomFilterNbHashes((Integer)value);
+ }
+ break;
+
+ case BLOCKCACHEENABLED:
+ if (value == null) {
+ unsetBlockCacheEnabled();
+ } else {
+ setBlockCacheEnabled((Boolean)value);
+ }
+ break;
+
+ case TIMETOLIVE:
+ if (value == null) {
+ unsetTimeToLive();
+ } else {
+ setTimeToLive((Integer)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case NAME:
+ return getName();
+
+ case MAXVERSIONS:
+ return new Integer(getMaxVersions());
+
+ case COMPRESSION:
+ return getCompression();
+
+ case INMEMORY:
+ return new Boolean(isInMemory());
+
+ case MAXVALUELENGTH:
+ return new Integer(getMaxValueLength());
+
+ case BLOOMFILTERTYPE:
+ return getBloomFilterType();
+
+ case BLOOMFILTERVECTORSIZE:
+ return new Integer(getBloomFilterVectorSize());
+
+ case BLOOMFILTERNBHASHES:
+ return new Integer(getBloomFilterNbHashes());
+
+ case BLOCKCACHEENABLED:
+ return new Boolean(isBlockCacheEnabled());
+
+ case TIMETOLIVE:
+ return new Integer(getTimeToLive());
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case NAME:
+ return isSetName();
+ case MAXVERSIONS:
+ return isSetMaxVersions();
+ case COMPRESSION:
+ return isSetCompression();
+ case INMEMORY:
+ return isSetInMemory();
+ case MAXVALUELENGTH:
+ return isSetMaxValueLength();
+ case BLOOMFILTERTYPE:
+ return isSetBloomFilterType();
+ case BLOOMFILTERVECTORSIZE:
+ return isSetBloomFilterVectorSize();
+ case BLOOMFILTERNBHASHES:
+ return isSetBloomFilterNbHashes();
+ case BLOCKCACHEENABLED:
+ return isSetBlockCacheEnabled();
+ case TIMETOLIVE:
+ return isSetTimeToLive();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof ColumnDescriptor)
+ return this.equals((ColumnDescriptor)that);
+ return false;
+ }
+
+ public boolean equals(ColumnDescriptor that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_name = true && this.isSetName();
+ boolean that_present_name = true && that.isSetName();
+ if (this_present_name || that_present_name) {
+ if (!(this_present_name && that_present_name))
+ return false;
+ if (!java.util.Arrays.equals(this.name, that.name))
+ return false;
+ }
+
+ boolean this_present_maxVersions = true;
+ boolean that_present_maxVersions = true;
+ if (this_present_maxVersions || that_present_maxVersions) {
+ if (!(this_present_maxVersions && that_present_maxVersions))
+ return false;
+ if (this.maxVersions != that.maxVersions)
+ return false;
+ }
+
+ boolean this_present_compression = true && this.isSetCompression();
+ boolean that_present_compression = true && that.isSetCompression();
+ if (this_present_compression || that_present_compression) {
+ if (!(this_present_compression && that_present_compression))
+ return false;
+ if (!this.compression.equals(that.compression))
+ return false;
+ }
+
+ boolean this_present_inMemory = true;
+ boolean that_present_inMemory = true;
+ if (this_present_inMemory || that_present_inMemory) {
+ if (!(this_present_inMemory && that_present_inMemory))
+ return false;
+ if (this.inMemory != that.inMemory)
+ return false;
+ }
+
+ boolean this_present_maxValueLength = true;
+ boolean that_present_maxValueLength = true;
+ if (this_present_maxValueLength || that_present_maxValueLength) {
+ if (!(this_present_maxValueLength && that_present_maxValueLength))
+ return false;
+ if (this.maxValueLength != that.maxValueLength)
+ return false;
+ }
+
+ boolean this_present_bloomFilterType = true && this.isSetBloomFilterType();
+ boolean that_present_bloomFilterType = true && that.isSetBloomFilterType();
+ if (this_present_bloomFilterType || that_present_bloomFilterType) {
+ if (!(this_present_bloomFilterType && that_present_bloomFilterType))
+ return false;
+ if (!this.bloomFilterType.equals(that.bloomFilterType))
+ return false;
+ }
+
+ boolean this_present_bloomFilterVectorSize = true;
+ boolean that_present_bloomFilterVectorSize = true;
+ if (this_present_bloomFilterVectorSize || that_present_bloomFilterVectorSize) {
+ if (!(this_present_bloomFilterVectorSize && that_present_bloomFilterVectorSize))
+ return false;
+ if (this.bloomFilterVectorSize != that.bloomFilterVectorSize)
+ return false;
+ }
+
+ boolean this_present_bloomFilterNbHashes = true;
+ boolean that_present_bloomFilterNbHashes = true;
+ if (this_present_bloomFilterNbHashes || that_present_bloomFilterNbHashes) {
+ if (!(this_present_bloomFilterNbHashes && that_present_bloomFilterNbHashes))
+ return false;
+ if (this.bloomFilterNbHashes != that.bloomFilterNbHashes)
+ return false;
+ }
+
+ boolean this_present_blockCacheEnabled = true;
+ boolean that_present_blockCacheEnabled = true;
+ if (this_present_blockCacheEnabled || that_present_blockCacheEnabled) {
+ if (!(this_present_blockCacheEnabled && that_present_blockCacheEnabled))
+ return false;
+ if (this.blockCacheEnabled != that.blockCacheEnabled)
+ return false;
+ }
+
+ boolean this_present_timeToLive = true;
+ boolean that_present_timeToLive = true;
+ if (this_present_timeToLive || that_present_timeToLive) {
+ if (!(this_present_timeToLive && that_present_timeToLive))
+ return false;
+ if (this.timeToLive != that.timeToLive)
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
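+
+ // Note (editorial): the generated hashCode() above returns a constant 0. That is
+ // legal under the equals/hashCode contract, but it means every ColumnDescriptor
+ // hashes to the same bucket, so HashMap/HashSet lookups on this type degrade to
+ // linear scans.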
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case NAME:
+ if (field.type == TType.STRING) {
+ this.name = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case MAXVERSIONS:
+ if (field.type == TType.I32) {
+ this.maxVersions = iprot.readI32();
+ this.__isset.maxVersions = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case COMPRESSION:
+ if (field.type == TType.STRING) {
+ this.compression = iprot.readString();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case INMEMORY:
+ if (field.type == TType.BOOL) {
+ this.inMemory = iprot.readBool();
+ this.__isset.inMemory = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case MAXVALUELENGTH:
+ if (field.type == TType.I32) {
+ this.maxValueLength = iprot.readI32();
+ this.__isset.maxValueLength = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case BLOOMFILTERTYPE:
+ if (field.type == TType.STRING) {
+ this.bloomFilterType = iprot.readString();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case BLOOMFILTERVECTORSIZE:
+ if (field.type == TType.I32) {
+ this.bloomFilterVectorSize = iprot.readI32();
+ this.__isset.bloomFilterVectorSize = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case BLOOMFILTERNBHASHES:
+ if (field.type == TType.I32) {
+ this.bloomFilterNbHashes = iprot.readI32();
+ this.__isset.bloomFilterNbHashes = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case BLOCKCACHEENABLED:
+ if (field.type == TType.BOOL) {
+ this.blockCacheEnabled = iprot.readBool();
+ this.__isset.blockCacheEnabled = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case TIMETOLIVE:
+ if (field.type == TType.I32) {
+ this.timeToLive = iprot.readI32();
+ this.__isset.timeToLive = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.name != null) {
+ oprot.writeFieldBegin(NAME_FIELD_DESC);
+ oprot.writeBinary(this.name);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldBegin(MAX_VERSIONS_FIELD_DESC);
+ oprot.writeI32(this.maxVersions);
+ oprot.writeFieldEnd();
+ if (this.compression != null) {
+ oprot.writeFieldBegin(COMPRESSION_FIELD_DESC);
+ oprot.writeString(this.compression);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldBegin(IN_MEMORY_FIELD_DESC);
+ oprot.writeBool(this.inMemory);
+ oprot.writeFieldEnd();
+ oprot.writeFieldBegin(MAX_VALUE_LENGTH_FIELD_DESC);
+ oprot.writeI32(this.maxValueLength);
+ oprot.writeFieldEnd();
+ if (this.bloomFilterType != null) {
+ oprot.writeFieldBegin(BLOOM_FILTER_TYPE_FIELD_DESC);
+ oprot.writeString(this.bloomFilterType);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldBegin(BLOOM_FILTER_VECTOR_SIZE_FIELD_DESC);
+ oprot.writeI32(this.bloomFilterVectorSize);
+ oprot.writeFieldEnd();
+ oprot.writeFieldBegin(BLOOM_FILTER_NB_HASHES_FIELD_DESC);
+ oprot.writeI32(this.bloomFilterNbHashes);
+ oprot.writeFieldEnd();
+ oprot.writeFieldBegin(BLOCK_CACHE_ENABLED_FIELD_DESC);
+ oprot.writeBool(this.blockCacheEnabled);
+ oprot.writeFieldEnd();
+ oprot.writeFieldBegin(TIME_TO_LIVE_FIELD_DESC);
+ oprot.writeI32(this.timeToLive);
+ oprot.writeFieldEnd();
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("ColumnDescriptor(");
+ boolean first = true;
+
+ sb.append("name:");
+ if (this.name == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.name);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("maxVersions:");
+ sb.append(this.maxVersions);
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("compression:");
+ if (this.compression == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.compression);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("inMemory:");
+ sb.append(this.inMemory);
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("maxValueLength:");
+ sb.append(this.maxValueLength);
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("bloomFilterType:");
+ if (this.bloomFilterType == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.bloomFilterType);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("bloomFilterVectorSize:");
+ sb.append(this.bloomFilterVectorSize);
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("bloomFilterNbHashes:");
+ sb.append(this.bloomFilterNbHashes);
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("blockCacheEnabled:");
+ sb.append(this.blockCacheEnabled);
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("timeToLive:");
+ sb.append(this.timeToLive);
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+}
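+
+// Usage sketch (editorial, not generated code): one way a Thrift client might build a
+// ColumnDescriptor using only the setters defined above. The family name "info:" and
+// the chosen values are hypothetical examples; createTable() requires the name to end
+// in a colon, and any field left unset keeps the default from the no-arg constructor.
+//
+//   ColumnDescriptor family = new ColumnDescriptor();
+//   family.setName("info:".getBytes());     // column family name, must end in ':'
+//   family.setMaxVersions(3);               // keep up to three versions per cell
+//   family.setCompression("NONE");          // same value the default constructor uses
+//   family.setBlockCacheEnabled(true);      // cache this family's blocks on read
+//   family.setTimeToLive(-1);               // -1 mirrors the default (no TTL)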
+
diff --git a/src/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java b/src/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java
new file mode 100644
index 0000000..8a7783e
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java
@@ -0,0 +1,22136 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+/**
+ * Autogenerated by Thrift
+ *
+ * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+ */
+package org.apache.hadoop.hbase.thrift.generated;
+
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.Collections;
+
+import org.apache.thrift.*;
+import org.apache.thrift.meta_data.*;
+import org.apache.thrift.protocol.*;
+
+public class Hbase {
+
+ public interface Iface {
+
+ /**
+ * Brings a table on-line (enables it)
+ * @param tableName name of the table
+ *
+ * @param tableName
+ */
+ public void enableTable(byte[] tableName) throws IOError, TException;
+
+ /**
+ * Disables a table (takes it off-line). If it is being served, the master
+ * will tell the servers to stop serving it.
+ * @param tableName name of the table
+ *
+ * @param tableName
+ */
+ public void disableTable(byte[] tableName) throws IOError, TException;
+
+ /**
+ * @param tableName name of table to check
+ * @return true if table is on-line
+ *
+ * @param tableName
+ */
+ public boolean isTableEnabled(byte[] tableName) throws IOError, TException;
+
+ public void compact(byte[] tableNameOrRegionName) throws IOError, TException;
+
+ public void majorCompact(byte[] tableNameOrRegionName) throws IOError, TException;
+
+ /**
+ * List all the userspace tables.
+ * @return a list of names
+ */
+ public List<byte[]> getTableNames() throws IOError, TException;
+
+ /**
+ * List all the column families associated with a table.
+ * @param tableName table name
+ * @return list of column family descriptors
+ *
+ * @param tableName
+ */
+ public Map<byte[],ColumnDescriptor> getColumnDescriptors(byte[] tableName) throws IOError, TException;
+
+ /**
+ * List the regions associated with a table.
+ * @param tableName table name
+ * @return list of region descriptors
+ *
+ * @param tableName
+ */
+ public List<TRegionInfo> getTableRegions(byte[] tableName) throws IOError, TException;
+
+ /**
+ * Create a table with the specified column families. The name
+ * field for each ColumnDescriptor must be set and must end in a
+ * colon (:). All other fields are optional and will get default
+ * values if not explicitly specified.
+ *
+ * @param tableName name of table to create
+ * @param columnFamilies list of column family descriptors
+ *
+ * @throws IllegalArgument if an input parameter is invalid
+ * @throws AlreadyExists if the table name already exists
+ *
+ * @param tableName
+ * @param columnFamilies
+ */
+ public void createTable(byte[] tableName, List<ColumnDescriptor> columnFamilies) throws IOError, IllegalArgument, AlreadyExists, TException;
+
+ /**
+ * Deletes a table
+ * @param tableName name of table to delete
+ * @throws IOError if table doesn't exist on server or there was some other
+ * problem
+ *
+ * @param tableName
+ */
+ public void deleteTable(byte[] tableName) throws IOError, TException;
+
+ /**
+ * Get a single TCell for the specified table, row, and column at the
+ * latest timestamp. Returns an empty list if no such value exists.
+ *
+ * @param tableName name of table
+ * @param row row key
+ * @param column column name
+ * @return value for specified row/column
+ *
+ * @param tableName
+ * @param row
+ * @param column
+ */
+ public List<TCell> get(byte[] tableName, byte[] row, byte[] column) throws IOError, TException;
+
+ /**
+ * Get the specified number of versions for the specified table,
+ * row, and column.
+ *
+ * @param tableName name of table
+ * @param row row key
+ * @param column column name
+ * @param numVersions number of versions to retrieve
+ * @return list of cells for specified row/column
+ *
+ * @param tableName
+ * @param row
+ * @param column
+ * @param numVersions
+ */
+ public List<TCell> getVer(byte[] tableName, byte[] row, byte[] column, int numVersions) throws IOError, TException;
+
+ /**
+ * Get the specified number of versions for the specified table,
+ * row, and column. Only versions less than or equal to the specified
+ * timestamp will be returned.
+ *
+ * @param tableName name of table
+ * @param row row key
+ * @param column column name
+ * @param timestamp timestamp
+ * @param numVersions number of versions to retrieve
+ * @return list of cells for specified row/column
+ *
+ * @param tableName
+ * @param row
+ * @param column
+ * @param timestamp
+ * @param numVersions
+ */
+ public List<TCell> getVerTs(byte[] tableName, byte[] row, byte[] column, long timestamp, int numVersions) throws IOError, TException;
+
+ /**
+ * Get all the data for the specified table and row at the latest
+ * timestamp. Returns an empty list if the row does not exist.
+ *
+ * @param tableName name of table
+ * @param row row key
+ * @return TRowResult containing the row and map of columns to TCells
+ *
+ * @param tableName
+ * @param row
+ */
+ public List<TRowResult> getRow(byte[] tableName, byte[] row) throws IOError, TException;
+
+ /**
+ * Get the specified columns for the specified table and row at the latest
+ * timestamp. Returns an empty list if the row does not exist.
+ *
+ * @param tableName name of table
+ * @param row row key
+ * @param columns List of columns to return, null for all columns
+ * @return TRowResult containing the row and map of columns to TCells
+ *
+ * @param tableName
+ * @param row
+ * @param columns
+ */
+ public List<TRowResult> getRowWithColumns(byte[] tableName, byte[] row, List<byte[]> columns) throws IOError, TException;
+
+ /**
+ * Get all the data for the specified table and row at the specified
+ * timestamp. Returns an empty list if the row does not exist.
+ *
+ * @param tableName name of table
+ * @param row row key
+ * @param timestamp timestamp
+ * @return TRowResult containing the row and map of columns to TCells
+ *
+ * @param tableName
+ * @param row
+ * @param timestamp
+ */
+ public List<TRowResult> getRowTs(byte[] tableName, byte[] row, long timestamp) throws IOError, TException;
+
+ /**
+ * Get the specified columns for the specified table and row at the specified
+ * timestamp. Returns an empty list if the row does not exist.
+ *
+ * @param tableName name of table
+ * @param row row key
+ * @param columns List of columns to return, null for all columns
+ * @return TRowResult containing the row and map of columns to TCells
+ *
+ * @param tableName
+ * @param row
+ * @param columns
+ * @param timestamp
+ */
+ public List<TRowResult> getRowWithColumnsTs(byte[] tableName, byte[] row, List<byte[]> columns, long timestamp) throws IOError, TException;
+
+ /**
+ * Apply a series of mutations (updates/deletes) to a row in a
+ * single transaction. If an exception is thrown, then the
+ * transaction is aborted. The current timestamp is used by default, and
+ * all entries will have an identical timestamp.
+ *
+ * @param tableName name of table
+ * @param row row key
+ * @param mutations list of mutation commands
+ *
+ * @param tableName
+ * @param row
+ * @param mutations
+ */
+ public void mutateRow(byte[] tableName, byte[] row, List<Mutation> mutations) throws IOError, IllegalArgument, TException;
+
+ /**
+ * Apply a series of mutations (updates/deletes) to a row in a
+ * single transaction. If an exception is thrown, then the
+ * transaction is aborted. The specified timestamp is used, and
+ * all entries will have an identical timestamp.
+ *
+ * @param tableName name of table
+ * @param row row key
+ * @param mutations list of mutation commands
+ * @param timestamp timestamp
+ *
+ * @param tableName
+ * @param row
+ * @param mutations
+ * @param timestamp
+ */
+ public void mutateRowTs(byte[] tableName, byte[] row, List<Mutation> mutations, long timestamp) throws IOError, IllegalArgument, TException;
+
+ /**
+ * Apply a series of batches (each a series of mutations on a single row)
+ * in a single transaction. If an exception is thrown, then the
+ * transaction is aborted. The current timestamp is used by default, and
+ * all entries will have an identical timestamp.
+ *
+ * @param tableName name of table
+ * @param rowBatches list of row batches
+ *
+ * @param tableName
+ * @param rowBatches
+ */
+ public void mutateRows(byte[] tableName, List<BatchMutation> rowBatches) throws IOError, IllegalArgument, TException;
+
+ /**
+ * Apply a series of batches (each a series of mutations on a single row)
+ * in a single transaction. If an exception is thrown, then the
+ * transaction is aborted. The specified timestamp is used, and
+ * all entries will have an identical timestamp.
+ *
+ * @param tableName name of table
+ * @param rowBatches list of row batches
+ * @param timestamp timestamp
+ *
+ * @param tableName
+ * @param rowBatches
+ * @param timestamp
+ */
+ public void mutateRowsTs(byte[] tableName, List<BatchMutation> rowBatches, long timestamp) throws IOError, IllegalArgument, TException;
+
+ /**
+ * Atomically increment the column value specified. Returns the next value post increment.
+ * @param tableName name of table
+ * @param row row to increment
+ * @param column name of column
+ * @param value amount to increment by
+ *
+ * @param tableName
+ * @param row
+ * @param column
+ * @param value
+ */
+ public long atomicIncrement(byte[] tableName, byte[] row, byte[] column, long value) throws IOError, IllegalArgument, TException;
+
+ /**
+ * Delete all cells that match the passed row and column.
+ *
+ * @param tableName name of table
+ * @param row Row to update
+ * @param column name of column whose value is to be deleted
+ *
+ * @param tableName
+ * @param row
+ * @param column
+ */
+ public void deleteAll(byte[] tableName, byte[] row, byte[] column) throws IOError, TException;
+
+ /**
+ * Delete all cells that match the passed row and column and whose
+ * timestamp is equal-to or older than the passed timestamp.
+ *
+ * @param tableName name of table
+ * @param row Row to update
+ * @param column name of column whose value is to be deleted
+ * @param timestamp timestamp
+ *
+ * @param tableName
+ * @param row
+ * @param column
+ * @param timestamp
+ */
+ public void deleteAllTs(byte[] tableName, byte[] row, byte[] column, long timestamp) throws IOError, TException;
+
+ /**
+ * Completely delete the row's cells.
+ *
+ * @param tableName name of table
+ * @param row key of the row to be completely deleted.
+ *
+ * @param tableName
+ * @param row
+ */
+ public void deleteAllRow(byte[] tableName, byte[] row) throws IOError, TException;
+
+ /**
+ * Completely delete the row's cells marked with a timestamp
+ * equal-to or older than the passed timestamp.
+ *
+ * @param tableName name of table
+ * @param row key of the row to be completely deleted.
+ * @param timestamp timestamp
+ *
+ * @param tableName
+ * @param row
+ * @param timestamp
+ */
+ public void deleteAllRowTs(byte[] tableName, byte[] row, long timestamp) throws IOError, TException;
+
+ /**
+ * Get a scanner on the current table starting at the specified row and
+ * ending at the last row in the table. Return the specified columns.
+ *
+ * @param columns columns to scan. If column name is a column family, all
+ * columns of the specified column family are returned. It's also possible
+ * to pass a regex in the column qualifier.
+ * @param tableName name of table
+ * @param startRow starting row in table to scan. send "" (empty string) to
+ * start at the first row.
+ *
+ * @return scanner id to be used with other scanner procedures
+ *
+ * @param tableName
+ * @param startRow
+ * @param columns
+ */
+ public int scannerOpen(byte[] tableName, byte[] startRow, List<byte[]> columns) throws IOError, TException;
+
+ /**
+ * Get a scanner on the current table starting and stopping at the
+ * specified rows. Return the specified columns.
+ *
+ * @param columns columns to scan. If column name is a column family, all
+ * columns of the specified column family are returned. It's also possible
+ * to pass a regex in the column qualifier.
+ * @param tableName name of table
+ * @param startRow starting row in table to scan. send "" (empty string) to
+ * start at the first row.
+ * @param stopRow row to stop scanning on. This row is *not* included in the
+ * scanner's results
+ *
+ * @return scanner id to be used with other scanner procedures
+ *
+ * @param tableName
+ * @param startRow
+ * @param stopRow
+ * @param columns
+ */
+ public int scannerOpenWithStop(byte[] tableName, byte[] startRow, byte[] stopRow, List<byte[]> columns) throws IOError, TException;
+
+ /**
+ * Get a scanner on the current table starting at the specified row and
+ * ending at the last row in the table. Return the specified columns.
+ * Only values with the specified timestamp are returned.
+ *
+ * @param columns columns to scan. If column name is a column family, all
+ * columns of the specified column family are returned. It's also possible
+ * to pass a regex in the column qualifier.
+ * @param tableName name of table
+ * @param startRow starting row in table to scan. send "" (empty string) to
+ * start at the first row.
+ * @param timestamp timestamp
+ *
+ * @return scanner id to be used with other scanner procedures
+ *
+ * @param tableName
+ * @param startRow
+ * @param columns
+ * @param timestamp
+ */
+ public int scannerOpenTs(byte[] tableName, byte[] startRow, List<byte[]> columns, long timestamp) throws IOError, TException;
+
+ /**
+ * Get a scanner on the current table starting and stopping at the
+ * specified rows. Return the specified columns. Only values with the
+ * specified timestamp are returned.
+ *
+ * @param columns columns to scan. If column name is a column family, all
+ * columns of the specified column family are returned. It's also possible
+ * to pass a regex in the column qualifier.
+ * @param tableName name of table
+ * @param startRow starting row in table to scan. send "" (empty string) to
+ * start at the first row.
+ * @param stopRow row to stop scanning on. This row is *not* included
+ * in the scanner's results
+ * @param timestamp timestamp
+ *
+ * @return scanner id to be used with other scanner procedures
+ *
+ * @param tableName
+ * @param startRow
+ * @param stopRow
+ * @param columns
+ * @param timestamp
+ */
+ public int scannerOpenWithStopTs(byte[] tableName, byte[] startRow, byte[] stopRow, List<byte[]> columns, long timestamp) throws IOError, TException;
+
+ /**
+ * Returns the scanner's current row value and advances to the next
+ * row in the table. When there are no more rows in the table, or a key
+ * greater-than-or-equal-to the scanner's specified stopRow is reached,
+ * an empty list is returned.
+ *
+ * @param id id of a scanner returned by scannerOpen
+ * @return a TRowResult containing the current row and a map of the columns to TCells.
+ * @throws IllegalArgument if ScannerID is invalid
+ *
+ * @param id
+ */
+ public List<TRowResult> scannerGet(int id) throws IOError, IllegalArgument, TException;
+
+ /**
+ * Returns, starting at the scanner's current row, up to nbRows rows and
+ * advances the scanner to the next row in the table. When there are no more
+ * rows in the table, or a key greater-than-or-equal-to the scanner's
+ * specified stopRow is reached, an empty list is returned.
+ *
+ * @param id id of a scanner returned by scannerOpen
+ * @param nbRows number of results to return
+ * @return a TRowResult containing the current row and a map of the columns to TCells.
+ * @throws IllegalArgument if ScannerID is invalid
+ *
+ * @param id
+ * @param nbRows
+ */
+ public List<TRowResult> scannerGetList(int id, int nbRows) throws IOError, IllegalArgument, TException;
+
+ /**
+ * Closes the server-state associated with an open scanner.
+ *
+ * @param id id of a scanner returned by scannerOpen
+ * @throws IllegalArgument if ScannerID is invalid
+ *
+ * @param id
+ */
+ public void scannerClose(int id) throws IOError, IllegalArgument, TException;
+
+ }
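+
+ // Usage sketch (editorial, not generated code): a minimal synchronous client built on
+ // the Iface above. It assumes a running HBase Thrift server on localhost:9090 and uses
+ // org.apache.thrift.transport.TSocket plus TBinaryProtocol from libthrift; the table,
+ // row, and column names are hypothetical examples.
+ //
+ //   TSocket transport = new TSocket("localhost", 9090);
+ //   transport.open();
+ //   Hbase.Client client = new Hbase.Client(new TBinaryProtocol(transport));
+ //
+ //   // newest cell for one row/column; the list is empty if the value does not exist
+ //   List<TCell> cells = client.get("myTable".getBytes(), "row1".getBytes(),
+ //       "info:count".getBytes());
+ //
+ //   // scan an entire column family starting at the first row
+ //   int scannerId = client.scannerOpen("myTable".getBytes(), "".getBytes(),
+ //       Collections.singletonList("info:".getBytes()));
+ //   List<TRowResult> rows;
+ //   while (!(rows = client.scannerGetList(scannerId, 100)).isEmpty()) {
+ //     // process rows ...
+ //   }
+ //   client.scannerClose(scannerId);
+ //   transport.close();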
+
+ public static class Client implements Iface {
+ public Client(TProtocol prot)
+ {
+ this(prot, prot);
+ }
+
+ public Client(TProtocol iprot, TProtocol oprot)
+ {
+ iprot_ = iprot;
+ oprot_ = oprot;
+ }
+
+ protected TProtocol iprot_;
+ protected TProtocol oprot_;
+
+ protected int seqid_;
+
+ public TProtocol getInputProtocol()
+ {
+ return this.iprot_;
+ }
+
+ public TProtocol getOutputProtocol()
+ {
+ return this.oprot_;
+ }
+
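+ // Note (editorial): every Iface method below follows the same synchronous pattern.
+ // send_<method>() wraps the arguments in a generated <method>_args struct, writes it
+ // as a CALL message tagged with seqid_, and flushes the transport; the matching
+ // recv_<method>() reads back the <method>_result struct, rethrows any declared
+ // exception it carries (IOError, IllegalArgument, etc.) or a TApplicationException
+ // on protocol errors, and returns result.success for methods with a return value.
+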
+ public void enableTable(byte[] tableName) throws IOError, TException
+ {
+ send_enableTable(tableName);
+ recv_enableTable();
+ }
+
+ public void send_enableTable(byte[] tableName) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("enableTable", TMessageType.CALL, seqid_));
+ enableTable_args args = new enableTable_args();
+ args.tableName = tableName;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public void recv_enableTable() throws IOError, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ enableTable_result result = new enableTable_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.io != null) {
+ throw result.io;
+ }
+ return;
+ }
+
+ public void disableTable(byte[] tableName) throws IOError, TException
+ {
+ send_disableTable(tableName);
+ recv_disableTable();
+ }
+
+ public void send_disableTable(byte[] tableName) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("disableTable", TMessageType.CALL, seqid_));
+ disableTable_args args = new disableTable_args();
+ args.tableName = tableName;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public void recv_disableTable() throws IOError, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ disableTable_result result = new disableTable_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.io != null) {
+ throw result.io;
+ }
+ return;
+ }
+
+ public boolean isTableEnabled(byte[] tableName) throws IOError, TException
+ {
+ send_isTableEnabled(tableName);
+ return recv_isTableEnabled();
+ }
+
+ public void send_isTableEnabled(byte[] tableName) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("isTableEnabled", TMessageType.CALL, seqid_));
+ isTableEnabled_args args = new isTableEnabled_args();
+ args.tableName = tableName;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public boolean recv_isTableEnabled() throws IOError, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ isTableEnabled_result result = new isTableEnabled_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.isSetSuccess()) {
+ return result.success;
+ }
+ if (result.io != null) {
+ throw result.io;
+ }
+ throw new TApplicationException(TApplicationException.MISSING_RESULT, "isTableEnabled failed: unknown result");
+ }
+
+ public void compact(byte[] tableNameOrRegionName) throws IOError, TException
+ {
+ send_compact(tableNameOrRegionName);
+ recv_compact();
+ }
+
+ public void send_compact(byte[] tableNameOrRegionName) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("compact", TMessageType.CALL, seqid_));
+ compact_args args = new compact_args();
+ args.tableNameOrRegionName = tableNameOrRegionName;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public void recv_compact() throws IOError, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ compact_result result = new compact_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.io != null) {
+ throw result.io;
+ }
+ return;
+ }
+
+ public void majorCompact(byte[] tableNameOrRegionName) throws IOError, TException
+ {
+ send_majorCompact(tableNameOrRegionName);
+ recv_majorCompact();
+ }
+
+ public void send_majorCompact(byte[] tableNameOrRegionName) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("majorCompact", TMessageType.CALL, seqid_));
+ majorCompact_args args = new majorCompact_args();
+ args.tableNameOrRegionName = tableNameOrRegionName;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public void recv_majorCompact() throws IOError, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ majorCompact_result result = new majorCompact_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.io != null) {
+ throw result.io;
+ }
+ return;
+ }
+
+ public List<byte[]> getTableNames() throws IOError, TException
+ {
+ send_getTableNames();
+ return recv_getTableNames();
+ }
+
+ public void send_getTableNames() throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("getTableNames", TMessageType.CALL, seqid_));
+ getTableNames_args args = new getTableNames_args();
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public List<byte[]> recv_getTableNames() throws IOError, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ getTableNames_result result = new getTableNames_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.isSetSuccess()) {
+ return result.success;
+ }
+ if (result.io != null) {
+ throw result.io;
+ }
+ throw new TApplicationException(TApplicationException.MISSING_RESULT, "getTableNames failed: unknown result");
+ }
+
+ public Map<byte[],ColumnDescriptor> getColumnDescriptors(byte[] tableName) throws IOError, TException
+ {
+ send_getColumnDescriptors(tableName);
+ return recv_getColumnDescriptors();
+ }
+
+ public void send_getColumnDescriptors(byte[] tableName) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("getColumnDescriptors", TMessageType.CALL, seqid_));
+ getColumnDescriptors_args args = new getColumnDescriptors_args();
+ args.tableName = tableName;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public Map<byte[],ColumnDescriptor> recv_getColumnDescriptors() throws IOError, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ getColumnDescriptors_result result = new getColumnDescriptors_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.isSetSuccess()) {
+ return result.success;
+ }
+ if (result.io != null) {
+ throw result.io;
+ }
+ throw new TApplicationException(TApplicationException.MISSING_RESULT, "getColumnDescriptors failed: unknown result");
+ }
+
+ public List<TRegionInfo> getTableRegions(byte[] tableName) throws IOError, TException
+ {
+ send_getTableRegions(tableName);
+ return recv_getTableRegions();
+ }
+
+ public void send_getTableRegions(byte[] tableName) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("getTableRegions", TMessageType.CALL, seqid_));
+ getTableRegions_args args = new getTableRegions_args();
+ args.tableName = tableName;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public List<TRegionInfo> recv_getTableRegions() throws IOError, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ getTableRegions_result result = new getTableRegions_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.isSetSuccess()) {
+ return result.success;
+ }
+ if (result.io != null) {
+ throw result.io;
+ }
+ throw new TApplicationException(TApplicationException.MISSING_RESULT, "getTableRegions failed: unknown result");
+ }
+
+ public void createTable(byte[] tableName, List<ColumnDescriptor> columnFamilies) throws IOError, IllegalArgument, AlreadyExists, TException
+ {
+ send_createTable(tableName, columnFamilies);
+ recv_createTable();
+ }
+
+ public void send_createTable(byte[] tableName, List<ColumnDescriptor> columnFamilies) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("createTable", TMessageType.CALL, seqid_));
+ createTable_args args = new createTable_args();
+ args.tableName = tableName;
+ args.columnFamilies = columnFamilies;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public void recv_createTable() throws IOError, IllegalArgument, AlreadyExists, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ createTable_result result = new createTable_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.io != null) {
+ throw result.io;
+ }
+ if (result.ia != null) {
+ throw result.ia;
+ }
+ if (result.exist != null) {
+ throw result.exist;
+ }
+ return;
+ }
+
+ public void deleteTable(byte[] tableName) throws IOError, TException
+ {
+ send_deleteTable(tableName);
+ recv_deleteTable();
+ }
+
+ public void send_deleteTable(byte[] tableName) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("deleteTable", TMessageType.CALL, seqid_));
+ deleteTable_args args = new deleteTable_args();
+ args.tableName = tableName;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public void recv_deleteTable() throws IOError, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ deleteTable_result result = new deleteTable_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.io != null) {
+ throw result.io;
+ }
+ return;
+ }
+
+ public List<TCell> get(byte[] tableName, byte[] row, byte[] column) throws IOError, TException
+ {
+ send_get(tableName, row, column);
+ return recv_get();
+ }
+
+ public void send_get(byte[] tableName, byte[] row, byte[] column) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("get", TMessageType.CALL, seqid_));
+ get_args args = new get_args();
+ args.tableName = tableName;
+ args.row = row;
+ args.column = column;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public List<TCell> recv_get() throws IOError, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ get_result result = new get_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.isSetSuccess()) {
+ return result.success;
+ }
+ if (result.io != null) {
+ throw result.io;
+ }
+ throw new TApplicationException(TApplicationException.MISSING_RESULT, "get failed: unknown result");
+ }
+
+ public List<TCell> getVer(byte[] tableName, byte[] row, byte[] column, int numVersions) throws IOError, TException
+ {
+ send_getVer(tableName, row, column, numVersions);
+ return recv_getVer();
+ }
+
+ public void send_getVer(byte[] tableName, byte[] row, byte[] column, int numVersions) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("getVer", TMessageType.CALL, seqid_));
+ getVer_args args = new getVer_args();
+ args.tableName = tableName;
+ args.row = row;
+ args.column = column;
+ args.numVersions = numVersions;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public List<TCell> recv_getVer() throws IOError, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ getVer_result result = new getVer_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.isSetSuccess()) {
+ return result.success;
+ }
+ if (result.io != null) {
+ throw result.io;
+ }
+ throw new TApplicationException(TApplicationException.MISSING_RESULT, "getVer failed: unknown result");
+ }
+
+ public List<TCell> getVerTs(byte[] tableName, byte[] row, byte[] column, long timestamp, int numVersions) throws IOError, TException
+ {
+ send_getVerTs(tableName, row, column, timestamp, numVersions);
+ return recv_getVerTs();
+ }
+
+ public void send_getVerTs(byte[] tableName, byte[] row, byte[] column, long timestamp, int numVersions) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("getVerTs", TMessageType.CALL, seqid_));
+ getVerTs_args args = new getVerTs_args();
+ args.tableName = tableName;
+ args.row = row;
+ args.column = column;
+ args.timestamp = timestamp;
+ args.numVersions = numVersions;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public List<TCell> recv_getVerTs() throws IOError, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ getVerTs_result result = new getVerTs_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.isSetSuccess()) {
+ return result.success;
+ }
+ if (result.io != null) {
+ throw result.io;
+ }
+ throw new TApplicationException(TApplicationException.MISSING_RESULT, "getVerTs failed: unknown result");
+ }
+
+ public List<TRowResult> getRow(byte[] tableName, byte[] row) throws IOError, TException
+ {
+ send_getRow(tableName, row);
+ return recv_getRow();
+ }
+
+ public void send_getRow(byte[] tableName, byte[] row) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("getRow", TMessageType.CALL, seqid_));
+ getRow_args args = new getRow_args();
+ args.tableName = tableName;
+ args.row = row;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public List<TRowResult> recv_getRow() throws IOError, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ getRow_result result = new getRow_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.isSetSuccess()) {
+ return result.success;
+ }
+ if (result.io != null) {
+ throw result.io;
+ }
+ throw new TApplicationException(TApplicationException.MISSING_RESULT, "getRow failed: unknown result");
+ }
+
+ public List<TRowResult> getRowWithColumns(byte[] tableName, byte[] row, List<byte[]> columns) throws IOError, TException
+ {
+ send_getRowWithColumns(tableName, row, columns);
+ return recv_getRowWithColumns();
+ }
+
+ public void send_getRowWithColumns(byte[] tableName, byte[] row, List<byte[]> columns) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("getRowWithColumns", TMessageType.CALL, seqid_));
+ getRowWithColumns_args args = new getRowWithColumns_args();
+ args.tableName = tableName;
+ args.row = row;
+ args.columns = columns;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public List<TRowResult> recv_getRowWithColumns() throws IOError, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ getRowWithColumns_result result = new getRowWithColumns_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.isSetSuccess()) {
+ return result.success;
+ }
+ if (result.io != null) {
+ throw result.io;
+ }
+ throw new TApplicationException(TApplicationException.MISSING_RESULT, "getRowWithColumns failed: unknown result");
+ }
+
+ public List<TRowResult> getRowTs(byte[] tableName, byte[] row, long timestamp) throws IOError, TException
+ {
+ send_getRowTs(tableName, row, timestamp);
+ return recv_getRowTs();
+ }
+
+ public void send_getRowTs(byte[] tableName, byte[] row, long timestamp) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("getRowTs", TMessageType.CALL, seqid_));
+ getRowTs_args args = new getRowTs_args();
+ args.tableName = tableName;
+ args.row = row;
+ args.timestamp = timestamp;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public List<TRowResult> recv_getRowTs() throws IOError, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ getRowTs_result result = new getRowTs_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.isSetSuccess()) {
+ return result.success;
+ }
+ if (result.io != null) {
+ throw result.io;
+ }
+ throw new TApplicationException(TApplicationException.MISSING_RESULT, "getRowTs failed: unknown result");
+ }
+
+ public List<TRowResult> getRowWithColumnsTs(byte[] tableName, byte[] row, List<byte[]> columns, long timestamp) throws IOError, TException
+ {
+ send_getRowWithColumnsTs(tableName, row, columns, timestamp);
+ return recv_getRowWithColumnsTs();
+ }
+
+ public void send_getRowWithColumnsTs(byte[] tableName, byte[] row, List<byte[]> columns, long timestamp) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("getRowWithColumnsTs", TMessageType.CALL, seqid_));
+ getRowWithColumnsTs_args args = new getRowWithColumnsTs_args();
+ args.tableName = tableName;
+ args.row = row;
+ args.columns = columns;
+ args.timestamp = timestamp;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public List<TRowResult> recv_getRowWithColumnsTs() throws IOError, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ getRowWithColumnsTs_result result = new getRowWithColumnsTs_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.isSetSuccess()) {
+ return result.success;
+ }
+ if (result.io != null) {
+ throw result.io;
+ }
+ throw new TApplicationException(TApplicationException.MISSING_RESULT, "getRowWithColumnsTs failed: unknown result");
+ }
+
+ public void mutateRow(byte[] tableName, byte[] row, List<Mutation> mutations) throws IOError, IllegalArgument, TException
+ {
+ send_mutateRow(tableName, row, mutations);
+ recv_mutateRow();
+ }
+
+ public void send_mutateRow(byte[] tableName, byte[] row, List<Mutation> mutations) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("mutateRow", TMessageType.CALL, seqid_));
+ mutateRow_args args = new mutateRow_args();
+ args.tableName = tableName;
+ args.row = row;
+ args.mutations = mutations;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public void recv_mutateRow() throws IOError, IllegalArgument, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ mutateRow_result result = new mutateRow_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.io != null) {
+ throw result.io;
+ }
+ if (result.ia != null) {
+ throw result.ia;
+ }
+ return;
+ }
+
+ public void mutateRowTs(byte[] tableName, byte[] row, List<Mutation> mutations, long timestamp) throws IOError, IllegalArgument, TException
+ {
+ send_mutateRowTs(tableName, row, mutations, timestamp);
+ recv_mutateRowTs();
+ }
+
+ public void send_mutateRowTs(byte[] tableName, byte[] row, List<Mutation> mutations, long timestamp) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("mutateRowTs", TMessageType.CALL, seqid_));
+ mutateRowTs_args args = new mutateRowTs_args();
+ args.tableName = tableName;
+ args.row = row;
+ args.mutations = mutations;
+ args.timestamp = timestamp;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public void recv_mutateRowTs() throws IOError, IllegalArgument, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ mutateRowTs_result result = new mutateRowTs_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.io != null) {
+ throw result.io;
+ }
+ if (result.ia != null) {
+ throw result.ia;
+ }
+ return;
+ }
+
+ public void mutateRows(byte[] tableName, List<BatchMutation> rowBatches) throws IOError, IllegalArgument, TException
+ {
+ send_mutateRows(tableName, rowBatches);
+ recv_mutateRows();
+ }
+
+ public void send_mutateRows(byte[] tableName, List<BatchMutation> rowBatches) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("mutateRows", TMessageType.CALL, seqid_));
+ mutateRows_args args = new mutateRows_args();
+ args.tableName = tableName;
+ args.rowBatches = rowBatches;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public void recv_mutateRows() throws IOError, IllegalArgument, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ mutateRows_result result = new mutateRows_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.io != null) {
+ throw result.io;
+ }
+ if (result.ia != null) {
+ throw result.ia;
+ }
+ return;
+ }
+
+ public void mutateRowsTs(byte[] tableName, List<BatchMutation> rowBatches, long timestamp) throws IOError, IllegalArgument, TException
+ {
+ send_mutateRowsTs(tableName, rowBatches, timestamp);
+ recv_mutateRowsTs();
+ }
+
+ public void send_mutateRowsTs(byte[] tableName, List<BatchMutation> rowBatches, long timestamp) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("mutateRowsTs", TMessageType.CALL, seqid_));
+ mutateRowsTs_args args = new mutateRowsTs_args();
+ args.tableName = tableName;
+ args.rowBatches = rowBatches;
+ args.timestamp = timestamp;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public void recv_mutateRowsTs() throws IOError, IllegalArgument, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ mutateRowsTs_result result = new mutateRowsTs_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.io != null) {
+ throw result.io;
+ }
+ if (result.ia != null) {
+ throw result.ia;
+ }
+ return;
+ }
+
+ public long atomicIncrement(byte[] tableName, byte[] row, byte[] column, long value) throws IOError, IllegalArgument, TException
+ {
+ send_atomicIncrement(tableName, row, column, value);
+ return recv_atomicIncrement();
+ }
+
+ public void send_atomicIncrement(byte[] tableName, byte[] row, byte[] column, long value) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("atomicIncrement", TMessageType.CALL, seqid_));
+ atomicIncrement_args args = new atomicIncrement_args();
+ args.tableName = tableName;
+ args.row = row;
+ args.column = column;
+ args.value = value;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public long recv_atomicIncrement() throws IOError, IllegalArgument, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ atomicIncrement_result result = new atomicIncrement_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.isSetSuccess()) {
+ return result.success;
+ }
+ if (result.io != null) {
+ throw result.io;
+ }
+ if (result.ia != null) {
+ throw result.ia;
+ }
+ throw new TApplicationException(TApplicationException.MISSING_RESULT, "atomicIncrement failed: unknown result");
+ }
+
+ public void deleteAll(byte[] tableName, byte[] row, byte[] column) throws IOError, TException
+ {
+ send_deleteAll(tableName, row, column);
+ recv_deleteAll();
+ }
+
+ public void send_deleteAll(byte[] tableName, byte[] row, byte[] column) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("deleteAll", TMessageType.CALL, seqid_));
+ deleteAll_args args = new deleteAll_args();
+ args.tableName = tableName;
+ args.row = row;
+ args.column = column;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public void recv_deleteAll() throws IOError, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ deleteAll_result result = new deleteAll_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.io != null) {
+ throw result.io;
+ }
+ return;
+ }
+
+ public void deleteAllTs(byte[] tableName, byte[] row, byte[] column, long timestamp) throws IOError, TException
+ {
+ send_deleteAllTs(tableName, row, column, timestamp);
+ recv_deleteAllTs();
+ }
+
+ public void send_deleteAllTs(byte[] tableName, byte[] row, byte[] column, long timestamp) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("deleteAllTs", TMessageType.CALL, seqid_));
+ deleteAllTs_args args = new deleteAllTs_args();
+ args.tableName = tableName;
+ args.row = row;
+ args.column = column;
+ args.timestamp = timestamp;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public void recv_deleteAllTs() throws IOError, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ deleteAllTs_result result = new deleteAllTs_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.io != null) {
+ throw result.io;
+ }
+ return;
+ }
+
+ public void deleteAllRow(byte[] tableName, byte[] row) throws IOError, TException
+ {
+ send_deleteAllRow(tableName, row);
+ recv_deleteAllRow();
+ }
+
+ public void send_deleteAllRow(byte[] tableName, byte[] row) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("deleteAllRow", TMessageType.CALL, seqid_));
+ deleteAllRow_args args = new deleteAllRow_args();
+ args.tableName = tableName;
+ args.row = row;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public void recv_deleteAllRow() throws IOError, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ deleteAllRow_result result = new deleteAllRow_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.io != null) {
+ throw result.io;
+ }
+ return;
+ }
+
+ public void deleteAllRowTs(byte[] tableName, byte[] row, long timestamp) throws IOError, TException
+ {
+ send_deleteAllRowTs(tableName, row, timestamp);
+ recv_deleteAllRowTs();
+ }
+
+ public void send_deleteAllRowTs(byte[] tableName, byte[] row, long timestamp) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("deleteAllRowTs", TMessageType.CALL, seqid_));
+ deleteAllRowTs_args args = new deleteAllRowTs_args();
+ args.tableName = tableName;
+ args.row = row;
+ args.timestamp = timestamp;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public void recv_deleteAllRowTs() throws IOError, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ deleteAllRowTs_result result = new deleteAllRowTs_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.io != null) {
+ throw result.io;
+ }
+ return;
+ }
+
+ public int scannerOpen(byte[] tableName, byte[] startRow, List<byte[]> columns) throws IOError, TException
+ {
+ send_scannerOpen(tableName, startRow, columns);
+ return recv_scannerOpen();
+ }
+
+ public void send_scannerOpen(byte[] tableName, byte[] startRow, List<byte[]> columns) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("scannerOpen", TMessageType.CALL, seqid_));
+ scannerOpen_args args = new scannerOpen_args();
+ args.tableName = tableName;
+ args.startRow = startRow;
+ args.columns = columns;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public int recv_scannerOpen() throws IOError, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ scannerOpen_result result = new scannerOpen_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.isSetSuccess()) {
+ return result.success;
+ }
+ if (result.io != null) {
+ throw result.io;
+ }
+ throw new TApplicationException(TApplicationException.MISSING_RESULT, "scannerOpen failed: unknown result");
+ }
+
+ public int scannerOpenWithStop(byte[] tableName, byte[] startRow, byte[] stopRow, List<byte[]> columns) throws IOError, TException
+ {
+ send_scannerOpenWithStop(tableName, startRow, stopRow, columns);
+ return recv_scannerOpenWithStop();
+ }
+
+ public void send_scannerOpenWithStop(byte[] tableName, byte[] startRow, byte[] stopRow, List<byte[]> columns) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("scannerOpenWithStop", TMessageType.CALL, seqid_));
+ scannerOpenWithStop_args args = new scannerOpenWithStop_args();
+ args.tableName = tableName;
+ args.startRow = startRow;
+ args.stopRow = stopRow;
+ args.columns = columns;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public int recv_scannerOpenWithStop() throws IOError, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ scannerOpenWithStop_result result = new scannerOpenWithStop_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.isSetSuccess()) {
+ return result.success;
+ }
+ if (result.io != null) {
+ throw result.io;
+ }
+ throw new TApplicationException(TApplicationException.MISSING_RESULT, "scannerOpenWithStop failed: unknown result");
+ }
+
+ public int scannerOpenTs(byte[] tableName, byte[] startRow, List<byte[]> columns, long timestamp) throws IOError, TException
+ {
+ send_scannerOpenTs(tableName, startRow, columns, timestamp);
+ return recv_scannerOpenTs();
+ }
+
+ public void send_scannerOpenTs(byte[] tableName, byte[] startRow, List<byte[]> columns, long timestamp) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("scannerOpenTs", TMessageType.CALL, seqid_));
+ scannerOpenTs_args args = new scannerOpenTs_args();
+ args.tableName = tableName;
+ args.startRow = startRow;
+ args.columns = columns;
+ args.timestamp = timestamp;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public int recv_scannerOpenTs() throws IOError, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ scannerOpenTs_result result = new scannerOpenTs_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.isSetSuccess()) {
+ return result.success;
+ }
+ if (result.io != null) {
+ throw result.io;
+ }
+ throw new TApplicationException(TApplicationException.MISSING_RESULT, "scannerOpenTs failed: unknown result");
+ }
+
+ public int scannerOpenWithStopTs(byte[] tableName, byte[] startRow, byte[] stopRow, List<byte[]> columns, long timestamp) throws IOError, TException
+ {
+ send_scannerOpenWithStopTs(tableName, startRow, stopRow, columns, timestamp);
+ return recv_scannerOpenWithStopTs();
+ }
+
+ public void send_scannerOpenWithStopTs(byte[] tableName, byte[] startRow, byte[] stopRow, List<byte[]> columns, long timestamp) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("scannerOpenWithStopTs", TMessageType.CALL, seqid_));
+ scannerOpenWithStopTs_args args = new scannerOpenWithStopTs_args();
+ args.tableName = tableName;
+ args.startRow = startRow;
+ args.stopRow = stopRow;
+ args.columns = columns;
+ args.timestamp = timestamp;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public int recv_scannerOpenWithStopTs() throws IOError, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ scannerOpenWithStopTs_result result = new scannerOpenWithStopTs_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.isSetSuccess()) {
+ return result.success;
+ }
+ if (result.io != null) {
+ throw result.io;
+ }
+ throw new TApplicationException(TApplicationException.MISSING_RESULT, "scannerOpenWithStopTs failed: unknown result");
+ }
+
+ public List<TRowResult> scannerGet(int id) throws IOError, IllegalArgument, TException
+ {
+ send_scannerGet(id);
+ return recv_scannerGet();
+ }
+
+ public void send_scannerGet(int id) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("scannerGet", TMessageType.CALL, seqid_));
+ scannerGet_args args = new scannerGet_args();
+ args.id = id;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public List<TRowResult> recv_scannerGet() throws IOError, IllegalArgument, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ scannerGet_result result = new scannerGet_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.isSetSuccess()) {
+ return result.success;
+ }
+ if (result.io != null) {
+ throw result.io;
+ }
+ if (result.ia != null) {
+ throw result.ia;
+ }
+ throw new TApplicationException(TApplicationException.MISSING_RESULT, "scannerGet failed: unknown result");
+ }
+
+ public List<TRowResult> scannerGetList(int id, int nbRows) throws IOError, IllegalArgument, TException
+ {
+ send_scannerGetList(id, nbRows);
+ return recv_scannerGetList();
+ }
+
+ public void send_scannerGetList(int id, int nbRows) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("scannerGetList", TMessageType.CALL, seqid_));
+ scannerGetList_args args = new scannerGetList_args();
+ args.id = id;
+ args.nbRows = nbRows;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public List<TRowResult> recv_scannerGetList() throws IOError, IllegalArgument, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ scannerGetList_result result = new scannerGetList_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.isSetSuccess()) {
+ return result.success;
+ }
+ if (result.io != null) {
+ throw result.io;
+ }
+ if (result.ia != null) {
+ throw result.ia;
+ }
+ throw new TApplicationException(TApplicationException.MISSING_RESULT, "scannerGetList failed: unknown result");
+ }
+
+ public void scannerClose(int id) throws IOError, IllegalArgument, TException
+ {
+ send_scannerClose(id);
+ recv_scannerClose();
+ }
+
+ public void send_scannerClose(int id) throws TException
+ {
+ oprot_.writeMessageBegin(new TMessage("scannerClose", TMessageType.CALL, seqid_));
+ scannerClose_args args = new scannerClose_args();
+ args.id = id;
+ args.write(oprot_);
+ oprot_.writeMessageEnd();
+ oprot_.getTransport().flush();
+ }
+
+ public void recv_scannerClose() throws IOError, IllegalArgument, TException
+ {
+ TMessage msg = iprot_.readMessageBegin();
+ if (msg.type == TMessageType.EXCEPTION) {
+ TApplicationException x = TApplicationException.read(iprot_);
+ iprot_.readMessageEnd();
+ throw x;
+ }
+ scannerClose_result result = new scannerClose_result();
+ result.read(iprot_);
+ iprot_.readMessageEnd();
+ if (result.io != null) {
+ throw result.io;
+ }
+ if (result.ia != null) {
+ throw result.ia;
+ }
+ return;
+ }
+
+ }
+ public static class Processor implements TProcessor {
+ public Processor(Iface iface)
+ {
+ iface_ = iface;
+ processMap_.put("enableTable", new enableTable());
+ processMap_.put("disableTable", new disableTable());
+ processMap_.put("isTableEnabled", new isTableEnabled());
+ processMap_.put("compact", new compact());
+ processMap_.put("majorCompact", new majorCompact());
+ processMap_.put("getTableNames", new getTableNames());
+ processMap_.put("getColumnDescriptors", new getColumnDescriptors());
+ processMap_.put("getTableRegions", new getTableRegions());
+ processMap_.put("createTable", new createTable());
+ processMap_.put("deleteTable", new deleteTable());
+ processMap_.put("get", new get());
+ processMap_.put("getVer", new getVer());
+ processMap_.put("getVerTs", new getVerTs());
+ processMap_.put("getRow", new getRow());
+ processMap_.put("getRowWithColumns", new getRowWithColumns());
+ processMap_.put("getRowTs", new getRowTs());
+ processMap_.put("getRowWithColumnsTs", new getRowWithColumnsTs());
+ processMap_.put("mutateRow", new mutateRow());
+ processMap_.put("mutateRowTs", new mutateRowTs());
+ processMap_.put("mutateRows", new mutateRows());
+ processMap_.put("mutateRowsTs", new mutateRowsTs());
+ processMap_.put("atomicIncrement", new atomicIncrement());
+ processMap_.put("deleteAll", new deleteAll());
+ processMap_.put("deleteAllTs", new deleteAllTs());
+ processMap_.put("deleteAllRow", new deleteAllRow());
+ processMap_.put("deleteAllRowTs", new deleteAllRowTs());
+ processMap_.put("scannerOpen", new scannerOpen());
+ processMap_.put("scannerOpenWithStop", new scannerOpenWithStop());
+ processMap_.put("scannerOpenTs", new scannerOpenTs());
+ processMap_.put("scannerOpenWithStopTs", new scannerOpenWithStopTs());
+ processMap_.put("scannerGet", new scannerGet());
+ processMap_.put("scannerGetList", new scannerGetList());
+ processMap_.put("scannerClose", new scannerClose());
+ }
+
+ protected static interface ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException;
+ }
+
+ private Iface iface_;
+ protected final HashMap<String,ProcessFunction> processMap_ = new HashMap<String,ProcessFunction>();
+
+ public boolean process(TProtocol iprot, TProtocol oprot) throws TException
+ {
+ TMessage msg = iprot.readMessageBegin();
+ ProcessFunction fn = processMap_.get(msg.name);
+ if (fn == null) {
+ TProtocolUtil.skip(iprot, TType.STRUCT);
+ iprot.readMessageEnd();
+ TApplicationException x = new TApplicationException(TApplicationException.UNKNOWN_METHOD, "Invalid method name: '"+msg.name+"'");
+ oprot.writeMessageBegin(new TMessage(msg.name, TMessageType.EXCEPTION, msg.seqid));
+ x.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ return true;
+ }
+ fn.process(msg.seqid, iprot, oprot);
+ return true;
+ }
+
+ private class enableTable implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ enableTable_args args = new enableTable_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ enableTable_result result = new enableTable_result();
+ try {
+ iface_.enableTable(args.tableName);
+ } catch (IOError io) {
+ result.io = io;
+ }
+ oprot.writeMessageBegin(new TMessage("enableTable", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class disableTable implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ disableTable_args args = new disableTable_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ disableTable_result result = new disableTable_result();
+ try {
+ iface_.disableTable(args.tableName);
+ } catch (IOError io) {
+ result.io = io;
+ }
+ oprot.writeMessageBegin(new TMessage("disableTable", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class isTableEnabled implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ isTableEnabled_args args = new isTableEnabled_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ isTableEnabled_result result = new isTableEnabled_result();
+ try {
+ result.success = iface_.isTableEnabled(args.tableName);
+ result.__isset.success = true;
+ } catch (IOError io) {
+ result.io = io;
+ }
+ oprot.writeMessageBegin(new TMessage("isTableEnabled", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class compact implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ compact_args args = new compact_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ compact_result result = new compact_result();
+ try {
+ iface_.compact(args.tableNameOrRegionName);
+ } catch (IOError io) {
+ result.io = io;
+ }
+ oprot.writeMessageBegin(new TMessage("compact", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class majorCompact implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ majorCompact_args args = new majorCompact_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ majorCompact_result result = new majorCompact_result();
+ try {
+ iface_.majorCompact(args.tableNameOrRegionName);
+ } catch (IOError io) {
+ result.io = io;
+ }
+ oprot.writeMessageBegin(new TMessage("majorCompact", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class getTableNames implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ getTableNames_args args = new getTableNames_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ getTableNames_result result = new getTableNames_result();
+ try {
+ result.success = iface_.getTableNames();
+ } catch (IOError io) {
+ result.io = io;
+ }
+ oprot.writeMessageBegin(new TMessage("getTableNames", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class getColumnDescriptors implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ getColumnDescriptors_args args = new getColumnDescriptors_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ getColumnDescriptors_result result = new getColumnDescriptors_result();
+ try {
+ result.success = iface_.getColumnDescriptors(args.tableName);
+ } catch (IOError io) {
+ result.io = io;
+ }
+ oprot.writeMessageBegin(new TMessage("getColumnDescriptors", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class getTableRegions implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ getTableRegions_args args = new getTableRegions_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ getTableRegions_result result = new getTableRegions_result();
+ try {
+ result.success = iface_.getTableRegions(args.tableName);
+ } catch (IOError io) {
+ result.io = io;
+ }
+ oprot.writeMessageBegin(new TMessage("getTableRegions", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class createTable implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ createTable_args args = new createTable_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ createTable_result result = new createTable_result();
+ try {
+ iface_.createTable(args.tableName, args.columnFamilies);
+ } catch (IOError io) {
+ result.io = io;
+ } catch (IllegalArgument ia) {
+ result.ia = ia;
+ } catch (AlreadyExists exist) {
+ result.exist = exist;
+ }
+ oprot.writeMessageBegin(new TMessage("createTable", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class deleteTable implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ deleteTable_args args = new deleteTable_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ deleteTable_result result = new deleteTable_result();
+ try {
+ iface_.deleteTable(args.tableName);
+ } catch (IOError io) {
+ result.io = io;
+ }
+ oprot.writeMessageBegin(new TMessage("deleteTable", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class get implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ get_args args = new get_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ get_result result = new get_result();
+ try {
+ result.success = iface_.get(args.tableName, args.row, args.column);
+ } catch (IOError io) {
+ result.io = io;
+ }
+ oprot.writeMessageBegin(new TMessage("get", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class getVer implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ getVer_args args = new getVer_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ getVer_result result = new getVer_result();
+ try {
+ result.success = iface_.getVer(args.tableName, args.row, args.column, args.numVersions);
+ } catch (IOError io) {
+ result.io = io;
+ }
+ oprot.writeMessageBegin(new TMessage("getVer", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class getVerTs implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ getVerTs_args args = new getVerTs_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ getVerTs_result result = new getVerTs_result();
+ try {
+ result.success = iface_.getVerTs(args.tableName, args.row, args.column, args.timestamp, args.numVersions);
+ } catch (IOError io) {
+ result.io = io;
+ }
+ oprot.writeMessageBegin(new TMessage("getVerTs", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class getRow implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ getRow_args args = new getRow_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ getRow_result result = new getRow_result();
+ try {
+ result.success = iface_.getRow(args.tableName, args.row);
+ } catch (IOError io) {
+ result.io = io;
+ }
+ oprot.writeMessageBegin(new TMessage("getRow", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class getRowWithColumns implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ getRowWithColumns_args args = new getRowWithColumns_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ getRowWithColumns_result result = new getRowWithColumns_result();
+ try {
+ result.success = iface_.getRowWithColumns(args.tableName, args.row, args.columns);
+ } catch (IOError io) {
+ result.io = io;
+ }
+ oprot.writeMessageBegin(new TMessage("getRowWithColumns", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class getRowTs implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ getRowTs_args args = new getRowTs_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ getRowTs_result result = new getRowTs_result();
+ try {
+ result.success = iface_.getRowTs(args.tableName, args.row, args.timestamp);
+ } catch (IOError io) {
+ result.io = io;
+ }
+ oprot.writeMessageBegin(new TMessage("getRowTs", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class getRowWithColumnsTs implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ getRowWithColumnsTs_args args = new getRowWithColumnsTs_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ getRowWithColumnsTs_result result = new getRowWithColumnsTs_result();
+ try {
+ result.success = iface_.getRowWithColumnsTs(args.tableName, args.row, args.columns, args.timestamp);
+ } catch (IOError io) {
+ result.io = io;
+ }
+ oprot.writeMessageBegin(new TMessage("getRowWithColumnsTs", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class mutateRow implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ mutateRow_args args = new mutateRow_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ mutateRow_result result = new mutateRow_result();
+ try {
+ iface_.mutateRow(args.tableName, args.row, args.mutations);
+ } catch (IOError io) {
+ result.io = io;
+ } catch (IllegalArgument ia) {
+ result.ia = ia;
+ }
+ oprot.writeMessageBegin(new TMessage("mutateRow", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class mutateRowTs implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ mutateRowTs_args args = new mutateRowTs_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ mutateRowTs_result result = new mutateRowTs_result();
+ try {
+ iface_.mutateRowTs(args.tableName, args.row, args.mutations, args.timestamp);
+ } catch (IOError io) {
+ result.io = io;
+ } catch (IllegalArgument ia) {
+ result.ia = ia;
+ }
+ oprot.writeMessageBegin(new TMessage("mutateRowTs", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class mutateRows implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ mutateRows_args args = new mutateRows_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ mutateRows_result result = new mutateRows_result();
+ try {
+ iface_.mutateRows(args.tableName, args.rowBatches);
+ } catch (IOError io) {
+ result.io = io;
+ } catch (IllegalArgument ia) {
+ result.ia = ia;
+ }
+ oprot.writeMessageBegin(new TMessage("mutateRows", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class mutateRowsTs implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ mutateRowsTs_args args = new mutateRowsTs_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ mutateRowsTs_result result = new mutateRowsTs_result();
+ try {
+ iface_.mutateRowsTs(args.tableName, args.rowBatches, args.timestamp);
+ } catch (IOError io) {
+ result.io = io;
+ } catch (IllegalArgument ia) {
+ result.ia = ia;
+ }
+ oprot.writeMessageBegin(new TMessage("mutateRowsTs", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class atomicIncrement implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ atomicIncrement_args args = new atomicIncrement_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ atomicIncrement_result result = new atomicIncrement_result();
+ try {
+ result.success = iface_.atomicIncrement(args.tableName, args.row, args.column, args.value);
+ result.__isset.success = true;
+ } catch (IOError io) {
+ result.io = io;
+ } catch (IllegalArgument ia) {
+ result.ia = ia;
+ }
+ oprot.writeMessageBegin(new TMessage("atomicIncrement", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class deleteAll implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ deleteAll_args args = new deleteAll_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ deleteAll_result result = new deleteAll_result();
+ try {
+ iface_.deleteAll(args.tableName, args.row, args.column);
+ } catch (IOError io) {
+ result.io = io;
+ }
+ oprot.writeMessageBegin(new TMessage("deleteAll", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class deleteAllTs implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ deleteAllTs_args args = new deleteAllTs_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ deleteAllTs_result result = new deleteAllTs_result();
+ try {
+ iface_.deleteAllTs(args.tableName, args.row, args.column, args.timestamp);
+ } catch (IOError io) {
+ result.io = io;
+ }
+ oprot.writeMessageBegin(new TMessage("deleteAllTs", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class deleteAllRow implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ deleteAllRow_args args = new deleteAllRow_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ deleteAllRow_result result = new deleteAllRow_result();
+ try {
+ iface_.deleteAllRow(args.tableName, args.row);
+ } catch (IOError io) {
+ result.io = io;
+ }
+ oprot.writeMessageBegin(new TMessage("deleteAllRow", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class deleteAllRowTs implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ deleteAllRowTs_args args = new deleteAllRowTs_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ deleteAllRowTs_result result = new deleteAllRowTs_result();
+ try {
+ iface_.deleteAllRowTs(args.tableName, args.row, args.timestamp);
+ } catch (IOError io) {
+ result.io = io;
+ }
+ oprot.writeMessageBegin(new TMessage("deleteAllRowTs", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class scannerOpen implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ scannerOpen_args args = new scannerOpen_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ scannerOpen_result result = new scannerOpen_result();
+ try {
+ result.success = iface_.scannerOpen(args.tableName, args.startRow, args.columns);
+ result.__isset.success = true;
+ } catch (IOError io) {
+ result.io = io;
+ }
+ oprot.writeMessageBegin(new TMessage("scannerOpen", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class scannerOpenWithStop implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ scannerOpenWithStop_args args = new scannerOpenWithStop_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ scannerOpenWithStop_result result = new scannerOpenWithStop_result();
+ try {
+ result.success = iface_.scannerOpenWithStop(args.tableName, args.startRow, args.stopRow, args.columns);
+ result.__isset.success = true;
+ } catch (IOError io) {
+ result.io = io;
+ }
+ oprot.writeMessageBegin(new TMessage("scannerOpenWithStop", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class scannerOpenTs implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ scannerOpenTs_args args = new scannerOpenTs_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ scannerOpenTs_result result = new scannerOpenTs_result();
+ try {
+ result.success = iface_.scannerOpenTs(args.tableName, args.startRow, args.columns, args.timestamp);
+ result.__isset.success = true;
+ } catch (IOError io) {
+ result.io = io;
+ }
+ oprot.writeMessageBegin(new TMessage("scannerOpenTs", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class scannerOpenWithStopTs implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ scannerOpenWithStopTs_args args = new scannerOpenWithStopTs_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ scannerOpenWithStopTs_result result = new scannerOpenWithStopTs_result();
+ try {
+ result.success = iface_.scannerOpenWithStopTs(args.tableName, args.startRow, args.stopRow, args.columns, args.timestamp);
+ result.__isset.success = true;
+ } catch (IOError io) {
+ result.io = io;
+ }
+ oprot.writeMessageBegin(new TMessage("scannerOpenWithStopTs", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class scannerGet implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ scannerGet_args args = new scannerGet_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ scannerGet_result result = new scannerGet_result();
+ try {
+ result.success = iface_.scannerGet(args.id);
+ } catch (IOError io) {
+ result.io = io;
+ } catch (IllegalArgument ia) {
+ result.ia = ia;
+ }
+ oprot.writeMessageBegin(new TMessage("scannerGet", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class scannerGetList implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ scannerGetList_args args = new scannerGetList_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ scannerGetList_result result = new scannerGetList_result();
+ try {
+ result.success = iface_.scannerGetList(args.id, args.nbRows);
+ } catch (IOError io) {
+ result.io = io;
+ } catch (IllegalArgument ia) {
+ result.ia = ia;
+ }
+ oprot.writeMessageBegin(new TMessage("scannerGetList", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ private class scannerClose implements ProcessFunction {
+ public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+ {
+ scannerClose_args args = new scannerClose_args();
+ args.read(iprot);
+ iprot.readMessageEnd();
+ scannerClose_result result = new scannerClose_result();
+ try {
+ iface_.scannerClose(args.id);
+ } catch (IOError io) {
+ result.io = io;
+ } catch (IllegalArgument ia) {
+ result.ia = ia;
+ }
+ oprot.writeMessageBegin(new TMessage("scannerClose", TMessageType.REPLY, seqid));
+ result.write(oprot);
+ oprot.writeMessageEnd();
+ oprot.getTransport().flush();
+ }
+
+ }
+
+ }
+
+ public static class enableTable_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("enableTable_args");
+ private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+
+ public byte[] tableName;
+ public static final int TABLENAME = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(TABLENAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(enableTable_args.class, metaDataMap);
+ }
+
+ public enableTable_args() {
+ }
+
+ public enableTable_args(
+ byte[] tableName)
+ {
+ this();
+ this.tableName = tableName;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public enableTable_args(enableTable_args other) {
+ if (other.isSetTableName()) {
+ this.tableName = other.tableName;
+ }
+ }
+
+ @Override
+ public enableTable_args clone() {
+ return new enableTable_args(this);
+ }
+
+ public byte[] getTableName() {
+ return this.tableName;
+ }
+
+ public void setTableName(byte[] tableName) {
+ this.tableName = tableName;
+ }
+
+ public void unsetTableName() {
+ this.tableName = null;
+ }
+
+ // Returns true if field tableName is set (has been assigned a value) and false otherwise
+ public boolean isSetTableName() {
+ return this.tableName != null;
+ }
+
+ public void setTableNameIsSet(boolean value) {
+ if (!value) {
+ this.tableName = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case TABLENAME:
+ if (value == null) {
+ unsetTableName();
+ } else {
+ setTableName((byte[])value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return getTableName();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return isSetTableName();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof enableTable_args)
+ return this.equals((enableTable_args)that);
+ return false;
+ }
+
+ public boolean equals(enableTable_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_tableName = true && this.isSetTableName();
+ boolean that_present_tableName = true && that.isSetTableName();
+ if (this_present_tableName || that_present_tableName) {
+ if (!(this_present_tableName && that_present_tableName))
+ return false;
+ if (!java.util.Arrays.equals(this.tableName, that.tableName))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case TABLENAME:
+ if (field.type == TType.STRING) {
+ this.tableName = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.tableName != null) {
+ oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+ oprot.writeBinary(this.tableName);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("enableTable_args(");
+ boolean first = true;
+
+ sb.append("tableName:");
+ if (this.tableName == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.tableName);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class enableTable_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("enableTable_result");
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+ public IOError io;
+ public static final int IO = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(enableTable_result.class, metaDataMap);
+ }
+
+ public enableTable_result() {
+ }
+
+ public enableTable_result(
+ IOError io)
+ {
+ this();
+ this.io = io;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public enableTable_result(enableTable_result other) {
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ }
+
+ @Override
+ public enableTable_result clone() {
+ return new enableTable_result(this);
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+ // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case IO:
+ return getIo();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case IO:
+ return isSetIo();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof enableTable_result)
+ return this.equals((enableTable_result)that);
+ return false;
+ }
+
+ public boolean equals(enableTable_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("enableTable_result(");
+ boolean first = true;
+
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class disableTable_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("disableTable_args");
+ private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+
+ public byte[] tableName;
+ public static final int TABLENAME = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(TABLENAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(disableTable_args.class, metaDataMap);
+ }
+
+ public disableTable_args() {
+ }
+
+ public disableTable_args(
+ byte[] tableName)
+ {
+ this();
+ this.tableName = tableName;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public disableTable_args(disableTable_args other) {
+ if (other.isSetTableName()) {
+ this.tableName = other.tableName;
+ }
+ }
+
+ @Override
+ public disableTable_args clone() {
+ return new disableTable_args(this);
+ }
+
+ public byte[] getTableName() {
+ return this.tableName;
+ }
+
+ public void setTableName(byte[] tableName) {
+ this.tableName = tableName;
+ }
+
+ public void unsetTableName() {
+ this.tableName = null;
+ }
+
+ // Returns true if field tableName is set (has been assigned a value) and false otherwise
+ public boolean isSetTableName() {
+ return this.tableName != null;
+ }
+
+ public void setTableNameIsSet(boolean value) {
+ if (!value) {
+ this.tableName = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case TABLENAME:
+ if (value == null) {
+ unsetTableName();
+ } else {
+ setTableName((byte[])value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return getTableName();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return isSetTableName();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof disableTable_args)
+ return this.equals((disableTable_args)that);
+ return false;
+ }
+
+ public boolean equals(disableTable_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_tableName = true && this.isSetTableName();
+ boolean that_present_tableName = true && that.isSetTableName();
+ if (this_present_tableName || that_present_tableName) {
+ if (!(this_present_tableName && that_present_tableName))
+ return false;
+ if (!java.util.Arrays.equals(this.tableName, that.tableName))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case TABLENAME:
+ if (field.type == TType.STRING) {
+ this.tableName = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.tableName != null) {
+ oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+ oprot.writeBinary(this.tableName);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("disableTable_args(");
+ boolean first = true;
+
+ sb.append("tableName:");
+ if (this.tableName == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.tableName);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class disableTable_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("disableTable_result");
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+ public IOError io;
+ public static final int IO = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(disableTable_result.class, metaDataMap);
+ }
+
+ public disableTable_result() {
+ }
+
+ public disableTable_result(
+ IOError io)
+ {
+ this();
+ this.io = io;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public disableTable_result(disableTable_result other) {
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ }
+
+ @Override
+ public disableTable_result clone() {
+ return new disableTable_result(this);
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+ // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case IO:
+ return getIo();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case IO:
+ return isSetIo();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof disableTable_result)
+ return this.equals((disableTable_result)that);
+ return false;
+ }
+
+ public boolean equals(disableTable_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("disableTable_result(");
+ boolean first = true;
+
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class isTableEnabled_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("isTableEnabled_args");
+ private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+
+ public byte[] tableName;
+ public static final int TABLENAME = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(TABLENAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(isTableEnabled_args.class, metaDataMap);
+ }
+
+ public isTableEnabled_args() {
+ }
+
+ public isTableEnabled_args(
+ byte[] tableName)
+ {
+ this();
+ this.tableName = tableName;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public isTableEnabled_args(isTableEnabled_args other) {
+ if (other.isSetTableName()) {
+ this.tableName = other.tableName;
+ }
+ }
+
+ @Override
+ public isTableEnabled_args clone() {
+ return new isTableEnabled_args(this);
+ }
+
+ public byte[] getTableName() {
+ return this.tableName;
+ }
+
+ public void setTableName(byte[] tableName) {
+ this.tableName = tableName;
+ }
+
+ public void unsetTableName() {
+ this.tableName = null;
+ }
+
+    // Returns true if field tableName is set (has been assigned a value) and false otherwise
+ public boolean isSetTableName() {
+ return this.tableName != null;
+ }
+
+ public void setTableNameIsSet(boolean value) {
+ if (!value) {
+ this.tableName = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case TABLENAME:
+ if (value == null) {
+ unsetTableName();
+ } else {
+ setTableName((byte[])value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return getTableName();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+    // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return isSetTableName();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof isTableEnabled_args)
+ return this.equals((isTableEnabled_args)that);
+ return false;
+ }
+
+ public boolean equals(isTableEnabled_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_tableName = true && this.isSetTableName();
+ boolean that_present_tableName = true && that.isSetTableName();
+ if (this_present_tableName || that_present_tableName) {
+ if (!(this_present_tableName && that_present_tableName))
+ return false;
+ if (!java.util.Arrays.equals(this.tableName, that.tableName))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case TABLENAME:
+ if (field.type == TType.STRING) {
+ this.tableName = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.tableName != null) {
+ oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+ oprot.writeBinary(this.tableName);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("isTableEnabled_args(");
+ boolean first = true;
+
+ sb.append("tableName:");
+ if (this.tableName == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.tableName);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class isTableEnabled_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("isTableEnabled_result");
+ private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.BOOL, (short)0);
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+ public boolean success;
+ public static final int SUCCESS = 0;
+ public IOError io;
+ public static final int IO = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ public boolean success = false;
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.BOOL)));
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(isTableEnabled_result.class, metaDataMap);
+ }
+
+ public isTableEnabled_result() {
+ }
+
+ public isTableEnabled_result(
+ boolean success,
+ IOError io)
+ {
+ this();
+ this.success = success;
+ this.__isset.success = true;
+ this.io = io;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public isTableEnabled_result(isTableEnabled_result other) {
+ __isset.success = other.__isset.success;
+ this.success = other.success;
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ }
+
+ @Override
+ public isTableEnabled_result clone() {
+ return new isTableEnabled_result(this);
+ }
+
+ public boolean isSuccess() {
+ return this.success;
+ }
+
+ public void setSuccess(boolean success) {
+ this.success = success;
+ this.__isset.success = true;
+ }
+
+ public void unsetSuccess() {
+ this.__isset.success = false;
+ }
+
+    // Returns true if field success is set (has been assigned a value) and false otherwise
+ public boolean isSetSuccess() {
+ return this.__isset.success;
+ }
+
+ public void setSuccessIsSet(boolean value) {
+ this.__isset.success = value;
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+    // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case SUCCESS:
+ if (value == null) {
+ unsetSuccess();
+ } else {
+ setSuccess((Boolean)value);
+ }
+ break;
+
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return new Boolean(isSuccess());
+
+ case IO:
+ return getIo();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+    // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return isSetSuccess();
+ case IO:
+ return isSetIo();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof isTableEnabled_result)
+ return this.equals((isTableEnabled_result)that);
+ return false;
+ }
+
+ public boolean equals(isTableEnabled_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_success = true;
+ boolean that_present_success = true;
+ if (this_present_success || that_present_success) {
+ if (!(this_present_success && that_present_success))
+ return false;
+ if (this.success != that.success)
+ return false;
+ }
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case SUCCESS:
+ if (field.type == TType.BOOL) {
+ this.success = iprot.readBool();
+ this.__isset.success = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetSuccess()) {
+ oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+ oprot.writeBool(this.success);
+ oprot.writeFieldEnd();
+ } else if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("isTableEnabled_result(");
+ boolean first = true;
+
+ sb.append("success:");
+ sb.append(this.success);
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
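+
+  // Usage sketch (illustrative only, not emitted by the Thrift generator): callers
+  // normally reach isTableEnabled through the generated Hbase.Client instead of
+  // building the _args/_result structs above by hand. Host, port and table name
+  // here are hypothetical placeholders.
+  //
+  //   TTransport transport = new TSocket("localhost", 9090);
+  //   transport.open();
+  //   Hbase.Client client = new Hbase.Client(new TBinaryProtocol(transport));
+  //   boolean enabled = client.isTableEnabled("myTable".getBytes());
+  //   transport.close();
+  //
+  // The client writes the request via isTableEnabled_args.write(TProtocol) and reads
+  // the reply into isTableEnabled_result; a set io field is rethrown as IOError.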
+
+ public static class compact_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("compact_args");
+ private static final TField TABLE_NAME_OR_REGION_NAME_FIELD_DESC = new TField("tableNameOrRegionName", TType.STRING, (short)1);
+
+ public byte[] tableNameOrRegionName;
+ public static final int TABLENAMEORREGIONNAME = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(TABLENAMEORREGIONNAME, new FieldMetaData("tableNameOrRegionName", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(compact_args.class, metaDataMap);
+ }
+
+ public compact_args() {
+ }
+
+ public compact_args(
+ byte[] tableNameOrRegionName)
+ {
+ this();
+ this.tableNameOrRegionName = tableNameOrRegionName;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public compact_args(compact_args other) {
+ if (other.isSetTableNameOrRegionName()) {
+ this.tableNameOrRegionName = other.tableNameOrRegionName;
+ }
+ }
+
+ @Override
+ public compact_args clone() {
+ return new compact_args(this);
+ }
+
+ public byte[] getTableNameOrRegionName() {
+ return this.tableNameOrRegionName;
+ }
+
+ public void setTableNameOrRegionName(byte[] tableNameOrRegionName) {
+ this.tableNameOrRegionName = tableNameOrRegionName;
+ }
+
+ public void unsetTableNameOrRegionName() {
+ this.tableNameOrRegionName = null;
+ }
+
+    // Returns true if field tableNameOrRegionName is set (has been assigned a value) and false otherwise
+ public boolean isSetTableNameOrRegionName() {
+ return this.tableNameOrRegionName != null;
+ }
+
+ public void setTableNameOrRegionNameIsSet(boolean value) {
+ if (!value) {
+ this.tableNameOrRegionName = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case TABLENAMEORREGIONNAME:
+ if (value == null) {
+ unsetTableNameOrRegionName();
+ } else {
+ setTableNameOrRegionName((byte[])value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case TABLENAMEORREGIONNAME:
+ return getTableNameOrRegionName();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+    // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case TABLENAMEORREGIONNAME:
+ return isSetTableNameOrRegionName();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof compact_args)
+ return this.equals((compact_args)that);
+ return false;
+ }
+
+ public boolean equals(compact_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_tableNameOrRegionName = true && this.isSetTableNameOrRegionName();
+ boolean that_present_tableNameOrRegionName = true && that.isSetTableNameOrRegionName();
+ if (this_present_tableNameOrRegionName || that_present_tableNameOrRegionName) {
+ if (!(this_present_tableNameOrRegionName && that_present_tableNameOrRegionName))
+ return false;
+ if (!java.util.Arrays.equals(this.tableNameOrRegionName, that.tableNameOrRegionName))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case TABLENAMEORREGIONNAME:
+ if (field.type == TType.STRING) {
+ this.tableNameOrRegionName = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.tableNameOrRegionName != null) {
+ oprot.writeFieldBegin(TABLE_NAME_OR_REGION_NAME_FIELD_DESC);
+ oprot.writeBinary(this.tableNameOrRegionName);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("compact_args(");
+ boolean first = true;
+
+ sb.append("tableNameOrRegionName:");
+ if (this.tableNameOrRegionName == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.tableNameOrRegionName);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class compact_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("compact_result");
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+ public IOError io;
+ public static final int IO = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(compact_result.class, metaDataMap);
+ }
+
+ public compact_result() {
+ }
+
+ public compact_result(
+ IOError io)
+ {
+ this();
+ this.io = io;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public compact_result(compact_result other) {
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ }
+
+ @Override
+ public compact_result clone() {
+ return new compact_result(this);
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+    // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case IO:
+ return getIo();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+    // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case IO:
+ return isSetIo();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof compact_result)
+ return this.equals((compact_result)that);
+ return false;
+ }
+
+ public boolean equals(compact_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("compact_result(");
+ boolean first = true;
+
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class majorCompact_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("majorCompact_args");
+ private static final TField TABLE_NAME_OR_REGION_NAME_FIELD_DESC = new TField("tableNameOrRegionName", TType.STRING, (short)1);
+
+ public byte[] tableNameOrRegionName;
+ public static final int TABLENAMEORREGIONNAME = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(TABLENAMEORREGIONNAME, new FieldMetaData("tableNameOrRegionName", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(majorCompact_args.class, metaDataMap);
+ }
+
+ public majorCompact_args() {
+ }
+
+ public majorCompact_args(
+ byte[] tableNameOrRegionName)
+ {
+ this();
+ this.tableNameOrRegionName = tableNameOrRegionName;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public majorCompact_args(majorCompact_args other) {
+ if (other.isSetTableNameOrRegionName()) {
+ this.tableNameOrRegionName = other.tableNameOrRegionName;
+ }
+ }
+
+ @Override
+ public majorCompact_args clone() {
+ return new majorCompact_args(this);
+ }
+
+ public byte[] getTableNameOrRegionName() {
+ return this.tableNameOrRegionName;
+ }
+
+ public void setTableNameOrRegionName(byte[] tableNameOrRegionName) {
+ this.tableNameOrRegionName = tableNameOrRegionName;
+ }
+
+ public void unsetTableNameOrRegionName() {
+ this.tableNameOrRegionName = null;
+ }
+
+    // Returns true if field tableNameOrRegionName is set (has been assigned a value) and false otherwise
+ public boolean isSetTableNameOrRegionName() {
+ return this.tableNameOrRegionName != null;
+ }
+
+ public void setTableNameOrRegionNameIsSet(boolean value) {
+ if (!value) {
+ this.tableNameOrRegionName = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case TABLENAMEORREGIONNAME:
+ if (value == null) {
+ unsetTableNameOrRegionName();
+ } else {
+ setTableNameOrRegionName((byte[])value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case TABLENAMEORREGIONNAME:
+ return getTableNameOrRegionName();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+    // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case TABLENAMEORREGIONNAME:
+ return isSetTableNameOrRegionName();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof majorCompact_args)
+ return this.equals((majorCompact_args)that);
+ return false;
+ }
+
+ public boolean equals(majorCompact_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_tableNameOrRegionName = true && this.isSetTableNameOrRegionName();
+ boolean that_present_tableNameOrRegionName = true && that.isSetTableNameOrRegionName();
+ if (this_present_tableNameOrRegionName || that_present_tableNameOrRegionName) {
+ if (!(this_present_tableNameOrRegionName && that_present_tableNameOrRegionName))
+ return false;
+ if (!java.util.Arrays.equals(this.tableNameOrRegionName, that.tableNameOrRegionName))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case TABLENAMEORREGIONNAME:
+ if (field.type == TType.STRING) {
+ this.tableNameOrRegionName = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.tableNameOrRegionName != null) {
+ oprot.writeFieldBegin(TABLE_NAME_OR_REGION_NAME_FIELD_DESC);
+ oprot.writeBinary(this.tableNameOrRegionName);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("majorCompact_args(");
+ boolean first = true;
+
+ sb.append("tableNameOrRegionName:");
+ if (this.tableNameOrRegionName == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.tableNameOrRegionName);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class majorCompact_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("majorCompact_result");
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+ public IOError io;
+ public static final int IO = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(majorCompact_result.class, metaDataMap);
+ }
+
+ public majorCompact_result() {
+ }
+
+ public majorCompact_result(
+ IOError io)
+ {
+ this();
+ this.io = io;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public majorCompact_result(majorCompact_result other) {
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ }
+
+ @Override
+ public majorCompact_result clone() {
+ return new majorCompact_result(this);
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+    // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case IO:
+ return getIo();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+    // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case IO:
+ return isSetIo();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof majorCompact_result)
+ return this.equals((majorCompact_result)that);
+ return false;
+ }
+
+ public boolean equals(majorCompact_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("majorCompact_result(");
+ boolean first = true;
+
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
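+
+  // Usage sketch (illustrative only; assumes the same hypothetical Hbase.Client as
+  // in the isTableEnabled note above). Both calls accept either a table name or an
+  // encoded region name and report failures through the io field of their _result
+  // structs, which the client surfaces as IOError.
+  //
+  //   client.compact("myTable".getBytes());       // request a compaction
+  //   client.majorCompact("myTable".getBytes());  // request a major compaction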
+
+ public static class getTableNames_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("getTableNames_args");
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(getTableNames_args.class, metaDataMap);
+ }
+
+ public getTableNames_args() {
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public getTableNames_args(getTableNames_args other) {
+ }
+
+ @Override
+ public getTableNames_args clone() {
+ return new getTableNames_args(this);
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+    // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof getTableNames_args)
+ return this.equals((getTableNames_args)that);
+ return false;
+ }
+
+ public boolean equals(getTableNames_args that) {
+ if (that == null)
+ return false;
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("getTableNames_args(");
+ boolean first = true;
+
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class getTableNames_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("getTableNames_result");
+ private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.LIST, (short)0);
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+ public List<byte[]> success;
+ public static final int SUCCESS = 0;
+ public IOError io;
+ public static final int IO = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+ new ListMetaData(TType.LIST,
+ new FieldValueMetaData(TType.STRING))));
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(getTableNames_result.class, metaDataMap);
+ }
+
+ public getTableNames_result() {
+ }
+
+ public getTableNames_result(
+ List<byte[]> success,
+ IOError io)
+ {
+ this();
+ this.success = success;
+ this.io = io;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public getTableNames_result(getTableNames_result other) {
+ if (other.isSetSuccess()) {
+ List<byte[]> __this__success = new ArrayList<byte[]>();
+ for (byte[] other_element : other.success) {
+ __this__success.add(other_element);
+ }
+ this.success = __this__success;
+ }
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ }
+
+ @Override
+ public getTableNames_result clone() {
+ return new getTableNames_result(this);
+ }
+
+ public int getSuccessSize() {
+ return (this.success == null) ? 0 : this.success.size();
+ }
+
+ public java.util.Iterator<byte[]> getSuccessIterator() {
+ return (this.success == null) ? null : this.success.iterator();
+ }
+
+ public void addToSuccess(byte[] elem) {
+ if (this.success == null) {
+ this.success = new ArrayList<byte[]>();
+ }
+ this.success.add(elem);
+ }
+
+ public List<byte[]> getSuccess() {
+ return this.success;
+ }
+
+ public void setSuccess(List<byte[]> success) {
+ this.success = success;
+ }
+
+ public void unsetSuccess() {
+ this.success = null;
+ }
+
+    // Returns true if field success is set (has been assigned a value) and false otherwise
+ public boolean isSetSuccess() {
+ return this.success != null;
+ }
+
+ public void setSuccessIsSet(boolean value) {
+ if (!value) {
+ this.success = null;
+ }
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+    // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case SUCCESS:
+ if (value == null) {
+ unsetSuccess();
+ } else {
+ setSuccess((List<byte[]>)value);
+ }
+ break;
+
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return getSuccess();
+
+ case IO:
+ return getIo();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+    // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return isSetSuccess();
+ case IO:
+ return isSetIo();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof getTableNames_result)
+ return this.equals((getTableNames_result)that);
+ return false;
+ }
+
+ public boolean equals(getTableNames_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_success = true && this.isSetSuccess();
+ boolean that_present_success = true && that.isSetSuccess();
+ if (this_present_success || that_present_success) {
+ if (!(this_present_success && that_present_success))
+ return false;
+ if (!this.success.equals(that.success))
+ return false;
+ }
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case SUCCESS:
+ if (field.type == TType.LIST) {
+ {
+ TList _list9 = iprot.readListBegin();
+ this.success = new ArrayList<byte[]>(_list9.size);
+ for (int _i10 = 0; _i10 < _list9.size; ++_i10)
+ {
+ byte[] _elem11;
+ _elem11 = iprot.readBinary();
+ this.success.add(_elem11);
+ }
+ iprot.readListEnd();
+ }
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetSuccess()) {
+ oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+ {
+ oprot.writeListBegin(new TList(TType.STRING, this.success.size()));
+ for (byte[] _iter12 : this.success) {
+ oprot.writeBinary(_iter12);
+ }
+ oprot.writeListEnd();
+ }
+ oprot.writeFieldEnd();
+ } else if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("getTableNames_result(");
+ boolean first = true;
+
+ sb.append("success:");
+ if (this.success == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.success);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
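+
+  // Usage sketch (illustrative only; same hypothetical client): getTableNames takes
+  // no arguments and returns table names as raw byte arrays, carried in the success
+  // list of getTableNames_result.
+  //
+  //   for (byte[] name : client.getTableNames()) {
+  //     System.out.println(new String(name));
+  //   }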
+
+ public static class getColumnDescriptors_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("getColumnDescriptors_args");
+ private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+
+ public byte[] tableName;
+ public static final int TABLENAME = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(TABLENAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(getColumnDescriptors_args.class, metaDataMap);
+ }
+
+ public getColumnDescriptors_args() {
+ }
+
+ public getColumnDescriptors_args(
+ byte[] tableName)
+ {
+ this();
+ this.tableName = tableName;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public getColumnDescriptors_args(getColumnDescriptors_args other) {
+ if (other.isSetTableName()) {
+ this.tableName = other.tableName;
+ }
+ }
+
+ @Override
+ public getColumnDescriptors_args clone() {
+ return new getColumnDescriptors_args(this);
+ }
+
+ public byte[] getTableName() {
+ return this.tableName;
+ }
+
+ public void setTableName(byte[] tableName) {
+ this.tableName = tableName;
+ }
+
+ public void unsetTableName() {
+ this.tableName = null;
+ }
+
+    // Returns true if field tableName is set (has been assigned a value) and false otherwise
+ public boolean isSetTableName() {
+ return this.tableName != null;
+ }
+
+ public void setTableNameIsSet(boolean value) {
+ if (!value) {
+ this.tableName = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case TABLENAME:
+ if (value == null) {
+ unsetTableName();
+ } else {
+ setTableName((byte[])value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return getTableName();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+    // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return isSetTableName();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof getColumnDescriptors_args)
+ return this.equals((getColumnDescriptors_args)that);
+ return false;
+ }
+
+ public boolean equals(getColumnDescriptors_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_tableName = true && this.isSetTableName();
+ boolean that_present_tableName = true && that.isSetTableName();
+ if (this_present_tableName || that_present_tableName) {
+ if (!(this_present_tableName && that_present_tableName))
+ return false;
+ if (!java.util.Arrays.equals(this.tableName, that.tableName))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case TABLENAME:
+ if (field.type == TType.STRING) {
+ this.tableName = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.tableName != null) {
+ oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+ oprot.writeBinary(this.tableName);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("getColumnDescriptors_args(");
+ boolean first = true;
+
+ sb.append("tableName:");
+ if (this.tableName == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.tableName);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class getColumnDescriptors_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("getColumnDescriptors_result");
+ private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.MAP, (short)0);
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+ public Map<byte[],ColumnDescriptor> success;
+ public static final int SUCCESS = 0;
+ public IOError io;
+ public static final int IO = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+ new MapMetaData(TType.MAP,
+ new FieldValueMetaData(TType.STRING),
+ new StructMetaData(TType.STRUCT, ColumnDescriptor.class))));
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(getColumnDescriptors_result.class, metaDataMap);
+ }
+
+ public getColumnDescriptors_result() {
+ }
+
+ public getColumnDescriptors_result(
+ Map<byte[],ColumnDescriptor> success,
+ IOError io)
+ {
+ this();
+ this.success = success;
+ this.io = io;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public getColumnDescriptors_result(getColumnDescriptors_result other) {
+ if (other.isSetSuccess()) {
+ Map<byte[],ColumnDescriptor> __this__success = new HashMap<byte[],ColumnDescriptor>();
+ for (Map.Entry<byte[], ColumnDescriptor> other_element : other.success.entrySet()) {
+
+ byte[] other_element_key = other_element.getKey();
+ ColumnDescriptor other_element_value = other_element.getValue();
+
+ byte[] __this__success_copy_key = other_element_key;
+
+ ColumnDescriptor __this__success_copy_value = new ColumnDescriptor(other_element_value);
+
+ __this__success.put(__this__success_copy_key, __this__success_copy_value);
+ }
+ this.success = __this__success;
+ }
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ }
+
+ @Override
+ public getColumnDescriptors_result clone() {
+ return new getColumnDescriptors_result(this);
+ }
+
+ public int getSuccessSize() {
+ return (this.success == null) ? 0 : this.success.size();
+ }
+
+ public void putToSuccess(byte[] key, ColumnDescriptor val) {
+ if (this.success == null) {
+ this.success = new HashMap<byte[],ColumnDescriptor>();
+ }
+ this.success.put(key, val);
+ }
+
+ public Map<byte[],ColumnDescriptor> getSuccess() {
+ return this.success;
+ }
+
+ public void setSuccess(Map<byte[],ColumnDescriptor> success) {
+ this.success = success;
+ }
+
+ public void unsetSuccess() {
+ this.success = null;
+ }
+
+    // Returns true if field success is set (has been assigned a value) and false otherwise
+ public boolean isSetSuccess() {
+ return this.success != null;
+ }
+
+ public void setSuccessIsSet(boolean value) {
+ if (!value) {
+ this.success = null;
+ }
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+    // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case SUCCESS:
+ if (value == null) {
+ unsetSuccess();
+ } else {
+ setSuccess((Map<byte[],ColumnDescriptor>)value);
+ }
+ break;
+
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return getSuccess();
+
+ case IO:
+ return getIo();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+    // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return isSetSuccess();
+ case IO:
+ return isSetIo();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof getColumnDescriptors_result)
+ return this.equals((getColumnDescriptors_result)that);
+ return false;
+ }
+
+ public boolean equals(getColumnDescriptors_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_success = true && this.isSetSuccess();
+ boolean that_present_success = true && that.isSetSuccess();
+ if (this_present_success || that_present_success) {
+ if (!(this_present_success && that_present_success))
+ return false;
+ if (!this.success.equals(that.success))
+ return false;
+ }
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case SUCCESS:
+ if (field.type == TType.MAP) {
+ {
+ TMap _map13 = iprot.readMapBegin();
+ this.success = new HashMap<byte[],ColumnDescriptor>(2*_map13.size);
+ for (int _i14 = 0; _i14 < _map13.size; ++_i14)
+ {
+ byte[] _key15;
+ ColumnDescriptor _val16;
+ _key15 = iprot.readBinary();
+ _val16 = new ColumnDescriptor();
+ _val16.read(iprot);
+ this.success.put(_key15, _val16);
+ }
+ iprot.readMapEnd();
+ }
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetSuccess()) {
+ oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+ {
+ oprot.writeMapBegin(new TMap(TType.STRING, TType.STRUCT, this.success.size()));
+ for (Map.Entry<byte[], ColumnDescriptor> _iter17 : this.success.entrySet()) {
+ oprot.writeBinary(_iter17.getKey());
+ _iter17.getValue().write(oprot);
+ }
+ oprot.writeMapEnd();
+ }
+ oprot.writeFieldEnd();
+ } else if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("getColumnDescriptors_result(");
+ boolean first = true;
+
+ sb.append("success:");
+ if (this.success == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.success);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
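+
+  // Usage sketch (illustrative only; same hypothetical client): the result map is
+  // keyed by column family name as byte[]. Since byte arrays hash by identity,
+  // iterate over the entries rather than calling Map.get with a freshly built array.
+  //
+  //   Map<byte[], ColumnDescriptor> families =
+  //       client.getColumnDescriptors("myTable".getBytes());
+  //   for (Map.Entry<byte[], ColumnDescriptor> e : families.entrySet()) {
+  //     System.out.println(new String(e.getKey()) + " -> " + e.getValue());
+  //   }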
+
+ public static class getTableRegions_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("getTableRegions_args");
+ private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+
+ public byte[] tableName;
+ public static final int TABLENAME = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(TABLENAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(getTableRegions_args.class, metaDataMap);
+ }
+
+ public getTableRegions_args() {
+ }
+
+ public getTableRegions_args(
+ byte[] tableName)
+ {
+ this();
+ this.tableName = tableName;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public getTableRegions_args(getTableRegions_args other) {
+ if (other.isSetTableName()) {
+ this.tableName = other.tableName;
+ }
+ }
+
+ @Override
+ public getTableRegions_args clone() {
+ return new getTableRegions_args(this);
+ }
+
+ public byte[] getTableName() {
+ return this.tableName;
+ }
+
+ public void setTableName(byte[] tableName) {
+ this.tableName = tableName;
+ }
+
+ public void unsetTableName() {
+ this.tableName = null;
+ }
+
+    // Returns true if field tableName is set (has been assigned a value) and false otherwise
+ public boolean isSetTableName() {
+ return this.tableName != null;
+ }
+
+ public void setTableNameIsSet(boolean value) {
+ if (!value) {
+ this.tableName = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case TABLENAME:
+ if (value == null) {
+ unsetTableName();
+ } else {
+ setTableName((byte[])value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return getTableName();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+    // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return isSetTableName();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof getTableRegions_args)
+ return this.equals((getTableRegions_args)that);
+ return false;
+ }
+
+ public boolean equals(getTableRegions_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_tableName = true && this.isSetTableName();
+ boolean that_present_tableName = true && that.isSetTableName();
+ if (this_present_tableName || that_present_tableName) {
+ if (!(this_present_tableName && that_present_tableName))
+ return false;
+ if (!java.util.Arrays.equals(this.tableName, that.tableName))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case TABLENAME:
+ if (field.type == TType.STRING) {
+ this.tableName = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.tableName != null) {
+ oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+ oprot.writeBinary(this.tableName);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("getTableRegions_args(");
+ boolean first = true;
+
+ sb.append("tableName:");
+ if (this.tableName == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.tableName);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
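+
+  // Usage sketch (illustrative only; same hypothetical client): getTableRegions
+  // returns the regions of a table as TRegionInfo structs, carried in the success
+  // list of getTableRegions_result below.
+  //
+  //   for (TRegionInfo region : client.getTableRegions("myTable".getBytes())) {
+  //     System.out.println(region);
+  //   }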
+
+ public static class getTableRegions_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("getTableRegions_result");
+ private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.LIST, (short)0);
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+ public List<TRegionInfo> success;
+ public static final int SUCCESS = 0;
+ public IOError io;
+ public static final int IO = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+ new ListMetaData(TType.LIST,
+ new StructMetaData(TType.STRUCT, TRegionInfo.class))));
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(getTableRegions_result.class, metaDataMap);
+ }
+
+ public getTableRegions_result() {
+ }
+
+ public getTableRegions_result(
+ List<TRegionInfo> success,
+ IOError io)
+ {
+ this();
+ this.success = success;
+ this.io = io;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public getTableRegions_result(getTableRegions_result other) {
+ if (other.isSetSuccess()) {
+ List<TRegionInfo> __this__success = new ArrayList<TRegionInfo>();
+ for (TRegionInfo other_element : other.success) {
+ __this__success.add(new TRegionInfo(other_element));
+ }
+ this.success = __this__success;
+ }
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ }
+
+ @Override
+ public getTableRegions_result clone() {
+ return new getTableRegions_result(this);
+ }
+
+ public int getSuccessSize() {
+ return (this.success == null) ? 0 : this.success.size();
+ }
+
+ public java.util.Iterator<TRegionInfo> getSuccessIterator() {
+ return (this.success == null) ? null : this.success.iterator();
+ }
+
+ public void addToSuccess(TRegionInfo elem) {
+ if (this.success == null) {
+ this.success = new ArrayList<TRegionInfo>();
+ }
+ this.success.add(elem);
+ }
+
+ public List<TRegionInfo> getSuccess() {
+ return this.success;
+ }
+
+ public void setSuccess(List<TRegionInfo> success) {
+ this.success = success;
+ }
+
+ public void unsetSuccess() {
+ this.success = null;
+ }
+
+ // Returns true if field success is set (has been assigned a value) and false otherwise
+ public boolean isSetSuccess() {
+ return this.success != null;
+ }
+
+ public void setSuccessIsSet(boolean value) {
+ if (!value) {
+ this.success = null;
+ }
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+ // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case SUCCESS:
+ if (value == null) {
+ unsetSuccess();
+ } else {
+ setSuccess((List<TRegionInfo>)value);
+ }
+ break;
+
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return getSuccess();
+
+ case IO:
+ return getIo();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return isSetSuccess();
+ case IO:
+ return isSetIo();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof getTableRegions_result)
+ return this.equals((getTableRegions_result)that);
+ return false;
+ }
+
+ public boolean equals(getTableRegions_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_success = true && this.isSetSuccess();
+ boolean that_present_success = true && that.isSetSuccess();
+ if (this_present_success || that_present_success) {
+ if (!(this_present_success && that_present_success))
+ return false;
+ if (!this.success.equals(that.success))
+ return false;
+ }
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case SUCCESS:
+ if (field.type == TType.LIST) {
+ {
+ TList _list18 = iprot.readListBegin();
+ this.success = new ArrayList<TRegionInfo>(_list18.size);
+ for (int _i19 = 0; _i19 < _list18.size; ++_i19)
+ {
+ TRegionInfo _elem20;
+ _elem20 = new TRegionInfo();
+ _elem20.read(iprot);
+ this.success.add(_elem20);
+ }
+ iprot.readListEnd();
+ }
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetSuccess()) {
+ oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+ {
+ oprot.writeListBegin(new TList(TType.STRUCT, this.success.size()));
+ for (TRegionInfo _iter21 : this.success) {
+ _iter21.write(oprot);
+ }
+ oprot.writeListEnd();
+ }
+ oprot.writeFieldEnd();
+ } else if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("getTableRegions_result(");
+ boolean first = true;
+
+ sb.append("success:");
+ if (this.success == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.success);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class createTable_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("createTable_args");
+ private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+ private static final TField COLUMN_FAMILIES_FIELD_DESC = new TField("columnFamilies", TType.LIST, (short)2);
+
+ public byte[] tableName;
+ public static final int TABLENAME = 1;
+ public List<ColumnDescriptor> columnFamilies;
+ public static final int COLUMNFAMILIES = 2;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(TABLENAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(COLUMNFAMILIES, new FieldMetaData("columnFamilies", TFieldRequirementType.DEFAULT,
+ new ListMetaData(TType.LIST,
+ new StructMetaData(TType.STRUCT, ColumnDescriptor.class))));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(createTable_args.class, metaDataMap);
+ }
+
+ public createTable_args() {
+ }
+
+ public createTable_args(
+ byte[] tableName,
+ List<ColumnDescriptor> columnFamilies)
+ {
+ this();
+ this.tableName = tableName;
+ this.columnFamilies = columnFamilies;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public createTable_args(createTable_args other) {
+ if (other.isSetTableName()) {
+ this.tableName = other.tableName;
+ }
+ if (other.isSetColumnFamilies()) {
+ List<ColumnDescriptor> __this__columnFamilies = new ArrayList<ColumnDescriptor>();
+ for (ColumnDescriptor other_element : other.columnFamilies) {
+ __this__columnFamilies.add(new ColumnDescriptor(other_element));
+ }
+ this.columnFamilies = __this__columnFamilies;
+ }
+ }
+
+ @Override
+ public createTable_args clone() {
+ return new createTable_args(this);
+ }
+
+ public byte[] getTableName() {
+ return this.tableName;
+ }
+
+ public void setTableName(byte[] tableName) {
+ this.tableName = tableName;
+ }
+
+ public void unsetTableName() {
+ this.tableName = null;
+ }
+
+ // Returns true if field tableName is set (has been assigned a value) and false otherwise
+ public boolean isSetTableName() {
+ return this.tableName != null;
+ }
+
+ public void setTableNameIsSet(boolean value) {
+ if (!value) {
+ this.tableName = null;
+ }
+ }
+
+ public int getColumnFamiliesSize() {
+ return (this.columnFamilies == null) ? 0 : this.columnFamilies.size();
+ }
+
+ public java.util.Iterator<ColumnDescriptor> getColumnFamiliesIterator() {
+ return (this.columnFamilies == null) ? null : this.columnFamilies.iterator();
+ }
+
+ public void addToColumnFamilies(ColumnDescriptor elem) {
+ if (this.columnFamilies == null) {
+ this.columnFamilies = new ArrayList<ColumnDescriptor>();
+ }
+ this.columnFamilies.add(elem);
+ }
+
+ public List<ColumnDescriptor> getColumnFamilies() {
+ return this.columnFamilies;
+ }
+
+ public void setColumnFamilies(List<ColumnDescriptor> columnFamilies) {
+ this.columnFamilies = columnFamilies;
+ }
+
+ public void unsetColumnFamilies() {
+ this.columnFamilies = null;
+ }
+
+ // Returns true if field columnFamilies is set (has been assigned a value) and false otherwise
+ public boolean isSetColumnFamilies() {
+ return this.columnFamilies != null;
+ }
+
+ public void setColumnFamiliesIsSet(boolean value) {
+ if (!value) {
+ this.columnFamilies = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case TABLENAME:
+ if (value == null) {
+ unsetTableName();
+ } else {
+ setTableName((byte[])value);
+ }
+ break;
+
+ case COLUMNFAMILIES:
+ if (value == null) {
+ unsetColumnFamilies();
+ } else {
+ setColumnFamilies((List<ColumnDescriptor>)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return getTableName();
+
+ case COLUMNFAMILIES:
+ return getColumnFamilies();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return isSetTableName();
+ case COLUMNFAMILIES:
+ return isSetColumnFamilies();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof createTable_args)
+ return this.equals((createTable_args)that);
+ return false;
+ }
+
+ public boolean equals(createTable_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_tableName = true && this.isSetTableName();
+ boolean that_present_tableName = true && that.isSetTableName();
+ if (this_present_tableName || that_present_tableName) {
+ if (!(this_present_tableName && that_present_tableName))
+ return false;
+ if (!java.util.Arrays.equals(this.tableName, that.tableName))
+ return false;
+ }
+
+ boolean this_present_columnFamilies = true && this.isSetColumnFamilies();
+ boolean that_present_columnFamilies = true && that.isSetColumnFamilies();
+ if (this_present_columnFamilies || that_present_columnFamilies) {
+ if (!(this_present_columnFamilies && that_present_columnFamilies))
+ return false;
+ if (!this.columnFamilies.equals(that.columnFamilies))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case TABLENAME:
+ if (field.type == TType.STRING) {
+ this.tableName = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case COLUMNFAMILIES:
+ if (field.type == TType.LIST) {
+ {
+ TList _list22 = iprot.readListBegin();
+ this.columnFamilies = new ArrayList<ColumnDescriptor>(_list22.size);
+ for (int _i23 = 0; _i23 < _list22.size; ++_i23)
+ {
+ ColumnDescriptor _elem24;
+ _elem24 = new ColumnDescriptor();
+ _elem24.read(iprot);
+ this.columnFamilies.add(_elem24);
+ }
+ iprot.readListEnd();
+ }
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.tableName != null) {
+ oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+ oprot.writeBinary(this.tableName);
+ oprot.writeFieldEnd();
+ }
+ if (this.columnFamilies != null) {
+ oprot.writeFieldBegin(COLUMN_FAMILIES_FIELD_DESC);
+ {
+ oprot.writeListBegin(new TList(TType.STRUCT, this.columnFamilies.size()));
+ for (ColumnDescriptor _iter25 : this.columnFamilies) {
+ _iter25.write(oprot);
+ }
+ oprot.writeListEnd();
+ }
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("createTable_args(");
+ boolean first = true;
+
+ sb.append("tableName:");
+ if (this.tableName == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.tableName);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("columnFamilies:");
+ if (this.columnFamilies == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.columnFamilies);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class createTable_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("createTable_result");
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+ private static final TField IA_FIELD_DESC = new TField("ia", TType.STRUCT, (short)2);
+ private static final TField EXIST_FIELD_DESC = new TField("exist", TType.STRUCT, (short)3);
+
+ public IOError io;
+ public static final int IO = 1;
+ public IllegalArgument ia;
+ public static final int IA = 2;
+ public AlreadyExists exist;
+ public static final int EXIST = 3;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ put(IA, new FieldMetaData("ia", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ put(EXIST, new FieldMetaData("exist", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(createTable_result.class, metaDataMap);
+ }
+
+ public createTable_result() {
+ }
+
+ public createTable_result(
+ IOError io,
+ IllegalArgument ia,
+ AlreadyExists exist)
+ {
+ this();
+ this.io = io;
+ this.ia = ia;
+ this.exist = exist;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public createTable_result(createTable_result other) {
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ if (other.isSetIa()) {
+ this.ia = new IllegalArgument(other.ia);
+ }
+ if (other.isSetExist()) {
+ this.exist = new AlreadyExists(other.exist);
+ }
+ }
+
+ @Override
+ public createTable_result clone() {
+ return new createTable_result(this);
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+ // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public IllegalArgument getIa() {
+ return this.ia;
+ }
+
+ public void setIa(IllegalArgument ia) {
+ this.ia = ia;
+ }
+
+ public void unsetIa() {
+ this.ia = null;
+ }
+
+ // Returns true if field ia is set (has been assigned a value) and false otherwise
+ public boolean isSetIa() {
+ return this.ia != null;
+ }
+
+ public void setIaIsSet(boolean value) {
+ if (!value) {
+ this.ia = null;
+ }
+ }
+
+ public AlreadyExists getExist() {
+ return this.exist;
+ }
+
+ public void setExist(AlreadyExists exist) {
+ this.exist = exist;
+ }
+
+ public void unsetExist() {
+ this.exist = null;
+ }
+
+ // Returns true if field exist is set (has been assigned a value) and false otherwise
+ public boolean isSetExist() {
+ return this.exist != null;
+ }
+
+ public void setExistIsSet(boolean value) {
+ if (!value) {
+ this.exist = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ case IA:
+ if (value == null) {
+ unsetIa();
+ } else {
+ setIa((IllegalArgument)value);
+ }
+ break;
+
+ case EXIST:
+ if (value == null) {
+ unsetExist();
+ } else {
+ setExist((AlreadyExists)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case IO:
+ return getIo();
+
+ case IA:
+ return getIa();
+
+ case EXIST:
+ return getExist();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case IO:
+ return isSetIo();
+ case IA:
+ return isSetIa();
+ case EXIST:
+ return isSetExist();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof createTable_result)
+ return this.equals((createTable_result)that);
+ return false;
+ }
+
+ public boolean equals(createTable_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ boolean this_present_ia = true && this.isSetIa();
+ boolean that_present_ia = true && that.isSetIa();
+ if (this_present_ia || that_present_ia) {
+ if (!(this_present_ia && that_present_ia))
+ return false;
+ if (!this.ia.equals(that.ia))
+ return false;
+ }
+
+ boolean this_present_exist = true && this.isSetExist();
+ boolean that_present_exist = true && that.isSetExist();
+ if (this_present_exist || that_present_exist) {
+ if (!(this_present_exist && that_present_exist))
+ return false;
+ if (!this.exist.equals(that.exist))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case IA:
+ if (field.type == TType.STRUCT) {
+ this.ia = new IllegalArgument();
+ this.ia.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case EXIST:
+ if (field.type == TType.STRUCT) {
+ this.exist = new AlreadyExists();
+ this.exist.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ } else if (this.isSetIa()) {
+ oprot.writeFieldBegin(IA_FIELD_DESC);
+ this.ia.write(oprot);
+ oprot.writeFieldEnd();
+ } else if (this.isSetExist()) {
+ oprot.writeFieldBegin(EXIST_FIELD_DESC);
+ this.exist.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("createTable_result(");
+ boolean first = true;
+
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("ia:");
+ if (this.ia == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.ia);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("exist:");
+ if (this.exist == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.exist);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class deleteTable_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("deleteTable_args");
+ private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+
+ public byte[] tableName;
+ public static final int TABLENAME = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(TABLENAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(deleteTable_args.class, metaDataMap);
+ }
+
+ public deleteTable_args() {
+ }
+
+ public deleteTable_args(
+ byte[] tableName)
+ {
+ this();
+ this.tableName = tableName;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public deleteTable_args(deleteTable_args other) {
+ if (other.isSetTableName()) {
+ this.tableName = other.tableName;
+ }
+ }
+
+ @Override
+ public deleteTable_args clone() {
+ return new deleteTable_args(this);
+ }
+
+ public byte[] getTableName() {
+ return this.tableName;
+ }
+
+ public void setTableName(byte[] tableName) {
+ this.tableName = tableName;
+ }
+
+ public void unsetTableName() {
+ this.tableName = null;
+ }
+
+ // Returns true if field tableName is set (has been assigned a value) and false otherwise
+ public boolean isSetTableName() {
+ return this.tableName != null;
+ }
+
+ public void setTableNameIsSet(boolean value) {
+ if (!value) {
+ this.tableName = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case TABLENAME:
+ if (value == null) {
+ unsetTableName();
+ } else {
+ setTableName((byte[])value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return getTableName();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return isSetTableName();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof deleteTable_args)
+ return this.equals((deleteTable_args)that);
+ return false;
+ }
+
+ public boolean equals(deleteTable_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_tableName = true && this.isSetTableName();
+ boolean that_present_tableName = true && that.isSetTableName();
+ if (this_present_tableName || that_present_tableName) {
+ if (!(this_present_tableName && that_present_tableName))
+ return false;
+ if (!java.util.Arrays.equals(this.tableName, that.tableName))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case TABLENAME:
+ if (field.type == TType.STRING) {
+ this.tableName = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.tableName != null) {
+ oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+ oprot.writeBinary(this.tableName);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("deleteTable_args(");
+ boolean first = true;
+
+ sb.append("tableName:");
+ if (this.tableName == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.tableName);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class deleteTable_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("deleteTable_result");
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+ public IOError io;
+ public static final int IO = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(deleteTable_result.class, metaDataMap);
+ }
+
+ public deleteTable_result() {
+ }
+
+ public deleteTable_result(
+ IOError io)
+ {
+ this();
+ this.io = io;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public deleteTable_result(deleteTable_result other) {
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ }
+
+ @Override
+ public deleteTable_result clone() {
+ return new deleteTable_result(this);
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+ // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case IO:
+ return getIo();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case IO:
+ return isSetIo();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof deleteTable_result)
+ return this.equals((deleteTable_result)that);
+ return false;
+ }
+
+ public boolean equals(deleteTable_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("deleteTable_result(");
+ boolean first = true;
+
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class get_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("get_args");
+ private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+ private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)2);
+ private static final TField COLUMN_FIELD_DESC = new TField("column", TType.STRING, (short)3);
+
+ public byte[] tableName;
+ public static final int TABLENAME = 1;
+ public byte[] row;
+ public static final int ROW = 2;
+ public byte[] column;
+ public static final int COLUMN = 3;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(TABLENAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(COLUMN, new FieldMetaData("column", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(get_args.class, metaDataMap);
+ }
+
+ public get_args() {
+ }
+
+ public get_args(
+ byte[] tableName,
+ byte[] row,
+ byte[] column)
+ {
+ this();
+ this.tableName = tableName;
+ this.row = row;
+ this.column = column;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public get_args(get_args other) {
+ if (other.isSetTableName()) {
+ this.tableName = other.tableName;
+ }
+ if (other.isSetRow()) {
+ this.row = other.row;
+ }
+ if (other.isSetColumn()) {
+ this.column = other.column;
+ }
+ }
+
+ @Override
+ public get_args clone() {
+ return new get_args(this);
+ }
+
+ public byte[] getTableName() {
+ return this.tableName;
+ }
+
+ public void setTableName(byte[] tableName) {
+ this.tableName = tableName;
+ }
+
+ public void unsetTableName() {
+ this.tableName = null;
+ }
+
+ // Returns true if field tableName is set (has been assigned a value) and false otherwise
+ public boolean isSetTableName() {
+ return this.tableName != null;
+ }
+
+ public void setTableNameIsSet(boolean value) {
+ if (!value) {
+ this.tableName = null;
+ }
+ }
+
+ public byte[] getRow() {
+ return this.row;
+ }
+
+ public void setRow(byte[] row) {
+ this.row = row;
+ }
+
+ public void unsetRow() {
+ this.row = null;
+ }
+
+ // Returns true if field row is set (has been assigned a value) and false otherwise
+ public boolean isSetRow() {
+ return this.row != null;
+ }
+
+ public void setRowIsSet(boolean value) {
+ if (!value) {
+ this.row = null;
+ }
+ }
+
+ public byte[] getColumn() {
+ return this.column;
+ }
+
+ public void setColumn(byte[] column) {
+ this.column = column;
+ }
+
+ public void unsetColumn() {
+ this.column = null;
+ }
+
+ // Returns true if field column is set (has been assigned a value) and false otherwise
+ public boolean isSetColumn() {
+ return this.column != null;
+ }
+
+ public void setColumnIsSet(boolean value) {
+ if (!value) {
+ this.column = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case TABLENAME:
+ if (value == null) {
+ unsetTableName();
+ } else {
+ setTableName((byte[])value);
+ }
+ break;
+
+ case ROW:
+ if (value == null) {
+ unsetRow();
+ } else {
+ setRow((byte[])value);
+ }
+ break;
+
+ case COLUMN:
+ if (value == null) {
+ unsetColumn();
+ } else {
+ setColumn((byte[])value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return getTableName();
+
+ case ROW:
+ return getRow();
+
+ case COLUMN:
+ return getColumn();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return isSetTableName();
+ case ROW:
+ return isSetRow();
+ case COLUMN:
+ return isSetColumn();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof get_args)
+ return this.equals((get_args)that);
+ return false;
+ }
+
+ public boolean equals(get_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_tableName = true && this.isSetTableName();
+ boolean that_present_tableName = true && that.isSetTableName();
+ if (this_present_tableName || that_present_tableName) {
+ if (!(this_present_tableName && that_present_tableName))
+ return false;
+ if (!java.util.Arrays.equals(this.tableName, that.tableName))
+ return false;
+ }
+
+ boolean this_present_row = true && this.isSetRow();
+ boolean that_present_row = true && that.isSetRow();
+ if (this_present_row || that_present_row) {
+ if (!(this_present_row && that_present_row))
+ return false;
+ if (!java.util.Arrays.equals(this.row, that.row))
+ return false;
+ }
+
+ boolean this_present_column = true && this.isSetColumn();
+ boolean that_present_column = true && that.isSetColumn();
+ if (this_present_column || that_present_column) {
+ if (!(this_present_column && that_present_column))
+ return false;
+ if (!java.util.Arrays.equals(this.column, that.column))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case TABLENAME:
+ if (field.type == TType.STRING) {
+ this.tableName = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case ROW:
+ if (field.type == TType.STRING) {
+ this.row = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case COLUMN:
+ if (field.type == TType.STRING) {
+ this.column = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.tableName != null) {
+ oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+ oprot.writeBinary(this.tableName);
+ oprot.writeFieldEnd();
+ }
+ if (this.row != null) {
+ oprot.writeFieldBegin(ROW_FIELD_DESC);
+ oprot.writeBinary(this.row);
+ oprot.writeFieldEnd();
+ }
+ if (this.column != null) {
+ oprot.writeFieldBegin(COLUMN_FIELD_DESC);
+ oprot.writeBinary(this.column);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("get_args(");
+ boolean first = true;
+
+ sb.append("tableName:");
+ if (this.tableName == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.tableName);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("row:");
+ if (this.row == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.row);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("column:");
+ if (this.column == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.column);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class get_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("get_result");
+ private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.LIST, (short)0);
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+ public List<TCell> success;
+ public static final int SUCCESS = 0;
+ public IOError io;
+ public static final int IO = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+ new ListMetaData(TType.LIST,
+ new StructMetaData(TType.STRUCT, TCell.class))));
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(get_result.class, metaDataMap);
+ }
+
+ public get_result() {
+ }
+
+ public get_result(
+ List<TCell> success,
+ IOError io)
+ {
+ this();
+ this.success = success;
+ this.io = io;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public get_result(get_result other) {
+ if (other.isSetSuccess()) {
+ List<TCell> __this__success = new ArrayList<TCell>();
+ for (TCell other_element : other.success) {
+ __this__success.add(new TCell(other_element));
+ }
+ this.success = __this__success;
+ }
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ }
+
+ @Override
+ public get_result clone() {
+ return new get_result(this);
+ }
+
+ public int getSuccessSize() {
+ return (this.success == null) ? 0 : this.success.size();
+ }
+
+ public java.util.Iterator<TCell> getSuccessIterator() {
+ return (this.success == null) ? null : this.success.iterator();
+ }
+
+ public void addToSuccess(TCell elem) {
+ if (this.success == null) {
+ this.success = new ArrayList<TCell>();
+ }
+ this.success.add(elem);
+ }
+
+ public List<TCell> getSuccess() {
+ return this.success;
+ }
+
+ public void setSuccess(List<TCell> success) {
+ this.success = success;
+ }
+
+ public void unsetSuccess() {
+ this.success = null;
+ }
+
+ // Returns true if field success is set (has been assigned a value) and false otherwise
+ public boolean isSetSuccess() {
+ return this.success != null;
+ }
+
+ public void setSuccessIsSet(boolean value) {
+ if (!value) {
+ this.success = null;
+ }
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+ // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case SUCCESS:
+ if (value == null) {
+ unsetSuccess();
+ } else {
+ setSuccess((List<TCell>)value);
+ }
+ break;
+
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return getSuccess();
+
+ case IO:
+ return getIo();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return isSetSuccess();
+ case IO:
+ return isSetIo();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof get_result)
+ return this.equals((get_result)that);
+ return false;
+ }
+
+ public boolean equals(get_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_success = true && this.isSetSuccess();
+ boolean that_present_success = true && that.isSetSuccess();
+ if (this_present_success || that_present_success) {
+ if (!(this_present_success && that_present_success))
+ return false;
+ if (!this.success.equals(that.success))
+ return false;
+ }
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case SUCCESS:
+ if (field.type == TType.LIST) {
+ {
+ TList _list26 = iprot.readListBegin();
+ this.success = new ArrayList<TCell>(_list26.size);
+ for (int _i27 = 0; _i27 < _list26.size; ++_i27)
+ {
+ TCell _elem28;
+ _elem28 = new TCell();
+ _elem28.read(iprot);
+ this.success.add(_elem28);
+ }
+ iprot.readListEnd();
+ }
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetSuccess()) {
+ oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+ {
+ oprot.writeListBegin(new TList(TType.STRUCT, this.success.size()));
+ for (TCell _iter29 : this.success) {
+ _iter29.write(oprot);
+ }
+ oprot.writeListEnd();
+ }
+ oprot.writeFieldEnd();
+ } else if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("get_result(");
+ boolean first = true;
+
+ sb.append("success:");
+ if (this.success == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.success);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class getVer_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("getVer_args");
+ private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+ private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)2);
+ private static final TField COLUMN_FIELD_DESC = new TField("column", TType.STRING, (short)3);
+ private static final TField NUM_VERSIONS_FIELD_DESC = new TField("numVersions", TType.I32, (short)4);
+
+ public byte[] tableName;
+ public static final int TABLENAME = 1;
+ public byte[] row;
+ public static final int ROW = 2;
+ public byte[] column;
+ public static final int COLUMN = 3;
+ public int numVersions;
+ public static final int NUMVERSIONS = 4;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ public boolean numVersions = false;
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(TABLENAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(COLUMN, new FieldMetaData("column", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(NUMVERSIONS, new FieldMetaData("numVersions", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.I32)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(getVer_args.class, metaDataMap);
+ }
+
+ public getVer_args() {
+ }
+
+ public getVer_args(
+ byte[] tableName,
+ byte[] row,
+ byte[] column,
+ int numVersions)
+ {
+ this();
+ this.tableName = tableName;
+ this.row = row;
+ this.column = column;
+ this.numVersions = numVersions;
+ this.__isset.numVersions = true;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public getVer_args(getVer_args other) {
+ if (other.isSetTableName()) {
+ this.tableName = other.tableName;
+ }
+ if (other.isSetRow()) {
+ this.row = other.row;
+ }
+ if (other.isSetColumn()) {
+ this.column = other.column;
+ }
+ __isset.numVersions = other.__isset.numVersions;
+ this.numVersions = other.numVersions;
+ }
+
+ @Override
+ public getVer_args clone() {
+ return new getVer_args(this);
+ }
+
+ public byte[] getTableName() {
+ return this.tableName;
+ }
+
+ public void setTableName(byte[] tableName) {
+ this.tableName = tableName;
+ }
+
+ public void unsetTableName() {
+ this.tableName = null;
+ }
+
+ // Returns true if field tableName is set (has been assigned a value) and false otherwise
+ public boolean isSetTableName() {
+ return this.tableName != null;
+ }
+
+ public void setTableNameIsSet(boolean value) {
+ if (!value) {
+ this.tableName = null;
+ }
+ }
+
+ public byte[] getRow() {
+ return this.row;
+ }
+
+ public void setRow(byte[] row) {
+ this.row = row;
+ }
+
+ public void unsetRow() {
+ this.row = null;
+ }
+
+ // Returns true if field row is set (has been assigned a value) and false otherwise
+ public boolean isSetRow() {
+ return this.row != null;
+ }
+
+ public void setRowIsSet(boolean value) {
+ if (!value) {
+ this.row = null;
+ }
+ }
+
+ public byte[] getColumn() {
+ return this.column;
+ }
+
+ public void setColumn(byte[] column) {
+ this.column = column;
+ }
+
+ public void unsetColumn() {
+ this.column = null;
+ }
+
+ // Returns true if field column is set (has been assigned a value) and false otherwise
+ public boolean isSetColumn() {
+ return this.column != null;
+ }
+
+ public void setColumnIsSet(boolean value) {
+ if (!value) {
+ this.column = null;
+ }
+ }
+
+ public int getNumVersions() {
+ return this.numVersions;
+ }
+
+ public void setNumVersions(int numVersions) {
+ this.numVersions = numVersions;
+ this.__isset.numVersions = true;
+ }
+
+ public void unsetNumVersions() {
+ this.__isset.numVersions = false;
+ }
+
+ // Returns true if field numVersions is set (has been assigned a value) and false otherwise
+ public boolean isSetNumVersions() {
+ return this.__isset.numVersions;
+ }
+
+ public void setNumVersionsIsSet(boolean value) {
+ this.__isset.numVersions = value;
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case TABLENAME:
+ if (value == null) {
+ unsetTableName();
+ } else {
+ setTableName((byte[])value);
+ }
+ break;
+
+ case ROW:
+ if (value == null) {
+ unsetRow();
+ } else {
+ setRow((byte[])value);
+ }
+ break;
+
+ case COLUMN:
+ if (value == null) {
+ unsetColumn();
+ } else {
+ setColumn((byte[])value);
+ }
+ break;
+
+ case NUMVERSIONS:
+ if (value == null) {
+ unsetNumVersions();
+ } else {
+ setNumVersions((Integer)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return getTableName();
+
+ case ROW:
+ return getRow();
+
+ case COLUMN:
+ return getColumn();
+
+ case NUMVERSIONS:
+ return Integer.valueOf(getNumVersions());
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return isSetTableName();
+ case ROW:
+ return isSetRow();
+ case COLUMN:
+ return isSetColumn();
+ case NUMVERSIONS:
+ return isSetNumVersions();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof getVer_args)
+ return this.equals((getVer_args)that);
+ return false;
+ }
+
+ public boolean equals(getVer_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_tableName = true && this.isSetTableName();
+ boolean that_present_tableName = true && that.isSetTableName();
+ if (this_present_tableName || that_present_tableName) {
+ if (!(this_present_tableName && that_present_tableName))
+ return false;
+ if (!java.util.Arrays.equals(this.tableName, that.tableName))
+ return false;
+ }
+
+ boolean this_present_row = true && this.isSetRow();
+ boolean that_present_row = true && that.isSetRow();
+ if (this_present_row || that_present_row) {
+ if (!(this_present_row && that_present_row))
+ return false;
+ if (!java.util.Arrays.equals(this.row, that.row))
+ return false;
+ }
+
+ boolean this_present_column = true && this.isSetColumn();
+ boolean that_present_column = true && that.isSetColumn();
+ if (this_present_column || that_present_column) {
+ if (!(this_present_column && that_present_column))
+ return false;
+ if (!java.util.Arrays.equals(this.column, that.column))
+ return false;
+ }
+
+ boolean this_present_numVersions = true;
+ boolean that_present_numVersions = true;
+ if (this_present_numVersions || that_present_numVersions) {
+ if (!(this_present_numVersions && that_present_numVersions))
+ return false;
+ if (this.numVersions != that.numVersions)
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case TABLENAME:
+ if (field.type == TType.STRING) {
+ this.tableName = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case ROW:
+ if (field.type == TType.STRING) {
+ this.row = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case COLUMN:
+ if (field.type == TType.STRING) {
+ this.column = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case NUMVERSIONS:
+ if (field.type == TType.I32) {
+ this.numVersions = iprot.readI32();
+ this.__isset.numVersions = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.tableName != null) {
+ oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+ oprot.writeBinary(this.tableName);
+ oprot.writeFieldEnd();
+ }
+ if (this.row != null) {
+ oprot.writeFieldBegin(ROW_FIELD_DESC);
+ oprot.writeBinary(this.row);
+ oprot.writeFieldEnd();
+ }
+ if (this.column != null) {
+ oprot.writeFieldBegin(COLUMN_FIELD_DESC);
+ oprot.writeBinary(this.column);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldBegin(NUM_VERSIONS_FIELD_DESC);
+ oprot.writeI32(this.numVersions);
+ oprot.writeFieldEnd();
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("getVer_args(");
+ boolean first = true;
+
+ sb.append("tableName:");
+ if (this.tableName == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.tableName);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("row:");
+ if (this.row == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.row);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("column:");
+ if (this.column == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.column);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("numVersions:");
+ sb.append(this.numVersions);
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class getVer_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("getVer_result");
+ private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.LIST, (short)0);
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+ public List<TCell> success;
+ public static final int SUCCESS = 0;
+ public IOError io;
+ public static final int IO = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+ new ListMetaData(TType.LIST,
+ new StructMetaData(TType.STRUCT, TCell.class))));
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(getVer_result.class, metaDataMap);
+ }
+
+ public getVer_result() {
+ }
+
+ public getVer_result(
+ List<TCell> success,
+ IOError io)
+ {
+ this();
+ this.success = success;
+ this.io = io;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public getVer_result(getVer_result other) {
+ if (other.isSetSuccess()) {
+ List<TCell> __this__success = new ArrayList<TCell>();
+ for (TCell other_element : other.success) {
+ __this__success.add(new TCell(other_element));
+ }
+ this.success = __this__success;
+ }
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ }
+
+ @Override
+ public getVer_result clone() {
+ return new getVer_result(this);
+ }
+
+ public int getSuccessSize() {
+ return (this.success == null) ? 0 : this.success.size();
+ }
+
+ public java.util.Iterator<TCell> getSuccessIterator() {
+ return (this.success == null) ? null : this.success.iterator();
+ }
+
+ public void addToSuccess(TCell elem) {
+ if (this.success == null) {
+ this.success = new ArrayList<TCell>();
+ }
+ this.success.add(elem);
+ }
+
+ public List<TCell> getSuccess() {
+ return this.success;
+ }
+
+ public void setSuccess(List<TCell> success) {
+ this.success = success;
+ }
+
+ public void unsetSuccess() {
+ this.success = null;
+ }
+
+ // Returns true if field success is set (has been assigned a value) and false otherwise
+ public boolean isSetSuccess() {
+ return this.success != null;
+ }
+
+ public void setSuccessIsSet(boolean value) {
+ if (!value) {
+ this.success = null;
+ }
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+ // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case SUCCESS:
+ if (value == null) {
+ unsetSuccess();
+ } else {
+ setSuccess((List<TCell>)value);
+ }
+ break;
+
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return getSuccess();
+
+ case IO:
+ return getIo();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return isSetSuccess();
+ case IO:
+ return isSetIo();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof getVer_result)
+ return this.equals((getVer_result)that);
+ return false;
+ }
+
+ public boolean equals(getVer_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_success = true && this.isSetSuccess();
+ boolean that_present_success = true && that.isSetSuccess();
+ if (this_present_success || that_present_success) {
+ if (!(this_present_success && that_present_success))
+ return false;
+ if (!this.success.equals(that.success))
+ return false;
+ }
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case SUCCESS:
+ if (field.type == TType.LIST) {
+ {
+ TList _list30 = iprot.readListBegin();
+ this.success = new ArrayList<TCell>(_list30.size);
+ for (int _i31 = 0; _i31 < _list30.size; ++_i31)
+ {
+ TCell _elem32;
+ _elem32 = new TCell();
+ _elem32.read(iprot);
+ this.success.add(_elem32);
+ }
+ iprot.readListEnd();
+ }
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetSuccess()) {
+ oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+ {
+ oprot.writeListBegin(new TList(TType.STRUCT, this.success.size()));
+ for (TCell _iter33 : this.success) {
+ _iter33.write(oprot);
+ }
+ oprot.writeListEnd();
+ }
+ oprot.writeFieldEnd();
+ } else if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("getVer_result(");
+ boolean first = true;
+
+ sb.append("success:");
+ if (this.success == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.success);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class getVerTs_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("getVerTs_args");
+ private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+ private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)2);
+ private static final TField COLUMN_FIELD_DESC = new TField("column", TType.STRING, (short)3);
+ private static final TField TIMESTAMP_FIELD_DESC = new TField("timestamp", TType.I64, (short)4);
+ private static final TField NUM_VERSIONS_FIELD_DESC = new TField("numVersions", TType.I32, (short)5);
+
+ public byte[] tableName;
+ public static final int TABLENAME = 1;
+ public byte[] row;
+ public static final int ROW = 2;
+ public byte[] column;
+ public static final int COLUMN = 3;
+ public long timestamp;
+ public static final int TIMESTAMP = 4;
+ public int numVersions;
+ public static final int NUMVERSIONS = 5;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ public boolean timestamp = false;
+ public boolean numVersions = false;
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(TABLENAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(COLUMN, new FieldMetaData("column", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(TIMESTAMP, new FieldMetaData("timestamp", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.I64)));
+ put(NUMVERSIONS, new FieldMetaData("numVersions", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.I32)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(getVerTs_args.class, metaDataMap);
+ }
+
+ public getVerTs_args() {
+ }
+
+ public getVerTs_args(
+ byte[] tableName,
+ byte[] row,
+ byte[] column,
+ long timestamp,
+ int numVersions)
+ {
+ this();
+ this.tableName = tableName;
+ this.row = row;
+ this.column = column;
+ this.timestamp = timestamp;
+ this.__isset.timestamp = true;
+ this.numVersions = numVersions;
+ this.__isset.numVersions = true;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public getVerTs_args(getVerTs_args other) {
+ if (other.isSetTableName()) {
+ this.tableName = other.tableName;
+ }
+ if (other.isSetRow()) {
+ this.row = other.row;
+ }
+ if (other.isSetColumn()) {
+ this.column = other.column;
+ }
+ __isset.timestamp = other.__isset.timestamp;
+ this.timestamp = other.timestamp;
+ __isset.numVersions = other.__isset.numVersions;
+ this.numVersions = other.numVersions;
+ }
+
+ @Override
+ public getVerTs_args clone() {
+ return new getVerTs_args(this);
+ }
+
+ public byte[] getTableName() {
+ return this.tableName;
+ }
+
+ public void setTableName(byte[] tableName) {
+ this.tableName = tableName;
+ }
+
+ public void unsetTableName() {
+ this.tableName = null;
+ }
+
+ // Returns true if field tableName is set (has been assigned a value) and false otherwise
+ public boolean isSetTableName() {
+ return this.tableName != null;
+ }
+
+ public void setTableNameIsSet(boolean value) {
+ if (!value) {
+ this.tableName = null;
+ }
+ }
+
+ public byte[] getRow() {
+ return this.row;
+ }
+
+ public void setRow(byte[] row) {
+ this.row = row;
+ }
+
+ public void unsetRow() {
+ this.row = null;
+ }
+
+ // Returns true if field row is set (has been assigned a value) and false otherwise
+ public boolean isSetRow() {
+ return this.row != null;
+ }
+
+ public void setRowIsSet(boolean value) {
+ if (!value) {
+ this.row = null;
+ }
+ }
+
+ public byte[] getColumn() {
+ return this.column;
+ }
+
+ public void setColumn(byte[] column) {
+ this.column = column;
+ }
+
+ public void unsetColumn() {
+ this.column = null;
+ }
+
+ // Returns true if field column is set (has been assigned a value) and false otherwise
+ public boolean isSetColumn() {
+ return this.column != null;
+ }
+
+ public void setColumnIsSet(boolean value) {
+ if (!value) {
+ this.column = null;
+ }
+ }
+
+ public long getTimestamp() {
+ return this.timestamp;
+ }
+
+ public void setTimestamp(long timestamp) {
+ this.timestamp = timestamp;
+ this.__isset.timestamp = true;
+ }
+
+ public void unsetTimestamp() {
+ this.__isset.timestamp = false;
+ }
+
+ // Returns true if field timestamp is set (has been assigned a value) and false otherwise
+ public boolean isSetTimestamp() {
+ return this.__isset.timestamp;
+ }
+
+ public void setTimestampIsSet(boolean value) {
+ this.__isset.timestamp = value;
+ }
+
+ public int getNumVersions() {
+ return this.numVersions;
+ }
+
+ public void setNumVersions(int numVersions) {
+ this.numVersions = numVersions;
+ this.__isset.numVersions = true;
+ }
+
+ public void unsetNumVersions() {
+ this.__isset.numVersions = false;
+ }
+
+ // Returns true if field numVersions is set (has been assigned a value) and false otherwise
+ public boolean isSetNumVersions() {
+ return this.__isset.numVersions;
+ }
+
+ public void setNumVersionsIsSet(boolean value) {
+ this.__isset.numVersions = value;
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case TABLENAME:
+ if (value == null) {
+ unsetTableName();
+ } else {
+ setTableName((byte[])value);
+ }
+ break;
+
+ case ROW:
+ if (value == null) {
+ unsetRow();
+ } else {
+ setRow((byte[])value);
+ }
+ break;
+
+ case COLUMN:
+ if (value == null) {
+ unsetColumn();
+ } else {
+ setColumn((byte[])value);
+ }
+ break;
+
+ case TIMESTAMP:
+ if (value == null) {
+ unsetTimestamp();
+ } else {
+ setTimestamp((Long)value);
+ }
+ break;
+
+ case NUMVERSIONS:
+ if (value == null) {
+ unsetNumVersions();
+ } else {
+ setNumVersions((Integer)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return getTableName();
+
+ case ROW:
+ return getRow();
+
+ case COLUMN:
+ return getColumn();
+
+ case TIMESTAMP:
+ return new Long(getTimestamp());
+
+ case NUMVERSIONS:
+ return new Integer(getNumVersions());
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return isSetTableName();
+ case ROW:
+ return isSetRow();
+ case COLUMN:
+ return isSetColumn();
+ case TIMESTAMP:
+ return isSetTimestamp();
+ case NUMVERSIONS:
+ return isSetNumVersions();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof getVerTs_args)
+ return this.equals((getVerTs_args)that);
+ return false;
+ }
+
+ public boolean equals(getVerTs_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_tableName = true && this.isSetTableName();
+ boolean that_present_tableName = true && that.isSetTableName();
+ if (this_present_tableName || that_present_tableName) {
+ if (!(this_present_tableName && that_present_tableName))
+ return false;
+ if (!java.util.Arrays.equals(this.tableName, that.tableName))
+ return false;
+ }
+
+ boolean this_present_row = true && this.isSetRow();
+ boolean that_present_row = true && that.isSetRow();
+ if (this_present_row || that_present_row) {
+ if (!(this_present_row && that_present_row))
+ return false;
+ if (!java.util.Arrays.equals(this.row, that.row))
+ return false;
+ }
+
+ boolean this_present_column = true && this.isSetColumn();
+ boolean that_present_column = true && that.isSetColumn();
+ if (this_present_column || that_present_column) {
+ if (!(this_present_column && that_present_column))
+ return false;
+ if (!java.util.Arrays.equals(this.column, that.column))
+ return false;
+ }
+
+ boolean this_present_timestamp = true;
+ boolean that_present_timestamp = true;
+ if (this_present_timestamp || that_present_timestamp) {
+ if (!(this_present_timestamp && that_present_timestamp))
+ return false;
+ if (this.timestamp != that.timestamp)
+ return false;
+ }
+
+ boolean this_present_numVersions = true;
+ boolean that_present_numVersions = true;
+ if (this_present_numVersions || that_present_numVersions) {
+ if (!(this_present_numVersions && that_present_numVersions))
+ return false;
+ if (this.numVersions != that.numVersions)
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case TABLENAME:
+ if (field.type == TType.STRING) {
+ this.tableName = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case ROW:
+ if (field.type == TType.STRING) {
+ this.row = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case COLUMN:
+ if (field.type == TType.STRING) {
+ this.column = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case TIMESTAMP:
+ if (field.type == TType.I64) {
+ this.timestamp = iprot.readI64();
+ this.__isset.timestamp = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case NUMVERSIONS:
+ if (field.type == TType.I32) {
+ this.numVersions = iprot.readI32();
+ this.__isset.numVersions = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.tableName != null) {
+ oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+ oprot.writeBinary(this.tableName);
+ oprot.writeFieldEnd();
+ }
+ if (this.row != null) {
+ oprot.writeFieldBegin(ROW_FIELD_DESC);
+ oprot.writeBinary(this.row);
+ oprot.writeFieldEnd();
+ }
+ if (this.column != null) {
+ oprot.writeFieldBegin(COLUMN_FIELD_DESC);
+ oprot.writeBinary(this.column);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldBegin(TIMESTAMP_FIELD_DESC);
+ oprot.writeI64(this.timestamp);
+ oprot.writeFieldEnd();
+ oprot.writeFieldBegin(NUM_VERSIONS_FIELD_DESC);
+ oprot.writeI32(this.numVersions);
+ oprot.writeFieldEnd();
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("getVerTs_args(");
+ boolean first = true;
+
+ sb.append("tableName:");
+ if (this.tableName == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.tableName);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("row:");
+ if (this.row == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.row);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("column:");
+ if (this.column == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.column);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("timestamp:");
+ sb.append(this.timestamp);
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("numVersions:");
+ sb.append(this.numVersions);
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class getVerTs_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("getVerTs_result");
+ private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.LIST, (short)0);
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+ public List<TCell> success;
+ public static final int SUCCESS = 0;
+ public IOError io;
+ public static final int IO = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+ new ListMetaData(TType.LIST,
+ new StructMetaData(TType.STRUCT, TCell.class))));
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(getVerTs_result.class, metaDataMap);
+ }
+
+ public getVerTs_result() {
+ }
+
+ public getVerTs_result(
+ List<TCell> success,
+ IOError io)
+ {
+ this();
+ this.success = success;
+ this.io = io;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public getVerTs_result(getVerTs_result other) {
+ if (other.isSetSuccess()) {
+ List<TCell> __this__success = new ArrayList<TCell>();
+ for (TCell other_element : other.success) {
+ __this__success.add(new TCell(other_element));
+ }
+ this.success = __this__success;
+ }
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ }
+
+ @Override
+ public getVerTs_result clone() {
+ return new getVerTs_result(this);
+ }
+
+ public int getSuccessSize() {
+ return (this.success == null) ? 0 : this.success.size();
+ }
+
+ public java.util.Iterator<TCell> getSuccessIterator() {
+ return (this.success == null) ? null : this.success.iterator();
+ }
+
+ public void addToSuccess(TCell elem) {
+ if (this.success == null) {
+ this.success = new ArrayList<TCell>();
+ }
+ this.success.add(elem);
+ }
+
+ public List<TCell> getSuccess() {
+ return this.success;
+ }
+
+ public void setSuccess(List<TCell> success) {
+ this.success = success;
+ }
+
+ public void unsetSuccess() {
+ this.success = null;
+ }
+
+ // Returns true if field success is set (has been assigned a value) and false otherwise
+ public boolean isSetSuccess() {
+ return this.success != null;
+ }
+
+ public void setSuccessIsSet(boolean value) {
+ if (!value) {
+ this.success = null;
+ }
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+ // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case SUCCESS:
+ if (value == null) {
+ unsetSuccess();
+ } else {
+ setSuccess((List<TCell>)value);
+ }
+ break;
+
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return getSuccess();
+
+ case IO:
+ return getIo();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return isSetSuccess();
+ case IO:
+ return isSetIo();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof getVerTs_result)
+ return this.equals((getVerTs_result)that);
+ return false;
+ }
+
+ public boolean equals(getVerTs_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_success = true && this.isSetSuccess();
+ boolean that_present_success = true && that.isSetSuccess();
+ if (this_present_success || that_present_success) {
+ if (!(this_present_success && that_present_success))
+ return false;
+ if (!this.success.equals(that.success))
+ return false;
+ }
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case SUCCESS:
+ if (field.type == TType.LIST) {
+ {
+ TList _list34 = iprot.readListBegin();
+ this.success = new ArrayList<TCell>(_list34.size);
+ for (int _i35 = 0; _i35 < _list34.size; ++_i35)
+ {
+ TCell _elem36;
+ _elem36 = new TCell();
+ _elem36.read(iprot);
+ this.success.add(_elem36);
+ }
+ iprot.readListEnd();
+ }
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetSuccess()) {
+ oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+ {
+ oprot.writeListBegin(new TList(TType.STRUCT, this.success.size()));
+ for (TCell _iter37 : this.success) {
+ _iter37.write(oprot);
+ }
+ oprot.writeListEnd();
+ }
+ oprot.writeFieldEnd();
+ } else if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("getVerTs_result(");
+ boolean first = true;
+
+ sb.append("success:");
+ if (this.success == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.success);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class getRow_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("getRow_args");
+ private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+ private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)2);
+
+ public byte[] tableName;
+ public static final int TABLENAME = 1;
+ public byte[] row;
+ public static final int ROW = 2;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(TABLENAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(getRow_args.class, metaDataMap);
+ }
+
+ public getRow_args() {
+ }
+
+ public getRow_args(
+ byte[] tableName,
+ byte[] row)
+ {
+ this();
+ this.tableName = tableName;
+ this.row = row;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public getRow_args(getRow_args other) {
+ if (other.isSetTableName()) {
+ this.tableName = other.tableName;
+ }
+ if (other.isSetRow()) {
+ this.row = other.row;
+ }
+ }
+
+ @Override
+ public getRow_args clone() {
+ return new getRow_args(this);
+ }
+
+ public byte[] getTableName() {
+ return this.tableName;
+ }
+
+ public void setTableName(byte[] tableName) {
+ this.tableName = tableName;
+ }
+
+ public void unsetTableName() {
+ this.tableName = null;
+ }
+
+ // Returns true if field tableName is set (has been assigned a value) and false otherwise
+ public boolean isSetTableName() {
+ return this.tableName != null;
+ }
+
+ public void setTableNameIsSet(boolean value) {
+ if (!value) {
+ this.tableName = null;
+ }
+ }
+
+ public byte[] getRow() {
+ return this.row;
+ }
+
+ public void setRow(byte[] row) {
+ this.row = row;
+ }
+
+ public void unsetRow() {
+ this.row = null;
+ }
+
+ // Returns true if field row is set (has been assigned a value) and false otherwise
+ public boolean isSetRow() {
+ return this.row != null;
+ }
+
+ public void setRowIsSet(boolean value) {
+ if (!value) {
+ this.row = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case TABLENAME:
+ if (value == null) {
+ unsetTableName();
+ } else {
+ setTableName((byte[])value);
+ }
+ break;
+
+ case ROW:
+ if (value == null) {
+ unsetRow();
+ } else {
+ setRow((byte[])value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return getTableName();
+
+ case ROW:
+ return getRow();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return isSetTableName();
+ case ROW:
+ return isSetRow();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof getRow_args)
+ return this.equals((getRow_args)that);
+ return false;
+ }
+
+ public boolean equals(getRow_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_tableName = true && this.isSetTableName();
+ boolean that_present_tableName = true && that.isSetTableName();
+ if (this_present_tableName || that_present_tableName) {
+ if (!(this_present_tableName && that_present_tableName))
+ return false;
+ if (!java.util.Arrays.equals(this.tableName, that.tableName))
+ return false;
+ }
+
+ boolean this_present_row = true && this.isSetRow();
+ boolean that_present_row = true && that.isSetRow();
+ if (this_present_row || that_present_row) {
+ if (!(this_present_row && that_present_row))
+ return false;
+ if (!java.util.Arrays.equals(this.row, that.row))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case TABLENAME:
+ if (field.type == TType.STRING) {
+ this.tableName = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case ROW:
+ if (field.type == TType.STRING) {
+ this.row = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.tableName != null) {
+ oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+ oprot.writeBinary(this.tableName);
+ oprot.writeFieldEnd();
+ }
+ if (this.row != null) {
+ oprot.writeFieldBegin(ROW_FIELD_DESC);
+ oprot.writeBinary(this.row);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("getRow_args(");
+ boolean first = true;
+
+ sb.append("tableName:");
+ if (this.tableName == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.tableName);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("row:");
+ if (this.row == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.row);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class getRow_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("getRow_result");
+ private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.LIST, (short)0);
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+ public List<TRowResult> success;
+ public static final int SUCCESS = 0;
+ public IOError io;
+ public static final int IO = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+ new ListMetaData(TType.LIST,
+ new StructMetaData(TType.STRUCT, TRowResult.class))));
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(getRow_result.class, metaDataMap);
+ }
+
+ public getRow_result() {
+ }
+
+ public getRow_result(
+ List<TRowResult> success,
+ IOError io)
+ {
+ this();
+ this.success = success;
+ this.io = io;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public getRow_result(getRow_result other) {
+ if (other.isSetSuccess()) {
+ List<TRowResult> __this__success = new ArrayList<TRowResult>();
+ for (TRowResult other_element : other.success) {
+ __this__success.add(new TRowResult(other_element));
+ }
+ this.success = __this__success;
+ }
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ }
+
+ @Override
+ public getRow_result clone() {
+ return new getRow_result(this);
+ }
+
+ public int getSuccessSize() {
+ return (this.success == null) ? 0 : this.success.size();
+ }
+
+ public java.util.Iterator<TRowResult> getSuccessIterator() {
+ return (this.success == null) ? null : this.success.iterator();
+ }
+
+ public void addToSuccess(TRowResult elem) {
+ if (this.success == null) {
+ this.success = new ArrayList<TRowResult>();
+ }
+ this.success.add(elem);
+ }
+
+ public List<TRowResult> getSuccess() {
+ return this.success;
+ }
+
+ public void setSuccess(List<TRowResult> success) {
+ this.success = success;
+ }
+
+ public void unsetSuccess() {
+ this.success = null;
+ }
+
+ // Returns true if field success is set (has been assigned a value) and false otherwise
+ public boolean isSetSuccess() {
+ return this.success != null;
+ }
+
+ public void setSuccessIsSet(boolean value) {
+ if (!value) {
+ this.success = null;
+ }
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+ // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case SUCCESS:
+ if (value == null) {
+ unsetSuccess();
+ } else {
+ setSuccess((List<TRowResult>)value);
+ }
+ break;
+
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return getSuccess();
+
+ case IO:
+ return getIo();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return isSetSuccess();
+ case IO:
+ return isSetIo();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof getRow_result)
+ return this.equals((getRow_result)that);
+ return false;
+ }
+
+ public boolean equals(getRow_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_success = true && this.isSetSuccess();
+ boolean that_present_success = true && that.isSetSuccess();
+ if (this_present_success || that_present_success) {
+ if (!(this_present_success && that_present_success))
+ return false;
+ if (!this.success.equals(that.success))
+ return false;
+ }
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case SUCCESS:
+ if (field.type == TType.LIST) {
+ {
+ TList _list38 = iprot.readListBegin();
+ this.success = new ArrayList<TRowResult>(_list38.size);
+ for (int _i39 = 0; _i39 < _list38.size; ++_i39)
+ {
+ TRowResult _elem40;
+ _elem40 = new TRowResult();
+ _elem40.read(iprot);
+ this.success.add(_elem40);
+ }
+ iprot.readListEnd();
+ }
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetSuccess()) {
+ oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+ {
+ oprot.writeListBegin(new TList(TType.STRUCT, this.success.size()));
+ for (TRowResult _iter41 : this.success) {
+ _iter41.write(oprot);
+ }
+ oprot.writeListEnd();
+ }
+ oprot.writeFieldEnd();
+ } else if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("getRow_result(");
+ boolean first = true;
+
+ sb.append("success:");
+ if (this.success == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.success);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class getRowWithColumns_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("getRowWithColumns_args");
+ private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+ private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)2);
+ private static final TField COLUMNS_FIELD_DESC = new TField("columns", TType.LIST, (short)3);
+
+ public byte[] tableName;
+ public static final int TABLENAME = 1;
+ public byte[] row;
+ public static final int ROW = 2;
+ public List<byte[]> columns;
+ public static final int COLUMNS = 3;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(TABLENAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(COLUMNS, new FieldMetaData("columns", TFieldRequirementType.DEFAULT,
+ new ListMetaData(TType.LIST,
+ new FieldValueMetaData(TType.STRING))));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(getRowWithColumns_args.class, metaDataMap);
+ }
+
+ public getRowWithColumns_args() {
+ }
+
+ public getRowWithColumns_args(
+ byte[] tableName,
+ byte[] row,
+ List<byte[]> columns)
+ {
+ this();
+ this.tableName = tableName;
+ this.row = row;
+ this.columns = columns;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public getRowWithColumns_args(getRowWithColumns_args other) {
+ if (other.isSetTableName()) {
+ this.tableName = other.tableName;
+ }
+ if (other.isSetRow()) {
+ this.row = other.row;
+ }
+ if (other.isSetColumns()) {
+ List<byte[]> __this__columns = new ArrayList<byte[]>();
+ for (byte[] other_element : other.columns) {
+ __this__columns.add(other_element);
+ }
+ this.columns = __this__columns;
+ }
+ }
+
+ @Override
+ public getRowWithColumns_args clone() {
+ return new getRowWithColumns_args(this);
+ }
+
+ public byte[] getTableName() {
+ return this.tableName;
+ }
+
+ public void setTableName(byte[] tableName) {
+ this.tableName = tableName;
+ }
+
+ public void unsetTableName() {
+ this.tableName = null;
+ }
+
+ // Returns true if field tableName is set (has been assigned a value) and false otherwise
+ public boolean isSetTableName() {
+ return this.tableName != null;
+ }
+
+ public void setTableNameIsSet(boolean value) {
+ if (!value) {
+ this.tableName = null;
+ }
+ }
+
+ public byte[] getRow() {
+ return this.row;
+ }
+
+ public void setRow(byte[] row) {
+ this.row = row;
+ }
+
+ public void unsetRow() {
+ this.row = null;
+ }
+
+ // Returns true if field row is set (has been assigned a value) and false otherwise
+ public boolean isSetRow() {
+ return this.row != null;
+ }
+
+ public void setRowIsSet(boolean value) {
+ if (!value) {
+ this.row = null;
+ }
+ }
+
+ public int getColumnsSize() {
+ return (this.columns == null) ? 0 : this.columns.size();
+ }
+
+ public java.util.Iterator<byte[]> getColumnsIterator() {
+ return (this.columns == null) ? null : this.columns.iterator();
+ }
+
+ public void addToColumns(byte[] elem) {
+ if (this.columns == null) {
+ this.columns = new ArrayList<byte[]>();
+ }
+ this.columns.add(elem);
+ }
+
+ public List<byte[]> getColumns() {
+ return this.columns;
+ }
+
+ public void setColumns(List<byte[]> columns) {
+ this.columns = columns;
+ }
+
+ public void unsetColumns() {
+ this.columns = null;
+ }
+
+ // Returns true if field columns is set (has been assigned a value) and false otherwise
+ public boolean isSetColumns() {
+ return this.columns != null;
+ }
+
+ public void setColumnsIsSet(boolean value) {
+ if (!value) {
+ this.columns = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case TABLENAME:
+ if (value == null) {
+ unsetTableName();
+ } else {
+ setTableName((byte[])value);
+ }
+ break;
+
+ case ROW:
+ if (value == null) {
+ unsetRow();
+ } else {
+ setRow((byte[])value);
+ }
+ break;
+
+ case COLUMNS:
+ if (value == null) {
+ unsetColumns();
+ } else {
+ setColumns((List<byte[]>)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return getTableName();
+
+ case ROW:
+ return getRow();
+
+ case COLUMNS:
+ return getColumns();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return isSetTableName();
+ case ROW:
+ return isSetRow();
+ case COLUMNS:
+ return isSetColumns();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof getRowWithColumns_args)
+ return this.equals((getRowWithColumns_args)that);
+ return false;
+ }
+
+ public boolean equals(getRowWithColumns_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_tableName = true && this.isSetTableName();
+ boolean that_present_tableName = true && that.isSetTableName();
+ if (this_present_tableName || that_present_tableName) {
+ if (!(this_present_tableName && that_present_tableName))
+ return false;
+ if (!java.util.Arrays.equals(this.tableName, that.tableName))
+ return false;
+ }
+
+ boolean this_present_row = true && this.isSetRow();
+ boolean that_present_row = true && that.isSetRow();
+ if (this_present_row || that_present_row) {
+ if (!(this_present_row && that_present_row))
+ return false;
+ if (!java.util.Arrays.equals(this.row, that.row))
+ return false;
+ }
+
+ boolean this_present_columns = true && this.isSetColumns();
+ boolean that_present_columns = true && that.isSetColumns();
+ if (this_present_columns || that_present_columns) {
+ if (!(this_present_columns && that_present_columns))
+ return false;
+ if (!this.columns.equals(that.columns))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case TABLENAME:
+ if (field.type == TType.STRING) {
+ this.tableName = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case ROW:
+ if (field.type == TType.STRING) {
+ this.row = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case COLUMNS:
+ if (field.type == TType.LIST) {
+ {
+ TList _list42 = iprot.readListBegin();
+ this.columns = new ArrayList<byte[]>(_list42.size);
+ for (int _i43 = 0; _i43 < _list42.size; ++_i43)
+ {
+ byte[] _elem44;
+ _elem44 = iprot.readBinary();
+ this.columns.add(_elem44);
+ }
+ iprot.readListEnd();
+ }
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.tableName != null) {
+ oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+ oprot.writeBinary(this.tableName);
+ oprot.writeFieldEnd();
+ }
+ if (this.row != null) {
+ oprot.writeFieldBegin(ROW_FIELD_DESC);
+ oprot.writeBinary(this.row);
+ oprot.writeFieldEnd();
+ }
+ if (this.columns != null) {
+ oprot.writeFieldBegin(COLUMNS_FIELD_DESC);
+ {
+ oprot.writeListBegin(new TList(TType.STRING, this.columns.size()));
+ for (byte[] _iter45 : this.columns) {
+ oprot.writeBinary(_iter45);
+ }
+ oprot.writeListEnd();
+ }
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("getRowWithColumns_args(");
+ boolean first = true;
+
+ sb.append("tableName:");
+ if (this.tableName == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.tableName);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("row:");
+ if (this.row == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.row);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("columns:");
+ if (this.columns == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.columns);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class getRowWithColumns_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("getRowWithColumns_result");
+ private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.LIST, (short)0);
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+ public List<TRowResult> success;
+ public static final int SUCCESS = 0;
+ public IOError io;
+ public static final int IO = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+ new ListMetaData(TType.LIST,
+ new StructMetaData(TType.STRUCT, TRowResult.class))));
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(getRowWithColumns_result.class, metaDataMap);
+ }
+
+ public getRowWithColumns_result() {
+ }
+
+ public getRowWithColumns_result(
+ List<TRowResult> success,
+ IOError io)
+ {
+ this();
+ this.success = success;
+ this.io = io;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public getRowWithColumns_result(getRowWithColumns_result other) {
+ if (other.isSetSuccess()) {
+ List<TRowResult> __this__success = new ArrayList<TRowResult>();
+ for (TRowResult other_element : other.success) {
+ __this__success.add(new TRowResult(other_element));
+ }
+ this.success = __this__success;
+ }
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ }
+
+ @Override
+ public getRowWithColumns_result clone() {
+ return new getRowWithColumns_result(this);
+ }
+
+ public int getSuccessSize() {
+ return (this.success == null) ? 0 : this.success.size();
+ }
+
+ public java.util.Iterator<TRowResult> getSuccessIterator() {
+ return (this.success == null) ? null : this.success.iterator();
+ }
+
+ public void addToSuccess(TRowResult elem) {
+ if (this.success == null) {
+ this.success = new ArrayList<TRowResult>();
+ }
+ this.success.add(elem);
+ }
+
+ public List<TRowResult> getSuccess() {
+ return this.success;
+ }
+
+ public void setSuccess(List<TRowResult> success) {
+ this.success = success;
+ }
+
+ public void unsetSuccess() {
+ this.success = null;
+ }
+
+ // Returns true if field success is set (has been assigned a value) and false otherwise
+ public boolean isSetSuccess() {
+ return this.success != null;
+ }
+
+ public void setSuccessIsSet(boolean value) {
+ if (!value) {
+ this.success = null;
+ }
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+ // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case SUCCESS:
+ if (value == null) {
+ unsetSuccess();
+ } else {
+ setSuccess((List<TRowResult>)value);
+ }
+ break;
+
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return getSuccess();
+
+ case IO:
+ return getIo();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return isSetSuccess();
+ case IO:
+ return isSetIo();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof getRowWithColumns_result)
+ return this.equals((getRowWithColumns_result)that);
+ return false;
+ }
+
+ public boolean equals(getRowWithColumns_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_success = true && this.isSetSuccess();
+ boolean that_present_success = true && that.isSetSuccess();
+ if (this_present_success || that_present_success) {
+ if (!(this_present_success && that_present_success))
+ return false;
+ if (!this.success.equals(that.success))
+ return false;
+ }
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case SUCCESS:
+ if (field.type == TType.LIST) {
+ {
+ TList _list46 = iprot.readListBegin();
+ this.success = new ArrayList<TRowResult>(_list46.size);
+ for (int _i47 = 0; _i47 < _list46.size; ++_i47)
+ {
+ TRowResult _elem48;
+ _elem48 = new TRowResult();
+ _elem48.read(iprot);
+ this.success.add(_elem48);
+ }
+ iprot.readListEnd();
+ }
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetSuccess()) {
+ oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+ {
+ oprot.writeListBegin(new TList(TType.STRUCT, this.success.size()));
+ for (TRowResult _iter49 : this.success) {
+ _iter49.write(oprot);
+ }
+ oprot.writeListEnd();
+ }
+ oprot.writeFieldEnd();
+ } else if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("getRowWithColumns_result(");
+ boolean first = true;
+
+ sb.append("success:");
+ if (this.success == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.success);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class getRowTs_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("getRowTs_args");
+ private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+ private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)2);
+ private static final TField TIMESTAMP_FIELD_DESC = new TField("timestamp", TType.I64, (short)3);
+
+ public byte[] tableName;
+ public static final int TABLENAME = 1;
+ public byte[] row;
+ public static final int ROW = 2;
+ public long timestamp;
+ public static final int TIMESTAMP = 3;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ public boolean timestamp = false;
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(TABLENAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(TIMESTAMP, new FieldMetaData("timestamp", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.I64)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(getRowTs_args.class, metaDataMap);
+ }
+
+ public getRowTs_args() {
+ }
+
+ public getRowTs_args(
+ byte[] tableName,
+ byte[] row,
+ long timestamp)
+ {
+ this();
+ this.tableName = tableName;
+ this.row = row;
+ this.timestamp = timestamp;
+ this.__isset.timestamp = true;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public getRowTs_args(getRowTs_args other) {
+ if (other.isSetTableName()) {
+ this.tableName = other.tableName;
+ }
+ if (other.isSetRow()) {
+ this.row = other.row;
+ }
+ __isset.timestamp = other.__isset.timestamp;
+ this.timestamp = other.timestamp;
+ }
+
+ @Override
+ public getRowTs_args clone() {
+ return new getRowTs_args(this);
+ }
+
+ public byte[] getTableName() {
+ return this.tableName;
+ }
+
+ public void setTableName(byte[] tableName) {
+ this.tableName = tableName;
+ }
+
+ public void unsetTableName() {
+ this.tableName = null;
+ }
+
+ // Returns true if field tableName is set (has been assigned a value) and false otherwise
+ public boolean isSetTableName() {
+ return this.tableName != null;
+ }
+
+ public void setTableNameIsSet(boolean value) {
+ if (!value) {
+ this.tableName = null;
+ }
+ }
+
+ public byte[] getRow() {
+ return this.row;
+ }
+
+ public void setRow(byte[] row) {
+ this.row = row;
+ }
+
+ public void unsetRow() {
+ this.row = null;
+ }
+
+    // Returns true if field row is set (has been assigned a value) and false otherwise
+ public boolean isSetRow() {
+ return this.row != null;
+ }
+
+ public void setRowIsSet(boolean value) {
+ if (!value) {
+ this.row = null;
+ }
+ }
+
+ public long getTimestamp() {
+ return this.timestamp;
+ }
+
+ public void setTimestamp(long timestamp) {
+ this.timestamp = timestamp;
+ this.__isset.timestamp = true;
+ }
+
+ public void unsetTimestamp() {
+ this.__isset.timestamp = false;
+ }
+
+    // Returns true if field timestamp is set (has been assigned a value) and false otherwise
+ public boolean isSetTimestamp() {
+ return this.__isset.timestamp;
+ }
+
+ public void setTimestampIsSet(boolean value) {
+ this.__isset.timestamp = value;
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case TABLENAME:
+ if (value == null) {
+ unsetTableName();
+ } else {
+ setTableName((byte[])value);
+ }
+ break;
+
+ case ROW:
+ if (value == null) {
+ unsetRow();
+ } else {
+ setRow((byte[])value);
+ }
+ break;
+
+ case TIMESTAMP:
+ if (value == null) {
+ unsetTimestamp();
+ } else {
+ setTimestamp((Long)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return getTableName();
+
+ case ROW:
+ return getRow();
+
+ case TIMESTAMP:
+ return new Long(getTimestamp());
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+    // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return isSetTableName();
+ case ROW:
+ return isSetRow();
+ case TIMESTAMP:
+ return isSetTimestamp();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof getRowTs_args)
+ return this.equals((getRowTs_args)that);
+ return false;
+ }
+
+ public boolean equals(getRowTs_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_tableName = true && this.isSetTableName();
+ boolean that_present_tableName = true && that.isSetTableName();
+ if (this_present_tableName || that_present_tableName) {
+ if (!(this_present_tableName && that_present_tableName))
+ return false;
+ if (!java.util.Arrays.equals(this.tableName, that.tableName))
+ return false;
+ }
+
+ boolean this_present_row = true && this.isSetRow();
+ boolean that_present_row = true && that.isSetRow();
+ if (this_present_row || that_present_row) {
+ if (!(this_present_row && that_present_row))
+ return false;
+ if (!java.util.Arrays.equals(this.row, that.row))
+ return false;
+ }
+
+ boolean this_present_timestamp = true;
+ boolean that_present_timestamp = true;
+ if (this_present_timestamp || that_present_timestamp) {
+ if (!(this_present_timestamp && that_present_timestamp))
+ return false;
+ if (this.timestamp != that.timestamp)
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
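+      // Generated stub: a constant hash code stays consistent with equals() but puts every instance in the same hash bucket.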
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
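+      // Read fields until the STOP marker; unknown ids and mismatched types are skipped rather than treated as errors.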
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case TABLENAME:
+ if (field.type == TType.STRING) {
+ this.tableName = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case ROW:
+ if (field.type == TType.STRING) {
+ this.row = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case TIMESTAMP:
+ if (field.type == TType.I64) {
+ this.timestamp = iprot.readI64();
+ this.__isset.timestamp = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
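+      // Object fields are written only when non-null; the primitive timestamp is always written.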
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.tableName != null) {
+ oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+ oprot.writeBinary(this.tableName);
+ oprot.writeFieldEnd();
+ }
+ if (this.row != null) {
+ oprot.writeFieldBegin(ROW_FIELD_DESC);
+ oprot.writeBinary(this.row);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldBegin(TIMESTAMP_FIELD_DESC);
+ oprot.writeI64(this.timestamp);
+ oprot.writeFieldEnd();
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("getRowTs_args(");
+ boolean first = true;
+
+ sb.append("tableName:");
+ if (this.tableName == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.tableName);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("row:");
+ if (this.row == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.row);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("timestamp:");
+ sb.append(this.timestamp);
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class getRowTs_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("getRowTs_result");
+ private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.LIST, (short)0);
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+ public List<TRowResult> success;
+ public static final int SUCCESS = 0;
+ public IOError io;
+ public static final int IO = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+ new ListMetaData(TType.LIST,
+ new StructMetaData(TType.STRUCT, TRowResult.class))));
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(getRowTs_result.class, metaDataMap);
+ }
+
+ public getRowTs_result() {
+ }
+
+ public getRowTs_result(
+ List<TRowResult> success,
+ IOError io)
+ {
+ this();
+ this.success = success;
+ this.io = io;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public getRowTs_result(getRowTs_result other) {
+ if (other.isSetSuccess()) {
+ List<TRowResult> __this__success = new ArrayList<TRowResult>();
+ for (TRowResult other_element : other.success) {
+ __this__success.add(new TRowResult(other_element));
+ }
+ this.success = __this__success;
+ }
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ }
+
+ @Override
+ public getRowTs_result clone() {
+ return new getRowTs_result(this);
+ }
+
+ public int getSuccessSize() {
+ return (this.success == null) ? 0 : this.success.size();
+ }
+
+ public java.util.Iterator<TRowResult> getSuccessIterator() {
+ return (this.success == null) ? null : this.success.iterator();
+ }
+
+ public void addToSuccess(TRowResult elem) {
+ if (this.success == null) {
+ this.success = new ArrayList<TRowResult>();
+ }
+ this.success.add(elem);
+ }
+
+ public List<TRowResult> getSuccess() {
+ return this.success;
+ }
+
+ public void setSuccess(List<TRowResult> success) {
+ this.success = success;
+ }
+
+ public void unsetSuccess() {
+ this.success = null;
+ }
+
+    // Returns true if field success is set (has been assigned a value) and false otherwise
+ public boolean isSetSuccess() {
+ return this.success != null;
+ }
+
+ public void setSuccessIsSet(boolean value) {
+ if (!value) {
+ this.success = null;
+ }
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+    // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case SUCCESS:
+ if (value == null) {
+ unsetSuccess();
+ } else {
+ setSuccess((List<TRowResult>)value);
+ }
+ break;
+
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return getSuccess();
+
+ case IO:
+ return getIo();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+    // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return isSetSuccess();
+ case IO:
+ return isSetIo();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof getRowTs_result)
+ return this.equals((getRowTs_result)that);
+ return false;
+ }
+
+ public boolean equals(getRowTs_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_success = true && this.isSetSuccess();
+ boolean that_present_success = true && that.isSetSuccess();
+ if (this_present_success || that_present_success) {
+ if (!(this_present_success && that_present_success))
+ return false;
+ if (!this.success.equals(that.success))
+ return false;
+ }
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case SUCCESS:
+ if (field.type == TType.LIST) {
+ {
+ TList _list50 = iprot.readListBegin();
+ this.success = new ArrayList<TRowResult>(_list50.size);
+ for (int _i51 = 0; _i51 < _list50.size; ++_i51)
+ {
+ TRowResult _elem52;
+ _elem52 = new TRowResult();
+ _elem52.read(iprot);
+ this.success.add(_elem52);
+ }
+ iprot.readListEnd();
+ }
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetSuccess()) {
+ oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+ {
+ oprot.writeListBegin(new TList(TType.STRUCT, this.success.size()));
+ for (TRowResult _iter53 : this.success) {
+ _iter53.write(oprot);
+ }
+ oprot.writeListEnd();
+ }
+ oprot.writeFieldEnd();
+ } else if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("getRowTs_result(");
+ boolean first = true;
+
+ sb.append("success:");
+ if (this.success == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.success);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class getRowWithColumnsTs_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("getRowWithColumnsTs_args");
+ private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+ private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)2);
+ private static final TField COLUMNS_FIELD_DESC = new TField("columns", TType.LIST, (short)3);
+ private static final TField TIMESTAMP_FIELD_DESC = new TField("timestamp", TType.I64, (short)4);
+
+ public byte[] tableName;
+ public static final int TABLENAME = 1;
+ public byte[] row;
+ public static final int ROW = 2;
+ public List<byte[]> columns;
+ public static final int COLUMNS = 3;
+ public long timestamp;
+ public static final int TIMESTAMP = 4;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ public boolean timestamp = false;
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(TABLENAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(COLUMNS, new FieldMetaData("columns", TFieldRequirementType.DEFAULT,
+ new ListMetaData(TType.LIST,
+ new FieldValueMetaData(TType.STRING))));
+ put(TIMESTAMP, new FieldMetaData("timestamp", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.I64)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(getRowWithColumnsTs_args.class, metaDataMap);
+ }
+
+ public getRowWithColumnsTs_args() {
+ }
+
+ public getRowWithColumnsTs_args(
+ byte[] tableName,
+ byte[] row,
+ List<byte[]> columns,
+ long timestamp)
+ {
+ this();
+ this.tableName = tableName;
+ this.row = row;
+ this.columns = columns;
+ this.timestamp = timestamp;
+ this.__isset.timestamp = true;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public getRowWithColumnsTs_args(getRowWithColumnsTs_args other) {
+ if (other.isSetTableName()) {
+ this.tableName = other.tableName;
+ }
+ if (other.isSetRow()) {
+ this.row = other.row;
+ }
+ if (other.isSetColumns()) {
+ List<byte[]> __this__columns = new ArrayList<byte[]>();
+ for (byte[] other_element : other.columns) {
+ __this__columns.add(other_element);
+ }
+ this.columns = __this__columns;
+ }
+ __isset.timestamp = other.__isset.timestamp;
+ this.timestamp = other.timestamp;
+ }
+
+ @Override
+ public getRowWithColumnsTs_args clone() {
+ return new getRowWithColumnsTs_args(this);
+ }
+
+ public byte[] getTableName() {
+ return this.tableName;
+ }
+
+ public void setTableName(byte[] tableName) {
+ this.tableName = tableName;
+ }
+
+ public void unsetTableName() {
+ this.tableName = null;
+ }
+
+    // Returns true if field tableName is set (has been assigned a value) and false otherwise
+ public boolean isSetTableName() {
+ return this.tableName != null;
+ }
+
+ public void setTableNameIsSet(boolean value) {
+ if (!value) {
+ this.tableName = null;
+ }
+ }
+
+ public byte[] getRow() {
+ return this.row;
+ }
+
+ public void setRow(byte[] row) {
+ this.row = row;
+ }
+
+ public void unsetRow() {
+ this.row = null;
+ }
+
+    // Returns true if field row is set (has been assigned a value) and false otherwise
+ public boolean isSetRow() {
+ return this.row != null;
+ }
+
+ public void setRowIsSet(boolean value) {
+ if (!value) {
+ this.row = null;
+ }
+ }
+
+ public int getColumnsSize() {
+ return (this.columns == null) ? 0 : this.columns.size();
+ }
+
+ public java.util.Iterator<byte[]> getColumnsIterator() {
+ return (this.columns == null) ? null : this.columns.iterator();
+ }
+
+ public void addToColumns(byte[] elem) {
+ if (this.columns == null) {
+ this.columns = new ArrayList<byte[]>();
+ }
+ this.columns.add(elem);
+ }
+
+ public List<byte[]> getColumns() {
+ return this.columns;
+ }
+
+ public void setColumns(List<byte[]> columns) {
+ this.columns = columns;
+ }
+
+ public void unsetColumns() {
+ this.columns = null;
+ }
+
+    // Returns true if field columns is set (has been assigned a value) and false otherwise
+ public boolean isSetColumns() {
+ return this.columns != null;
+ }
+
+ public void setColumnsIsSet(boolean value) {
+ if (!value) {
+ this.columns = null;
+ }
+ }
+
+ public long getTimestamp() {
+ return this.timestamp;
+ }
+
+ public void setTimestamp(long timestamp) {
+ this.timestamp = timestamp;
+ this.__isset.timestamp = true;
+ }
+
+ public void unsetTimestamp() {
+ this.__isset.timestamp = false;
+ }
+
+    // Returns true if field timestamp is set (has been assigned a value) and false otherwise
+ public boolean isSetTimestamp() {
+ return this.__isset.timestamp;
+ }
+
+ public void setTimestampIsSet(boolean value) {
+ this.__isset.timestamp = value;
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case TABLENAME:
+ if (value == null) {
+ unsetTableName();
+ } else {
+ setTableName((byte[])value);
+ }
+ break;
+
+ case ROW:
+ if (value == null) {
+ unsetRow();
+ } else {
+ setRow((byte[])value);
+ }
+ break;
+
+ case COLUMNS:
+ if (value == null) {
+ unsetColumns();
+ } else {
+ setColumns((List<byte[]>)value);
+ }
+ break;
+
+ case TIMESTAMP:
+ if (value == null) {
+ unsetTimestamp();
+ } else {
+ setTimestamp((Long)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return getTableName();
+
+ case ROW:
+ return getRow();
+
+ case COLUMNS:
+ return getColumns();
+
+ case TIMESTAMP:
+ return new Long(getTimestamp());
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+    // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return isSetTableName();
+ case ROW:
+ return isSetRow();
+ case COLUMNS:
+ return isSetColumns();
+ case TIMESTAMP:
+ return isSetTimestamp();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof getRowWithColumnsTs_args)
+ return this.equals((getRowWithColumnsTs_args)that);
+ return false;
+ }
+
+ public boolean equals(getRowWithColumnsTs_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_tableName = true && this.isSetTableName();
+ boolean that_present_tableName = true && that.isSetTableName();
+ if (this_present_tableName || that_present_tableName) {
+ if (!(this_present_tableName && that_present_tableName))
+ return false;
+ if (!java.util.Arrays.equals(this.tableName, that.tableName))
+ return false;
+ }
+
+ boolean this_present_row = true && this.isSetRow();
+ boolean that_present_row = true && that.isSetRow();
+ if (this_present_row || that_present_row) {
+ if (!(this_present_row && that_present_row))
+ return false;
+ if (!java.util.Arrays.equals(this.row, that.row))
+ return false;
+ }
+
+ boolean this_present_columns = true && this.isSetColumns();
+ boolean that_present_columns = true && that.isSetColumns();
+ if (this_present_columns || that_present_columns) {
+ if (!(this_present_columns && that_present_columns))
+ return false;
+ if (!this.columns.equals(that.columns))
+ return false;
+ }
+
+ boolean this_present_timestamp = true;
+ boolean that_present_timestamp = true;
+ if (this_present_timestamp || that_present_timestamp) {
+ if (!(this_present_timestamp && that_present_timestamp))
+ return false;
+ if (this.timestamp != that.timestamp)
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case TABLENAME:
+ if (field.type == TType.STRING) {
+ this.tableName = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case ROW:
+ if (field.type == TType.STRING) {
+ this.row = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case COLUMNS:
+ if (field.type == TType.LIST) {
+ {
+ TList _list54 = iprot.readListBegin();
+ this.columns = new ArrayList<byte[]>(_list54.size);
+ for (int _i55 = 0; _i55 < _list54.size; ++_i55)
+ {
+ byte[] _elem56;
+ _elem56 = iprot.readBinary();
+ this.columns.add(_elem56);
+ }
+ iprot.readListEnd();
+ }
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case TIMESTAMP:
+ if (field.type == TType.I64) {
+ this.timestamp = iprot.readI64();
+ this.__isset.timestamp = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.tableName != null) {
+ oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+ oprot.writeBinary(this.tableName);
+ oprot.writeFieldEnd();
+ }
+ if (this.row != null) {
+ oprot.writeFieldBegin(ROW_FIELD_DESC);
+ oprot.writeBinary(this.row);
+ oprot.writeFieldEnd();
+ }
+ if (this.columns != null) {
+ oprot.writeFieldBegin(COLUMNS_FIELD_DESC);
+ {
+ oprot.writeListBegin(new TList(TType.STRING, this.columns.size()));
+ for (byte[] _iter57 : this.columns) {
+ oprot.writeBinary(_iter57);
+ }
+ oprot.writeListEnd();
+ }
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldBegin(TIMESTAMP_FIELD_DESC);
+ oprot.writeI64(this.timestamp);
+ oprot.writeFieldEnd();
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("getRowWithColumnsTs_args(");
+ boolean first = true;
+
+ sb.append("tableName:");
+ if (this.tableName == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.tableName);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("row:");
+ if (this.row == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.row);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("columns:");
+ if (this.columns == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.columns);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("timestamp:");
+ sb.append(this.timestamp);
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class getRowWithColumnsTs_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("getRowWithColumnsTs_result");
+ private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.LIST, (short)0);
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+ public List<TRowResult> success;
+ public static final int SUCCESS = 0;
+ public IOError io;
+ public static final int IO = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+ new ListMetaData(TType.LIST,
+ new StructMetaData(TType.STRUCT, TRowResult.class))));
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(getRowWithColumnsTs_result.class, metaDataMap);
+ }
+
+ public getRowWithColumnsTs_result() {
+ }
+
+ public getRowWithColumnsTs_result(
+ List<TRowResult> success,
+ IOError io)
+ {
+ this();
+ this.success = success;
+ this.io = io;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public getRowWithColumnsTs_result(getRowWithColumnsTs_result other) {
+ if (other.isSetSuccess()) {
+ List<TRowResult> __this__success = new ArrayList<TRowResult>();
+ for (TRowResult other_element : other.success) {
+ __this__success.add(new TRowResult(other_element));
+ }
+ this.success = __this__success;
+ }
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ }
+
+ @Override
+ public getRowWithColumnsTs_result clone() {
+ return new getRowWithColumnsTs_result(this);
+ }
+
+ public int getSuccessSize() {
+ return (this.success == null) ? 0 : this.success.size();
+ }
+
+ public java.util.Iterator<TRowResult> getSuccessIterator() {
+ return (this.success == null) ? null : this.success.iterator();
+ }
+
+ public void addToSuccess(TRowResult elem) {
+ if (this.success == null) {
+ this.success = new ArrayList<TRowResult>();
+ }
+ this.success.add(elem);
+ }
+
+ public List<TRowResult> getSuccess() {
+ return this.success;
+ }
+
+ public void setSuccess(List<TRowResult> success) {
+ this.success = success;
+ }
+
+ public void unsetSuccess() {
+ this.success = null;
+ }
+
+    // Returns true if field success is set (has been assigned a value) and false otherwise
+ public boolean isSetSuccess() {
+ return this.success != null;
+ }
+
+ public void setSuccessIsSet(boolean value) {
+ if (!value) {
+ this.success = null;
+ }
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+    // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case SUCCESS:
+ if (value == null) {
+ unsetSuccess();
+ } else {
+ setSuccess((List<TRowResult>)value);
+ }
+ break;
+
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return getSuccess();
+
+ case IO:
+ return getIo();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+    // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return isSetSuccess();
+ case IO:
+ return isSetIo();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof getRowWithColumnsTs_result)
+ return this.equals((getRowWithColumnsTs_result)that);
+ return false;
+ }
+
+ public boolean equals(getRowWithColumnsTs_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_success = true && this.isSetSuccess();
+ boolean that_present_success = true && that.isSetSuccess();
+ if (this_present_success || that_present_success) {
+ if (!(this_present_success && that_present_success))
+ return false;
+ if (!this.success.equals(that.success))
+ return false;
+ }
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case SUCCESS:
+ if (field.type == TType.LIST) {
+ {
+ TList _list58 = iprot.readListBegin();
+ this.success = new ArrayList<TRowResult>(_list58.size);
+ for (int _i59 = 0; _i59 < _list58.size; ++_i59)
+ {
+ TRowResult _elem60;
+ _elem60 = new TRowResult();
+ _elem60.read(iprot);
+ this.success.add(_elem60);
+ }
+ iprot.readListEnd();
+ }
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetSuccess()) {
+ oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+ {
+ oprot.writeListBegin(new TList(TType.STRUCT, this.success.size()));
+ for (TRowResult _iter61 : this.success) {
+ _iter61.write(oprot);
+ }
+ oprot.writeListEnd();
+ }
+ oprot.writeFieldEnd();
+ } else if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("getRowWithColumnsTs_result(");
+ boolean first = true;
+
+ sb.append("success:");
+ if (this.success == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.success);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class mutateRow_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("mutateRow_args");
+ private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+ private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)2);
+ private static final TField MUTATIONS_FIELD_DESC = new TField("mutations", TType.LIST, (short)3);
+
+ public byte[] tableName;
+ public static final int TABLENAME = 1;
+ public byte[] row;
+ public static final int ROW = 2;
+ public List<Mutation> mutations;
+ public static final int MUTATIONS = 3;
+
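+    // mutateRow_args has no primitive fields, so the generated Isset holder below is empty.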
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(TABLENAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(MUTATIONS, new FieldMetaData("mutations", TFieldRequirementType.DEFAULT,
+ new ListMetaData(TType.LIST,
+ new StructMetaData(TType.STRUCT, Mutation.class))));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(mutateRow_args.class, metaDataMap);
+ }
+
+ public mutateRow_args() {
+ }
+
+ public mutateRow_args(
+ byte[] tableName,
+ byte[] row,
+ List<Mutation> mutations)
+ {
+ this();
+ this.tableName = tableName;
+ this.row = row;
+ this.mutations = mutations;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public mutateRow_args(mutateRow_args other) {
+ if (other.isSetTableName()) {
+ this.tableName = other.tableName;
+ }
+ if (other.isSetRow()) {
+ this.row = other.row;
+ }
+ if (other.isSetMutations()) {
+ List<Mutation> __this__mutations = new ArrayList<Mutation>();
+ for (Mutation other_element : other.mutations) {
+ __this__mutations.add(new Mutation(other_element));
+ }
+ this.mutations = __this__mutations;
+ }
+ }
+
+ @Override
+ public mutateRow_args clone() {
+ return new mutateRow_args(this);
+ }
+
+ public byte[] getTableName() {
+ return this.tableName;
+ }
+
+ public void setTableName(byte[] tableName) {
+ this.tableName = tableName;
+ }
+
+ public void unsetTableName() {
+ this.tableName = null;
+ }
+
+    // Returns true if field tableName is set (has been assigned a value) and false otherwise
+ public boolean isSetTableName() {
+ return this.tableName != null;
+ }
+
+ public void setTableNameIsSet(boolean value) {
+ if (!value) {
+ this.tableName = null;
+ }
+ }
+
+ public byte[] getRow() {
+ return this.row;
+ }
+
+ public void setRow(byte[] row) {
+ this.row = row;
+ }
+
+ public void unsetRow() {
+ this.row = null;
+ }
+
+    // Returns true if field row is set (has been assigned a value) and false otherwise
+ public boolean isSetRow() {
+ return this.row != null;
+ }
+
+ public void setRowIsSet(boolean value) {
+ if (!value) {
+ this.row = null;
+ }
+ }
+
+ public int getMutationsSize() {
+ return (this.mutations == null) ? 0 : this.mutations.size();
+ }
+
+ public java.util.Iterator<Mutation> getMutationsIterator() {
+ return (this.mutations == null) ? null : this.mutations.iterator();
+ }
+
+ public void addToMutations(Mutation elem) {
+ if (this.mutations == null) {
+ this.mutations = new ArrayList<Mutation>();
+ }
+ this.mutations.add(elem);
+ }
+
+ public List<Mutation> getMutations() {
+ return this.mutations;
+ }
+
+ public void setMutations(List<Mutation> mutations) {
+ this.mutations = mutations;
+ }
+
+ public void unsetMutations() {
+ this.mutations = null;
+ }
+
+    // Returns true if field mutations is set (has been assigned a value) and false otherwise
+ public boolean isSetMutations() {
+ return this.mutations != null;
+ }
+
+ public void setMutationsIsSet(boolean value) {
+ if (!value) {
+ this.mutations = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case TABLENAME:
+ if (value == null) {
+ unsetTableName();
+ } else {
+ setTableName((byte[])value);
+ }
+ break;
+
+ case ROW:
+ if (value == null) {
+ unsetRow();
+ } else {
+ setRow((byte[])value);
+ }
+ break;
+
+ case MUTATIONS:
+ if (value == null) {
+ unsetMutations();
+ } else {
+ setMutations((List<Mutation>)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return getTableName();
+
+ case ROW:
+ return getRow();
+
+ case MUTATIONS:
+ return getMutations();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+    // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return isSetTableName();
+ case ROW:
+ return isSetRow();
+ case MUTATIONS:
+ return isSetMutations();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof mutateRow_args)
+ return this.equals((mutateRow_args)that);
+ return false;
+ }
+
+ public boolean equals(mutateRow_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_tableName = true && this.isSetTableName();
+ boolean that_present_tableName = true && that.isSetTableName();
+ if (this_present_tableName || that_present_tableName) {
+ if (!(this_present_tableName && that_present_tableName))
+ return false;
+ if (!java.util.Arrays.equals(this.tableName, that.tableName))
+ return false;
+ }
+
+ boolean this_present_row = true && this.isSetRow();
+ boolean that_present_row = true && that.isSetRow();
+ if (this_present_row || that_present_row) {
+ if (!(this_present_row && that_present_row))
+ return false;
+ if (!java.util.Arrays.equals(this.row, that.row))
+ return false;
+ }
+
+ boolean this_present_mutations = true && this.isSetMutations();
+ boolean that_present_mutations = true && that.isSetMutations();
+ if (this_present_mutations || that_present_mutations) {
+ if (!(this_present_mutations && that_present_mutations))
+ return false;
+ if (!this.mutations.equals(that.mutations))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case TABLENAME:
+ if (field.type == TType.STRING) {
+ this.tableName = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case ROW:
+ if (field.type == TType.STRING) {
+ this.row = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case MUTATIONS:
+ if (field.type == TType.LIST) {
+ {
+ TList _list62 = iprot.readListBegin();
+ this.mutations = new ArrayList<Mutation>(_list62.size);
+ for (int _i63 = 0; _i63 < _list62.size; ++_i63)
+ {
+ Mutation _elem64;
+ _elem64 = new Mutation();
+ _elem64.read(iprot);
+ this.mutations.add(_elem64);
+ }
+ iprot.readListEnd();
+ }
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.tableName != null) {
+ oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+ oprot.writeBinary(this.tableName);
+ oprot.writeFieldEnd();
+ }
+ if (this.row != null) {
+ oprot.writeFieldBegin(ROW_FIELD_DESC);
+ oprot.writeBinary(this.row);
+ oprot.writeFieldEnd();
+ }
+ if (this.mutations != null) {
+ oprot.writeFieldBegin(MUTATIONS_FIELD_DESC);
+ {
+ oprot.writeListBegin(new TList(TType.STRUCT, this.mutations.size()));
+ for (Mutation _iter65 : this.mutations) {
+ _iter65.write(oprot);
+ }
+ oprot.writeListEnd();
+ }
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("mutateRow_args(");
+ boolean first = true;
+
+ sb.append("tableName:");
+ if (this.tableName == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.tableName);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("row:");
+ if (this.row == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.row);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("mutations:");
+ if (this.mutations == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.mutations);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class mutateRow_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("mutateRow_result");
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+ private static final TField IA_FIELD_DESC = new TField("ia", TType.STRUCT, (short)2);
+
+ public IOError io;
+ public static final int IO = 1;
+ public IllegalArgument ia;
+ public static final int IA = 2;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ put(IA, new FieldMetaData("ia", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(mutateRow_result.class, metaDataMap);
+ }
+
+ public mutateRow_result() {
+ }
+
+ public mutateRow_result(
+ IOError io,
+ IllegalArgument ia)
+ {
+ this();
+ this.io = io;
+ this.ia = ia;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public mutateRow_result(mutateRow_result other) {
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ if (other.isSetIa()) {
+ this.ia = new IllegalArgument(other.ia);
+ }
+ }
+
+ @Override
+ public mutateRow_result clone() {
+ return new mutateRow_result(this);
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+    // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public IllegalArgument getIa() {
+ return this.ia;
+ }
+
+ public void setIa(IllegalArgument ia) {
+ this.ia = ia;
+ }
+
+ public void unsetIa() {
+ this.ia = null;
+ }
+
+    // Returns true if field ia is set (has been assigned a value) and false otherwise
+ public boolean isSetIa() {
+ return this.ia != null;
+ }
+
+ public void setIaIsSet(boolean value) {
+ if (!value) {
+ this.ia = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ case IA:
+ if (value == null) {
+ unsetIa();
+ } else {
+ setIa((IllegalArgument)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case IO:
+ return getIo();
+
+ case IA:
+ return getIa();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+    // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case IO:
+ return isSetIo();
+ case IA:
+ return isSetIa();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof mutateRow_result)
+ return this.equals((mutateRow_result)that);
+ return false;
+ }
+
+ public boolean equals(mutateRow_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ boolean this_present_ia = true && this.isSetIa();
+ boolean that_present_ia = true && that.isSetIa();
+ if (this_present_ia || that_present_ia) {
+ if (!(this_present_ia && that_present_ia))
+ return false;
+ if (!this.ia.equals(that.ia))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case IA:
+ if (field.type == TType.STRUCT) {
+ this.ia = new IllegalArgument();
+ this.ia.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ } else if (this.isSetIa()) {
+ oprot.writeFieldBegin(IA_FIELD_DESC);
+ this.ia.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("mutateRow_result(");
+ boolean first = true;
+
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("ia:");
+ if (this.ia == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.ia);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class mutateRowTs_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("mutateRowTs_args");
+ private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+ private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)2);
+ private static final TField MUTATIONS_FIELD_DESC = new TField("mutations", TType.LIST, (short)3);
+ private static final TField TIMESTAMP_FIELD_DESC = new TField("timestamp", TType.I64, (short)4);
+
+ public byte[] tableName;
+ public static final int TABLENAME = 1;
+ public byte[] row;
+ public static final int ROW = 2;
+ public List<Mutation> mutations;
+ public static final int MUTATIONS = 3;
+ public long timestamp;
+ public static final int TIMESTAMP = 4;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ public boolean timestamp = false;
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(TABLENAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(MUTATIONS, new FieldMetaData("mutations", TFieldRequirementType.DEFAULT,
+ new ListMetaData(TType.LIST,
+ new StructMetaData(TType.STRUCT, Mutation.class))));
+ put(TIMESTAMP, new FieldMetaData("timestamp", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.I64)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(mutateRowTs_args.class, metaDataMap);
+ }
+
+ public mutateRowTs_args() {
+ }
+
+ public mutateRowTs_args(
+ byte[] tableName,
+ byte[] row,
+ List<Mutation> mutations,
+ long timestamp)
+ {
+ this();
+ this.tableName = tableName;
+ this.row = row;
+ this.mutations = mutations;
+ this.timestamp = timestamp;
+ this.__isset.timestamp = true;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public mutateRowTs_args(mutateRowTs_args other) {
+ if (other.isSetTableName()) {
+ this.tableName = other.tableName;
+ }
+ if (other.isSetRow()) {
+ this.row = other.row;
+ }
+ if (other.isSetMutations()) {
+ List<Mutation> __this__mutations = new ArrayList<Mutation>();
+ for (Mutation other_element : other.mutations) {
+ __this__mutations.add(new Mutation(other_element));
+ }
+ this.mutations = __this__mutations;
+ }
+ __isset.timestamp = other.__isset.timestamp;
+ this.timestamp = other.timestamp;
+ }
+
+ @Override
+ public mutateRowTs_args clone() {
+ return new mutateRowTs_args(this);
+ }
+
+ public byte[] getTableName() {
+ return this.tableName;
+ }
+
+ public void setTableName(byte[] tableName) {
+ this.tableName = tableName;
+ }
+
+ public void unsetTableName() {
+ this.tableName = null;
+ }
+
+    // Returns true if field tableName is set (has been assigned a value) and false otherwise
+ public boolean isSetTableName() {
+ return this.tableName != null;
+ }
+
+ public void setTableNameIsSet(boolean value) {
+ if (!value) {
+ this.tableName = null;
+ }
+ }
+
+ public byte[] getRow() {
+ return this.row;
+ }
+
+ public void setRow(byte[] row) {
+ this.row = row;
+ }
+
+ public void unsetRow() {
+ this.row = null;
+ }
+
+    // Returns true if field row is set (has been assigned a value) and false otherwise
+ public boolean isSetRow() {
+ return this.row != null;
+ }
+
+ public void setRowIsSet(boolean value) {
+ if (!value) {
+ this.row = null;
+ }
+ }
+
+ public int getMutationsSize() {
+ return (this.mutations == null) ? 0 : this.mutations.size();
+ }
+
+ public java.util.Iterator<Mutation> getMutationsIterator() {
+ return (this.mutations == null) ? null : this.mutations.iterator();
+ }
+
+ public void addToMutations(Mutation elem) {
+ if (this.mutations == null) {
+ this.mutations = new ArrayList<Mutation>();
+ }
+ this.mutations.add(elem);
+ }
+
+ public List<Mutation> getMutations() {
+ return this.mutations;
+ }
+
+ public void setMutations(List<Mutation> mutations) {
+ this.mutations = mutations;
+ }
+
+ public void unsetMutations() {
+ this.mutations = null;
+ }
+
+    // Returns true if field mutations is set (has been assigned a value) and false otherwise
+ public boolean isSetMutations() {
+ return this.mutations != null;
+ }
+
+ public void setMutationsIsSet(boolean value) {
+ if (!value) {
+ this.mutations = null;
+ }
+ }
+
+ public long getTimestamp() {
+ return this.timestamp;
+ }
+
+ public void setTimestamp(long timestamp) {
+ this.timestamp = timestamp;
+ this.__isset.timestamp = true;
+ }
+
+ public void unsetTimestamp() {
+ this.__isset.timestamp = false;
+ }
+
+    // Returns true if field timestamp is set (has been assigned a value) and false otherwise
+ public boolean isSetTimestamp() {
+ return this.__isset.timestamp;
+ }
+
+ public void setTimestampIsSet(boolean value) {
+ this.__isset.timestamp = value;
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case TABLENAME:
+ if (value == null) {
+ unsetTableName();
+ } else {
+ setTableName((byte[])value);
+ }
+ break;
+
+ case ROW:
+ if (value == null) {
+ unsetRow();
+ } else {
+ setRow((byte[])value);
+ }
+ break;
+
+ case MUTATIONS:
+ if (value == null) {
+ unsetMutations();
+ } else {
+ setMutations((List<Mutation>)value);
+ }
+ break;
+
+ case TIMESTAMP:
+ if (value == null) {
+ unsetTimestamp();
+ } else {
+ setTimestamp((Long)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return getTableName();
+
+ case ROW:
+ return getRow();
+
+ case MUTATIONS:
+ return getMutations();
+
+ case TIMESTAMP:
+ return new Long(getTimestamp());
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+    // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return isSetTableName();
+ case ROW:
+ return isSetRow();
+ case MUTATIONS:
+ return isSetMutations();
+ case TIMESTAMP:
+ return isSetTimestamp();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof mutateRowTs_args)
+ return this.equals((mutateRowTs_args)that);
+ return false;
+ }
+
+ public boolean equals(mutateRowTs_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_tableName = true && this.isSetTableName();
+ boolean that_present_tableName = true && that.isSetTableName();
+ if (this_present_tableName || that_present_tableName) {
+ if (!(this_present_tableName && that_present_tableName))
+ return false;
+ if (!java.util.Arrays.equals(this.tableName, that.tableName))
+ return false;
+ }
+
+ boolean this_present_row = true && this.isSetRow();
+ boolean that_present_row = true && that.isSetRow();
+ if (this_present_row || that_present_row) {
+ if (!(this_present_row && that_present_row))
+ return false;
+ if (!java.util.Arrays.equals(this.row, that.row))
+ return false;
+ }
+
+ boolean this_present_mutations = true && this.isSetMutations();
+ boolean that_present_mutations = true && that.isSetMutations();
+ if (this_present_mutations || that_present_mutations) {
+ if (!(this_present_mutations && that_present_mutations))
+ return false;
+ if (!this.mutations.equals(that.mutations))
+ return false;
+ }
+
+ boolean this_present_timestamp = true;
+ boolean that_present_timestamp = true;
+ if (this_present_timestamp || that_present_timestamp) {
+ if (!(this_present_timestamp && that_present_timestamp))
+ return false;
+ if (this.timestamp != that.timestamp)
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case TABLENAME:
+ if (field.type == TType.STRING) {
+ this.tableName = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case ROW:
+ if (field.type == TType.STRING) {
+ this.row = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case MUTATIONS:
+ if (field.type == TType.LIST) {
+ {
+ TList _list66 = iprot.readListBegin();
+ this.mutations = new ArrayList<Mutation>(_list66.size);
+ for (int _i67 = 0; _i67 < _list66.size; ++_i67)
+ {
+ Mutation _elem68;
+ _elem68 = new Mutation();
+ _elem68.read(iprot);
+ this.mutations.add(_elem68);
+ }
+ iprot.readListEnd();
+ }
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case TIMESTAMP:
+ if (field.type == TType.I64) {
+ this.timestamp = iprot.readI64();
+ this.__isset.timestamp = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.tableName != null) {
+ oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+ oprot.writeBinary(this.tableName);
+ oprot.writeFieldEnd();
+ }
+ if (this.row != null) {
+ oprot.writeFieldBegin(ROW_FIELD_DESC);
+ oprot.writeBinary(this.row);
+ oprot.writeFieldEnd();
+ }
+ if (this.mutations != null) {
+ oprot.writeFieldBegin(MUTATIONS_FIELD_DESC);
+ {
+ oprot.writeListBegin(new TList(TType.STRUCT, this.mutations.size()));
+ for (Mutation _iter69 : this.mutations) {
+ _iter69.write(oprot);
+ }
+ oprot.writeListEnd();
+ }
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldBegin(TIMESTAMP_FIELD_DESC);
+ oprot.writeI64(this.timestamp);
+ oprot.writeFieldEnd();
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("mutateRowTs_args(");
+ boolean first = true;
+
+ sb.append("tableName:");
+ if (this.tableName == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.tableName);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("row:");
+ if (this.row == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.row);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("mutations:");
+ if (this.mutations == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.mutations);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("timestamp:");
+ sb.append(this.timestamp);
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class mutateRowTs_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("mutateRowTs_result");
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+ private static final TField IA_FIELD_DESC = new TField("ia", TType.STRUCT, (short)2);
+
+ public IOError io;
+ public static final int IO = 1;
+ public IllegalArgument ia;
+ public static final int IA = 2;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ put(IA, new FieldMetaData("ia", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(mutateRowTs_result.class, metaDataMap);
+ }
+
+ public mutateRowTs_result() {
+ }
+
+ public mutateRowTs_result(
+ IOError io,
+ IllegalArgument ia)
+ {
+ this();
+ this.io = io;
+ this.ia = ia;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public mutateRowTs_result(mutateRowTs_result other) {
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ if (other.isSetIa()) {
+ this.ia = new IllegalArgument(other.ia);
+ }
+ }
+
+ @Override
+ public mutateRowTs_result clone() {
+ return new mutateRowTs_result(this);
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+    // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public IllegalArgument getIa() {
+ return this.ia;
+ }
+
+ public void setIa(IllegalArgument ia) {
+ this.ia = ia;
+ }
+
+ public void unsetIa() {
+ this.ia = null;
+ }
+
+    // Returns true if field ia is set (has been assigned a value) and false otherwise
+ public boolean isSetIa() {
+ return this.ia != null;
+ }
+
+ public void setIaIsSet(boolean value) {
+ if (!value) {
+ this.ia = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ case IA:
+ if (value == null) {
+ unsetIa();
+ } else {
+ setIa((IllegalArgument)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case IO:
+ return getIo();
+
+ case IA:
+ return getIa();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+    // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case IO:
+ return isSetIo();
+ case IA:
+ return isSetIa();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof mutateRowTs_result)
+ return this.equals((mutateRowTs_result)that);
+ return false;
+ }
+
+ public boolean equals(mutateRowTs_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ boolean this_present_ia = true && this.isSetIa();
+ boolean that_present_ia = true && that.isSetIa();
+ if (this_present_ia || that_present_ia) {
+ if (!(this_present_ia && that_present_ia))
+ return false;
+ if (!this.ia.equals(that.ia))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case IA:
+ if (field.type == TType.STRUCT) {
+ this.ia = new IllegalArgument();
+ this.ia.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ } else if (this.isSetIa()) {
+ oprot.writeFieldBegin(IA_FIELD_DESC);
+ this.ia.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("mutateRowTs_result(");
+ boolean first = true;
+
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("ia:");
+ if (this.ia == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.ia);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
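+  /*
+   * Illustrative sketch of driving the mutateRows RPC modelled by the
+   * mutateRows_args/mutateRows_result structs below. The transport and
+   * protocol classes (TSocket, TBinaryProtocol) come from the Thrift runtime,
+   * and the Mutation/BatchMutation constructors are assumed from the struct
+   * definitions earlier in this file; treat this as a usage sketch under
+   * those assumptions, not a tested example.
+   *
+   *   TSocket transport = new TSocket("localhost", 9090);
+   *   Hbase.Client client = new Hbase.Client(new TBinaryProtocol(transport));
+   *   transport.open();
+   *   // one BatchMutation bundles all edits for a single row
+   *   List<Mutation> edits = new ArrayList<Mutation>();
+   *   edits.add(new Mutation(false, "info:name".getBytes(), "value1".getBytes()));
+   *   List<BatchMutation> batch = new ArrayList<BatchMutation>();
+   *   batch.add(new BatchMutation("row1".getBytes(), edits));
+   *   client.mutateRows("t1".getBytes(), batch);  // failures surface as IOError / IllegalArgument
+   *   transport.close();
+   */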
+ public static class mutateRows_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("mutateRows_args");
+ private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+ private static final TField ROW_BATCHES_FIELD_DESC = new TField("rowBatches", TType.LIST, (short)2);
+
+ public byte[] tableName;
+ public static final int TABLENAME = 1;
+ public List<BatchMutation> rowBatches;
+ public static final int ROWBATCHES = 2;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(TABLENAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(ROWBATCHES, new FieldMetaData("rowBatches", TFieldRequirementType.DEFAULT,
+ new ListMetaData(TType.LIST,
+ new StructMetaData(TType.STRUCT, BatchMutation.class))));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(mutateRows_args.class, metaDataMap);
+ }
+
+ public mutateRows_args() {
+ }
+
+ public mutateRows_args(
+ byte[] tableName,
+ List<BatchMutation> rowBatches)
+ {
+ this();
+ this.tableName = tableName;
+ this.rowBatches = rowBatches;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public mutateRows_args(mutateRows_args other) {
+ if (other.isSetTableName()) {
+ this.tableName = other.tableName;
+ }
+ if (other.isSetRowBatches()) {
+ List<BatchMutation> __this__rowBatches = new ArrayList<BatchMutation>();
+ for (BatchMutation other_element : other.rowBatches) {
+ __this__rowBatches.add(new BatchMutation(other_element));
+ }
+ this.rowBatches = __this__rowBatches;
+ }
+ }
+
+ @Override
+ public mutateRows_args clone() {
+ return new mutateRows_args(this);
+ }
+
+ public byte[] getTableName() {
+ return this.tableName;
+ }
+
+ public void setTableName(byte[] tableName) {
+ this.tableName = tableName;
+ }
+
+ public void unsetTableName() {
+ this.tableName = null;
+ }
+
+    // Returns true if field tableName is set (has been assigned a value) and false otherwise
+ public boolean isSetTableName() {
+ return this.tableName != null;
+ }
+
+ public void setTableNameIsSet(boolean value) {
+ if (!value) {
+ this.tableName = null;
+ }
+ }
+
+ public int getRowBatchesSize() {
+ return (this.rowBatches == null) ? 0 : this.rowBatches.size();
+ }
+
+ public java.util.Iterator<BatchMutation> getRowBatchesIterator() {
+ return (this.rowBatches == null) ? null : this.rowBatches.iterator();
+ }
+
+ public void addToRowBatches(BatchMutation elem) {
+ if (this.rowBatches == null) {
+ this.rowBatches = new ArrayList<BatchMutation>();
+ }
+ this.rowBatches.add(elem);
+ }
+
+ public List<BatchMutation> getRowBatches() {
+ return this.rowBatches;
+ }
+
+ public void setRowBatches(List<BatchMutation> rowBatches) {
+ this.rowBatches = rowBatches;
+ }
+
+ public void unsetRowBatches() {
+ this.rowBatches = null;
+ }
+
+    // Returns true if field rowBatches is set (has been assigned a value) and false otherwise
+ public boolean isSetRowBatches() {
+ return this.rowBatches != null;
+ }
+
+ public void setRowBatchesIsSet(boolean value) {
+ if (!value) {
+ this.rowBatches = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case TABLENAME:
+ if (value == null) {
+ unsetTableName();
+ } else {
+ setTableName((byte[])value);
+ }
+ break;
+
+ case ROWBATCHES:
+ if (value == null) {
+ unsetRowBatches();
+ } else {
+ setRowBatches((List<BatchMutation>)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return getTableName();
+
+ case ROWBATCHES:
+ return getRowBatches();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+    // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return isSetTableName();
+ case ROWBATCHES:
+ return isSetRowBatches();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof mutateRows_args)
+ return this.equals((mutateRows_args)that);
+ return false;
+ }
+
+ public boolean equals(mutateRows_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_tableName = true && this.isSetTableName();
+ boolean that_present_tableName = true && that.isSetTableName();
+ if (this_present_tableName || that_present_tableName) {
+ if (!(this_present_tableName && that_present_tableName))
+ return false;
+ if (!java.util.Arrays.equals(this.tableName, that.tableName))
+ return false;
+ }
+
+ boolean this_present_rowBatches = true && this.isSetRowBatches();
+ boolean that_present_rowBatches = true && that.isSetRowBatches();
+ if (this_present_rowBatches || that_present_rowBatches) {
+ if (!(this_present_rowBatches && that_present_rowBatches))
+ return false;
+ if (!this.rowBatches.equals(that.rowBatches))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case TABLENAME:
+ if (field.type == TType.STRING) {
+ this.tableName = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case ROWBATCHES:
+ if (field.type == TType.LIST) {
+ {
+ TList _list70 = iprot.readListBegin();
+ this.rowBatches = new ArrayList<BatchMutation>(_list70.size);
+ for (int _i71 = 0; _i71 < _list70.size; ++_i71)
+ {
+ BatchMutation _elem72;
+ _elem72 = new BatchMutation();
+ _elem72.read(iprot);
+ this.rowBatches.add(_elem72);
+ }
+ iprot.readListEnd();
+ }
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.tableName != null) {
+ oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+ oprot.writeBinary(this.tableName);
+ oprot.writeFieldEnd();
+ }
+ if (this.rowBatches != null) {
+ oprot.writeFieldBegin(ROW_BATCHES_FIELD_DESC);
+ {
+ oprot.writeListBegin(new TList(TType.STRUCT, this.rowBatches.size()));
+ for (BatchMutation _iter73 : this.rowBatches) {
+ _iter73.write(oprot);
+ }
+ oprot.writeListEnd();
+ }
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("mutateRows_args(");
+ boolean first = true;
+
+ sb.append("tableName:");
+ if (this.tableName == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.tableName);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("rowBatches:");
+ if (this.rowBatches == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.rowBatches);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class mutateRows_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("mutateRows_result");
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+ private static final TField IA_FIELD_DESC = new TField("ia", TType.STRUCT, (short)2);
+
+ public IOError io;
+ public static final int IO = 1;
+ public IllegalArgument ia;
+ public static final int IA = 2;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ put(IA, new FieldMetaData("ia", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(mutateRows_result.class, metaDataMap);
+ }
+
+ public mutateRows_result() {
+ }
+
+ public mutateRows_result(
+ IOError io,
+ IllegalArgument ia)
+ {
+ this();
+ this.io = io;
+ this.ia = ia;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public mutateRows_result(mutateRows_result other) {
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ if (other.isSetIa()) {
+ this.ia = new IllegalArgument(other.ia);
+ }
+ }
+
+ @Override
+ public mutateRows_result clone() {
+ return new mutateRows_result(this);
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+    // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public IllegalArgument getIa() {
+ return this.ia;
+ }
+
+ public void setIa(IllegalArgument ia) {
+ this.ia = ia;
+ }
+
+ public void unsetIa() {
+ this.ia = null;
+ }
+
+    // Returns true if field ia is set (has been assigned a value) and false otherwise
+ public boolean isSetIa() {
+ return this.ia != null;
+ }
+
+ public void setIaIsSet(boolean value) {
+ if (!value) {
+ this.ia = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ case IA:
+ if (value == null) {
+ unsetIa();
+ } else {
+ setIa((IllegalArgument)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case IO:
+ return getIo();
+
+ case IA:
+ return getIa();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+    // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case IO:
+ return isSetIo();
+ case IA:
+ return isSetIa();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof mutateRows_result)
+ return this.equals((mutateRows_result)that);
+ return false;
+ }
+
+ public boolean equals(mutateRows_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ boolean this_present_ia = true && this.isSetIa();
+ boolean that_present_ia = true && that.isSetIa();
+ if (this_present_ia || that_present_ia) {
+ if (!(this_present_ia && that_present_ia))
+ return false;
+ if (!this.ia.equals(that.ia))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case IA:
+ if (field.type == TType.STRUCT) {
+ this.ia = new IllegalArgument();
+ this.ia.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ } else if (this.isSetIa()) {
+ oprot.writeFieldBegin(IA_FIELD_DESC);
+ this.ia.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("mutateRows_result(");
+ boolean first = true;
+
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("ia:");
+ if (this.ia == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.ia);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class mutateRowsTs_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("mutateRowsTs_args");
+ private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+ private static final TField ROW_BATCHES_FIELD_DESC = new TField("rowBatches", TType.LIST, (short)2);
+ private static final TField TIMESTAMP_FIELD_DESC = new TField("timestamp", TType.I64, (short)3);
+
+ public byte[] tableName;
+ public static final int TABLENAME = 1;
+ public List<BatchMutation> rowBatches;
+ public static final int ROWBATCHES = 2;
+ public long timestamp;
+ public static final int TIMESTAMP = 3;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ public boolean timestamp = false;
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(TABLENAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(ROWBATCHES, new FieldMetaData("rowBatches", TFieldRequirementType.DEFAULT,
+ new ListMetaData(TType.LIST,
+ new StructMetaData(TType.STRUCT, BatchMutation.class))));
+ put(TIMESTAMP, new FieldMetaData("timestamp", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.I64)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(mutateRowsTs_args.class, metaDataMap);
+ }
+
+ public mutateRowsTs_args() {
+ }
+
+ public mutateRowsTs_args(
+ byte[] tableName,
+ List<BatchMutation> rowBatches,
+ long timestamp)
+ {
+ this();
+ this.tableName = tableName;
+ this.rowBatches = rowBatches;
+ this.timestamp = timestamp;
+ this.__isset.timestamp = true;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public mutateRowsTs_args(mutateRowsTs_args other) {
+ if (other.isSetTableName()) {
+ this.tableName = other.tableName;
+ }
+ if (other.isSetRowBatches()) {
+ List<BatchMutation> __this__rowBatches = new ArrayList<BatchMutation>();
+ for (BatchMutation other_element : other.rowBatches) {
+ __this__rowBatches.add(new BatchMutation(other_element));
+ }
+ this.rowBatches = __this__rowBatches;
+ }
+ __isset.timestamp = other.__isset.timestamp;
+ this.timestamp = other.timestamp;
+ }
+
+ @Override
+ public mutateRowsTs_args clone() {
+ return new mutateRowsTs_args(this);
+ }
+
+ public byte[] getTableName() {
+ return this.tableName;
+ }
+
+ public void setTableName(byte[] tableName) {
+ this.tableName = tableName;
+ }
+
+ public void unsetTableName() {
+ this.tableName = null;
+ }
+
+    // Returns true if field tableName is set (has been assigned a value) and false otherwise
+ public boolean isSetTableName() {
+ return this.tableName != null;
+ }
+
+ public void setTableNameIsSet(boolean value) {
+ if (!value) {
+ this.tableName = null;
+ }
+ }
+
+ public int getRowBatchesSize() {
+ return (this.rowBatches == null) ? 0 : this.rowBatches.size();
+ }
+
+ public java.util.Iterator<BatchMutation> getRowBatchesIterator() {
+ return (this.rowBatches == null) ? null : this.rowBatches.iterator();
+ }
+
+ public void addToRowBatches(BatchMutation elem) {
+ if (this.rowBatches == null) {
+ this.rowBatches = new ArrayList<BatchMutation>();
+ }
+ this.rowBatches.add(elem);
+ }
+
+ public List<BatchMutation> getRowBatches() {
+ return this.rowBatches;
+ }
+
+ public void setRowBatches(List<BatchMutation> rowBatches) {
+ this.rowBatches = rowBatches;
+ }
+
+ public void unsetRowBatches() {
+ this.rowBatches = null;
+ }
+
+    // Returns true if field rowBatches is set (has been assigned a value) and false otherwise
+ public boolean isSetRowBatches() {
+ return this.rowBatches != null;
+ }
+
+ public void setRowBatchesIsSet(boolean value) {
+ if (!value) {
+ this.rowBatches = null;
+ }
+ }
+
+ public long getTimestamp() {
+ return this.timestamp;
+ }
+
+ public void setTimestamp(long timestamp) {
+ this.timestamp = timestamp;
+ this.__isset.timestamp = true;
+ }
+
+ public void unsetTimestamp() {
+ this.__isset.timestamp = false;
+ }
+
+    // Returns true if field timestamp is set (has been assigned a value) and false otherwise
+ public boolean isSetTimestamp() {
+ return this.__isset.timestamp;
+ }
+
+ public void setTimestampIsSet(boolean value) {
+ this.__isset.timestamp = value;
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case TABLENAME:
+ if (value == null) {
+ unsetTableName();
+ } else {
+ setTableName((byte[])value);
+ }
+ break;
+
+ case ROWBATCHES:
+ if (value == null) {
+ unsetRowBatches();
+ } else {
+ setRowBatches((List<BatchMutation>)value);
+ }
+ break;
+
+ case TIMESTAMP:
+ if (value == null) {
+ unsetTimestamp();
+ } else {
+ setTimestamp((Long)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return getTableName();
+
+ case ROWBATCHES:
+ return getRowBatches();
+
+ case TIMESTAMP:
+ return new Long(getTimestamp());
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+    // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return isSetTableName();
+ case ROWBATCHES:
+ return isSetRowBatches();
+ case TIMESTAMP:
+ return isSetTimestamp();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof mutateRowsTs_args)
+ return this.equals((mutateRowsTs_args)that);
+ return false;
+ }
+
+ public boolean equals(mutateRowsTs_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_tableName = true && this.isSetTableName();
+ boolean that_present_tableName = true && that.isSetTableName();
+ if (this_present_tableName || that_present_tableName) {
+ if (!(this_present_tableName && that_present_tableName))
+ return false;
+ if (!java.util.Arrays.equals(this.tableName, that.tableName))
+ return false;
+ }
+
+ boolean this_present_rowBatches = true && this.isSetRowBatches();
+ boolean that_present_rowBatches = true && that.isSetRowBatches();
+ if (this_present_rowBatches || that_present_rowBatches) {
+ if (!(this_present_rowBatches && that_present_rowBatches))
+ return false;
+ if (!this.rowBatches.equals(that.rowBatches))
+ return false;
+ }
+
+ boolean this_present_timestamp = true;
+ boolean that_present_timestamp = true;
+ if (this_present_timestamp || that_present_timestamp) {
+ if (!(this_present_timestamp && that_present_timestamp))
+ return false;
+ if (this.timestamp != that.timestamp)
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case TABLENAME:
+ if (field.type == TType.STRING) {
+ this.tableName = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case ROWBATCHES:
+ if (field.type == TType.LIST) {
+ {
+ TList _list74 = iprot.readListBegin();
+ this.rowBatches = new ArrayList<BatchMutation>(_list74.size);
+ for (int _i75 = 0; _i75 < _list74.size; ++_i75)
+ {
+ BatchMutation _elem76;
+ _elem76 = new BatchMutation();
+ _elem76.read(iprot);
+ this.rowBatches.add(_elem76);
+ }
+ iprot.readListEnd();
+ }
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case TIMESTAMP:
+ if (field.type == TType.I64) {
+ this.timestamp = iprot.readI64();
+ this.__isset.timestamp = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.tableName != null) {
+ oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+ oprot.writeBinary(this.tableName);
+ oprot.writeFieldEnd();
+ }
+ if (this.rowBatches != null) {
+ oprot.writeFieldBegin(ROW_BATCHES_FIELD_DESC);
+ {
+ oprot.writeListBegin(new TList(TType.STRUCT, this.rowBatches.size()));
+ for (BatchMutation _iter77 : this.rowBatches) {
+ _iter77.write(oprot);
+ }
+ oprot.writeListEnd();
+ }
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldBegin(TIMESTAMP_FIELD_DESC);
+ oprot.writeI64(this.timestamp);
+ oprot.writeFieldEnd();
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("mutateRowsTs_args(");
+ boolean first = true;
+
+ sb.append("tableName:");
+ if (this.tableName == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.tableName);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("rowBatches:");
+ if (this.rowBatches == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.rowBatches);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("timestamp:");
+ sb.append(this.timestamp);
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class mutateRowsTs_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("mutateRowsTs_result");
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+ private static final TField IA_FIELD_DESC = new TField("ia", TType.STRUCT, (short)2);
+
+ public IOError io;
+ public static final int IO = 1;
+ public IllegalArgument ia;
+ public static final int IA = 2;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ put(IA, new FieldMetaData("ia", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(mutateRowsTs_result.class, metaDataMap);
+ }
+
+ public mutateRowsTs_result() {
+ }
+
+ public mutateRowsTs_result(
+ IOError io,
+ IllegalArgument ia)
+ {
+ this();
+ this.io = io;
+ this.ia = ia;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public mutateRowsTs_result(mutateRowsTs_result other) {
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ if (other.isSetIa()) {
+ this.ia = new IllegalArgument(other.ia);
+ }
+ }
+
+ @Override
+ public mutateRowsTs_result clone() {
+ return new mutateRowsTs_result(this);
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+    // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public IllegalArgument getIa() {
+ return this.ia;
+ }
+
+ public void setIa(IllegalArgument ia) {
+ this.ia = ia;
+ }
+
+ public void unsetIa() {
+ this.ia = null;
+ }
+
+    // Returns true if field ia is set (has been assigned a value) and false otherwise
+ public boolean isSetIa() {
+ return this.ia != null;
+ }
+
+ public void setIaIsSet(boolean value) {
+ if (!value) {
+ this.ia = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ case IA:
+ if (value == null) {
+ unsetIa();
+ } else {
+ setIa((IllegalArgument)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case IO:
+ return getIo();
+
+ case IA:
+ return getIa();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+    // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case IO:
+ return isSetIo();
+ case IA:
+ return isSetIa();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof mutateRowsTs_result)
+ return this.equals((mutateRowsTs_result)that);
+ return false;
+ }
+
+ public boolean equals(mutateRowsTs_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ boolean this_present_ia = true && this.isSetIa();
+ boolean that_present_ia = true && that.isSetIa();
+ if (this_present_ia || that_present_ia) {
+ if (!(this_present_ia && that_present_ia))
+ return false;
+ if (!this.ia.equals(that.ia))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case IA:
+ if (field.type == TType.STRUCT) {
+ this.ia = new IllegalArgument();
+ this.ia.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ } else if (this.isSetIa()) {
+ oprot.writeFieldBegin(IA_FIELD_DESC);
+ this.ia.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("mutateRowsTs_result(");
+ boolean first = true;
+
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("ia:");
+ if (this.ia == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.ia);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
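+  /*
+   * Illustrative sketch (assumes the same client wiring as the mutateRows
+   * sketch above): atomicIncrement increments a numeric cell by the given
+   * amount and returns the updated value, with failures reported through the
+   * io/ia fields of atomicIncrement_result below.
+   *
+   *   long hits = client.atomicIncrement("t1".getBytes(), "row1".getBytes(),
+   *       "stats:hits".getBytes(), 1);  // bump the counter by one and read back the new total
+   */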
+ public static class atomicIncrement_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("atomicIncrement_args");
+ private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+ private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)2);
+ private static final TField COLUMN_FIELD_DESC = new TField("column", TType.STRING, (short)3);
+ private static final TField VALUE_FIELD_DESC = new TField("value", TType.I64, (short)4);
+
+ public byte[] tableName;
+ public static final int TABLENAME = 1;
+ public byte[] row;
+ public static final int ROW = 2;
+ public byte[] column;
+ public static final int COLUMN = 3;
+ public long value;
+ public static final int VALUE = 4;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ public boolean value = false;
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(TABLENAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(COLUMN, new FieldMetaData("column", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(VALUE, new FieldMetaData("value", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.I64)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(atomicIncrement_args.class, metaDataMap);
+ }
+
+ public atomicIncrement_args() {
+ }
+
+ public atomicIncrement_args(
+ byte[] tableName,
+ byte[] row,
+ byte[] column,
+ long value)
+ {
+ this();
+ this.tableName = tableName;
+ this.row = row;
+ this.column = column;
+ this.value = value;
+ this.__isset.value = true;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public atomicIncrement_args(atomicIncrement_args other) {
+ if (other.isSetTableName()) {
+ this.tableName = other.tableName;
+ }
+ if (other.isSetRow()) {
+ this.row = other.row;
+ }
+ if (other.isSetColumn()) {
+ this.column = other.column;
+ }
+ __isset.value = other.__isset.value;
+ this.value = other.value;
+ }
+
+ @Override
+ public atomicIncrement_args clone() {
+ return new atomicIncrement_args(this);
+ }
+
+ public byte[] getTableName() {
+ return this.tableName;
+ }
+
+ public void setTableName(byte[] tableName) {
+ this.tableName = tableName;
+ }
+
+ public void unsetTableName() {
+ this.tableName = null;
+ }
+
+    // Returns true if field tableName is set (has been assigned a value) and false otherwise
+ public boolean isSetTableName() {
+ return this.tableName != null;
+ }
+
+ public void setTableNameIsSet(boolean value) {
+ if (!value) {
+ this.tableName = null;
+ }
+ }
+
+ public byte[] getRow() {
+ return this.row;
+ }
+
+ public void setRow(byte[] row) {
+ this.row = row;
+ }
+
+ public void unsetRow() {
+ this.row = null;
+ }
+
+    // Returns true if field row is set (has been assigned a value) and false otherwise
+ public boolean isSetRow() {
+ return this.row != null;
+ }
+
+ public void setRowIsSet(boolean value) {
+ if (!value) {
+ this.row = null;
+ }
+ }
+
+ public byte[] getColumn() {
+ return this.column;
+ }
+
+ public void setColumn(byte[] column) {
+ this.column = column;
+ }
+
+ public void unsetColumn() {
+ this.column = null;
+ }
+
+    // Returns true if field column is set (has been assigned a value) and false otherwise
+ public boolean isSetColumn() {
+ return this.column != null;
+ }
+
+ public void setColumnIsSet(boolean value) {
+ if (!value) {
+ this.column = null;
+ }
+ }
+
+ public long getValue() {
+ return this.value;
+ }
+
+ public void setValue(long value) {
+ this.value = value;
+ this.__isset.value = true;
+ }
+
+ public void unsetValue() {
+ this.__isset.value = false;
+ }
+
+    // Returns true if field value is set (has been assigned a value) and false otherwise
+ public boolean isSetValue() {
+ return this.__isset.value;
+ }
+
+ public void setValueIsSet(boolean value) {
+ this.__isset.value = value;
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case TABLENAME:
+ if (value == null) {
+ unsetTableName();
+ } else {
+ setTableName((byte[])value);
+ }
+ break;
+
+ case ROW:
+ if (value == null) {
+ unsetRow();
+ } else {
+ setRow((byte[])value);
+ }
+ break;
+
+ case COLUMN:
+ if (value == null) {
+ unsetColumn();
+ } else {
+ setColumn((byte[])value);
+ }
+ break;
+
+ case VALUE:
+ if (value == null) {
+ unsetValue();
+ } else {
+ setValue((Long)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return getTableName();
+
+ case ROW:
+ return getRow();
+
+ case COLUMN:
+ return getColumn();
+
+ case VALUE:
+ return new Long(getValue());
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+    // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return isSetTableName();
+ case ROW:
+ return isSetRow();
+ case COLUMN:
+ return isSetColumn();
+ case VALUE:
+ return isSetValue();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof atomicIncrement_args)
+ return this.equals((atomicIncrement_args)that);
+ return false;
+ }
+
+ public boolean equals(atomicIncrement_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_tableName = true && this.isSetTableName();
+ boolean that_present_tableName = true && that.isSetTableName();
+ if (this_present_tableName || that_present_tableName) {
+ if (!(this_present_tableName && that_present_tableName))
+ return false;
+ if (!java.util.Arrays.equals(this.tableName, that.tableName))
+ return false;
+ }
+
+ boolean this_present_row = true && this.isSetRow();
+ boolean that_present_row = true && that.isSetRow();
+ if (this_present_row || that_present_row) {
+ if (!(this_present_row && that_present_row))
+ return false;
+ if (!java.util.Arrays.equals(this.row, that.row))
+ return false;
+ }
+
+ boolean this_present_column = true && this.isSetColumn();
+ boolean that_present_column = true && that.isSetColumn();
+ if (this_present_column || that_present_column) {
+ if (!(this_present_column && that_present_column))
+ return false;
+ if (!java.util.Arrays.equals(this.column, that.column))
+ return false;
+ }
+
+ boolean this_present_value = true;
+ boolean that_present_value = true;
+ if (this_present_value || that_present_value) {
+ if (!(this_present_value && that_present_value))
+ return false;
+ if (this.value != that.value)
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case TABLENAME:
+ if (field.type == TType.STRING) {
+ this.tableName = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case ROW:
+ if (field.type == TType.STRING) {
+ this.row = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case COLUMN:
+ if (field.type == TType.STRING) {
+ this.column = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case VALUE:
+ if (field.type == TType.I64) {
+ this.value = iprot.readI64();
+ this.__isset.value = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.tableName != null) {
+ oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+ oprot.writeBinary(this.tableName);
+ oprot.writeFieldEnd();
+ }
+ if (this.row != null) {
+ oprot.writeFieldBegin(ROW_FIELD_DESC);
+ oprot.writeBinary(this.row);
+ oprot.writeFieldEnd();
+ }
+ if (this.column != null) {
+ oprot.writeFieldBegin(COLUMN_FIELD_DESC);
+ oprot.writeBinary(this.column);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldBegin(VALUE_FIELD_DESC);
+ oprot.writeI64(this.value);
+ oprot.writeFieldEnd();
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("atomicIncrement_args(");
+ boolean first = true;
+
+ sb.append("tableName:");
+ if (this.tableName == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.tableName);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("row:");
+ if (this.row == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.row);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("column:");
+ if (this.column == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.column);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("value:");
+ sb.append(this.value);
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class atomicIncrement_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("atomicIncrement_result");
+ private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.I64, (short)0);
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+ private static final TField IA_FIELD_DESC = new TField("ia", TType.STRUCT, (short)2);
+
+ public long success;
+ public static final int SUCCESS = 0;
+ public IOError io;
+ public static final int IO = 1;
+ public IllegalArgument ia;
+ public static final int IA = 2;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ public boolean success = false;
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.I64)));
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ put(IA, new FieldMetaData("ia", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(atomicIncrement_result.class, metaDataMap);
+ }
+
+ public atomicIncrement_result() {
+ }
+
+ public atomicIncrement_result(
+ long success,
+ IOError io,
+ IllegalArgument ia)
+ {
+ this();
+ this.success = success;
+ this.__isset.success = true;
+ this.io = io;
+ this.ia = ia;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public atomicIncrement_result(atomicIncrement_result other) {
+ __isset.success = other.__isset.success;
+ this.success = other.success;
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ if (other.isSetIa()) {
+ this.ia = new IllegalArgument(other.ia);
+ }
+ }
+
+ @Override
+ public atomicIncrement_result clone() {
+ return new atomicIncrement_result(this);
+ }
+
+ public long getSuccess() {
+ return this.success;
+ }
+
+ public void setSuccess(long success) {
+ this.success = success;
+ this.__isset.success = true;
+ }
+
+ public void unsetSuccess() {
+ this.__isset.success = false;
+ }
+
+    // Returns true if field success is set (has been assigned a value) and false otherwise
+ public boolean isSetSuccess() {
+ return this.__isset.success;
+ }
+
+ public void setSuccessIsSet(boolean value) {
+ this.__isset.success = value;
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+    // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public IllegalArgument getIa() {
+ return this.ia;
+ }
+
+ public void setIa(IllegalArgument ia) {
+ this.ia = ia;
+ }
+
+ public void unsetIa() {
+ this.ia = null;
+ }
+
+    // Returns true if field ia is set (has been assigned a value) and false otherwise
+ public boolean isSetIa() {
+ return this.ia != null;
+ }
+
+ public void setIaIsSet(boolean value) {
+ if (!value) {
+ this.ia = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case SUCCESS:
+ if (value == null) {
+ unsetSuccess();
+ } else {
+ setSuccess((Long)value);
+ }
+ break;
+
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ case IA:
+ if (value == null) {
+ unsetIa();
+ } else {
+ setIa((IllegalArgument)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return new Long(getSuccess());
+
+ case IO:
+ return getIo();
+
+ case IA:
+ return getIa();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+    // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return isSetSuccess();
+ case IO:
+ return isSetIo();
+ case IA:
+ return isSetIa();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof atomicIncrement_result)
+ return this.equals((atomicIncrement_result)that);
+ return false;
+ }
+
+ public boolean equals(atomicIncrement_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_success = true;
+ boolean that_present_success = true;
+ if (this_present_success || that_present_success) {
+ if (!(this_present_success && that_present_success))
+ return false;
+ if (this.success != that.success)
+ return false;
+ }
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ boolean this_present_ia = true && this.isSetIa();
+ boolean that_present_ia = true && that.isSetIa();
+ if (this_present_ia || that_present_ia) {
+ if (!(this_present_ia && that_present_ia))
+ return false;
+ if (!this.ia.equals(that.ia))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case SUCCESS:
+ if (field.type == TType.I64) {
+ this.success = iprot.readI64();
+ this.__isset.success = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case IA:
+ if (field.type == TType.STRUCT) {
+ this.ia = new IllegalArgument();
+ this.ia.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetSuccess()) {
+ oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+ oprot.writeI64(this.success);
+ oprot.writeFieldEnd();
+ } else if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ } else if (this.isSetIa()) {
+ oprot.writeFieldBegin(IA_FIELD_DESC);
+ this.ia.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("atomicIncrement_result(");
+ boolean first = true;
+
+ sb.append("success:");
+ sb.append(this.success);
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("ia:");
+ if (this.ia == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.ia);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
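+  /*
+   * Illustrative sketch (same assumed client as in the sketches above):
+   * deleteAll targets a single cell addressed by table, row and column, as
+   * reflected by the deleteAll_args struct below; errors are surfaced through
+   * the corresponding _result struct.
+   *
+   *   client.deleteAll("t1".getBytes(), "row1".getBytes(), "info:name".getBytes());
+   */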
+ public static class deleteAll_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("deleteAll_args");
+ private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+ private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)2);
+ private static final TField COLUMN_FIELD_DESC = new TField("column", TType.STRING, (short)3);
+
+ public byte[] tableName;
+ public static final int TABLENAME = 1;
+ public byte[] row;
+ public static final int ROW = 2;
+ public byte[] column;
+ public static final int COLUMN = 3;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(TABLENAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(COLUMN, new FieldMetaData("column", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(deleteAll_args.class, metaDataMap);
+ }
+
+ public deleteAll_args() {
+ }
+
+ public deleteAll_args(
+ byte[] tableName,
+ byte[] row,
+ byte[] column)
+ {
+ this();
+ this.tableName = tableName;
+ this.row = row;
+ this.column = column;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public deleteAll_args(deleteAll_args other) {
+ if (other.isSetTableName()) {
+ this.tableName = other.tableName;
+ }
+ if (other.isSetRow()) {
+ this.row = other.row;
+ }
+ if (other.isSetColumn()) {
+ this.column = other.column;
+ }
+ }
+
+ @Override
+ public deleteAll_args clone() {
+ return new deleteAll_args(this);
+ }
+
+ public byte[] getTableName() {
+ return this.tableName;
+ }
+
+ public void setTableName(byte[] tableName) {
+ this.tableName = tableName;
+ }
+
+ public void unsetTableName() {
+ this.tableName = null;
+ }
+
+    // Returns true if field tableName is set (has been assigned a value) and false otherwise
+ public boolean isSetTableName() {
+ return this.tableName != null;
+ }
+
+ public void setTableNameIsSet(boolean value) {
+ if (!value) {
+ this.tableName = null;
+ }
+ }
+
+ public byte[] getRow() {
+ return this.row;
+ }
+
+ public void setRow(byte[] row) {
+ this.row = row;
+ }
+
+ public void unsetRow() {
+ this.row = null;
+ }
+
+    // Returns true if field row is set (has been assigned a value) and false otherwise
+ public boolean isSetRow() {
+ return this.row != null;
+ }
+
+ public void setRowIsSet(boolean value) {
+ if (!value) {
+ this.row = null;
+ }
+ }
+
+ public byte[] getColumn() {
+ return this.column;
+ }
+
+ public void setColumn(byte[] column) {
+ this.column = column;
+ }
+
+ public void unsetColumn() {
+ this.column = null;
+ }
+
+    // Returns true if field column is set (has been assigned a value) and false otherwise
+ public boolean isSetColumn() {
+ return this.column != null;
+ }
+
+ public void setColumnIsSet(boolean value) {
+ if (!value) {
+ this.column = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case TABLENAME:
+ if (value == null) {
+ unsetTableName();
+ } else {
+ setTableName((byte[])value);
+ }
+ break;
+
+ case ROW:
+ if (value == null) {
+ unsetRow();
+ } else {
+ setRow((byte[])value);
+ }
+ break;
+
+ case COLUMN:
+ if (value == null) {
+ unsetColumn();
+ } else {
+ setColumn((byte[])value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return getTableName();
+
+ case ROW:
+ return getRow();
+
+ case COLUMN:
+ return getColumn();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return isSetTableName();
+ case ROW:
+ return isSetRow();
+ case COLUMN:
+ return isSetColumn();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof deleteAll_args)
+ return this.equals((deleteAll_args)that);
+ return false;
+ }
+
+ public boolean equals(deleteAll_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_tableName = true && this.isSetTableName();
+ boolean that_present_tableName = true && that.isSetTableName();
+ if (this_present_tableName || that_present_tableName) {
+ if (!(this_present_tableName && that_present_tableName))
+ return false;
+ if (!java.util.Arrays.equals(this.tableName, that.tableName))
+ return false;
+ }
+
+ boolean this_present_row = true && this.isSetRow();
+ boolean that_present_row = true && that.isSetRow();
+ if (this_present_row || that_present_row) {
+ if (!(this_present_row && that_present_row))
+ return false;
+ if (!java.util.Arrays.equals(this.row, that.row))
+ return false;
+ }
+
+ boolean this_present_column = true && this.isSetColumn();
+ boolean that_present_column = true && that.isSetColumn();
+ if (this_present_column || that_present_column) {
+ if (!(this_present_column && that_present_column))
+ return false;
+ if (!java.util.Arrays.equals(this.column, that.column))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
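+ // Generated code returns a constant hash code; this still satisfies the equals()/hashCode() contract, but every instance collides in hash-based collections.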
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case TABLENAME:
+ if (field.type == TType.STRING) {
+ this.tableName = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case ROW:
+ if (field.type == TType.STRING) {
+ this.row = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case COLUMN:
+ if (field.type == TType.STRING) {
+ this.column = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.tableName != null) {
+ oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+ oprot.writeBinary(this.tableName);
+ oprot.writeFieldEnd();
+ }
+ if (this.row != null) {
+ oprot.writeFieldBegin(ROW_FIELD_DESC);
+ oprot.writeBinary(this.row);
+ oprot.writeFieldEnd();
+ }
+ if (this.column != null) {
+ oprot.writeFieldBegin(COLUMN_FIELD_DESC);
+ oprot.writeBinary(this.column);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("deleteAll_args(");
+ boolean first = true;
+
+ sb.append("tableName:");
+ if (this.tableName == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.tableName);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("row:");
+ if (this.row == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.row);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("column:");
+ if (this.column == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.column);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class deleteAll_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("deleteAll_result");
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+ public IOError io;
+ public static final int IO = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(deleteAll_result.class, metaDataMap);
+ }
+
+ public deleteAll_result() {
+ }
+
+ public deleteAll_result(
+ IOError io)
+ {
+ this();
+ this.io = io;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public deleteAll_result(deleteAll_result other) {
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ }
+
+ @Override
+ public deleteAll_result clone() {
+ return new deleteAll_result(this);
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+ // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case IO:
+ return getIo();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case IO:
+ return isSetIo();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof deleteAll_result)
+ return this.equals((deleteAll_result)that);
+ return false;
+ }
+
+ public boolean equals(deleteAll_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("deleteAll_result(");
+ boolean first = true;
+
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class deleteAllTs_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("deleteAllTs_args");
+ private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+ private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)2);
+ private static final TField COLUMN_FIELD_DESC = new TField("column", TType.STRING, (short)3);
+ private static final TField TIMESTAMP_FIELD_DESC = new TField("timestamp", TType.I64, (short)4);
+
+ public byte[] tableName;
+ public static final int TABLENAME = 1;
+ public byte[] row;
+ public static final int ROW = 2;
+ public byte[] column;
+ public static final int COLUMN = 3;
+ public long timestamp;
+ public static final int TIMESTAMP = 4;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ public boolean timestamp = false;
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(TABLENAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(COLUMN, new FieldMetaData("column", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(TIMESTAMP, new FieldMetaData("timestamp", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.I64)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(deleteAllTs_args.class, metaDataMap);
+ }
+
+ public deleteAllTs_args() {
+ }
+
+ public deleteAllTs_args(
+ byte[] tableName,
+ byte[] row,
+ byte[] column,
+ long timestamp)
+ {
+ this();
+ this.tableName = tableName;
+ this.row = row;
+ this.column = column;
+ this.timestamp = timestamp;
+ this.__isset.timestamp = true;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public deleteAllTs_args(deleteAllTs_args other) {
+ if (other.isSetTableName()) {
+ this.tableName = other.tableName;
+ }
+ if (other.isSetRow()) {
+ this.row = other.row;
+ }
+ if (other.isSetColumn()) {
+ this.column = other.column;
+ }
+ __isset.timestamp = other.__isset.timestamp;
+ this.timestamp = other.timestamp;
+ }
+
+ @Override
+ public deleteAllTs_args clone() {
+ return new deleteAllTs_args(this);
+ }
+
+ public byte[] getTableName() {
+ return this.tableName;
+ }
+
+ public void setTableName(byte[] tableName) {
+ this.tableName = tableName;
+ }
+
+ public void unsetTableName() {
+ this.tableName = null;
+ }
+
+ // Returns true if field tableName is set (has been assigned a value) and false otherwise
+ public boolean isSetTableName() {
+ return this.tableName != null;
+ }
+
+ public void setTableNameIsSet(boolean value) {
+ if (!value) {
+ this.tableName = null;
+ }
+ }
+
+ public byte[] getRow() {
+ return this.row;
+ }
+
+ public void setRow(byte[] row) {
+ this.row = row;
+ }
+
+ public void unsetRow() {
+ this.row = null;
+ }
+
+ // Returns true if field row is set (has been assigned a value) and false otherwise
+ public boolean isSetRow() {
+ return this.row != null;
+ }
+
+ public void setRowIsSet(boolean value) {
+ if (!value) {
+ this.row = null;
+ }
+ }
+
+ public byte[] getColumn() {
+ return this.column;
+ }
+
+ public void setColumn(byte[] column) {
+ this.column = column;
+ }
+
+ public void unsetColumn() {
+ this.column = null;
+ }
+
+ // Returns true if field column is set (has been assigned a value) and false otherwise
+ public boolean isSetColumn() {
+ return this.column != null;
+ }
+
+ public void setColumnIsSet(boolean value) {
+ if (!value) {
+ this.column = null;
+ }
+ }
+
+ public long getTimestamp() {
+ return this.timestamp;
+ }
+
+ public void setTimestamp(long timestamp) {
+ this.timestamp = timestamp;
+ this.__isset.timestamp = true;
+ }
+
+ public void unsetTimestamp() {
+ this.__isset.timestamp = false;
+ }
+
+ // Returns true if field timestamp is set (has been assigned a value) and false otherwise
+ public boolean isSetTimestamp() {
+ return this.__isset.timestamp;
+ }
+
+ public void setTimestampIsSet(boolean value) {
+ this.__isset.timestamp = value;
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case TABLENAME:
+ if (value == null) {
+ unsetTableName();
+ } else {
+ setTableName((byte[])value);
+ }
+ break;
+
+ case ROW:
+ if (value == null) {
+ unsetRow();
+ } else {
+ setRow((byte[])value);
+ }
+ break;
+
+ case COLUMN:
+ if (value == null) {
+ unsetColumn();
+ } else {
+ setColumn((byte[])value);
+ }
+ break;
+
+ case TIMESTAMP:
+ if (value == null) {
+ unsetTimestamp();
+ } else {
+ setTimestamp((Long)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return getTableName();
+
+ case ROW:
+ return getRow();
+
+ case COLUMN:
+ return getColumn();
+
+ case TIMESTAMP:
+ return new Long(getTimestamp());
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return isSetTableName();
+ case ROW:
+ return isSetRow();
+ case COLUMN:
+ return isSetColumn();
+ case TIMESTAMP:
+ return isSetTimestamp();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof deleteAllTs_args)
+ return this.equals((deleteAllTs_args)that);
+ return false;
+ }
+
+ public boolean equals(deleteAllTs_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_tableName = true && this.isSetTableName();
+ boolean that_present_tableName = true && that.isSetTableName();
+ if (this_present_tableName || that_present_tableName) {
+ if (!(this_present_tableName && that_present_tableName))
+ return false;
+ if (!java.util.Arrays.equals(this.tableName, that.tableName))
+ return false;
+ }
+
+ boolean this_present_row = true && this.isSetRow();
+ boolean that_present_row = true && that.isSetRow();
+ if (this_present_row || that_present_row) {
+ if (!(this_present_row && that_present_row))
+ return false;
+ if (!java.util.Arrays.equals(this.row, that.row))
+ return false;
+ }
+
+ boolean this_present_column = true && this.isSetColumn();
+ boolean that_present_column = true && that.isSetColumn();
+ if (this_present_column || that_present_column) {
+ if (!(this_present_column && that_present_column))
+ return false;
+ if (!java.util.Arrays.equals(this.column, that.column))
+ return false;
+ }
+
+ boolean this_present_timestamp = true;
+ boolean that_present_timestamp = true;
+ if (this_present_timestamp || that_present_timestamp) {
+ if (!(this_present_timestamp && that_present_timestamp))
+ return false;
+ if (this.timestamp != that.timestamp)
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case TABLENAME:
+ if (field.type == TType.STRING) {
+ this.tableName = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case ROW:
+ if (field.type == TType.STRING) {
+ this.row = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case COLUMN:
+ if (field.type == TType.STRING) {
+ this.column = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case TIMESTAMP:
+ if (field.type == TType.I64) {
+ this.timestamp = iprot.readI64();
+ this.__isset.timestamp = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.tableName != null) {
+ oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+ oprot.writeBinary(this.tableName);
+ oprot.writeFieldEnd();
+ }
+ if (this.row != null) {
+ oprot.writeFieldBegin(ROW_FIELD_DESC);
+ oprot.writeBinary(this.row);
+ oprot.writeFieldEnd();
+ }
+ if (this.column != null) {
+ oprot.writeFieldBegin(COLUMN_FIELD_DESC);
+ oprot.writeBinary(this.column);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldBegin(TIMESTAMP_FIELD_DESC);
+ oprot.writeI64(this.timestamp);
+ oprot.writeFieldEnd();
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("deleteAllTs_args(");
+ boolean first = true;
+
+ sb.append("tableName:");
+ if (this.tableName == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.tableName);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("row:");
+ if (this.row == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.row);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("column:");
+ if (this.column == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.column);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("timestamp:");
+ sb.append(this.timestamp);
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class deleteAllTs_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("deleteAllTs_result");
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+ public IOError io;
+ public static final int IO = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(deleteAllTs_result.class, metaDataMap);
+ }
+
+ public deleteAllTs_result() {
+ }
+
+ public deleteAllTs_result(
+ IOError io)
+ {
+ this();
+ this.io = io;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public deleteAllTs_result(deleteAllTs_result other) {
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ }
+
+ @Override
+ public deleteAllTs_result clone() {
+ return new deleteAllTs_result(this);
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+ // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case IO:
+ return getIo();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case IO:
+ return isSetIo();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof deleteAllTs_result)
+ return this.equals((deleteAllTs_result)that);
+ return false;
+ }
+
+ public boolean equals(deleteAllTs_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("deleteAllTs_result(");
+ boolean first = true;
+
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class deleteAllRow_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("deleteAllRow_args");
+ private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+ private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)2);
+
+ public byte[] tableName;
+ public static final int TABLENAME = 1;
+ public byte[] row;
+ public static final int ROW = 2;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(TABLENAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(deleteAllRow_args.class, metaDataMap);
+ }
+
+ public deleteAllRow_args() {
+ }
+
+ public deleteAllRow_args(
+ byte[] tableName,
+ byte[] row)
+ {
+ this();
+ this.tableName = tableName;
+ this.row = row;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public deleteAllRow_args(deleteAllRow_args other) {
+ if (other.isSetTableName()) {
+ this.tableName = other.tableName;
+ }
+ if (other.isSetRow()) {
+ this.row = other.row;
+ }
+ }
+
+ @Override
+ public deleteAllRow_args clone() {
+ return new deleteAllRow_args(this);
+ }
+
+ public byte[] getTableName() {
+ return this.tableName;
+ }
+
+ public void setTableName(byte[] tableName) {
+ this.tableName = tableName;
+ }
+
+ public void unsetTableName() {
+ this.tableName = null;
+ }
+
+ // Returns true if field tableName is set (has been assigned a value) and false otherwise
+ public boolean isSetTableName() {
+ return this.tableName != null;
+ }
+
+ public void setTableNameIsSet(boolean value) {
+ if (!value) {
+ this.tableName = null;
+ }
+ }
+
+ public byte[] getRow() {
+ return this.row;
+ }
+
+ public void setRow(byte[] row) {
+ this.row = row;
+ }
+
+ public void unsetRow() {
+ this.row = null;
+ }
+
+ // Returns true if field row is set (has been assigned a value) and false otherwise
+ public boolean isSetRow() {
+ return this.row != null;
+ }
+
+ public void setRowIsSet(boolean value) {
+ if (!value) {
+ this.row = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case TABLENAME:
+ if (value == null) {
+ unsetTableName();
+ } else {
+ setTableName((byte[])value);
+ }
+ break;
+
+ case ROW:
+ if (value == null) {
+ unsetRow();
+ } else {
+ setRow((byte[])value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return getTableName();
+
+ case ROW:
+ return getRow();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return isSetTableName();
+ case ROW:
+ return isSetRow();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof deleteAllRow_args)
+ return this.equals((deleteAllRow_args)that);
+ return false;
+ }
+
+ public boolean equals(deleteAllRow_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_tableName = true && this.isSetTableName();
+ boolean that_present_tableName = true && that.isSetTableName();
+ if (this_present_tableName || that_present_tableName) {
+ if (!(this_present_tableName && that_present_tableName))
+ return false;
+ if (!java.util.Arrays.equals(this.tableName, that.tableName))
+ return false;
+ }
+
+ boolean this_present_row = true && this.isSetRow();
+ boolean that_present_row = true && that.isSetRow();
+ if (this_present_row || that_present_row) {
+ if (!(this_present_row && that_present_row))
+ return false;
+ if (!java.util.Arrays.equals(this.row, that.row))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case TABLENAME:
+ if (field.type == TType.STRING) {
+ this.tableName = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case ROW:
+ if (field.type == TType.STRING) {
+ this.row = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.tableName != null) {
+ oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+ oprot.writeBinary(this.tableName);
+ oprot.writeFieldEnd();
+ }
+ if (this.row != null) {
+ oprot.writeFieldBegin(ROW_FIELD_DESC);
+ oprot.writeBinary(this.row);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("deleteAllRow_args(");
+ boolean first = true;
+
+ sb.append("tableName:");
+ if (this.tableName == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.tableName);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("row:");
+ if (this.row == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.row);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class deleteAllRow_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("deleteAllRow_result");
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+ public IOError io;
+ public static final int IO = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(deleteAllRow_result.class, metaDataMap);
+ }
+
+ public deleteAllRow_result() {
+ }
+
+ public deleteAllRow_result(
+ IOError io)
+ {
+ this();
+ this.io = io;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public deleteAllRow_result(deleteAllRow_result other) {
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ }
+
+ @Override
+ public deleteAllRow_result clone() {
+ return new deleteAllRow_result(this);
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+ // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case IO:
+ return getIo();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case IO:
+ return isSetIo();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof deleteAllRow_result)
+ return this.equals((deleteAllRow_result)that);
+ return false;
+ }
+
+ public boolean equals(deleteAllRow_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("deleteAllRow_result(");
+ boolean first = true;
+
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class deleteAllRowTs_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("deleteAllRowTs_args");
+ private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+ private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)2);
+ private static final TField TIMESTAMP_FIELD_DESC = new TField("timestamp", TType.I64, (short)3);
+
+ public byte[] tableName;
+ public static final int TABLENAME = 1;
+ public byte[] row;
+ public static final int ROW = 2;
+ public long timestamp;
+ public static final int TIMESTAMP = 3;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ public boolean timestamp = false;
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(TABLENAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(TIMESTAMP, new FieldMetaData("timestamp", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.I64)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(deleteAllRowTs_args.class, metaDataMap);
+ }
+
+ public deleteAllRowTs_args() {
+ }
+
+ public deleteAllRowTs_args(
+ byte[] tableName,
+ byte[] row,
+ long timestamp)
+ {
+ this();
+ this.tableName = tableName;
+ this.row = row;
+ this.timestamp = timestamp;
+ this.__isset.timestamp = true;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public deleteAllRowTs_args(deleteAllRowTs_args other) {
+ if (other.isSetTableName()) {
+ this.tableName = other.tableName;
+ }
+ if (other.isSetRow()) {
+ this.row = other.row;
+ }
+ __isset.timestamp = other.__isset.timestamp;
+ this.timestamp = other.timestamp;
+ }
+
+ @Override
+ public deleteAllRowTs_args clone() {
+ return new deleteAllRowTs_args(this);
+ }
+
+ public byte[] getTableName() {
+ return this.tableName;
+ }
+
+ public void setTableName(byte[] tableName) {
+ this.tableName = tableName;
+ }
+
+ public void unsetTableName() {
+ this.tableName = null;
+ }
+
+ // Returns true if field tableName is set (has been assigned a value) and false otherwise
+ public boolean isSetTableName() {
+ return this.tableName != null;
+ }
+
+ public void setTableNameIsSet(boolean value) {
+ if (!value) {
+ this.tableName = null;
+ }
+ }
+
+ public byte[] getRow() {
+ return this.row;
+ }
+
+ public void setRow(byte[] row) {
+ this.row = row;
+ }
+
+ public void unsetRow() {
+ this.row = null;
+ }
+
+ // Returns true if field row is set (has been assigned a value) and false otherwise
+ public boolean isSetRow() {
+ return this.row != null;
+ }
+
+ public void setRowIsSet(boolean value) {
+ if (!value) {
+ this.row = null;
+ }
+ }
+
+ public long getTimestamp() {
+ return this.timestamp;
+ }
+
+ public void setTimestamp(long timestamp) {
+ this.timestamp = timestamp;
+ this.__isset.timestamp = true;
+ }
+
+ public void unsetTimestamp() {
+ this.__isset.timestamp = false;
+ }
+
+ // Returns true if field timestamp is set (has been assigned a value) and false otherwise
+ public boolean isSetTimestamp() {
+ return this.__isset.timestamp;
+ }
+
+ public void setTimestampIsSet(boolean value) {
+ this.__isset.timestamp = value;
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case TABLENAME:
+ if (value == null) {
+ unsetTableName();
+ } else {
+ setTableName((byte[])value);
+ }
+ break;
+
+ case ROW:
+ if (value == null) {
+ unsetRow();
+ } else {
+ setRow((byte[])value);
+ }
+ break;
+
+ case TIMESTAMP:
+ if (value == null) {
+ unsetTimestamp();
+ } else {
+ setTimestamp((Long)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return getTableName();
+
+ case ROW:
+ return getRow();
+
+ case TIMESTAMP:
+ return new Long(getTimestamp());
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return isSetTableName();
+ case ROW:
+ return isSetRow();
+ case TIMESTAMP:
+ return isSetTimestamp();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof deleteAllRowTs_args)
+ return this.equals((deleteAllRowTs_args)that);
+ return false;
+ }
+
+ public boolean equals(deleteAllRowTs_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_tableName = true && this.isSetTableName();
+ boolean that_present_tableName = true && that.isSetTableName();
+ if (this_present_tableName || that_present_tableName) {
+ if (!(this_present_tableName && that_present_tableName))
+ return false;
+ if (!java.util.Arrays.equals(this.tableName, that.tableName))
+ return false;
+ }
+
+ boolean this_present_row = true && this.isSetRow();
+ boolean that_present_row = true && that.isSetRow();
+ if (this_present_row || that_present_row) {
+ if (!(this_present_row && that_present_row))
+ return false;
+ if (!java.util.Arrays.equals(this.row, that.row))
+ return false;
+ }
+
+ boolean this_present_timestamp = true;
+ boolean that_present_timestamp = true;
+ if (this_present_timestamp || that_present_timestamp) {
+ if (!(this_present_timestamp && that_present_timestamp))
+ return false;
+ if (this.timestamp != that.timestamp)
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case TABLENAME:
+ if (field.type == TType.STRING) {
+ this.tableName = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case ROW:
+ if (field.type == TType.STRING) {
+ this.row = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case TIMESTAMP:
+ if (field.type == TType.I64) {
+ this.timestamp = iprot.readI64();
+ this.__isset.timestamp = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.tableName != null) {
+ oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+ oprot.writeBinary(this.tableName);
+ oprot.writeFieldEnd();
+ }
+ if (this.row != null) {
+ oprot.writeFieldBegin(ROW_FIELD_DESC);
+ oprot.writeBinary(this.row);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldBegin(TIMESTAMP_FIELD_DESC);
+ oprot.writeI64(this.timestamp);
+ oprot.writeFieldEnd();
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("deleteAllRowTs_args(");
+ boolean first = true;
+
+ sb.append("tableName:");
+ if (this.tableName == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.tableName);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("row:");
+ if (this.row == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.row);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("timestamp:");
+ sb.append(this.timestamp);
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class deleteAllRowTs_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("deleteAllRowTs_result");
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+ public IOError io;
+ public static final int IO = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(deleteAllRowTs_result.class, metaDataMap);
+ }
+
+ public deleteAllRowTs_result() {
+ }
+
+ public deleteAllRowTs_result(
+ IOError io)
+ {
+ this();
+ this.io = io;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public deleteAllRowTs_result(deleteAllRowTs_result other) {
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ }
+
+ @Override
+ public deleteAllRowTs_result clone() {
+ return new deleteAllRowTs_result(this);
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+ // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case IO:
+ return getIo();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case IO:
+ return isSetIo();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof deleteAllRowTs_result)
+ return this.equals((deleteAllRowTs_result)that);
+ return false;
+ }
+
+ public boolean equals(deleteAllRowTs_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("deleteAllRowTs_result(");
+ boolean first = true;
+
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class scannerOpen_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("scannerOpen_args");
+ private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+ private static final TField START_ROW_FIELD_DESC = new TField("startRow", TType.STRING, (short)2);
+ private static final TField COLUMNS_FIELD_DESC = new TField("columns", TType.LIST, (short)3);
+
+ public byte[] tableName;
+ public static final int TABLENAME = 1;
+ public byte[] startRow;
+ public static final int STARTROW = 2;
+ public List<byte[]> columns;
+ public static final int COLUMNS = 3;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(TABLENAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(STARTROW, new FieldMetaData("startRow", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(COLUMNS, new FieldMetaData("columns", TFieldRequirementType.DEFAULT,
+ new ListMetaData(TType.LIST,
+ new FieldValueMetaData(TType.STRING))));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(scannerOpen_args.class, metaDataMap);
+ }
+
+ public scannerOpen_args() {
+ }
+
+ public scannerOpen_args(
+ byte[] tableName,
+ byte[] startRow,
+ List<byte[]> columns)
+ {
+ this();
+ this.tableName = tableName;
+ this.startRow = startRow;
+ this.columns = columns;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public scannerOpen_args(scannerOpen_args other) {
+ if (other.isSetTableName()) {
+ this.tableName = other.tableName;
+ }
+ if (other.isSetStartRow()) {
+ this.startRow = other.startRow;
+ }
+ if (other.isSetColumns()) {
+ List<byte[]> __this__columns = new ArrayList<byte[]>();
+ for (byte[] other_element : other.columns) {
+ __this__columns.add(other_element);
+ }
+ this.columns = __this__columns;
+ }
+ }
+
+ @Override
+ public scannerOpen_args clone() {
+ return new scannerOpen_args(this);
+ }
+
+ public byte[] getTableName() {
+ return this.tableName;
+ }
+
+ public void setTableName(byte[] tableName) {
+ this.tableName = tableName;
+ }
+
+ public void unsetTableName() {
+ this.tableName = null;
+ }
+
+ // Returns true if field tableName is set (has been assigned a value) and false otherwise
+ public boolean isSetTableName() {
+ return this.tableName != null;
+ }
+
+ public void setTableNameIsSet(boolean value) {
+ if (!value) {
+ this.tableName = null;
+ }
+ }
+
+ public byte[] getStartRow() {
+ return this.startRow;
+ }
+
+ public void setStartRow(byte[] startRow) {
+ this.startRow = startRow;
+ }
+
+ public void unsetStartRow() {
+ this.startRow = null;
+ }
+
+ // Returns true if field startRow is set (has been assigned a value) and false otherwise
+ public boolean isSetStartRow() {
+ return this.startRow != null;
+ }
+
+ public void setStartRowIsSet(boolean value) {
+ if (!value) {
+ this.startRow = null;
+ }
+ }
+
+ public int getColumnsSize() {
+ return (this.columns == null) ? 0 : this.columns.size();
+ }
+
+ public java.util.Iterator<byte[]> getColumnsIterator() {
+ return (this.columns == null) ? null : this.columns.iterator();
+ }
+
+ public void addToColumns(byte[] elem) {
+ if (this.columns == null) {
+ this.columns = new ArrayList<byte[]>();
+ }
+ this.columns.add(elem);
+ }
+
+ public List<byte[]> getColumns() {
+ return this.columns;
+ }
+
+ public void setColumns(List<byte[]> columns) {
+ this.columns = columns;
+ }
+
+ public void unsetColumns() {
+ this.columns = null;
+ }
+
+ // Returns true if field columns is set (has been assigned a value) and false otherwise
+ public boolean isSetColumns() {
+ return this.columns != null;
+ }
+
+ public void setColumnsIsSet(boolean value) {
+ if (!value) {
+ this.columns = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case TABLENAME:
+ if (value == null) {
+ unsetTableName();
+ } else {
+ setTableName((byte[])value);
+ }
+ break;
+
+ case STARTROW:
+ if (value == null) {
+ unsetStartRow();
+ } else {
+ setStartRow((byte[])value);
+ }
+ break;
+
+ case COLUMNS:
+ if (value == null) {
+ unsetColumns();
+ } else {
+ setColumns((List<byte[]>)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return getTableName();
+
+ case STARTROW:
+ return getStartRow();
+
+ case COLUMNS:
+ return getColumns();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return isSetTableName();
+ case STARTROW:
+ return isSetStartRow();
+ case COLUMNS:
+ return isSetColumns();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof scannerOpen_args)
+ return this.equals((scannerOpen_args)that);
+ return false;
+ }
+
+ public boolean equals(scannerOpen_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_tableName = true && this.isSetTableName();
+ boolean that_present_tableName = true && that.isSetTableName();
+ if (this_present_tableName || that_present_tableName) {
+ if (!(this_present_tableName && that_present_tableName))
+ return false;
+ if (!java.util.Arrays.equals(this.tableName, that.tableName))
+ return false;
+ }
+
+ boolean this_present_startRow = true && this.isSetStartRow();
+ boolean that_present_startRow = true && that.isSetStartRow();
+ if (this_present_startRow || that_present_startRow) {
+ if (!(this_present_startRow && that_present_startRow))
+ return false;
+ if (!java.util.Arrays.equals(this.startRow, that.startRow))
+ return false;
+ }
+
+ boolean this_present_columns = true && this.isSetColumns();
+ boolean that_present_columns = true && that.isSetColumns();
+ if (this_present_columns || that_present_columns) {
+ if (!(this_present_columns && that_present_columns))
+ return false;
+ if (!this.columns.equals(that.columns))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case TABLENAME:
+ if (field.type == TType.STRING) {
+ this.tableName = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case STARTROW:
+ if (field.type == TType.STRING) {
+ this.startRow = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case COLUMNS:
+ if (field.type == TType.LIST) {
+ {
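+ // Read the list header for the element count, then each column name as raw bytes.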
+ TList _list78 = iprot.readListBegin();
+ this.columns = new ArrayList<byte[]>(_list78.size);
+ for (int _i79 = 0; _i79 < _list78.size; ++_i79)
+ {
+ byte[] _elem80;
+ _elem80 = iprot.readBinary();
+ this.columns.add(_elem80);
+ }
+ iprot.readListEnd();
+ }
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.tableName != null) {
+ oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+ oprot.writeBinary(this.tableName);
+ oprot.writeFieldEnd();
+ }
+ if (this.startRow != null) {
+ oprot.writeFieldBegin(START_ROW_FIELD_DESC);
+ oprot.writeBinary(this.startRow);
+ oprot.writeFieldEnd();
+ }
+ if (this.columns != null) {
+ oprot.writeFieldBegin(COLUMNS_FIELD_DESC);
+ {
+ oprot.writeListBegin(new TList(TType.STRING, this.columns.size()));
+ for (byte[] _iter81 : this.columns) {
+ oprot.writeBinary(_iter81);
+ }
+ oprot.writeListEnd();
+ }
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("scannerOpen_args(");
+ boolean first = true;
+
+ sb.append("tableName:");
+ if (this.tableName == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.tableName);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("startRow:");
+ if (this.startRow == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.startRow);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("columns:");
+ if (this.columns == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.columns);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class scannerOpen_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("scannerOpen_result");
+ private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.I32, (short)0);
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+ public int success;
+ public static final int SUCCESS = 0;
+ public IOError io;
+ public static final int IO = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ public boolean success = false;
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.I32)));
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(scannerOpen_result.class, metaDataMap);
+ }
+
+ public scannerOpen_result() {
+ }
+
+ public scannerOpen_result(
+ int success,
+ IOError io)
+ {
+ this();
+ this.success = success;
+ this.__isset.success = true;
+ this.io = io;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public scannerOpen_result(scannerOpen_result other) {
+ __isset.success = other.__isset.success;
+ this.success = other.success;
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ }
+
+ @Override
+ public scannerOpen_result clone() {
+ return new scannerOpen_result(this);
+ }
+
+ public int getSuccess() {
+ return this.success;
+ }
+
+ public void setSuccess(int success) {
+ this.success = success;
+ this.__isset.success = true;
+ }
+
+ public void unsetSuccess() {
+ this.__isset.success = false;
+ }
+
+ // Returns true if field success is set (has been assigned a value) and false otherwise
+ public boolean isSetSuccess() {
+ return this.__isset.success;
+ }
+
+ public void setSuccessIsSet(boolean value) {
+ this.__isset.success = value;
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+ // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case SUCCESS:
+ if (value == null) {
+ unsetSuccess();
+ } else {
+ setSuccess((Integer)value);
+ }
+ break;
+
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return new Integer(getSuccess());
+
+ case IO:
+ return getIo();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return isSetSuccess();
+ case IO:
+ return isSetIo();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof scannerOpen_result)
+ return this.equals((scannerOpen_result)that);
+ return false;
+ }
+
+ public boolean equals(scannerOpen_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_success = true;
+ boolean that_present_success = true;
+ if (this_present_success || that_present_success) {
+ if (!(this_present_success && that_present_success))
+ return false;
+ if (this.success != that.success)
+ return false;
+ }
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case SUCCESS:
+ if (field.type == TType.I32) {
+ this.success = iprot.readI32();
+ this.__isset.success = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetSuccess()) {
+ oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+ oprot.writeI32(this.success);
+ oprot.writeFieldEnd();
+ } else if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("scannerOpen_result(");
+ boolean first = true;
+
+ sb.append("success:");
+ sb.append(this.success);
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
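+ // Argument wrapper for scannerOpenWithStop: tableName, startRow, stopRow and the list of columns to scan.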
+ public static class scannerOpenWithStop_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("scannerOpenWithStop_args");
+ private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+ private static final TField START_ROW_FIELD_DESC = new TField("startRow", TType.STRING, (short)2);
+ private static final TField STOP_ROW_FIELD_DESC = new TField("stopRow", TType.STRING, (short)3);
+ private static final TField COLUMNS_FIELD_DESC = new TField("columns", TType.LIST, (short)4);
+
+ public byte[] tableName;
+ public static final int TABLENAME = 1;
+ public byte[] startRow;
+ public static final int STARTROW = 2;
+ public byte[] stopRow;
+ public static final int STOPROW = 3;
+ public List<byte[]> columns;
+ public static final int COLUMNS = 4;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(TABLENAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(STARTROW, new FieldMetaData("startRow", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(STOPROW, new FieldMetaData("stopRow", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(COLUMNS, new FieldMetaData("columns", TFieldRequirementType.DEFAULT,
+ new ListMetaData(TType.LIST,
+ new FieldValueMetaData(TType.STRING))));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(scannerOpenWithStop_args.class, metaDataMap);
+ }
+
+ public scannerOpenWithStop_args() {
+ }
+
+ public scannerOpenWithStop_args(
+ byte[] tableName,
+ byte[] startRow,
+ byte[] stopRow,
+ List<byte[]> columns)
+ {
+ this();
+ this.tableName = tableName;
+ this.startRow = startRow;
+ this.stopRow = stopRow;
+ this.columns = columns;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public scannerOpenWithStop_args(scannerOpenWithStop_args other) {
+ if (other.isSetTableName()) {
+ this.tableName = other.tableName;
+ }
+ if (other.isSetStartRow()) {
+ this.startRow = other.startRow;
+ }
+ if (other.isSetStopRow()) {
+ this.stopRow = other.stopRow;
+ }
+ if (other.isSetColumns()) {
+ List<byte[]> __this__columns = new ArrayList<byte[]>();
+ for (byte[] other_element : other.columns) {
+ __this__columns.add(other_element);
+ }
+ this.columns = __this__columns;
+ }
+ }
+
+ @Override
+ public scannerOpenWithStop_args clone() {
+ return new scannerOpenWithStop_args(this);
+ }
+
+ public byte[] getTableName() {
+ return this.tableName;
+ }
+
+ public void setTableName(byte[] tableName) {
+ this.tableName = tableName;
+ }
+
+ public void unsetTableName() {
+ this.tableName = null;
+ }
+
+ // Returns true if field tableName is set (has been assigned a value) and false otherwise
+ public boolean isSetTableName() {
+ return this.tableName != null;
+ }
+
+ public void setTableNameIsSet(boolean value) {
+ if (!value) {
+ this.tableName = null;
+ }
+ }
+
+ public byte[] getStartRow() {
+ return this.startRow;
+ }
+
+ public void setStartRow(byte[] startRow) {
+ this.startRow = startRow;
+ }
+
+ public void unsetStartRow() {
+ this.startRow = null;
+ }
+
+ // Returns true if field startRow is set (has been assigned a value) and false otherwise
+ public boolean isSetStartRow() {
+ return this.startRow != null;
+ }
+
+ public void setStartRowIsSet(boolean value) {
+ if (!value) {
+ this.startRow = null;
+ }
+ }
+
+ public byte[] getStopRow() {
+ return this.stopRow;
+ }
+
+ public void setStopRow(byte[] stopRow) {
+ this.stopRow = stopRow;
+ }
+
+ public void unsetStopRow() {
+ this.stopRow = null;
+ }
+
+ // Returns true if field stopRow is set (has been assigned a value) and false otherwise
+ public boolean isSetStopRow() {
+ return this.stopRow != null;
+ }
+
+ public void setStopRowIsSet(boolean value) {
+ if (!value) {
+ this.stopRow = null;
+ }
+ }
+
+ public int getColumnsSize() {
+ return (this.columns == null) ? 0 : this.columns.size();
+ }
+
+ public java.util.Iterator<byte[]> getColumnsIterator() {
+ return (this.columns == null) ? null : this.columns.iterator();
+ }
+
+ public void addToColumns(byte[] elem) {
+ if (this.columns == null) {
+ this.columns = new ArrayList<byte[]>();
+ }
+ this.columns.add(elem);
+ }
+
+ public List<byte[]> getColumns() {
+ return this.columns;
+ }
+
+ public void setColumns(List<byte[]> columns) {
+ this.columns = columns;
+ }
+
+ public void unsetColumns() {
+ this.columns = null;
+ }
+
+ // Returns true if field columns is set (has been assigned a value) and false otherwise
+ public boolean isSetColumns() {
+ return this.columns != null;
+ }
+
+ public void setColumnsIsSet(boolean value) {
+ if (!value) {
+ this.columns = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case TABLENAME:
+ if (value == null) {
+ unsetTableName();
+ } else {
+ setTableName((byte[])value);
+ }
+ break;
+
+ case STARTROW:
+ if (value == null) {
+ unsetStartRow();
+ } else {
+ setStartRow((byte[])value);
+ }
+ break;
+
+ case STOPROW:
+ if (value == null) {
+ unsetStopRow();
+ } else {
+ setStopRow((byte[])value);
+ }
+ break;
+
+ case COLUMNS:
+ if (value == null) {
+ unsetColumns();
+ } else {
+ setColumns((List<byte[]>)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return getTableName();
+
+ case STARTROW:
+ return getStartRow();
+
+ case STOPROW:
+ return getStopRow();
+
+ case COLUMNS:
+ return getColumns();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return isSetTableName();
+ case STARTROW:
+ return isSetStartRow();
+ case STOPROW:
+ return isSetStopRow();
+ case COLUMNS:
+ return isSetColumns();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof scannerOpenWithStop_args)
+ return this.equals((scannerOpenWithStop_args)that);
+ return false;
+ }
+
+ public boolean equals(scannerOpenWithStop_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_tableName = true && this.isSetTableName();
+ boolean that_present_tableName = true && that.isSetTableName();
+ if (this_present_tableName || that_present_tableName) {
+ if (!(this_present_tableName && that_present_tableName))
+ return false;
+ if (!java.util.Arrays.equals(this.tableName, that.tableName))
+ return false;
+ }
+
+ boolean this_present_startRow = true && this.isSetStartRow();
+ boolean that_present_startRow = true && that.isSetStartRow();
+ if (this_present_startRow || that_present_startRow) {
+ if (!(this_present_startRow && that_present_startRow))
+ return false;
+ if (!java.util.Arrays.equals(this.startRow, that.startRow))
+ return false;
+ }
+
+ boolean this_present_stopRow = true && this.isSetStopRow();
+ boolean that_present_stopRow = true && that.isSetStopRow();
+ if (this_present_stopRow || that_present_stopRow) {
+ if (!(this_present_stopRow && that_present_stopRow))
+ return false;
+ if (!java.util.Arrays.equals(this.stopRow, that.stopRow))
+ return false;
+ }
+
+ boolean this_present_columns = true && this.isSetColumns();
+ boolean that_present_columns = true && that.isSetColumns();
+ if (this_present_columns || that_present_columns) {
+ if (!(this_present_columns && that_present_columns))
+ return false;
+ if (!this.columns.equals(that.columns))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case TABLENAME:
+ if (field.type == TType.STRING) {
+ this.tableName = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case STARTROW:
+ if (field.type == TType.STRING) {
+ this.startRow = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case STOPROW:
+ if (field.type == TType.STRING) {
+ this.stopRow = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case COLUMNS:
+ if (field.type == TType.LIST) {
+ {
+ TList _list82 = iprot.readListBegin();
+ this.columns = new ArrayList<byte[]>(_list82.size);
+ for (int _i83 = 0; _i83 < _list82.size; ++_i83)
+ {
+ byte[] _elem84;
+ _elem84 = iprot.readBinary();
+ this.columns.add(_elem84);
+ }
+ iprot.readListEnd();
+ }
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.tableName != null) {
+ oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+ oprot.writeBinary(this.tableName);
+ oprot.writeFieldEnd();
+ }
+ if (this.startRow != null) {
+ oprot.writeFieldBegin(START_ROW_FIELD_DESC);
+ oprot.writeBinary(this.startRow);
+ oprot.writeFieldEnd();
+ }
+ if (this.stopRow != null) {
+ oprot.writeFieldBegin(STOP_ROW_FIELD_DESC);
+ oprot.writeBinary(this.stopRow);
+ oprot.writeFieldEnd();
+ }
+ if (this.columns != null) {
+ oprot.writeFieldBegin(COLUMNS_FIELD_DESC);
+ {
+ oprot.writeListBegin(new TList(TType.STRING, this.columns.size()));
+ for (byte[] _iter85 : this.columns) {
+ oprot.writeBinary(_iter85);
+ }
+ oprot.writeListEnd();
+ }
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("scannerOpenWithStop_args(");
+ boolean first = true;
+
+ sb.append("tableName:");
+ if (this.tableName == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.tableName);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("startRow:");
+ if (this.startRow == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.startRow);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("stopRow:");
+ if (this.stopRow == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.stopRow);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("columns:");
+ if (this.columns == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.columns);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class scannerOpenWithStop_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("scannerOpenWithStop_result");
+ private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.I32, (short)0);
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+ public int success;
+ public static final int SUCCESS = 0;
+ public IOError io;
+ public static final int IO = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ public boolean success = false;
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.I32)));
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(scannerOpenWithStop_result.class, metaDataMap);
+ }
+
+ public scannerOpenWithStop_result() {
+ }
+
+ public scannerOpenWithStop_result(
+ int success,
+ IOError io)
+ {
+ this();
+ this.success = success;
+ this.__isset.success = true;
+ this.io = io;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public scannerOpenWithStop_result(scannerOpenWithStop_result other) {
+ __isset.success = other.__isset.success;
+ this.success = other.success;
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ }
+
+ @Override
+ public scannerOpenWithStop_result clone() {
+ return new scannerOpenWithStop_result(this);
+ }
+
+ public int getSuccess() {
+ return this.success;
+ }
+
+ public void setSuccess(int success) {
+ this.success = success;
+ this.__isset.success = true;
+ }
+
+ public void unsetSuccess() {
+ this.__isset.success = false;
+ }
+
+ // Returns true if field success is set (has been assigned a value) and false otherwise
+ public boolean isSetSuccess() {
+ return this.__isset.success;
+ }
+
+ public void setSuccessIsSet(boolean value) {
+ this.__isset.success = value;
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+ // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case SUCCESS:
+ if (value == null) {
+ unsetSuccess();
+ } else {
+ setSuccess((Integer)value);
+ }
+ break;
+
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return new Integer(getSuccess());
+
+ case IO:
+ return getIo();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return isSetSuccess();
+ case IO:
+ return isSetIo();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof scannerOpenWithStop_result)
+ return this.equals((scannerOpenWithStop_result)that);
+ return false;
+ }
+
+ public boolean equals(scannerOpenWithStop_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_success = true;
+ boolean that_present_success = true;
+ if (this_present_success || that_present_success) {
+ if (!(this_present_success && that_present_success))
+ return false;
+ if (this.success != that.success)
+ return false;
+ }
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case SUCCESS:
+ if (field.type == TType.I32) {
+ this.success = iprot.readI32();
+ this.__isset.success = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetSuccess()) {
+ oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+ oprot.writeI32(this.success);
+ oprot.writeFieldEnd();
+ } else if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("scannerOpenWithStop_result(");
+ boolean first = true;
+
+ sb.append("success:");
+ sb.append(this.success);
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
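+ // Argument wrapper for scannerOpenTs: same fields as scannerOpen plus an i64 timestamp.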
+ public static class scannerOpenTs_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("scannerOpenTs_args");
+ private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+ private static final TField START_ROW_FIELD_DESC = new TField("startRow", TType.STRING, (short)2);
+ private static final TField COLUMNS_FIELD_DESC = new TField("columns", TType.LIST, (short)3);
+ private static final TField TIMESTAMP_FIELD_DESC = new TField("timestamp", TType.I64, (short)4);
+
+ public byte[] tableName;
+ public static final int TABLENAME = 1;
+ public byte[] startRow;
+ public static final int STARTROW = 2;
+ public List<byte[]> columns;
+ public static final int COLUMNS = 3;
+ public long timestamp;
+ public static final int TIMESTAMP = 4;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ public boolean timestamp = false;
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(TABLENAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(STARTROW, new FieldMetaData("startRow", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(COLUMNS, new FieldMetaData("columns", TFieldRequirementType.DEFAULT,
+ new ListMetaData(TType.LIST,
+ new FieldValueMetaData(TType.STRING))));
+ put(TIMESTAMP, new FieldMetaData("timestamp", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.I64)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(scannerOpenTs_args.class, metaDataMap);
+ }
+
+ public scannerOpenTs_args() {
+ }
+
+ public scannerOpenTs_args(
+ byte[] tableName,
+ byte[] startRow,
+ List<byte[]> columns,
+ long timestamp)
+ {
+ this();
+ this.tableName = tableName;
+ this.startRow = startRow;
+ this.columns = columns;
+ this.timestamp = timestamp;
+ this.__isset.timestamp = true;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public scannerOpenTs_args(scannerOpenTs_args other) {
+ if (other.isSetTableName()) {
+ this.tableName = other.tableName;
+ }
+ if (other.isSetStartRow()) {
+ this.startRow = other.startRow;
+ }
+ if (other.isSetColumns()) {
+ List<byte[]> __this__columns = new ArrayList<byte[]>();
+ for (byte[] other_element : other.columns) {
+ __this__columns.add(other_element);
+ }
+ this.columns = __this__columns;
+ }
+ __isset.timestamp = other.__isset.timestamp;
+ this.timestamp = other.timestamp;
+ }
+
+ @Override
+ public scannerOpenTs_args clone() {
+ return new scannerOpenTs_args(this);
+ }
+
+ public byte[] getTableName() {
+ return this.tableName;
+ }
+
+ public void setTableName(byte[] tableName) {
+ this.tableName = tableName;
+ }
+
+ public void unsetTableName() {
+ this.tableName = null;
+ }
+
+ // Returns true if field tableName is set (has been assigned a value) and false otherwise
+ public boolean isSetTableName() {
+ return this.tableName != null;
+ }
+
+ public void setTableNameIsSet(boolean value) {
+ if (!value) {
+ this.tableName = null;
+ }
+ }
+
+ public byte[] getStartRow() {
+ return this.startRow;
+ }
+
+ public void setStartRow(byte[] startRow) {
+ this.startRow = startRow;
+ }
+
+ public void unsetStartRow() {
+ this.startRow = null;
+ }
+
+ // Returns true if field startRow is set (has been assigned a value) and false otherwise
+ public boolean isSetStartRow() {
+ return this.startRow != null;
+ }
+
+ public void setStartRowIsSet(boolean value) {
+ if (!value) {
+ this.startRow = null;
+ }
+ }
+
+ public int getColumnsSize() {
+ return (this.columns == null) ? 0 : this.columns.size();
+ }
+
+ public java.util.Iterator<byte[]> getColumnsIterator() {
+ return (this.columns == null) ? null : this.columns.iterator();
+ }
+
+ public void addToColumns(byte[] elem) {
+ if (this.columns == null) {
+ this.columns = new ArrayList<byte[]>();
+ }
+ this.columns.add(elem);
+ }
+
+ public List<byte[]> getColumns() {
+ return this.columns;
+ }
+
+ public void setColumns(List<byte[]> columns) {
+ this.columns = columns;
+ }
+
+ public void unsetColumns() {
+ this.columns = null;
+ }
+
+ // Returns true if field columns is set (has been assigned a value) and false otherwise
+ public boolean isSetColumns() {
+ return this.columns != null;
+ }
+
+ public void setColumnsIsSet(boolean value) {
+ if (!value) {
+ this.columns = null;
+ }
+ }
+
+ public long getTimestamp() {
+ return this.timestamp;
+ }
+
+ public void setTimestamp(long timestamp) {
+ this.timestamp = timestamp;
+ this.__isset.timestamp = true;
+ }
+
+ public void unsetTimestamp() {
+ this.__isset.timestamp = false;
+ }
+
+ // Returns true if field timestamp is set (has been assigned a value) and false otherwise
+ public boolean isSetTimestamp() {
+ return this.__isset.timestamp;
+ }
+
+ public void setTimestampIsSet(boolean value) {
+ this.__isset.timestamp = value;
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case TABLENAME:
+ if (value == null) {
+ unsetTableName();
+ } else {
+ setTableName((byte[])value);
+ }
+ break;
+
+ case STARTROW:
+ if (value == null) {
+ unsetStartRow();
+ } else {
+ setStartRow((byte[])value);
+ }
+ break;
+
+ case COLUMNS:
+ if (value == null) {
+ unsetColumns();
+ } else {
+ setColumns((List<byte[]>)value);
+ }
+ break;
+
+ case TIMESTAMP:
+ if (value == null) {
+ unsetTimestamp();
+ } else {
+ setTimestamp((Long)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return getTableName();
+
+ case STARTROW:
+ return getStartRow();
+
+ case COLUMNS:
+ return getColumns();
+
+ case TIMESTAMP:
+ return new Long(getTimestamp());
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return isSetTableName();
+ case STARTROW:
+ return isSetStartRow();
+ case COLUMNS:
+ return isSetColumns();
+ case TIMESTAMP:
+ return isSetTimestamp();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof scannerOpenTs_args)
+ return this.equals((scannerOpenTs_args)that);
+ return false;
+ }
+
+ public boolean equals(scannerOpenTs_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_tableName = true && this.isSetTableName();
+ boolean that_present_tableName = true && that.isSetTableName();
+ if (this_present_tableName || that_present_tableName) {
+ if (!(this_present_tableName && that_present_tableName))
+ return false;
+ if (!java.util.Arrays.equals(this.tableName, that.tableName))
+ return false;
+ }
+
+ boolean this_present_startRow = true && this.isSetStartRow();
+ boolean that_present_startRow = true && that.isSetStartRow();
+ if (this_present_startRow || that_present_startRow) {
+ if (!(this_present_startRow && that_present_startRow))
+ return false;
+ if (!java.util.Arrays.equals(this.startRow, that.startRow))
+ return false;
+ }
+
+ boolean this_present_columns = true && this.isSetColumns();
+ boolean that_present_columns = true && that.isSetColumns();
+ if (this_present_columns || that_present_columns) {
+ if (!(this_present_columns && that_present_columns))
+ return false;
+ if (!this.columns.equals(that.columns))
+ return false;
+ }
+
+ boolean this_present_timestamp = true;
+ boolean that_present_timestamp = true;
+ if (this_present_timestamp || that_present_timestamp) {
+ if (!(this_present_timestamp && that_present_timestamp))
+ return false;
+ if (this.timestamp != that.timestamp)
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case TABLENAME:
+ if (field.type == TType.STRING) {
+ this.tableName = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case STARTROW:
+ if (field.type == TType.STRING) {
+ this.startRow = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case COLUMNS:
+ if (field.type == TType.LIST) {
+ {
+ TList _list86 = iprot.readListBegin();
+ this.columns = new ArrayList<byte[]>(_list86.size);
+ for (int _i87 = 0; _i87 < _list86.size; ++_i87)
+ {
+ byte[] _elem88;
+ _elem88 = iprot.readBinary();
+ this.columns.add(_elem88);
+ }
+ iprot.readListEnd();
+ }
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case TIMESTAMP:
+ if (field.type == TType.I64) {
+ this.timestamp = iprot.readI64();
+ this.__isset.timestamp = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.tableName != null) {
+ oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+ oprot.writeBinary(this.tableName);
+ oprot.writeFieldEnd();
+ }
+ if (this.startRow != null) {
+ oprot.writeFieldBegin(START_ROW_FIELD_DESC);
+ oprot.writeBinary(this.startRow);
+ oprot.writeFieldEnd();
+ }
+ if (this.columns != null) {
+ oprot.writeFieldBegin(COLUMNS_FIELD_DESC);
+ {
+ oprot.writeListBegin(new TList(TType.STRING, this.columns.size()));
+ for (byte[] _iter89 : this.columns) {
+ oprot.writeBinary(_iter89);
+ }
+ oprot.writeListEnd();
+ }
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldBegin(TIMESTAMP_FIELD_DESC);
+ oprot.writeI64(this.timestamp);
+ oprot.writeFieldEnd();
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("scannerOpenTs_args(");
+ boolean first = true;
+
+ sb.append("tableName:");
+ if (this.tableName == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.tableName);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("startRow:");
+ if (this.startRow == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.startRow);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("columns:");
+ if (this.columns == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.columns);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("timestamp:");
+ sb.append(this.timestamp);
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class scannerOpenTs_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("scannerOpenTs_result");
+ private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.I32, (short)0);
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+ public int success;
+ public static final int SUCCESS = 0;
+ public IOError io;
+ public static final int IO = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ public boolean success = false;
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.I32)));
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(scannerOpenTs_result.class, metaDataMap);
+ }
+
+ public scannerOpenTs_result() {
+ }
+
+ public scannerOpenTs_result(
+ int success,
+ IOError io)
+ {
+ this();
+ this.success = success;
+ this.__isset.success = true;
+ this.io = io;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public scannerOpenTs_result(scannerOpenTs_result other) {
+ __isset.success = other.__isset.success;
+ this.success = other.success;
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ }
+
+ @Override
+ public scannerOpenTs_result clone() {
+ return new scannerOpenTs_result(this);
+ }
+
+ public int getSuccess() {
+ return this.success;
+ }
+
+ public void setSuccess(int success) {
+ this.success = success;
+ this.__isset.success = true;
+ }
+
+ public void unsetSuccess() {
+ this.__isset.success = false;
+ }
+
+ // Returns true if field success is set (has been assigned a value) and false otherwise
+ public boolean isSetSuccess() {
+ return this.__isset.success;
+ }
+
+ public void setSuccessIsSet(boolean value) {
+ this.__isset.success = value;
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+ // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case SUCCESS:
+ if (value == null) {
+ unsetSuccess();
+ } else {
+ setSuccess((Integer)value);
+ }
+ break;
+
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return new Integer(getSuccess());
+
+ case IO:
+ return getIo();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return isSetSuccess();
+ case IO:
+ return isSetIo();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof scannerOpenTs_result)
+ return this.equals((scannerOpenTs_result)that);
+ return false;
+ }
+
+ public boolean equals(scannerOpenTs_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_success = true;
+ boolean that_present_success = true;
+ if (this_present_success || that_present_success) {
+ if (!(this_present_success && that_present_success))
+ return false;
+ if (this.success != that.success)
+ return false;
+ }
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case SUCCESS:
+ if (field.type == TType.I32) {
+ this.success = iprot.readI32();
+ this.__isset.success = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetSuccess()) {
+ oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+ oprot.writeI32(this.success);
+ oprot.writeFieldEnd();
+ } else if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("scannerOpenTs_result(");
+ boolean first = true;
+
+ sb.append("success:");
+ sb.append(this.success);
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
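+ // Argument wrapper for scannerOpenWithStopTs: tableName, startRow, stopRow, columns and an i64 timestamp.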
+ public static class scannerOpenWithStopTs_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("scannerOpenWithStopTs_args");
+ private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+ private static final TField START_ROW_FIELD_DESC = new TField("startRow", TType.STRING, (short)2);
+ private static final TField STOP_ROW_FIELD_DESC = new TField("stopRow", TType.STRING, (short)3);
+ private static final TField COLUMNS_FIELD_DESC = new TField("columns", TType.LIST, (short)4);
+ private static final TField TIMESTAMP_FIELD_DESC = new TField("timestamp", TType.I64, (short)5);
+
+ public byte[] tableName;
+ public static final int TABLENAME = 1;
+ public byte[] startRow;
+ public static final int STARTROW = 2;
+ public byte[] stopRow;
+ public static final int STOPROW = 3;
+ public List<byte[]> columns;
+ public static final int COLUMNS = 4;
+ public long timestamp;
+ public static final int TIMESTAMP = 5;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ public boolean timestamp = false;
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(TABLENAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(STARTROW, new FieldMetaData("startRow", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(STOPROW, new FieldMetaData("stopRow", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(COLUMNS, new FieldMetaData("columns", TFieldRequirementType.DEFAULT,
+ new ListMetaData(TType.LIST,
+ new FieldValueMetaData(TType.STRING))));
+ put(TIMESTAMP, new FieldMetaData("timestamp", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.I64)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(scannerOpenWithStopTs_args.class, metaDataMap);
+ }
+
+ public scannerOpenWithStopTs_args() {
+ }
+
+ public scannerOpenWithStopTs_args(
+ byte[] tableName,
+ byte[] startRow,
+ byte[] stopRow,
+ List<byte[]> columns,
+ long timestamp)
+ {
+ this();
+ this.tableName = tableName;
+ this.startRow = startRow;
+ this.stopRow = stopRow;
+ this.columns = columns;
+ this.timestamp = timestamp;
+ this.__isset.timestamp = true;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public scannerOpenWithStopTs_args(scannerOpenWithStopTs_args other) {
+ if (other.isSetTableName()) {
+ this.tableName = other.tableName;
+ }
+ if (other.isSetStartRow()) {
+ this.startRow = other.startRow;
+ }
+ if (other.isSetStopRow()) {
+ this.stopRow = other.stopRow;
+ }
+ if (other.isSetColumns()) {
+ List<byte[]> __this__columns = new ArrayList<byte[]>();
+ for (byte[] other_element : other.columns) {
+ __this__columns.add(other_element);
+ }
+ this.columns = __this__columns;
+ }
+ __isset.timestamp = other.__isset.timestamp;
+ this.timestamp = other.timestamp;
+ }
+
+ @Override
+ public scannerOpenWithStopTs_args clone() {
+ return new scannerOpenWithStopTs_args(this);
+ }
+
+ public byte[] getTableName() {
+ return this.tableName;
+ }
+
+ public void setTableName(byte[] tableName) {
+ this.tableName = tableName;
+ }
+
+ public void unsetTableName() {
+ this.tableName = null;
+ }
+
+ // Returns true if field tableName is set (has been assigned a value) and false otherwise
+ public boolean isSetTableName() {
+ return this.tableName != null;
+ }
+
+ public void setTableNameIsSet(boolean value) {
+ if (!value) {
+ this.tableName = null;
+ }
+ }
+
+ public byte[] getStartRow() {
+ return this.startRow;
+ }
+
+ public void setStartRow(byte[] startRow) {
+ this.startRow = startRow;
+ }
+
+ public void unsetStartRow() {
+ this.startRow = null;
+ }
+
+ // Returns true if field startRow is set (has been assigned a value) and false otherwise
+ public boolean isSetStartRow() {
+ return this.startRow != null;
+ }
+
+ public void setStartRowIsSet(boolean value) {
+ if (!value) {
+ this.startRow = null;
+ }
+ }
+
+ public byte[] getStopRow() {
+ return this.stopRow;
+ }
+
+ public void setStopRow(byte[] stopRow) {
+ this.stopRow = stopRow;
+ }
+
+ public void unsetStopRow() {
+ this.stopRow = null;
+ }
+
+ // Returns true if field stopRow is set (has been assigned a value) and false otherwise
+ public boolean isSetStopRow() {
+ return this.stopRow != null;
+ }
+
+ public void setStopRowIsSet(boolean value) {
+ if (!value) {
+ this.stopRow = null;
+ }
+ }
+
+ public int getColumnsSize() {
+ return (this.columns == null) ? 0 : this.columns.size();
+ }
+
+ public java.util.Iterator<byte[]> getColumnsIterator() {
+ return (this.columns == null) ? null : this.columns.iterator();
+ }
+
+ public void addToColumns(byte[] elem) {
+ if (this.columns == null) {
+ this.columns = new ArrayList<byte[]>();
+ }
+ this.columns.add(elem);
+ }
+
+ public List<byte[]> getColumns() {
+ return this.columns;
+ }
+
+ public void setColumns(List<byte[]> columns) {
+ this.columns = columns;
+ }
+
+ public void unsetColumns() {
+ this.columns = null;
+ }
+
+ // Returns true if field columns is set (has been assigned a value) and false otherwise
+ public boolean isSetColumns() {
+ return this.columns != null;
+ }
+
+ public void setColumnsIsSet(boolean value) {
+ if (!value) {
+ this.columns = null;
+ }
+ }
+
+ public long getTimestamp() {
+ return this.timestamp;
+ }
+
+ public void setTimestamp(long timestamp) {
+ this.timestamp = timestamp;
+ this.__isset.timestamp = true;
+ }
+
+ public void unsetTimestamp() {
+ this.__isset.timestamp = false;
+ }
+
+ // Returns true if field timestamp is set (has been assigned a value) and false otherwise
+ public boolean isSetTimestamp() {
+ return this.__isset.timestamp;
+ }
+
+ public void setTimestampIsSet(boolean value) {
+ this.__isset.timestamp = value;
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case TABLENAME:
+ if (value == null) {
+ unsetTableName();
+ } else {
+ setTableName((byte[])value);
+ }
+ break;
+
+ case STARTROW:
+ if (value == null) {
+ unsetStartRow();
+ } else {
+ setStartRow((byte[])value);
+ }
+ break;
+
+ case STOPROW:
+ if (value == null) {
+ unsetStopRow();
+ } else {
+ setStopRow((byte[])value);
+ }
+ break;
+
+ case COLUMNS:
+ if (value == null) {
+ unsetColumns();
+ } else {
+ setColumns((List<byte[]>)value);
+ }
+ break;
+
+ case TIMESTAMP:
+ if (value == null) {
+ unsetTimestamp();
+ } else {
+ setTimestamp((Long)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return getTableName();
+
+ case STARTROW:
+ return getStartRow();
+
+ case STOPROW:
+ return getStopRow();
+
+ case COLUMNS:
+ return getColumns();
+
+ case TIMESTAMP:
+ return new Long(getTimestamp());
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case TABLENAME:
+ return isSetTableName();
+ case STARTROW:
+ return isSetStartRow();
+ case STOPROW:
+ return isSetStopRow();
+ case COLUMNS:
+ return isSetColumns();
+ case TIMESTAMP:
+ return isSetTimestamp();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof scannerOpenWithStopTs_args)
+ return this.equals((scannerOpenWithStopTs_args)that);
+ return false;
+ }
+
+ public boolean equals(scannerOpenWithStopTs_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_tableName = true && this.isSetTableName();
+ boolean that_present_tableName = true && that.isSetTableName();
+ if (this_present_tableName || that_present_tableName) {
+ if (!(this_present_tableName && that_present_tableName))
+ return false;
+ if (!java.util.Arrays.equals(this.tableName, that.tableName))
+ return false;
+ }
+
+ boolean this_present_startRow = true && this.isSetStartRow();
+ boolean that_present_startRow = true && that.isSetStartRow();
+ if (this_present_startRow || that_present_startRow) {
+ if (!(this_present_startRow && that_present_startRow))
+ return false;
+ if (!java.util.Arrays.equals(this.startRow, that.startRow))
+ return false;
+ }
+
+ boolean this_present_stopRow = true && this.isSetStopRow();
+ boolean that_present_stopRow = true && that.isSetStopRow();
+ if (this_present_stopRow || that_present_stopRow) {
+ if (!(this_present_stopRow && that_present_stopRow))
+ return false;
+ if (!java.util.Arrays.equals(this.stopRow, that.stopRow))
+ return false;
+ }
+
+ boolean this_present_columns = true && this.isSetColumns();
+ boolean that_present_columns = true && that.isSetColumns();
+ if (this_present_columns || that_present_columns) {
+ if (!(this_present_columns && that_present_columns))
+ return false;
+ if (!this.columns.equals(that.columns))
+ return false;
+ }
+
+ boolean this_present_timestamp = true;
+ boolean that_present_timestamp = true;
+ if (this_present_timestamp || that_present_timestamp) {
+ if (!(this_present_timestamp && that_present_timestamp))
+ return false;
+ if (this.timestamp != that.timestamp)
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case TABLENAME:
+ if (field.type == TType.STRING) {
+ this.tableName = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case STARTROW:
+ if (field.type == TType.STRING) {
+ this.startRow = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case STOPROW:
+ if (field.type == TType.STRING) {
+ this.stopRow = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case COLUMNS:
+ if (field.type == TType.LIST) {
+ {
+ TList _list90 = iprot.readListBegin();
+ this.columns = new ArrayList<byte[]>(_list90.size);
+ for (int _i91 = 0; _i91 < _list90.size; ++_i91)
+ {
+ byte[] _elem92;
+ _elem92 = iprot.readBinary();
+ this.columns.add(_elem92);
+ }
+ iprot.readListEnd();
+ }
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case TIMESTAMP:
+ if (field.type == TType.I64) {
+ this.timestamp = iprot.readI64();
+ this.__isset.timestamp = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.tableName != null) {
+ oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+ oprot.writeBinary(this.tableName);
+ oprot.writeFieldEnd();
+ }
+ if (this.startRow != null) {
+ oprot.writeFieldBegin(START_ROW_FIELD_DESC);
+ oprot.writeBinary(this.startRow);
+ oprot.writeFieldEnd();
+ }
+ if (this.stopRow != null) {
+ oprot.writeFieldBegin(STOP_ROW_FIELD_DESC);
+ oprot.writeBinary(this.stopRow);
+ oprot.writeFieldEnd();
+ }
+ if (this.columns != null) {
+ oprot.writeFieldBegin(COLUMNS_FIELD_DESC);
+ {
+ oprot.writeListBegin(new TList(TType.STRING, this.columns.size()));
+ for (byte[] _iter93 : this.columns) {
+ oprot.writeBinary(_iter93);
+ }
+ oprot.writeListEnd();
+ }
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldBegin(TIMESTAMP_FIELD_DESC);
+ oprot.writeI64(this.timestamp);
+ oprot.writeFieldEnd();
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("scannerOpenWithStopTs_args(");
+ boolean first = true;
+
+ sb.append("tableName:");
+ if (this.tableName == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.tableName);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("startRow:");
+ if (this.startRow == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.startRow);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("stopRow:");
+ if (this.stopRow == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.stopRow);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("columns:");
+ if (this.columns == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.columns);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("timestamp:");
+ sb.append(this.timestamp);
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class scannerOpenWithStopTs_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("scannerOpenWithStopTs_result");
+ private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.I32, (short)0);
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+ public int success;
+ public static final int SUCCESS = 0;
+ public IOError io;
+ public static final int IO = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ public boolean success = false;
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.I32)));
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(scannerOpenWithStopTs_result.class, metaDataMap);
+ }
+
+ public scannerOpenWithStopTs_result() {
+ }
+
+ public scannerOpenWithStopTs_result(
+ int success,
+ IOError io)
+ {
+ this();
+ this.success = success;
+ this.__isset.success = true;
+ this.io = io;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public scannerOpenWithStopTs_result(scannerOpenWithStopTs_result other) {
+ __isset.success = other.__isset.success;
+ this.success = other.success;
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ }
+
+ @Override
+ public scannerOpenWithStopTs_result clone() {
+ return new scannerOpenWithStopTs_result(this);
+ }
+
+ public int getSuccess() {
+ return this.success;
+ }
+
+ public void setSuccess(int success) {
+ this.success = success;
+ this.__isset.success = true;
+ }
+
+ public void unsetSuccess() {
+ this.__isset.success = false;
+ }
+
+ // Returns true if field success is set (has been assigned a value) and false otherwise
+ public boolean isSetSuccess() {
+ return this.__isset.success;
+ }
+
+ public void setSuccessIsSet(boolean value) {
+ this.__isset.success = value;
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+ // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case SUCCESS:
+ if (value == null) {
+ unsetSuccess();
+ } else {
+ setSuccess((Integer)value);
+ }
+ break;
+
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return new Integer(getSuccess());
+
+ case IO:
+ return getIo();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return isSetSuccess();
+ case IO:
+ return isSetIo();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof scannerOpenWithStopTs_result)
+ return this.equals((scannerOpenWithStopTs_result)that);
+ return false;
+ }
+
+ public boolean equals(scannerOpenWithStopTs_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_success = true;
+ boolean that_present_success = true;
+ if (this_present_success || that_present_success) {
+ if (!(this_present_success && that_present_success))
+ return false;
+ if (this.success != that.success)
+ return false;
+ }
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case SUCCESS:
+ if (field.type == TType.I32) {
+ this.success = iprot.readI32();
+ this.__isset.success = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetSuccess()) {
+ oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+ oprot.writeI32(this.success);
+ oprot.writeFieldEnd();
+ } else if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("scannerOpenWithStopTs_result(");
+ boolean first = true;
+
+ sb.append("success:");
+ sb.append(this.success);
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class scannerGet_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("scannerGet_args");
+ private static final TField ID_FIELD_DESC = new TField("id", TType.I32, (short)1);
+
+ public int id;
+ public static final int ID = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ public boolean id = false;
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(ID, new FieldMetaData("id", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.I32)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(scannerGet_args.class, metaDataMap);
+ }
+
+ public scannerGet_args() {
+ }
+
+ public scannerGet_args(
+ int id)
+ {
+ this();
+ this.id = id;
+ this.__isset.id = true;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public scannerGet_args(scannerGet_args other) {
+ __isset.id = other.__isset.id;
+ this.id = other.id;
+ }
+
+ @Override
+ public scannerGet_args clone() {
+ return new scannerGet_args(this);
+ }
+
+ public int getId() {
+ return this.id;
+ }
+
+ public void setId(int id) {
+ this.id = id;
+ this.__isset.id = true;
+ }
+
+ public void unsetId() {
+ this.__isset.id = false;
+ }
+
+ // Returns true if field id is set (has been assigned a value) and false otherwise
+ public boolean isSetId() {
+ return this.__isset.id;
+ }
+
+ public void setIdIsSet(boolean value) {
+ this.__isset.id = value;
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case ID:
+ if (value == null) {
+ unsetId();
+ } else {
+ setId((Integer)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case ID:
+ return new Integer(getId());
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case ID:
+ return isSetId();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof scannerGet_args)
+ return this.equals((scannerGet_args)that);
+ return false;
+ }
+
+ public boolean equals(scannerGet_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_id = true;
+ boolean that_present_id = true;
+ if (this_present_id || that_present_id) {
+ if (!(this_present_id && that_present_id))
+ return false;
+ if (this.id != that.id)
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case ID:
+ if (field.type == TType.I32) {
+ this.id = iprot.readI32();
+ this.__isset.id = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ oprot.writeFieldBegin(ID_FIELD_DESC);
+ oprot.writeI32(this.id);
+ oprot.writeFieldEnd();
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("scannerGet_args(");
+ boolean first = true;
+
+ sb.append("id:");
+ sb.append(this.id);
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class scannerGet_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("scannerGet_result");
+ private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.LIST, (short)0);
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+ private static final TField IA_FIELD_DESC = new TField("ia", TType.STRUCT, (short)2);
+
+ public List<TRowResult> success;
+ public static final int SUCCESS = 0;
+ public IOError io;
+ public static final int IO = 1;
+ public IllegalArgument ia;
+ public static final int IA = 2;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+ new ListMetaData(TType.LIST,
+ new StructMetaData(TType.STRUCT, TRowResult.class))));
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ put(IA, new FieldMetaData("ia", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(scannerGet_result.class, metaDataMap);
+ }
+
+ public scannerGet_result() {
+ }
+
+ public scannerGet_result(
+ List<TRowResult> success,
+ IOError io,
+ IllegalArgument ia)
+ {
+ this();
+ this.success = success;
+ this.io = io;
+ this.ia = ia;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public scannerGet_result(scannerGet_result other) {
+ if (other.isSetSuccess()) {
+ List<TRowResult> __this__success = new ArrayList<TRowResult>();
+ for (TRowResult other_element : other.success) {
+ __this__success.add(new TRowResult(other_element));
+ }
+ this.success = __this__success;
+ }
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ if (other.isSetIa()) {
+ this.ia = new IllegalArgument(other.ia);
+ }
+ }
+
+ @Override
+ public scannerGet_result clone() {
+ return new scannerGet_result(this);
+ }
+
+ public int getSuccessSize() {
+ return (this.success == null) ? 0 : this.success.size();
+ }
+
+ public java.util.Iterator<TRowResult> getSuccessIterator() {
+ return (this.success == null) ? null : this.success.iterator();
+ }
+
+ public void addToSuccess(TRowResult elem) {
+ if (this.success == null) {
+ this.success = new ArrayList<TRowResult>();
+ }
+ this.success.add(elem);
+ }
+
+ public List<TRowResult> getSuccess() {
+ return this.success;
+ }
+
+ public void setSuccess(List<TRowResult> success) {
+ this.success = success;
+ }
+
+ public void unsetSuccess() {
+ this.success = null;
+ }
+
+ // Returns true if field success is set (has been assigned a value) and false otherwise
+ public boolean isSetSuccess() {
+ return this.success != null;
+ }
+
+ public void setSuccessIsSet(boolean value) {
+ if (!value) {
+ this.success = null;
+ }
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+ // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public IllegalArgument getIa() {
+ return this.ia;
+ }
+
+ public void setIa(IllegalArgument ia) {
+ this.ia = ia;
+ }
+
+ public void unsetIa() {
+ this.ia = null;
+ }
+
+ // Returns true if field ia is set (has been assigned a value) and false otherwise
+ public boolean isSetIa() {
+ return this.ia != null;
+ }
+
+ public void setIaIsSet(boolean value) {
+ if (!value) {
+ this.ia = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case SUCCESS:
+ if (value == null) {
+ unsetSuccess();
+ } else {
+ setSuccess((List<TRowResult>)value);
+ }
+ break;
+
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ case IA:
+ if (value == null) {
+ unsetIa();
+ } else {
+ setIa((IllegalArgument)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return getSuccess();
+
+ case IO:
+ return getIo();
+
+ case IA:
+ return getIa();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return isSetSuccess();
+ case IO:
+ return isSetIo();
+ case IA:
+ return isSetIa();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof scannerGet_result)
+ return this.equals((scannerGet_result)that);
+ return false;
+ }
+
+ public boolean equals(scannerGet_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_success = true && this.isSetSuccess();
+ boolean that_present_success = true && that.isSetSuccess();
+ if (this_present_success || that_present_success) {
+ if (!(this_present_success && that_present_success))
+ return false;
+ if (!this.success.equals(that.success))
+ return false;
+ }
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ boolean this_present_ia = true && this.isSetIa();
+ boolean that_present_ia = true && that.isSetIa();
+ if (this_present_ia || that_present_ia) {
+ if (!(this_present_ia && that_present_ia))
+ return false;
+ if (!this.ia.equals(that.ia))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case SUCCESS:
+ if (field.type == TType.LIST) {
+ {
+ TList _list94 = iprot.readListBegin();
+ this.success = new ArrayList<TRowResult>(_list94.size);
+ for (int _i95 = 0; _i95 < _list94.size; ++_i95)
+ {
+ TRowResult _elem96;
+ _elem96 = new TRowResult();
+ _elem96.read(iprot);
+ this.success.add(_elem96);
+ }
+ iprot.readListEnd();
+ }
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case IA:
+ if (field.type == TType.STRUCT) {
+ this.ia = new IllegalArgument();
+ this.ia.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetSuccess()) {
+ oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+ {
+ oprot.writeListBegin(new TList(TType.STRUCT, this.success.size()));
+ for (TRowResult _iter97 : this.success) {
+ _iter97.write(oprot);
+ }
+ oprot.writeListEnd();
+ }
+ oprot.writeFieldEnd();
+ } else if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ } else if (this.isSetIa()) {
+ oprot.writeFieldBegin(IA_FIELD_DESC);
+ this.ia.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("scannerGet_result(");
+ boolean first = true;
+
+ sb.append("success:");
+ if (this.success == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.success);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("ia:");
+ if (this.ia == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.ia);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class scannerGetList_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("scannerGetList_args");
+ private static final TField ID_FIELD_DESC = new TField("id", TType.I32, (short)1);
+ private static final TField NB_ROWS_FIELD_DESC = new TField("nbRows", TType.I32, (short)2);
+
+ public int id;
+ public static final int ID = 1;
+ public int nbRows;
+ public static final int NBROWS = 2;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ public boolean id = false;
+ public boolean nbRows = false;
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(ID, new FieldMetaData("id", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.I32)));
+ put(NBROWS, new FieldMetaData("nbRows", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.I32)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(scannerGetList_args.class, metaDataMap);
+ }
+
+ public scannerGetList_args() {
+ }
+
+ public scannerGetList_args(
+ int id,
+ int nbRows)
+ {
+ this();
+ this.id = id;
+ this.__isset.id = true;
+ this.nbRows = nbRows;
+ this.__isset.nbRows = true;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public scannerGetList_args(scannerGetList_args other) {
+ __isset.id = other.__isset.id;
+ this.id = other.id;
+ __isset.nbRows = other.__isset.nbRows;
+ this.nbRows = other.nbRows;
+ }
+
+ @Override
+ public scannerGetList_args clone() {
+ return new scannerGetList_args(this);
+ }
+
+ public int getId() {
+ return this.id;
+ }
+
+ public void setId(int id) {
+ this.id = id;
+ this.__isset.id = true;
+ }
+
+ public void unsetId() {
+ this.__isset.id = false;
+ }
+
+ // Returns true if field id is set (has been assigned a value) and false otherwise
+ public boolean isSetId() {
+ return this.__isset.id;
+ }
+
+ public void setIdIsSet(boolean value) {
+ this.__isset.id = value;
+ }
+
+ public int getNbRows() {
+ return this.nbRows;
+ }
+
+ public void setNbRows(int nbRows) {
+ this.nbRows = nbRows;
+ this.__isset.nbRows = true;
+ }
+
+ public void unsetNbRows() {
+ this.__isset.nbRows = false;
+ }
+
+ // Returns true if field nbRows is set (has been assigned a value) and false otherwise
+ public boolean isSetNbRows() {
+ return this.__isset.nbRows;
+ }
+
+ public void setNbRowsIsSet(boolean value) {
+ this.__isset.nbRows = value;
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case ID:
+ if (value == null) {
+ unsetId();
+ } else {
+ setId((Integer)value);
+ }
+ break;
+
+ case NBROWS:
+ if (value == null) {
+ unsetNbRows();
+ } else {
+ setNbRows((Integer)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case ID:
+ return new Integer(getId());
+
+ case NBROWS:
+ return new Integer(getNbRows());
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case ID:
+ return isSetId();
+ case NBROWS:
+ return isSetNbRows();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof scannerGetList_args)
+ return this.equals((scannerGetList_args)that);
+ return false;
+ }
+
+ public boolean equals(scannerGetList_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_id = true;
+ boolean that_present_id = true;
+ if (this_present_id || that_present_id) {
+ if (!(this_present_id && that_present_id))
+ return false;
+ if (this.id != that.id)
+ return false;
+ }
+
+ boolean this_present_nbRows = true;
+ boolean that_present_nbRows = true;
+ if (this_present_nbRows || that_present_nbRows) {
+ if (!(this_present_nbRows && that_present_nbRows))
+ return false;
+ if (this.nbRows != that.nbRows)
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case ID:
+ if (field.type == TType.I32) {
+ this.id = iprot.readI32();
+ this.__isset.id = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case NBROWS:
+ if (field.type == TType.I32) {
+ this.nbRows = iprot.readI32();
+ this.__isset.nbRows = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ oprot.writeFieldBegin(ID_FIELD_DESC);
+ oprot.writeI32(this.id);
+ oprot.writeFieldEnd();
+ oprot.writeFieldBegin(NB_ROWS_FIELD_DESC);
+ oprot.writeI32(this.nbRows);
+ oprot.writeFieldEnd();
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("scannerGetList_args(");
+ boolean first = true;
+
+ sb.append("id:");
+ sb.append(this.id);
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("nbRows:");
+ sb.append(this.nbRows);
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class scannerGetList_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("scannerGetList_result");
+ private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.LIST, (short)0);
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+ private static final TField IA_FIELD_DESC = new TField("ia", TType.STRUCT, (short)2);
+
+ public List<TRowResult> success;
+ public static final int SUCCESS = 0;
+ public IOError io;
+ public static final int IO = 1;
+ public IllegalArgument ia;
+ public static final int IA = 2;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+ new ListMetaData(TType.LIST,
+ new StructMetaData(TType.STRUCT, TRowResult.class))));
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ put(IA, new FieldMetaData("ia", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(scannerGetList_result.class, metaDataMap);
+ }
+
+ public scannerGetList_result() {
+ }
+
+ public scannerGetList_result(
+ List<TRowResult> success,
+ IOError io,
+ IllegalArgument ia)
+ {
+ this();
+ this.success = success;
+ this.io = io;
+ this.ia = ia;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public scannerGetList_result(scannerGetList_result other) {
+ if (other.isSetSuccess()) {
+ List<TRowResult> __this__success = new ArrayList<TRowResult>();
+ for (TRowResult other_element : other.success) {
+ __this__success.add(new TRowResult(other_element));
+ }
+ this.success = __this__success;
+ }
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ if (other.isSetIa()) {
+ this.ia = new IllegalArgument(other.ia);
+ }
+ }
+
+ @Override
+ public scannerGetList_result clone() {
+ return new scannerGetList_result(this);
+ }
+
+ public int getSuccessSize() {
+ return (this.success == null) ? 0 : this.success.size();
+ }
+
+ public java.util.Iterator<TRowResult> getSuccessIterator() {
+ return (this.success == null) ? null : this.success.iterator();
+ }
+
+ public void addToSuccess(TRowResult elem) {
+ if (this.success == null) {
+ this.success = new ArrayList<TRowResult>();
+ }
+ this.success.add(elem);
+ }
+
+ public List<TRowResult> getSuccess() {
+ return this.success;
+ }
+
+ public void setSuccess(List<TRowResult> success) {
+ this.success = success;
+ }
+
+ public void unsetSuccess() {
+ this.success = null;
+ }
+
+ // Returns true if field success is set (has been assigned a value) and false otherwise
+ public boolean isSetSuccess() {
+ return this.success != null;
+ }
+
+ public void setSuccessIsSet(boolean value) {
+ if (!value) {
+ this.success = null;
+ }
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+ // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public IllegalArgument getIa() {
+ return this.ia;
+ }
+
+ public void setIa(IllegalArgument ia) {
+ this.ia = ia;
+ }
+
+ public void unsetIa() {
+ this.ia = null;
+ }
+
+ // Returns true if field ia is set (has been assigned a value) and false otherwise
+ public boolean isSetIa() {
+ return this.ia != null;
+ }
+
+ public void setIaIsSet(boolean value) {
+ if (!value) {
+ this.ia = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case SUCCESS:
+ if (value == null) {
+ unsetSuccess();
+ } else {
+ setSuccess((List<TRowResult>)value);
+ }
+ break;
+
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ case IA:
+ if (value == null) {
+ unsetIa();
+ } else {
+ setIa((IllegalArgument)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return getSuccess();
+
+ case IO:
+ return getIo();
+
+ case IA:
+ return getIa();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case SUCCESS:
+ return isSetSuccess();
+ case IO:
+ return isSetIo();
+ case IA:
+ return isSetIa();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof scannerGetList_result)
+ return this.equals((scannerGetList_result)that);
+ return false;
+ }
+
+ public boolean equals(scannerGetList_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_success = true && this.isSetSuccess();
+ boolean that_present_success = true && that.isSetSuccess();
+ if (this_present_success || that_present_success) {
+ if (!(this_present_success && that_present_success))
+ return false;
+ if (!this.success.equals(that.success))
+ return false;
+ }
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ boolean this_present_ia = true && this.isSetIa();
+ boolean that_present_ia = true && that.isSetIa();
+ if (this_present_ia || that_present_ia) {
+ if (!(this_present_ia && that_present_ia))
+ return false;
+ if (!this.ia.equals(that.ia))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case SUCCESS:
+ if (field.type == TType.LIST) {
+ {
+ TList _list98 = iprot.readListBegin();
+ this.success = new ArrayList<TRowResult>(_list98.size);
+ for (int _i99 = 0; _i99 < _list98.size; ++_i99)
+ {
+ TRowResult _elem100;
+ _elem100 = new TRowResult();
+ _elem100.read(iprot);
+ this.success.add(_elem100);
+ }
+ iprot.readListEnd();
+ }
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case IA:
+ if (field.type == TType.STRUCT) {
+ this.ia = new IllegalArgument();
+ this.ia.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetSuccess()) {
+ oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+ {
+ oprot.writeListBegin(new TList(TType.STRUCT, this.success.size()));
+ for (TRowResult _iter101 : this.success) {
+ _iter101.write(oprot);
+ }
+ oprot.writeListEnd();
+ }
+ oprot.writeFieldEnd();
+ } else if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ } else if (this.isSetIa()) {
+ oprot.writeFieldBegin(IA_FIELD_DESC);
+ this.ia.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("scannerGetList_result(");
+ boolean first = true;
+
+ sb.append("success:");
+ if (this.success == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.success);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("ia:");
+ if (this.ia == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.ia);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class scannerClose_args implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("scannerClose_args");
+ private static final TField ID_FIELD_DESC = new TField("id", TType.I32, (short)1);
+
+ public int id;
+ public static final int ID = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ public boolean id = false;
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(ID, new FieldMetaData("id", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.I32)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(scannerClose_args.class, metaDataMap);
+ }
+
+ public scannerClose_args() {
+ }
+
+ public scannerClose_args(
+ int id)
+ {
+ this();
+ this.id = id;
+ this.__isset.id = true;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public scannerClose_args(scannerClose_args other) {
+ __isset.id = other.__isset.id;
+ this.id = other.id;
+ }
+
+ @Override
+ public scannerClose_args clone() {
+ return new scannerClose_args(this);
+ }
+
+ public int getId() {
+ return this.id;
+ }
+
+ public void setId(int id) {
+ this.id = id;
+ this.__isset.id = true;
+ }
+
+ public void unsetId() {
+ this.__isset.id = false;
+ }
+
+ // Returns true if field id is set (has been assigned a value) and false otherwise
+ public boolean isSetId() {
+ return this.__isset.id;
+ }
+
+ public void setIdIsSet(boolean value) {
+ this.__isset.id = value;
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case ID:
+ if (value == null) {
+ unsetId();
+ } else {
+ setId((Integer)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case ID:
+ return new Integer(getId());
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case ID:
+ return isSetId();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof scannerClose_args)
+ return this.equals((scannerClose_args)that);
+ return false;
+ }
+
+ public boolean equals(scannerClose_args that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_id = true;
+ boolean that_present_id = true;
+ if (this_present_id || that_present_id) {
+ if (!(this_present_id && that_present_id))
+ return false;
+ if (this.id != that.id)
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case ID:
+ if (field.type == TType.I32) {
+ this.id = iprot.readI32();
+ this.__isset.id = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ oprot.writeFieldBegin(ID_FIELD_DESC);
+ oprot.writeI32(this.id);
+ oprot.writeFieldEnd();
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("scannerClose_args(");
+ boolean first = true;
+
+ sb.append("id:");
+ sb.append(this.id);
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+ public static class scannerClose_result implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("scannerClose_result");
+ private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+ private static final TField IA_FIELD_DESC = new TField("ia", TType.STRUCT, (short)2);
+
+ public IOError io;
+ public static final int IO = 1;
+ public IllegalArgument ia;
+ public static final int IA = 2;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ put(IA, new FieldMetaData("ia", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRUCT)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(scannerClose_result.class, metaDataMap);
+ }
+
+ public scannerClose_result() {
+ }
+
+ public scannerClose_result(
+ IOError io,
+ IllegalArgument ia)
+ {
+ this();
+ this.io = io;
+ this.ia = ia;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public scannerClose_result(scannerClose_result other) {
+ if (other.isSetIo()) {
+ this.io = new IOError(other.io);
+ }
+ if (other.isSetIa()) {
+ this.ia = new IllegalArgument(other.ia);
+ }
+ }
+
+ @Override
+ public scannerClose_result clone() {
+ return new scannerClose_result(this);
+ }
+
+ public IOError getIo() {
+ return this.io;
+ }
+
+ public void setIo(IOError io) {
+ this.io = io;
+ }
+
+ public void unsetIo() {
+ this.io = null;
+ }
+
+ // Returns true if field io is set (has been assigned a value) and false otherwise
+ public boolean isSetIo() {
+ return this.io != null;
+ }
+
+ public void setIoIsSet(boolean value) {
+ if (!value) {
+ this.io = null;
+ }
+ }
+
+ public IllegalArgument getIa() {
+ return this.ia;
+ }
+
+ public void setIa(IllegalArgument ia) {
+ this.ia = ia;
+ }
+
+ public void unsetIa() {
+ this.ia = null;
+ }
+
+ // Returns true if field ia is set (has been assigned a value) and false otherwise
+ public boolean isSetIa() {
+ return this.ia != null;
+ }
+
+ public void setIaIsSet(boolean value) {
+ if (!value) {
+ this.ia = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case IO:
+ if (value == null) {
+ unsetIo();
+ } else {
+ setIo((IOError)value);
+ }
+ break;
+
+ case IA:
+ if (value == null) {
+ unsetIa();
+ } else {
+ setIa((IllegalArgument)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case IO:
+ return getIo();
+
+ case IA:
+ return getIa();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case IO:
+ return isSetIo();
+ case IA:
+ return isSetIa();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof scannerClose_result)
+ return this.equals((scannerClose_result)that);
+ return false;
+ }
+
+ public boolean equals(scannerClose_result that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_io = true && this.isSetIo();
+ boolean that_present_io = true && that.isSetIo();
+ if (this_present_io || that_present_io) {
+ if (!(this_present_io && that_present_io))
+ return false;
+ if (!this.io.equals(that.io))
+ return false;
+ }
+
+ boolean this_present_ia = true && this.isSetIa();
+ boolean that_present_ia = true && that.isSetIa();
+ if (this_present_ia || that_present_ia) {
+ if (!(this_present_ia && that_present_ia))
+ return false;
+ if (!this.ia.equals(that.ia))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case IO:
+ if (field.type == TType.STRUCT) {
+ this.io = new IOError();
+ this.io.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case IA:
+ if (field.type == TType.STRUCT) {
+ this.ia = new IllegalArgument();
+ this.ia.read(iprot);
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ oprot.writeStructBegin(STRUCT_DESC);
+
+ if (this.isSetIo()) {
+ oprot.writeFieldBegin(IO_FIELD_DESC);
+ this.io.write(oprot);
+ oprot.writeFieldEnd();
+ } else if (this.isSetIa()) {
+ oprot.writeFieldBegin(IA_FIELD_DESC);
+ this.ia.write(oprot);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("scannerClose_result(");
+ boolean first = true;
+
+ sb.append("io:");
+ if (this.io == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.io);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("ia:");
+ if (this.ia == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.ia);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+ }
+
+}
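Note on the scanner structs above: the scanner*_args / scanner*_result classes are the request and response halves of the scanner RPCs in the generated Hbase service. For orientation only (this sketch is not part of the commit), a minimal client-side drain loop could look as follows; it assumes the stock libthrift transport and protocol classes, that Hbase.Client exposes scannerGetList(int id, int nbRows) and scannerClose(int id) matching the argument structs above, and that TRowResult carries a public byte[] row field.

    // Illustrative only; not part of the generated patch. The Hbase.Client
    // method signatures and TRowResult.row are inferred from the structs
    // above, not confirmed by this excerpt.
    import java.util.List;

    import org.apache.thrift.protocol.TBinaryProtocol;
    import org.apache.thrift.protocol.TProtocol;
    import org.apache.thrift.transport.TSocket;
    import org.apache.thrift.transport.TTransport;

    import org.apache.hadoop.hbase.thrift.generated.Hbase;
    import org.apache.hadoop.hbase.thrift.generated.IOError;
    import org.apache.hadoop.hbase.thrift.generated.IllegalArgument;
    import org.apache.hadoop.hbase.thrift.generated.TRowResult;

    public class ScannerSketch {
      public static void drain(int scannerId) throws Exception {
        // Hypothetical gateway host/port.
        TTransport transport = new TSocket("localhost", 9090);
        transport.open();
        TProtocol protocol = new TBinaryProtocol(transport);
        Hbase.Client client = new Hbase.Client(protocol);
        try {
          // Pull rows in batches until the scanner is exhausted (empty list).
          List<TRowResult> batch;
          while (!(batch = client.scannerGetList(scannerId, 100)).isEmpty()) {
            for (TRowResult row : batch) {
              System.out.println(new String(row.row));
            }
          }
        } catch (IOError e) {
          System.err.println("master / region server error: " + e.message);
        } catch (IllegalArgument e) {
          System.err.println("bad scanner id: " + e.message);
        } finally {
          client.scannerClose(scannerId);
          transport.close();
        }
      }
    }

Batching through scannerGetList rather than calling scannerGet once per row keeps the number of round trips proportional to the batch count rather than the row count, which is the point of having both RPCs.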
diff --git a/src/java/org/apache/hadoop/hbase/thrift/generated/IOError.java b/src/java/org/apache/hadoop/hbase/thrift/generated/IOError.java
new file mode 100644
index 0000000..4280d0d
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/thrift/generated/IOError.java
@@ -0,0 +1,240 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+/**
+ * Autogenerated by Thrift
+ *
+ * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+ */
+package org.apache.hadoop.hbase.thrift.generated;
+
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.Collections;
+
+import org.apache.thrift.*;
+import org.apache.thrift.meta_data.*;
+import org.apache.thrift.protocol.*;
+
+/**
+ * An IOError exception signals that an error occurred while communicating
+ * with the HBase master or an HBase region server. It is also used to
+ * return more general HBase error conditions.
+ */
+public class IOError extends Exception implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("IOError");
+ private static final TField MESSAGE_FIELD_DESC = new TField("message", TType.STRING, (short)1);
+
+ public String message;
+ public static final int MESSAGE = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(MESSAGE, new FieldMetaData("message", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(IOError.class, metaDataMap);
+ }
+
+ public IOError() {
+ }
+
+ public IOError(
+ String message)
+ {
+ this();
+ this.message = message;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public IOError(IOError other) {
+ if (other.isSetMessage()) {
+ this.message = other.message;
+ }
+ }
+
+ @Override
+ public IOError clone() {
+ return new IOError(this);
+ }
+
+ public String getMessage() {
+ return this.message;
+ }
+
+ public void setMessage(String message) {
+ this.message = message;
+ }
+
+ public void unsetMessage() {
+ this.message = null;
+ }
+
+ // Returns true if field message is set (has been assigned a value) and false otherwise
+ public boolean isSetMessage() {
+ return this.message != null;
+ }
+
+ public void setMessageIsSet(boolean value) {
+ if (!value) {
+ this.message = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case MESSAGE:
+ if (value == null) {
+ unsetMessage();
+ } else {
+ setMessage((String)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case MESSAGE:
+ return getMessage();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case MESSAGE:
+ return isSetMessage();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof IOError)
+ return this.equals((IOError)that);
+ return false;
+ }
+
+ public boolean equals(IOError that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_message = true && this.isSetMessage();
+ boolean that_present_message = true && that.isSetMessage();
+ if (this_present_message || that_present_message) {
+ if (!(this_present_message && that_present_message))
+ return false;
+ if (!this.message.equals(that.message))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case MESSAGE:
+ if (field.type == TType.STRING) {
+ this.message = iprot.readString();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.message != null) {
+ oprot.writeFieldBegin(MESSAGE_FIELD_DESC);
+ oprot.writeString(this.message);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("IOError(");
+ boolean first = true;
+
+ sb.append("message:");
+ if (this.message == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.message);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+}
+
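IOError is the catch-all exception of the generated API. As a rough, hypothetical sketch of how a server-side handler is expected to use it (the method below is illustrative and not taken from this patch), a java.io.IOException gets flattened into the single message field before crossing the Thrift boundary:

    // Illustrative only: a hypothetical handler method, not the actual
    // ThriftServer handler code from this commit.
    import java.io.IOException;
    import org.apache.hadoop.hbase.thrift.generated.IOError;

    public class HandlerSketch {
      public byte[] getRowOrThrow(byte[] tableName, byte[] row) throws IOError {
        try {
          return fetchFromHBase(tableName, row);   // placeholder for real table access
        } catch (IOException e) {
          // The generated struct only carries a message string, so that is all
          // that survives the trip back to the client.
          throw new IOError(e.getMessage());
        }
      }

      private byte[] fetchFromHBase(byte[] tableName, byte[] row) throws IOException {
        throw new IOException("not implemented in this sketch");
      }
    }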
diff --git a/src/java/org/apache/hadoop/hbase/thrift/generated/IllegalArgument.java b/src/java/org/apache/hadoop/hbase/thrift/generated/IllegalArgument.java
new file mode 100644
index 0000000..b51ef61
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/thrift/generated/IllegalArgument.java
@@ -0,0 +1,239 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+/**
+ * Autogenerated by Thrift
+ *
+ * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+ */
+package org.apache.hadoop.hbase.thrift.generated;
+
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.Collections;
+
+import org.apache.thrift.*;
+import org.apache.thrift.meta_data.*;
+import org.apache.thrift.protocol.*;
+
+/**
+ * An IllegalArgument exception indicates that an illegal or invalid
+ * argument was passed into a procedure.
+ */
+public class IllegalArgument extends Exception implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("IllegalArgument");
+ private static final TField MESSAGE_FIELD_DESC = new TField("message", TType.STRING, (short)1);
+
+ public String message;
+ public static final int MESSAGE = 1;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(MESSAGE, new FieldMetaData("message", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(IllegalArgument.class, metaDataMap);
+ }
+
+ public IllegalArgument() {
+ }
+
+ public IllegalArgument(
+ String message)
+ {
+ this();
+ this.message = message;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public IllegalArgument(IllegalArgument other) {
+ if (other.isSetMessage()) {
+ this.message = other.message;
+ }
+ }
+
+ @Override
+ public IllegalArgument clone() {
+ return new IllegalArgument(this);
+ }
+
+ public String getMessage() {
+ return this.message;
+ }
+
+ public void setMessage(String message) {
+ this.message = message;
+ }
+
+ public void unsetMessage() {
+ this.message = null;
+ }
+
+ // Returns true if field message is set (has been assigned a value) and false otherwise
+ public boolean isSetMessage() {
+ return this.message != null;
+ }
+
+ public void setMessageIsSet(boolean value) {
+ if (!value) {
+ this.message = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case MESSAGE:
+ if (value == null) {
+ unsetMessage();
+ } else {
+ setMessage((String)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case MESSAGE:
+ return getMessage();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case MESSAGE:
+ return isSetMessage();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof IllegalArgument)
+ return this.equals((IllegalArgument)that);
+ return false;
+ }
+
+ public boolean equals(IllegalArgument that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_message = true && this.isSetMessage();
+ boolean that_present_message = true && that.isSetMessage();
+ if (this_present_message || that_present_message) {
+ if (!(this_present_message && that_present_message))
+ return false;
+ if (!this.message.equals(that.message))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case MESSAGE:
+ if (field.type == TType.STRING) {
+ this.message = iprot.readString();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.message != null) {
+ oprot.writeFieldBegin(MESSAGE_FIELD_DESC);
+ oprot.writeString(this.message);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("IllegalArgument(");
+ boolean first = true;
+
+ sb.append("message:");
+ if (this.message == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.message);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+}
+
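The next file adds the generated Mutation struct (isDelete, column, value). Assuming the generated Hbase.Client also exposes a mutateRow(byte[] tableName, byte[] row, List<Mutation> mutations) call (that signature is an assumption; it is not shown in this excerpt), a combined put-and-delete for one row might be built like this:

    // Illustrative only; mutateRow's exact signature is assumed, not taken
    // from this excerpt.
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.hadoop.hbase.thrift.generated.Hbase;
    import org.apache.hadoop.hbase.thrift.generated.Mutation;

    public class MutationSketch {
      public static void putAndDelete(Hbase.Client client) throws Exception {
        List<Mutation> mutations = new ArrayList<Mutation>();
        // Update one cell: isDelete=false, column "family:qualifier", new value.
        mutations.add(new Mutation(false, "info:name".getBytes(), "new-value".getBytes()));
        // Delete another cell in the same row: isDelete=true, value is ignored.
        mutations.add(new Mutation(true, "info:old".getBytes(), null));
        client.mutateRow("mytable".getBytes(), "row-1".getBytes(), mutations);
      }
    }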
diff --git a/src/java/org/apache/hadoop/hbase/thrift/generated/Mutation.java b/src/java/org/apache/hadoop/hbase/thrift/generated/Mutation.java
new file mode 100644
index 0000000..921beaa
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/thrift/generated/Mutation.java
@@ -0,0 +1,385 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+/**
+ * Autogenerated by Thrift
+ *
+ * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+ */
+package org.apache.hadoop.hbase.thrift.generated;
+
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.Collections;
+
+import org.apache.thrift.*;
+import org.apache.thrift.meta_data.*;
+import org.apache.thrift.protocol.*;
+
+/**
+ * A Mutation object is used to either update or delete a single column value.
+ */
+public class Mutation implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("Mutation");
+ private static final TField IS_DELETE_FIELD_DESC = new TField("isDelete", TType.BOOL, (short)1);
+ private static final TField COLUMN_FIELD_DESC = new TField("column", TType.STRING, (short)2);
+ private static final TField VALUE_FIELD_DESC = new TField("value", TType.STRING, (short)3);
+
+ public boolean isDelete;
+ public static final int ISDELETE = 1;
+ public byte[] column;
+ public static final int COLUMN = 2;
+ public byte[] value;
+ public static final int VALUE = 3;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ public boolean isDelete = false;
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(ISDELETE, new FieldMetaData("isDelete", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.BOOL)));
+ put(COLUMN, new FieldMetaData("column", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(VALUE, new FieldMetaData("value", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(Mutation.class, metaDataMap);
+ }
+
+ public Mutation() {
+ this.isDelete = false;
+
+ }
+
+ public Mutation(
+ boolean isDelete,
+ byte[] column,
+ byte[] value)
+ {
+ this();
+ this.isDelete = isDelete;
+ this.__isset.isDelete = true;
+ this.column = column;
+ this.value = value;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public Mutation(Mutation other) {
+ __isset.isDelete = other.__isset.isDelete;
+ this.isDelete = other.isDelete;
+ if (other.isSetColumn()) {
+ this.column = other.column;
+ }
+ if (other.isSetValue()) {
+ this.value = other.value;
+ }
+ }
+
+ @Override
+ public Mutation clone() {
+ return new Mutation(this);
+ }
+
+ public boolean isIsDelete() {
+ return this.isDelete;
+ }
+
+ public void setIsDelete(boolean isDelete) {
+ this.isDelete = isDelete;
+ this.__isset.isDelete = true;
+ }
+
+ public void unsetIsDelete() {
+ this.__isset.isDelete = false;
+ }
+
+ // Returns true if field isDelete is set (has been assigned a value) and false otherwise
+ public boolean isSetIsDelete() {
+ return this.__isset.isDelete;
+ }
+
+ public void setIsDeleteIsSet(boolean value) {
+ this.__isset.isDelete = value;
+ }
+
+ public byte[] getColumn() {
+ return this.column;
+ }
+
+ public void setColumn(byte[] column) {
+ this.column = column;
+ }
+
+ public void unsetColumn() {
+ this.column = null;
+ }
+
+ // Returns true if field column is set (has been assigned a value) and false otherwise
+ public boolean isSetColumn() {
+ return this.column != null;
+ }
+
+ public void setColumnIsSet(boolean value) {
+ if (!value) {
+ this.column = null;
+ }
+ }
+
+ public byte[] getValue() {
+ return this.value;
+ }
+
+ public void setValue(byte[] value) {
+ this.value = value;
+ }
+
+ public void unsetValue() {
+ this.value = null;
+ }
+
+ // Returns true if field value is set (has been assigned a value) and false otherwise
+ public boolean isSetValue() {
+ return this.value != null;
+ }
+
+ public void setValueIsSet(boolean value) {
+ if (!value) {
+ this.value = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case ISDELETE:
+ if (value == null) {
+ unsetIsDelete();
+ } else {
+ setIsDelete((Boolean)value);
+ }
+ break;
+
+ case COLUMN:
+ if (value == null) {
+ unsetColumn();
+ } else {
+ setColumn((byte[])value);
+ }
+ break;
+
+ case VALUE:
+ if (value == null) {
+ unsetValue();
+ } else {
+ setValue((byte[])value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case ISDELETE:
+ return new Boolean(isIsDelete());
+
+ case COLUMN:
+ return getColumn();
+
+ case VALUE:
+ return getValue();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case ISDELETE:
+ return isSetIsDelete();
+ case COLUMN:
+ return isSetColumn();
+ case VALUE:
+ return isSetValue();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof Mutation)
+ return this.equals((Mutation)that);
+ return false;
+ }
+
+ public boolean equals(Mutation that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_isDelete = true;
+ boolean that_present_isDelete = true;
+ if (this_present_isDelete || that_present_isDelete) {
+ if (!(this_present_isDelete && that_present_isDelete))
+ return false;
+ if (this.isDelete != that.isDelete)
+ return false;
+ }
+
+ boolean this_present_column = true && this.isSetColumn();
+ boolean that_present_column = true && that.isSetColumn();
+ if (this_present_column || that_present_column) {
+ if (!(this_present_column && that_present_column))
+ return false;
+ if (!java.util.Arrays.equals(this.column, that.column))
+ return false;
+ }
+
+ boolean this_present_value = true && this.isSetValue();
+ boolean that_present_value = true && that.isSetValue();
+ if (this_present_value || that_present_value) {
+ if (!(this_present_value && that_present_value))
+ return false;
+ if (!java.util.Arrays.equals(this.value, that.value))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case ISDELETE:
+ if (field.type == TType.BOOL) {
+ this.isDelete = iprot.readBool();
+ this.__isset.isDelete = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case COLUMN:
+ if (field.type == TType.STRING) {
+ this.column = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case VALUE:
+ if (field.type == TType.STRING) {
+ this.value = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ oprot.writeFieldBegin(IS_DELETE_FIELD_DESC);
+ oprot.writeBool(this.isDelete);
+ oprot.writeFieldEnd();
+ if (this.column != null) {
+ oprot.writeFieldBegin(COLUMN_FIELD_DESC);
+ oprot.writeBinary(this.column);
+ oprot.writeFieldEnd();
+ }
+ if (this.value != null) {
+ oprot.writeFieldBegin(VALUE_FIELD_DESC);
+ oprot.writeBinary(this.value);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("Mutation(");
+ boolean first = true;
+
+ sb.append("isDelete:");
+ sb.append(this.isDelete);
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("column:");
+ if (this.column == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.column);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("value:");
+ if (this.value == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.value);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+}
+
diff --git a/src/java/org/apache/hadoop/hbase/thrift/generated/TCell.java b/src/java/org/apache/hadoop/hbase/thrift/generated/TCell.java
new file mode 100644
index 0000000..cc94058
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/thrift/generated/TCell.java
@@ -0,0 +1,311 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+/**
+ * Autogenerated by Thrift
+ *
+ * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+ */
+package org.apache.hadoop.hbase.thrift.generated;
+
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.Collections;
+
+import org.apache.thrift.*;
+import org.apache.thrift.meta_data.*;
+import org.apache.thrift.protocol.*;
+
+/**
+ * TCell - Used to transport a cell value (byte[]) and the timestamp it was
+ * stored with together as a result for get and getRow methods. This promotes
+ * the timestamp of a cell to a first-class value, making it easy to take
+ * note of temporal data. Cell is used all the way from HStore up to HTable.
+ */
+public class TCell implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("TCell");
+ private static final TField VALUE_FIELD_DESC = new TField("value", TType.STRING, (short)1);
+ private static final TField TIMESTAMP_FIELD_DESC = new TField("timestamp", TType.I64, (short)2);
+
+ public byte[] value;
+ public static final int VALUE = 1;
+ public long timestamp;
+ public static final int TIMESTAMP = 2;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ public boolean timestamp = false;
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(VALUE, new FieldMetaData("value", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(TIMESTAMP, new FieldMetaData("timestamp", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.I64)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(TCell.class, metaDataMap);
+ }
+
+ public TCell() {
+ }
+
+ public TCell(
+ byte[] value,
+ long timestamp)
+ {
+ this();
+ this.value = value;
+ this.timestamp = timestamp;
+ this.__isset.timestamp = true;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public TCell(TCell other) {
+ if (other.isSetValue()) {
+ this.value = other.value;
+ }
+ __isset.timestamp = other.__isset.timestamp;
+ this.timestamp = other.timestamp;
+ }
+
+ @Override
+ public TCell clone() {
+ return new TCell(this);
+ }
+
+ public byte[] getValue() {
+ return this.value;
+ }
+
+ public void setValue(byte[] value) {
+ this.value = value;
+ }
+
+ public void unsetValue() {
+ this.value = null;
+ }
+
+ // Returns true if field value is set (has been assigned a value) and false otherwise
+ public boolean isSetValue() {
+ return this.value != null;
+ }
+
+ public void setValueIsSet(boolean value) {
+ if (!value) {
+ this.value = null;
+ }
+ }
+
+ public long getTimestamp() {
+ return this.timestamp;
+ }
+
+ public void setTimestamp(long timestamp) {
+ this.timestamp = timestamp;
+ this.__isset.timestamp = true;
+ }
+
+ public void unsetTimestamp() {
+ this.__isset.timestamp = false;
+ }
+
+ // Returns true if field timestamp is set (has been assigned a value) and false otherwise
+ public boolean isSetTimestamp() {
+ return this.__isset.timestamp;
+ }
+
+ public void setTimestampIsSet(boolean value) {
+ this.__isset.timestamp = value;
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case VALUE:
+ if (value == null) {
+ unsetValue();
+ } else {
+ setValue((byte[])value);
+ }
+ break;
+
+ case TIMESTAMP:
+ if (value == null) {
+ unsetTimestamp();
+ } else {
+ setTimestamp((Long)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case VALUE:
+ return getValue();
+
+ case TIMESTAMP:
+ return new Long(getTimestamp());
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case VALUE:
+ return isSetValue();
+ case TIMESTAMP:
+ return isSetTimestamp();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof TCell)
+ return this.equals((TCell)that);
+ return false;
+ }
+
+ public boolean equals(TCell that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_value = true && this.isSetValue();
+ boolean that_present_value = true && that.isSetValue();
+ if (this_present_value || that_present_value) {
+ if (!(this_present_value && that_present_value))
+ return false;
+ if (!java.util.Arrays.equals(this.value, that.value))
+ return false;
+ }
+
+ boolean this_present_timestamp = true;
+ boolean that_present_timestamp = true;
+ if (this_present_timestamp || that_present_timestamp) {
+ if (!(this_present_timestamp && that_present_timestamp))
+ return false;
+ if (this.timestamp != that.timestamp)
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case VALUE:
+ if (field.type == TType.STRING) {
+ this.value = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case TIMESTAMP:
+ if (field.type == TType.I64) {
+ this.timestamp = iprot.readI64();
+ this.__isset.timestamp = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.value != null) {
+ oprot.writeFieldBegin(VALUE_FIELD_DESC);
+ oprot.writeBinary(this.value);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldBegin(TIMESTAMP_FIELD_DESC);
+ oprot.writeI64(this.timestamp);
+ oprot.writeFieldEnd();
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("TCell(");
+ boolean first = true;
+
+ sb.append("value:");
+ if (this.value == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.value);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("timestamp:");
+ sb.append(this.timestamp);
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+}
+
diff --git a/src/java/org/apache/hadoop/hbase/thrift/generated/TRegionInfo.java b/src/java/org/apache/hadoop/hbase/thrift/generated/TRegionInfo.java
new file mode 100644
index 0000000..4f36936
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/thrift/generated/TRegionInfo.java
@@ -0,0 +1,528 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+/**
+ * Autogenerated by Thrift
+ *
+ * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+ */
+package org.apache.hadoop.hbase.thrift.generated;
+
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.Collections;
+
+import org.apache.thrift.*;
+import org.apache.thrift.meta_data.*;
+import org.apache.thrift.protocol.*;
+
+/**
+ * A TRegionInfo contains information about an HTable region.
+ */
+public class TRegionInfo implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("TRegionInfo");
+ private static final TField START_KEY_FIELD_DESC = new TField("startKey", TType.STRING, (short)1);
+ private static final TField END_KEY_FIELD_DESC = new TField("endKey", TType.STRING, (short)2);
+ private static final TField ID_FIELD_DESC = new TField("id", TType.I64, (short)3);
+ private static final TField NAME_FIELD_DESC = new TField("name", TType.STRING, (short)4);
+ private static final TField VERSION_FIELD_DESC = new TField("version", TType.BYTE, (short)5);
+
+ public byte[] startKey;
+ public static final int STARTKEY = 1;
+ public byte[] endKey;
+ public static final int ENDKEY = 2;
+ public long id;
+ public static final int ID = 3;
+ public byte[] name;
+ public static final int NAME = 4;
+ public byte version;
+ public static final int VERSION = 5;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ public boolean id = false;
+ public boolean version = false;
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(STARTKEY, new FieldMetaData("startKey", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(ENDKEY, new FieldMetaData("endKey", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(ID, new FieldMetaData("id", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.I64)));
+ put(NAME, new FieldMetaData("name", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(VERSION, new FieldMetaData("version", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.BYTE)));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(TRegionInfo.class, metaDataMap);
+ }
+
+ public TRegionInfo() {
+ }
+
+ public TRegionInfo(
+ byte[] startKey,
+ byte[] endKey,
+ long id,
+ byte[] name,
+ byte version)
+ {
+ this();
+ this.startKey = startKey;
+ this.endKey = endKey;
+ this.id = id;
+ this.__isset.id = true;
+ this.name = name;
+ this.version = version;
+ this.__isset.version = true;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public TRegionInfo(TRegionInfo other) {
+ if (other.isSetStartKey()) {
+ this.startKey = other.startKey;
+ }
+ if (other.isSetEndKey()) {
+ this.endKey = other.endKey;
+ }
+ __isset.id = other.__isset.id;
+ this.id = other.id;
+ if (other.isSetName()) {
+ this.name = other.name;
+ }
+ __isset.version = other.__isset.version;
+ this.version = other.version;
+ }
+
+ @Override
+ public TRegionInfo clone() {
+ return new TRegionInfo(this);
+ }
+
+ public byte[] getStartKey() {
+ return this.startKey;
+ }
+
+ public void setStartKey(byte[] startKey) {
+ this.startKey = startKey;
+ }
+
+ public void unsetStartKey() {
+ this.startKey = null;
+ }
+
+ // Returns true if field startKey is set (has been assigned a value) and false otherwise
+ public boolean isSetStartKey() {
+ return this.startKey != null;
+ }
+
+ public void setStartKeyIsSet(boolean value) {
+ if (!value) {
+ this.startKey = null;
+ }
+ }
+
+ public byte[] getEndKey() {
+ return this.endKey;
+ }
+
+ public void setEndKey(byte[] endKey) {
+ this.endKey = endKey;
+ }
+
+ public void unsetEndKey() {
+ this.endKey = null;
+ }
+
+ // Returns true if field endKey is set (has been assigned a value) and false otherwise
+ public boolean isSetEndKey() {
+ return this.endKey != null;
+ }
+
+ public void setEndKeyIsSet(boolean value) {
+ if (!value) {
+ this.endKey = null;
+ }
+ }
+
+ public long getId() {
+ return this.id;
+ }
+
+ public void setId(long id) {
+ this.id = id;
+ this.__isset.id = true;
+ }
+
+ public void unsetId() {
+ this.__isset.id = false;
+ }
+
+ // Returns true if field id is set (has been assigned a value) and false otherwise
+ public boolean isSetId() {
+ return this.__isset.id;
+ }
+
+ public void setIdIsSet(boolean value) {
+ this.__isset.id = value;
+ }
+
+ public byte[] getName() {
+ return this.name;
+ }
+
+ public void setName(byte[] name) {
+ this.name = name;
+ }
+
+ public void unsetName() {
+ this.name = null;
+ }
+
+ // Returns true if field name is set (has been assigned a value) and false otherwise
+ public boolean isSetName() {
+ return this.name != null;
+ }
+
+ public void setNameIsSet(boolean value) {
+ if (!value) {
+ this.name = null;
+ }
+ }
+
+ public byte getVersion() {
+ return this.version;
+ }
+
+ public void setVersion(byte version) {
+ this.version = version;
+ this.__isset.version = true;
+ }
+
+ public void unsetVersion() {
+ this.__isset.version = false;
+ }
+
+ // Returns true if field version is set (has been assigned a value) and false otherwise
+ public boolean isSetVersion() {
+ return this.__isset.version;
+ }
+
+ public void setVersionIsSet(boolean value) {
+ this.__isset.version = value;
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case STARTKEY:
+ if (value == null) {
+ unsetStartKey();
+ } else {
+ setStartKey((byte[])value);
+ }
+ break;
+
+ case ENDKEY:
+ if (value == null) {
+ unsetEndKey();
+ } else {
+ setEndKey((byte[])value);
+ }
+ break;
+
+ case ID:
+ if (value == null) {
+ unsetId();
+ } else {
+ setId((Long)value);
+ }
+ break;
+
+ case NAME:
+ if (value == null) {
+ unsetName();
+ } else {
+ setName((byte[])value);
+ }
+ break;
+
+ case VERSION:
+ if (value == null) {
+ unsetVersion();
+ } else {
+ setVersion((Byte)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case STARTKEY:
+ return getStartKey();
+
+ case ENDKEY:
+ return getEndKey();
+
+ case ID:
+ return new Long(getId());
+
+ case NAME:
+ return getName();
+
+ case VERSION:
+ return new Byte(getVersion());
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case STARTKEY:
+ return isSetStartKey();
+ case ENDKEY:
+ return isSetEndKey();
+ case ID:
+ return isSetId();
+ case NAME:
+ return isSetName();
+ case VERSION:
+ return isSetVersion();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof TRegionInfo)
+ return this.equals((TRegionInfo)that);
+ return false;
+ }
+
+ public boolean equals(TRegionInfo that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_startKey = true && this.isSetStartKey();
+ boolean that_present_startKey = true && that.isSetStartKey();
+ if (this_present_startKey || that_present_startKey) {
+ if (!(this_present_startKey && that_present_startKey))
+ return false;
+ if (!java.util.Arrays.equals(this.startKey, that.startKey))
+ return false;
+ }
+
+ boolean this_present_endKey = true && this.isSetEndKey();
+ boolean that_present_endKey = true && that.isSetEndKey();
+ if (this_present_endKey || that_present_endKey) {
+ if (!(this_present_endKey && that_present_endKey))
+ return false;
+ if (!java.util.Arrays.equals(this.endKey, that.endKey))
+ return false;
+ }
+
+ boolean this_present_id = true;
+ boolean that_present_id = true;
+ if (this_present_id || that_present_id) {
+ if (!(this_present_id && that_present_id))
+ return false;
+ if (this.id != that.id)
+ return false;
+ }
+
+ boolean this_present_name = true && this.isSetName();
+ boolean that_present_name = true && that.isSetName();
+ if (this_present_name || that_present_name) {
+ if (!(this_present_name && that_present_name))
+ return false;
+ if (!java.util.Arrays.equals(this.name, that.name))
+ return false;
+ }
+
+ boolean this_present_version = true;
+ boolean that_present_version = true;
+ if (this_present_version || that_present_version) {
+ if (!(this_present_version && that_present_version))
+ return false;
+ if (this.version != that.version)
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case STARTKEY:
+ if (field.type == TType.STRING) {
+ this.startKey = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case ENDKEY:
+ if (field.type == TType.STRING) {
+ this.endKey = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case ID:
+ if (field.type == TType.I64) {
+ this.id = iprot.readI64();
+ this.__isset.id = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case NAME:
+ if (field.type == TType.STRING) {
+ this.name = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case VERSION:
+ if (field.type == TType.BYTE) {
+ this.version = iprot.readByte();
+ this.__isset.version = true;
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.startKey != null) {
+ oprot.writeFieldBegin(START_KEY_FIELD_DESC);
+ oprot.writeBinary(this.startKey);
+ oprot.writeFieldEnd();
+ }
+ if (this.endKey != null) {
+ oprot.writeFieldBegin(END_KEY_FIELD_DESC);
+ oprot.writeBinary(this.endKey);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldBegin(ID_FIELD_DESC);
+ oprot.writeI64(this.id);
+ oprot.writeFieldEnd();
+ if (this.name != null) {
+ oprot.writeFieldBegin(NAME_FIELD_DESC);
+ oprot.writeBinary(this.name);
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldBegin(VERSION_FIELD_DESC);
+ oprot.writeByte(this.version);
+ oprot.writeFieldEnd();
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("TRegionInfo(");
+ boolean first = true;
+
+ sb.append("startKey:");
+ if (this.startKey == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.startKey);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("endKey:");
+ if (this.endKey == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.endKey);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("id:");
+ sb.append(this.id);
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("name:");
+ if (this.name == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.name);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("version:");
+ sb.append(this.version);
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+}
+
diff --git a/src/java/org/apache/hadoop/hbase/thrift/generated/TRowResult.java b/src/java/org/apache/hadoop/hbase/thrift/generated/TRowResult.java
new file mode 100644
index 0000000..c00eb58
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/thrift/generated/TRowResult.java
@@ -0,0 +1,358 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+/**
+ * Autogenerated by Thrift
+ *
+ * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
+ */
+package org.apache.hadoop.hbase.thrift.generated;
+
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.Collections;
+
+import org.apache.thrift.*;
+import org.apache.thrift.meta_data.*;
+import org.apache.thrift.protocol.*;
+
+/**
+ * Holds row name and then a map of columns to cells.
+ */
+public class TRowResult implements TBase, java.io.Serializable, Cloneable {
+ private static final TStruct STRUCT_DESC = new TStruct("TRowResult");
+ private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)1);
+ private static final TField COLUMNS_FIELD_DESC = new TField("columns", TType.MAP, (short)2);
+
+ public byte[] row;
+ public static final int ROW = 1;
+ public Map<byte[],TCell> columns;
+ public static final int COLUMNS = 2;
+
+ private final Isset __isset = new Isset();
+ private static final class Isset implements java.io.Serializable {
+ }
+
+ public static final Map<Integer, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new HashMap<Integer, FieldMetaData>() {{
+ put(ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+ new FieldValueMetaData(TType.STRING)));
+ put(COLUMNS, new FieldMetaData("columns", TFieldRequirementType.DEFAULT,
+ new MapMetaData(TType.MAP,
+ new FieldValueMetaData(TType.STRING),
+ new StructMetaData(TType.STRUCT, TCell.class))));
+ }});
+
+ static {
+ FieldMetaData.addStructMetaDataMap(TRowResult.class, metaDataMap);
+ }
+
+ public TRowResult() {
+ }
+
+ public TRowResult(
+ byte[] row,
+ Map<byte[],TCell> columns)
+ {
+ this();
+ this.row = row;
+ this.columns = columns;
+ }
+
+ /**
+ * Performs a deep copy on <i>other</i>.
+ */
+ public TRowResult(TRowResult other) {
+ if (other.isSetRow()) {
+ this.row = other.row;
+ }
+ if (other.isSetColumns()) {
+ Map<byte[],TCell> __this__columns = new HashMap<byte[],TCell>();
+ for (Map.Entry<byte[], TCell> other_element : other.columns.entrySet()) {
+
+ byte[] other_element_key = other_element.getKey();
+ TCell other_element_value = other_element.getValue();
+
+ byte[] __this__columns_copy_key = other_element_key;
+
+ TCell __this__columns_copy_value = new TCell(other_element_value);
+
+ __this__columns.put(__this__columns_copy_key, __this__columns_copy_value);
+ }
+ this.columns = __this__columns;
+ }
+ }
+
+ @Override
+ public TRowResult clone() {
+ return new TRowResult(this);
+ }
+
+ public byte[] getRow() {
+ return this.row;
+ }
+
+ public void setRow(byte[] row) {
+ this.row = row;
+ }
+
+ public void unsetRow() {
+ this.row = null;
+ }
+
+ // Returns true if field row is set (has been assigned a value) and false otherwise
+ public boolean isSetRow() {
+ return this.row != null;
+ }
+
+ public void setRowIsSet(boolean value) {
+ if (!value) {
+ this.row = null;
+ }
+ }
+
+ public int getColumnsSize() {
+ return (this.columns == null) ? 0 : this.columns.size();
+ }
+
+ public void putToColumns(byte[] key, TCell val) {
+ if (this.columns == null) {
+ this.columns = new HashMap<byte[],TCell>();
+ }
+ this.columns.put(key, val);
+ }
+
+ public Map<byte[],TCell> getColumns() {
+ return this.columns;
+ }
+
+ public void setColumns(Map<byte[],TCell> columns) {
+ this.columns = columns;
+ }
+
+ public void unsetColumns() {
+ this.columns = null;
+ }
+
+ // Returns true if field columns is set (has been assigned a value) and false otherwise
+ public boolean isSetColumns() {
+ return this.columns != null;
+ }
+
+ public void setColumnsIsSet(boolean value) {
+ if (!value) {
+ this.columns = null;
+ }
+ }
+
+ public void setFieldValue(int fieldID, Object value) {
+ switch (fieldID) {
+ case ROW:
+ if (value == null) {
+ unsetRow();
+ } else {
+ setRow((byte[])value);
+ }
+ break;
+
+ case COLUMNS:
+ if (value == null) {
+ unsetColumns();
+ } else {
+ setColumns((Map<byte[],TCell>)value);
+ }
+ break;
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ public Object getFieldValue(int fieldID) {
+ switch (fieldID) {
+ case ROW:
+ return getRow();
+
+ case COLUMNS:
+ return getColumns();
+
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ // Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise
+ public boolean isSet(int fieldID) {
+ switch (fieldID) {
+ case ROW:
+ return isSetRow();
+ case COLUMNS:
+ return isSetColumns();
+ default:
+ throw new IllegalArgumentException("Field " + fieldID + " doesn't exist!");
+ }
+ }
+
+ @Override
+ public boolean equals(Object that) {
+ if (that == null)
+ return false;
+ if (that instanceof TRowResult)
+ return this.equals((TRowResult)that);
+ return false;
+ }
+
+ public boolean equals(TRowResult that) {
+ if (that == null)
+ return false;
+
+ boolean this_present_row = true && this.isSetRow();
+ boolean that_present_row = true && that.isSetRow();
+ if (this_present_row || that_present_row) {
+ if (!(this_present_row && that_present_row))
+ return false;
+ if (!java.util.Arrays.equals(this.row, that.row))
+ return false;
+ }
+
+ boolean this_present_columns = true && this.isSetColumns();
+ boolean that_present_columns = true && that.isSetColumns();
+ if (this_present_columns || that_present_columns) {
+ if (!(this_present_columns && that_present_columns))
+ return false;
+ if (!this.columns.equals(that.columns))
+ return false;
+ }
+
+ return true;
+ }
+
+ @Override
+ public int hashCode() {
+ return 0;
+ }
+
+ public void read(TProtocol iprot) throws TException {
+ TField field;
+ iprot.readStructBegin();
+ while (true)
+ {
+ field = iprot.readFieldBegin();
+ if (field.type == TType.STOP) {
+ break;
+ }
+ switch (field.id)
+ {
+ case ROW:
+ if (field.type == TType.STRING) {
+ this.row = iprot.readBinary();
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ case COLUMNS:
+ if (field.type == TType.MAP) {
+ {
+ TMap _map4 = iprot.readMapBegin();
+ this.columns = new HashMap<byte[],TCell>(2*_map4.size);
+ for (int _i5 = 0; _i5 < _map4.size; ++_i5)
+ {
+ byte[] _key6;
+ TCell _val7;
+ _key6 = iprot.readBinary();
+ _val7 = new TCell();
+ _val7.read(iprot);
+ this.columns.put(_key6, _val7);
+ }
+ iprot.readMapEnd();
+ }
+ } else {
+ TProtocolUtil.skip(iprot, field.type);
+ }
+ break;
+ default:
+ TProtocolUtil.skip(iprot, field.type);
+ break;
+ }
+ iprot.readFieldEnd();
+ }
+ iprot.readStructEnd();
+
+
+ // check for required fields of primitive type, which can't be checked in the validate method
+ validate();
+ }
+
+ public void write(TProtocol oprot) throws TException {
+ validate();
+
+ oprot.writeStructBegin(STRUCT_DESC);
+ if (this.row != null) {
+ oprot.writeFieldBegin(ROW_FIELD_DESC);
+ oprot.writeBinary(this.row);
+ oprot.writeFieldEnd();
+ }
+ if (this.columns != null) {
+ oprot.writeFieldBegin(COLUMNS_FIELD_DESC);
+ {
+ oprot.writeMapBegin(new TMap(TType.STRING, TType.STRUCT, this.columns.size()));
+ for (Map.Entry<byte[], TCell> _iter8 : this.columns.entrySet()) {
+ oprot.writeBinary(_iter8.getKey());
+ _iter8.getValue().write(oprot);
+ }
+ oprot.writeMapEnd();
+ }
+ oprot.writeFieldEnd();
+ }
+ oprot.writeFieldStop();
+ oprot.writeStructEnd();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder("TRowResult(");
+ boolean first = true;
+
+ sb.append("row:");
+ if (this.row == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.row);
+ }
+ first = false;
+ if (!first) sb.append(", ");
+ sb.append("columns:");
+ if (this.columns == null) {
+ sb.append("null");
+ } else {
+ sb.append(this.columns);
+ }
+ first = false;
+ sb.append(")");
+ return sb.toString();
+ }
+
+ public void validate() throws TException {
+ // check for required fields
+ // check that fields of type enum have valid values
+ }
+
+}
+
diff --git a/src/java/org/apache/hadoop/hbase/thrift/package.html b/src/java/org/apache/hadoop/hbase/thrift/package.html
new file mode 100644
index 0000000..71d669d
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/thrift/package.html
@@ -0,0 +1,78 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
+<html>
+
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<head />
+<body bgcolor="white">
+Provides an HBase <a href="http://developers.facebook.com/thrift/">Thrift</a>
+service.
+
+This directory contains a Thrift interface definition file for an Hbase RPC
+service and a Java server implementation.
+
+<h2><a name="whatisthrift">What is Thrift?</a></h2>
+
+<p>"Thrift is a software framework for scalable cross-language services
+development. It combines a powerful software stack with a code generation
+engine to build services that work efficiently and seamlessly between C++,
+Java, Python, PHP, and Ruby. Thrift was developed at Facebook, and we are now
+releasing it as open source." For additional information, see
+http://developers.facebook.com/thrift/. Facebook has announced its intent
+to migrate Thrift into the Apache Incubator.
+</p>
+
+<h2><a name="description">Description</a></h2>
+
+<p>The <a href="generated/Hbase.Iface.html">Hbase API</a> is defined in the
+file Hbase.thrift. A server-side implementation of the API is in {@link
+org.apache.hadoop.hbase.thrift.ThriftServer}. The generated interfaces,
+types, and RPC utility files are checked into SVN under the {@link
+org.apache.hadoop.hbase.thrift.generated} directory.
+
+</p>
+
+<p>The files were generated by running the commands:
+<pre>
+ thrift -strict --gen java Hbase.thrift
+ mv gen-java/org/apache/hadoop/hbase/thrift/generated .
+ rm -rf gen-java
+</pre>
+</p>
+
+<p>The 'thrift' binary is the Thrift compiler, and it is distributed as part
+of the Thrift package, along with language-specific runtime libraries. A
+version of the Java runtime is checked into SVN under the hbase/lib directory.
+</p>
+
+<p>To start ThriftServer, use:
+<pre>
+ ./bin/hbase-daemon.sh start thrift [--port=PORT]
+</pre>
+The default port is 9090.
+</p>
+
+<p>To stop, use:
+<pre>
+ ./bin/hbase-daemon.sh stop thrift
+</pre>
+</p>
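+
+<p>Once the server is running, a Thrift client can talk to it on the same port
+using the classes generated from Hbase.thrift. The snippet below is a minimal
+sketch only: it assumes the generated <code>Hbase.Client</code> class and its
+<code>getTableNames</code> method, together with the libthrift
+<code>TSocket</code> and <code>TBinaryProtocol</code> classes shipped under
+the hbase/lib directory.
+<pre>
+  // Sketch: connect to the Thrift gateway on the default port and list tables.
+  TTransport transport = new TSocket("localhost", 9090);
+  TProtocol protocol = new TBinaryProtocol(transport);
+  Hbase.Client client = new Hbase.Client(protocol);
+  transport.open();
+  try {
+    for (byte[] table : client.getTableNames()) {  // may throw IOError/TException
+      System.out.println(new String(table));
+    }
+  } finally {
+    transport.close();
+  }
+</pre>
+</p>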
+</body>
+</html>
diff --git a/src/java/org/apache/hadoop/hbase/util/Base64.java b/src/java/org/apache/hadoop/hbase/util/Base64.java
new file mode 100644
index 0000000..867af77
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/util/Base64.java
@@ -0,0 +1,1638 @@
+/**
+ * Encodes and decodes to and from Base64 notation.
+ *
+ * <p>
+ * Homepage: <a href="http://iharder.net/base64">http://iharder.net/base64</a>.
+ * </p>
+ *
+ * <p>
+ * Change Log:
+ * </p>
+ * <ul>
+ * <li>v2.2.1 - Fixed bug using URL_SAFE and ORDERED encodings. Fixed bug
+ * when using very small files (~< 40 bytes).</li>
+ * <li>v2.2 - Added some helper methods for encoding/decoding directly from
+ * one file to the next. Also added a main() method to support command
+ * line encoding/decoding from one file to the next. Also added these
+ * Base64 dialects:
+ * <ol>
+ * <li>The default is RFC3548 format.</li>
+ * <li>Using Base64.URLSAFE generates URL and file name friendly format as
+ * described in Section 4 of RFC3548.
+ * http://www.faqs.org/rfcs/rfc3548.html</li>
+ * <li>Using Base64.ORDERED generates URL and file name friendly format
+ * that preserves lexical ordering as described in
+ * http://www.faqs.org/qa/rfcc-1940.html</li>
+ * </ol>
+ * <p>
+ * Special thanks to Jim Kellerman at <a href="http://www.powerset.com/">
+ * http://www.powerset.com/</a> for contributing the new Base64 dialects.
+ * </li>
+ *
+ * <li>v2.1 - Cleaned up javadoc comments and unused variables and methods.
+ * Added some convenience methods for reading and writing to and from files.
+ * </li>
+ * <li>v2.0.2 - Now specifies UTF-8 encoding in places where the code fails on
+ * systems with other encodings (like EBCDIC).</li>
+ * <li>v2.0.1 - Fixed an error when decoding a single byte, that is, when the
+ * encoded data was a single byte.</li>
+ * <li>v2.0 - I got rid of methods that used booleans to set options. Now
+ * everything is more consolidated and cleaner. The code now detects when
+ * data that's being decoded is gzip-compressed and will decompress it
+ * automatically. Generally things are cleaner. You'll probably have to
+ * change some method calls that you were making to support the new options
+ * format (<tt>int</tt>s that you "OR" together).</li>
+ * <li>v1.5.1 - Fixed bug when decompressing and decoding to a byte[] using
+ * <tt>decode( String s, boolean gzipCompressed )</tt>. Added the ability to
+ * "suspend" encoding in the Output Stream so you can turn on and off the
+ * encoding if you need to embed base64 data in an otherwise "normal" stream
+ * (like an XML file).</li>
+ * <li>v1.5 - Output stream passes on flush() command but doesn't do anything
+ * itself. This helps when using GZIP streams. Added the ability to
+ * GZip-compress objects before encoding them.</li>
+ * <li>v1.4 - Added helper methods to read/write files.</li>
+ * <li>v1.3.6 - Fixed OutputStream.flush() so that 'position' is reset.</li>
+ * <li>v1.3.5 - Added flag to turn on and off line breaks. Fixed bug in input
+ * stream where last buffer being read, if not completely full, was not
+ * returned.</li>
+ * <li>v1.3.4 - Fixed when "improperly padded stream" error was thrown at the
+ * wrong time.</li>
+ * <li>v1.3.3 - Fixed I/O streams which were totally messed up.</li>
+ * </ul>
+ *
+ * <p>
+ * I am placing this code in the Public Domain. Do with it as you will. This
+ * software comes with no guarantees or warranties but with plenty of
+ * well-wishing instead!
+ * <p>
+ * Please visit <a href="http://iharder.net/base64">http://iharder.net/base64</a>
+ * periodically to check for updates or to contribute improvements.
+ * <p>
+ * author: Robert Harder, rob@iharder.net
+ * <br>
+ * version: 2.2.1
+ */
+
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import java.io.BufferedInputStream;
+import java.io.BufferedOutputStream;
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileOutputStream;
+import java.io.FilterInputStream;
+import java.io.FilterOutputStream;
+import java.io.InputStream;
+import java.io.IOException;
+import java.io.ObjectInputStream;
+import java.io.ObjectOutputStream;
+import java.io.OutputStream;
+import java.io.Serializable;
+import java.io.UnsupportedEncodingException;
+import java.lang.ClassNotFoundException;
+import java.util.zip.GZIPInputStream;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+/**
+ * Encodes and decodes to and from Base64 notation.
+ */
+public class Base64 {
+
+ /* ******** P U B L I C F I E L D S ******** */
+
+ /** No options specified. Value is zero. */
+ public final static int NO_OPTIONS = 0;
+
+ /** Specify encoding. */
+ public final static int ENCODE = 1;
+
+ /** Specify decoding. */
+ public final static int DECODE = 0;
+
+ /** Specify that data should be gzip-compressed. */
+ public final static int GZIP = 2;
+
+ /** Don't break lines when encoding (violates strict Base64 specification) */
+ public final static int DONT_BREAK_LINES = 8;
+
+ /**
+ * Encode using Base64-like encoding that is URL and Filename safe as
+ * described in Section 4 of RFC3548:
+ * <a href="http://www.faqs.org/rfcs/rfc3548.html">
+ * http://www.faqs.org/rfcs/rfc3548.html</a>.
+ * It is important to note that data encoded this way is <em>not</em>
+ * officially valid Base64, or at the very least should not be called Base64
+ * without also specifying that it was encoded using the URL and
+ * Filename safe dialect.
+ */
+ public final static int URL_SAFE = 16;
+
+ /**
+ * Encode using the special "ordered" dialect of Base64 described here:
+ * <a href="http://www.faqs.org/qa/rfcc-1940.html">
+ * http://www.faqs.org/qa/rfcc-1940.html</a>.
+ */
+ public final static int ORDERED = 32;
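+
+ /*
+ * A minimal usage sketch for the option flags above. They are plain int flags
+ * that callers OR together and pass to the encode entry points. The
+ * encodeBytes(byte[]) method appears below; the encodeBytes(byte[], int)
+ * overload is assumed to be defined later in this class, as in other versions
+ * of this Base64 code.
+ *
+ * byte[] data = "row-key".getBytes(); // example input, any byte[] works
+ * String standard = Base64.encodeBytes(data);
+ * String urlSafe = Base64.encodeBytes(data, Base64.URL_SAFE | Base64.DONT_BREAK_LINES);
+ */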
+
+ /* ******** P R I V A T E F I E L D S ******** */
+
+ private static final Log LOG = LogFactory.getLog(Base64.class);
+
+ /** Maximum line length (76) of Base64 output. */
+ private final static int MAX_LINE_LENGTH = 76;
+
+ /** The equals sign (=) as a byte. */
+ private final static byte EQUALS_SIGN = (byte) '=';
+
+ /** The new line character (\n) as a byte. */
+ private final static byte NEW_LINE = (byte) '\n';
+
+ /** Preferred encoding. */
+ private final static String PREFERRED_ENCODING = "UTF-8";
+
+ private final static byte WHITE_SPACE_ENC = -5; // Indicates white space
+ private final static byte EQUALS_SIGN_ENC = -1; // Indicates equals sign
+
+ /* ******** S T A N D A R D B A S E 6 4 A L P H A B E T ******** */
+
+ /** The 64 valid Base64 values. */
+
+ /*
+ * Host platform may be something funny like EBCDIC, so we hardcode these
+ * values.
+ */
+ private final static byte[] _STANDARD_ALPHABET = { (byte) 'A', (byte) 'B',
+ (byte) 'C', (byte) 'D', (byte) 'E', (byte) 'F', (byte) 'G', (byte) 'H',
+ (byte) 'I', (byte) 'J', (byte) 'K', (byte) 'L', (byte) 'M', (byte) 'N',
+ (byte) 'O', (byte) 'P', (byte) 'Q', (byte) 'R', (byte) 'S', (byte) 'T',
+ (byte) 'U', (byte) 'V', (byte) 'W', (byte) 'X', (byte) 'Y', (byte) 'Z',
+ (byte) 'a', (byte) 'b', (byte) 'c', (byte) 'd', (byte) 'e', (byte) 'f',
+ (byte) 'g', (byte) 'h', (byte) 'i', (byte) 'j', (byte) 'k', (byte) 'l',
+ (byte) 'm', (byte) 'n', (byte) 'o', (byte) 'p', (byte) 'q', (byte) 'r',
+ (byte) 's', (byte) 't', (byte) 'u', (byte) 'v', (byte) 'w', (byte) 'x',
+ (byte) 'y', (byte) 'z', (byte) '0', (byte) '1', (byte) '2', (byte) '3',
+ (byte) '4', (byte) '5', (byte) '6', (byte) '7', (byte) '8', (byte) '9',
+ (byte) '+', (byte) '/'
+ };
+
+ /**
+ * Translates a Base64 value to either its 6-bit reconstruction value or a
+ * negative number indicating some other meaning.
+ */
+ private final static byte[] _STANDARD_DECODABET = {
+ -9, -9, -9, -9, -9, -9, -9, -9, -9, // Decimal 0 - 8
+ -5, -5, // Whitespace: Tab, Newline
+ -9, -9, // Decimal 11 - 12
+ -5, // Whitespace: Return
+ -9, -9, -9, -9, -9, -9, -9, -9, -9, -9, -9, -9, -9, // Decimal 14 - 26
+ -9, -9, -9, -9, -9, // Decimal 27 - 31
+ -5, // Whitespace: Space
+ -9, -9, -9, -9, -9, -9, -9, -9, -9, -9, // Decimal 33 - 42
+ 62, // Plus sign at decimal 43
+ -9, -9, -9, // Decimal 44 - 46
+ 63, // Slash at decimal 47
+ 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, // Numbers zero - nine
+ -9, -9, -9, // Decimal 58 - 60
+ -1, // Equals sign at decimal 61
+ -9, -9, -9, // Decimal 62 - 64
+ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, // Letters 'A' - 'N'
+ 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, // Letters 'O' - 'Z'
+ -9, -9, -9, -9, -9, -9, // Decimal 91 - 96
+ 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, // Letters 'a' - 'm'
+ 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, // Letters 'n' -'z'
+ -9, -9, -9, -9 // Decimal 123 - 126
+ };
+
+ /* ******** U R L S A F E B A S E 6 4 A L P H A B E T ******** */
+
+ /**
+ * Used in the URL and Filename safe dialect described in Section 4 of RFC3548
+ * <a href="http://www.faqs.org/rfcs/rfc3548.html">
+ * http://www.faqs.org/rfcs/rfc3548.html</a>.
+ * Notice that the last two bytes become "hyphen" and "underscore" instead of
+ * "plus" and "slash."
+ */
+ private final static byte[] _URL_SAFE_ALPHABET = { (byte) 'A', (byte) 'B',
+ (byte) 'C', (byte) 'D', (byte) 'E', (byte) 'F', (byte) 'G', (byte) 'H',
+ (byte) 'I', (byte) 'J', (byte) 'K', (byte) 'L', (byte) 'M', (byte) 'N',
+ (byte) 'O', (byte) 'P', (byte) 'Q', (byte) 'R', (byte) 'S', (byte) 'T',
+ (byte) 'U', (byte) 'V', (byte) 'W', (byte) 'X', (byte) 'Y', (byte) 'Z',
+ (byte) 'a', (byte) 'b', (byte) 'c', (byte) 'd', (byte) 'e', (byte) 'f',
+ (byte) 'g', (byte) 'h', (byte) 'i', (byte) 'j', (byte) 'k', (byte) 'l',
+ (byte) 'm', (byte) 'n', (byte) 'o', (byte) 'p', (byte) 'q', (byte) 'r',
+ (byte) 's', (byte) 't', (byte) 'u', (byte) 'v', (byte) 'w', (byte) 'x',
+ (byte) 'y', (byte) 'z', (byte) '0', (byte) '1', (byte) '2', (byte) '3',
+ (byte) '4', (byte) '5', (byte) '6', (byte) '7', (byte) '8', (byte) '9',
+ (byte) '-', (byte) '_'
+ };
+
+ /**
+ * Used in decoding URL and Filename safe dialects of Base64.
+ */
+ private final static byte[] _URL_SAFE_DECODABET = {
+ -9, -9, -9, -9, -9, -9, -9, -9, -9, // Decimal 0 - 8
+ -5, -5, // Whitespace: Tab, Newline
+ -9, -9, // Decimal 11 - 12
+ -5, // Whitespace: Return
+ -9, -9, -9, -9, -9, -9, -9, -9, -9, -9, -9, -9, -9, // Decimal 14 - 26
+ -9, -9, -9, -9, -9, // Decimal 27 - 31
+ -5, // Whitespace: Space
+ -9, -9, -9, -9, -9, -9, -9, -9, -9, -9, // Decimal 33 - 42
+ -9, // Plus sign at 43
+ -9, // Decimal 44
+ 62, // Minus sign at 45
+ -9, // Decimal 46
+ -9, // Slash at 47
+ 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, // Numbers 0 - 9
+ -9, -9, -9, // Decimal 58 - 60
+ -1, // Equals sign at 61
+ -9, -9, -9, // Decimal 62 - 64
+ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, // Letters 'A' - 'N'
+ 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, // Letters 'O' - 'Z'
+ -9, -9, -9, -9, // Decimal 91 - 94
+ 63, // Underscore at 95
+ -9, // Decimal 96
+ 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, // Letters 'a' - 'm'
+ 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, // Letters 'n' - 'z'
+ -9, -9, -9, -9 // Decimal 123 - 126
+ };
+
+ /* ******** O R D E R E D B A S E 6 4 A L P H A B E T ******** */
+
+ /**
+ * In addition to being URL and file name friendly, this encoding preserves
+ * the sort order of encoded values. Whatever is input, be it string or
+ * just an array of bytes, when you use this encoding, the encoded value sorts
+ * exactly the same as the input value. It is described in the RFC change
+ * request: <a href="http://www.faqs.org/qa/rfcc-1940.html">
+ * http://www.faqs.org/qa/rfcc-1940.html</a>.
+ *
+ * It replaces "plus" and "slash" with "hyphen" and "underscore" and
+ * rearranges the alphabet so that the characters are in their natural sort
+ * order.
+ */
+ private final static byte[] _ORDERED_ALPHABET = { (byte) '-', (byte) '0',
+ (byte) '1', (byte) '2', (byte) '3', (byte) '4', (byte) '5', (byte) '6',
+ (byte) '7', (byte) '8', (byte) '9', (byte) 'A', (byte) 'B', (byte) 'C',
+ (byte) 'D', (byte) 'E', (byte) 'F', (byte) 'G', (byte) 'H', (byte) 'I',
+ (byte) 'J', (byte) 'K', (byte) 'L', (byte) 'M', (byte) 'N', (byte) 'O',
+ (byte) 'P', (byte) 'Q', (byte) 'R', (byte) 'S', (byte) 'T', (byte) 'U',
+ (byte) 'V', (byte) 'W', (byte) 'X', (byte) 'Y', (byte) 'Z', (byte) '_',
+ (byte) 'a', (byte) 'b', (byte) 'c', (byte) 'd', (byte) 'e', (byte) 'f',
+ (byte) 'g', (byte) 'h', (byte) 'i', (byte) 'j', (byte) 'k', (byte) 'l',
+ (byte) 'm', (byte) 'n', (byte) 'o', (byte) 'p', (byte) 'q', (byte) 'r',
+ (byte) 's', (byte) 't', (byte) 'u', (byte) 'v', (byte) 'w', (byte) 'x',
+ (byte) 'y', (byte) 'z'
+ };
+
+ /**
+ * Used in decoding the "ordered" dialect of Base64.
+ */
+ private final static byte[] _ORDERED_DECODABET = {
+ -9, -9, -9, -9, -9, -9, -9, -9, -9, // Decimal 0 - 8
+ -5, -5, // Whitespace: Tab, Newline
+ -9, -9, // Decimal 11 - 12
+ -5, // Whitespace: Return
+ -9, -9, -9, -9, -9, -9, -9, -9, -9, -9, -9, -9, -9, // Decimal 14 - 26
+ -9, -9, -9, -9, -9, // Decimal 27 - 31
+ -5, // Whitespace: Space
+ -9, -9, -9, -9, -9, -9, -9, -9, -9, -9, // Decimal 33 - 42
+ -9, // Plus sign at 43
+ -9, // Decimal 44
+ 0, // Minus sign at 45
+ -9, // Decimal 46
+ -9, // Slash at decimal 47
+ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, // Numbers 0 - 9
+ -9, -9, -9, // Decimal 58 - 60
+ -1, // Equals sign at 61
+ -9, -9, -9, // Decimal 62 - 64
+ 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, // Letters 'A' - 'M'
+ 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, // Letters 'N' - 'Z'
+ -9, -9, -9, -9, // Decimal 91 - 94
+ 37, // Underscore at 95
+ -9, // Decimal 96
+ 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, // Letters 'a' - 'm'
+ 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, // Letters 'n' - 'z'
+ -9, -9, -9, -9 // Decimal 123 - 126
+ };
+
+ /* ******** D E T E R M I N E W H I C H A L P H A B E T ******** */
+
+ /**
+ * Returns one of the _SOMETHING_ALPHABET byte arrays depending on the options
+ * specified. It's possible, though silly, to specify ORDERED and URL_SAFE in
+ * which case one of them will be picked, though there is no guarantee as to
+ * which one will be picked.
+ */
+ protected final static byte[] getAlphabet(int options) {
+ if ((options & URL_SAFE) == URL_SAFE) {
+ return _URL_SAFE_ALPHABET;
+
+ } else if ((options & ORDERED) == ORDERED) {
+ return _ORDERED_ALPHABET;
+
+ } else {
+ return _STANDARD_ALPHABET;
+ }
+ } // end getAlphabet
+
+ /**
+ * Returns one of the _SOMETHING_DECODABET byte arrays depending on the
+ * options specified. It's possible, though silly, to specify ORDERED and
+ * URL_SAFE in which case one of them will be picked, though there is no
+ * guarantee as to which one will be picked.
+ */
+ protected final static byte[] getDecodabet(int options) {
+ if ((options & URL_SAFE) == URL_SAFE) {
+ return _URL_SAFE_DECODABET;
+
+ } else if ((options & ORDERED) == ORDERED) {
+ return _ORDERED_DECODABET;
+
+ } else {
+ return _STANDARD_DECODABET;
+ }
+ } // end getDecodabet
+
+ /** Defeats instantiation. */
+ private Base64() {}
+
+ /**
+ * Main program. Used for testing.
+ *
+ * Encodes or decodes two files from the command line
+ *
+ * @param args command arguments
+ */
+ public final static void main(String[] args) {
+ if (args.length < 3) {
+ usage("Not enough arguments.");
+
+ } else {
+ String flag = args[0];
+ String infile = args[1];
+ String outfile = args[2];
+ if (flag.equals("-e")) { // encode
+ encodeFileToFile(infile, outfile);
+
+ } else if (flag.equals("-d")) { // decode
+ decodeFileToFile(infile, outfile);
+
+ } else {
+ usage("Unknown flag: " + flag);
+ }
+ }
+ } // end main
+
+ /**
+ * Prints command line usage.
+ *
+ * @param msg A message to include with usage info.
+ */
+ private final static void usage(String msg) {
+ System.err.println(msg);
+ System.err.println("Usage: java Base64 -e|-d inputfile outputfile");
+ } // end usage
+
+ /* ******** E N C O D I N G M E T H O D S ******** */
+
+ /**
+ * Encodes up to the first three bytes of array <var>threeBytes</var> and
+ * returns a four-byte array in Base64 notation. The actual number of
+ * significant bytes in your array is given by <var>numSigBytes</var>. The
+ * array <var>threeBytes</var> need only be as big as <var>numSigBytes</var>.
+ * Code can reuse a byte array by passing a four-byte array as <var>b4</var>.
+ *
+ * @param b4 A reusable byte array to reduce array instantiation
+ * @param threeBytes the array to convert
+ * @param numSigBytes the number of significant bytes in your array
+ * @return four byte array in Base64 notation.
+ * @since 1.5.1
+ */
+ protected static byte[] encode3to4(byte[] b4, byte[] threeBytes,
+ int numSigBytes, int options) {
+ encode3to4(threeBytes, 0, numSigBytes, b4, 0, options);
+ return b4;
+ } // end encode3to4
+
+ /**
+ * Encodes up to three bytes of the array <var>source</var> and writes the
+ * resulting four Base64 bytes to <var>destination</var>. The source and
+ * destination arrays can be manipulated anywhere along their length by
+ * specifying <var>srcOffset</var> and <var>destOffset</var>. This method
+ * does not check to make sure your arrays are large enough to accommodate
+ * <var>srcOffset</var> + 3 for the <var>source</var> array or
+ * <var>destOffset</var> + 4 for the <var>destination</var> array. The
+ * actual number of significant bytes in your array is given by
+ * <var>numSigBytes</var>.
+ * <p>
+ * This is the lowest level of the encoding methods with all possible
+ * parameters.
+ *
+ * @param source the array to convert
+ * @param srcOffset the index where conversion begins
+ * @param numSigBytes the number of significant bytes in your array
+ * @param destination the array to hold the conversion
+ * @param destOffset the index where output will be put
+ * @return the <var>destination</var> array
+ * @since 1.3
+ */
+ protected static byte[] encode3to4(byte[] source, int srcOffset,
+ int numSigBytes, byte[] destination, int destOffset, int options) {
+ byte[] ALPHABET = getAlphabet(options);
+
+ // 1 2 3
+ // 01234567890123456789012345678901 Bit position
+ // --------000000001111111122222222 Array position from threeBytes
+ // --------| || || || | Six bit groups to index ALPHABET
+ // >>18 >>12 >> 6 >> 0 Right shift necessary
+ // 0x3f 0x3f 0x3f Additional AND
+
+ // Create buffer with zero-padding if there are only one or two
+ // significant bytes passed in the array.
+ // We have to shift left 24 in order to flush out the 1's that appear
+ // when Java treats a value as negative that is cast from a byte to an int.
+ int inBuff =
+ (numSigBytes > 0 ? ((source[srcOffset] << 24) >>> 8) : 0)
+ | (numSigBytes > 1 ? ((source[srcOffset + 1] << 24) >>> 16) : 0)
+ | (numSigBytes > 2 ? ((source[srcOffset + 2] << 24) >>> 24) : 0);
+
+ switch (numSigBytes) {
+ case 3:
+ destination[destOffset] = ALPHABET[(inBuff >>> 18)];
+ destination[destOffset + 1] = ALPHABET[(inBuff >>> 12) & 0x3f];
+ destination[destOffset + 2] = ALPHABET[(inBuff >>> 6) & 0x3f];
+ destination[destOffset + 3] = ALPHABET[(inBuff) & 0x3f];
+ return destination;
+
+ case 2:
+ destination[destOffset] = ALPHABET[(inBuff >>> 18)];
+ destination[destOffset + 1] = ALPHABET[(inBuff >>> 12) & 0x3f];
+ destination[destOffset + 2] = ALPHABET[(inBuff >>> 6) & 0x3f];
+ destination[destOffset + 3] = EQUALS_SIGN;
+ return destination;
+
+ case 1:
+ destination[destOffset] = ALPHABET[(inBuff >>> 18)];
+ destination[destOffset + 1] = ALPHABET[(inBuff >>> 12) & 0x3f];
+ destination[destOffset + 2] = EQUALS_SIGN;
+ destination[destOffset + 3] = EQUALS_SIGN;
+ return destination;
+
+ default:
+ return destination;
+ } // end switch
+ } // end encode3to4
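+
+ // Illustrative sketch (added comment, not original code): the bit packing above
+ // maps the three ASCII bytes of "Man" (0x4D 0x61 0x6E) to the four Base64
+ // characters "TWFu"; with only two significant bytes ("Ma") the final output
+ // character becomes the '=' pad, giving "TWE=". For example:
+ //
+ //   byte[] four = new byte[4];
+ //   encode3to4(four, "Man".getBytes("US-ASCII"), 3, NO_OPTIONS); // 'T','W','F','u'
+ //   encode3to4(four, "Ma".getBytes("US-ASCII"), 2, NO_OPTIONS);  // 'T','W','E','='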
+
+ /**
+ * Serializes an object and returns the Base64-encoded version of that
+ * serialized object. If the object cannot be serialized or there is another
+ * error, the method will return <tt>null</tt>. The object is not
+ * GZip-compressed before being encoded.
+ *
+ * @param serializableObject The object to encode
+ * @return The Base64-encoded object
+ * @since 1.4
+ */
+ public static String encodeObject(Serializable serializableObject) {
+ return encodeObject(serializableObject, NO_OPTIONS);
+ } // end encodeObject
+
+ /**
+ * Serializes an object and returns the Base64-encoded version of that
+ * serialized object. If the object cannot be serialized or there is another
+ * error, the method will return <tt>null</tt>.
+ * <p>
+ * Valid options:
+ * <ul>
+ * <li>GZIP: gzip-compresses object before encoding it.</li>
+ * <li>DONT_BREAK_LINES: don't break lines at 76 characters. <i>Note:
+ * Technically, this makes your encoding non-compliant.</i></li>
+ * </ul>
+ * <p>
+ * Example: <code>encodeObject( myObj, Base64.GZIP )</code> or
+ * <p>
+ * Example:
+ * <code>encodeObject( myObj, Base64.GZIP | Base64.DONT_BREAK_LINES )</code>
+ *
+ * @param serializableObject The object to encode
+ * @param options Specified options
+ * @see Base64#GZIP
+ * @see Base64#DONT_BREAK_LINES
+ * @return The Base64-encoded object
+ * @since 2.0
+ */
+ public static String encodeObject(Serializable serializableObject,
+ int options) {
+
+ ByteArrayOutputStream baos = new ByteArrayOutputStream();
+ OutputStream b64os = null;
+ ObjectOutputStream oos = null;
+ try {
+ // ObjectOutputStream -> (GZIP) -> Base64 -> ByteArrayOutputStream
+ b64os = new Base64OutputStream(baos, ENCODE | options);
+
+ oos = ((options & GZIP) == GZIP) ?
+ new ObjectOutputStream(new GZIPOutputStream(b64os)) :
+ new ObjectOutputStream(b64os);
+
+ oos.writeObject(serializableObject);
+ return new String(baos.toByteArray(), PREFERRED_ENCODING);
+
+ } catch (UnsupportedEncodingException uue) {
+ return new String(baos.toByteArray());
+
+ } catch (IOException e) {
+ LOG.error("error encoding object", e);
+ return null;
+
+ } finally {
+ if (oos != null) {
+ try {
+ oos.close();
+ } catch (Exception e) {
+ LOG.error("error closing ObjectOutputStream", e);
+ }
+ }
+ if (b64os != null) {
+ try {
+ b64os.close();
+ } catch (Exception e) {
+ LOG.error("error closing Base64OutputStream", e);
+ }
+ }
+ try {
+ baos.close();
+ } catch (Exception e) {
+ LOG.error("error closing ByteArrayOutputStream", e);
+ }
+ } // end finally
+ } // end encode
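+
+ // Usage sketch (illustrative only; "settings" is a hypothetical serializable map):
+ //
+ //   HashMap<String, String> settings = new HashMap<String, String>();
+ //   settings.put("key", "value");
+ //   String encoded = Base64.encodeObject(settings, Base64.GZIP);
+ //   // encoded now holds the gzip-compressed, Base64 text form of the map,
+ //   // or null if serialization failed.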
+
+ /**
+ * Encodes a byte array into Base64 notation. Does not GZip-compress data.
+ *
+ * @param source The data to convert
+ * @return encoded byte array
+ * @since 1.4
+ */
+ public static String encodeBytes(byte[] source) {
+ return encodeBytes(source, 0, source.length, NO_OPTIONS);
+ } // end encodeBytes
+
+ /**
+ * Encodes a byte array into Base64 notation.
+ * <p>
+ * Valid options:
+ * <ul>
+ * <li>GZIP: gzip-compresses object before encoding it.</li>
+ * <li>DONT_BREAK_LINES: don't break lines at 76 characters. <i>Note:
+ * Technically, this makes your encoding non-compliant.</i></li>
+ * </ul>
+ *
+ * <p>
+ * Example: <code>encodeBytes( myData, Base64.GZIP )</code> or
+ * <p>
+ * Example:
+ * <code>encodeBytes( myData, Base64.GZIP | Base64.DONT_BREAK_LINES )</code>
+ *
+ * @param source The data to convert
+ * @param options Specified options
+ * @see Base64#GZIP
+ * @see Base64#DONT_BREAK_LINES
+ * @see Base64#URL_SAFE
+ * @see Base64#ORDERED
+ * @return encoded byte array
+ * @since 2.0
+ */
+ public static String encodeBytes(byte[] source, int options) {
+ return encodeBytes(source, 0, source.length, options);
+ } // end encodeBytes
+
+ /**
+ * Encodes a byte array into Base64 notation. Does not GZip-compress data.
+ *
+ * @param source The data to convert
+ * @param off Offset in array where conversion should begin
+ * @param len Length of data to convert
+ * @return encoded byte array
+ * @since 1.4
+ */
+ public static String encodeBytes(byte[] source, int off, int len) {
+ return encodeBytes(source, off, len, NO_OPTIONS);
+ } // end encodeBytes
+
+ /**
+ * Encodes a byte array into Base64 notation.
+ * <p>
+ * Valid options:
+ * <ul>
+ * <li>GZIP: gzip-compresses object before encoding it.</li>
+ * <li>DONT_BREAK_LINES: don't break lines at 76 characters. <i>Note:
+ * Technically, this makes your encoding non-compliant.</i></li>
+ * </ul>
+ *
+ * <p>
+ * Example: <code>encodeBytes( myData, Base64.GZIP )</code> or
+ * <p>
+ * Example:
+ * <code>encodeBytes( myData, Base64.GZIP | Base64.DONT_BREAK_LINES )</code>
+ *
+ * @param source The data to convert
+ * @param off Offset in array where conversion should begin
+ * @param len Length of data to convert
+ * @param options Specified options
+ * @see Base64#GZIP
+ * @see Base64#DONT_BREAK_LINES
+ * @see Base64#URL_SAFE
+ * @see Base64#ORDERED
+ * @return encoded byte array
+ * @since 2.0
+ */
+ public static String encodeBytes(byte[] source, int off, int len, int options) {
+ if ((options & GZIP) == GZIP) { // Compress?
+ // GZip -> Base64 -> ByteArray
+ ByteArrayOutputStream baos = new ByteArrayOutputStream();
+ GZIPOutputStream gzos = null;
+
+ try {
+ gzos =
+ new GZIPOutputStream(new Base64OutputStream(baos, ENCODE | options));
+
+ gzos.write(source, off, len);
+ gzos.close();
+ gzos = null;
+ return new String(baos.toByteArray(), PREFERRED_ENCODING);
+
+ } catch (UnsupportedEncodingException uue) {
+ return new String(baos.toByteArray());
+
+ } catch (IOException e) {
+ LOG.error("error encoding byte array", e);
+ return null;
+
+ } finally {
+ if (gzos != null) {
+ try {
+ gzos.close();
+ } catch (Exception e) {
+ LOG.error("error closing GZIPOutputStream", e);
+ }
+ }
+ try {
+ baos.close();
+ } catch (Exception e) {
+ LOG.error("error closing ByteArrayOutputStream", e);
+ }
+ } // end finally
+
+ } // end Compress
+
+ // Don't compress. Better not to use streams at all, then.
+
+ boolean breakLines = ((options & DONT_BREAK_LINES) == 0);
+
+ int len43 = len * 4 / 3;
+ byte[] outBuff =
+ new byte[(len43) // Main 4:3
+ + ((len % 3) > 0 ? 4 : 0) // padding
+ + (breakLines ? (len43 / MAX_LINE_LENGTH) : 0)]; // New lines
+ int d = 0;
+ int e = 0;
+ int len2 = len - 2;
+ int lineLength = 0;
+ for (; d < len2; d += 3, e += 4) {
+ encode3to4(source, d + off, 3, outBuff, e, options);
+
+ lineLength += 4;
+ if (breakLines && lineLength == MAX_LINE_LENGTH) {
+ outBuff[e + 4] = NEW_LINE;
+ e++;
+ lineLength = 0;
+ } // end if: end of line
+ } // end for: each piece of array
+
+ if (d < len) {
+ encode3to4(source, d + off, len - d, outBuff, e, options);
+ e += 4;
+ } // end if: some padding needed
+
+ // Return value according to relevant encoding.
+ try {
+ return new String(outBuff, 0, e, PREFERRED_ENCODING);
+
+ } catch (UnsupportedEncodingException uue) {
+ return new String(outBuff, 0, e);
+ }
+ } // end encodeBytes
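+
+ // Round-trip sketch (added for illustration; the literal string is an assumption):
+ //
+ //   byte[] data = "hello".getBytes("UTF-8");
+ //   String b64 = Base64.encodeBytes(data);   // "aGVsbG8="
+ //   byte[] back = Base64.decode(b64);        // same bytes as data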
+
+ /* ******** D E C O D I N G M E T H O D S ******** */
+
+ /**
+ * Decodes four bytes from array <var>source</var> and writes the resulting
+ * bytes (up to three of them) to <var>destination</var>. The source and
+ * destination arrays can be manipulated anywhere along their length by
+ * specifying <var>srcOffset</var> and <var>destOffset</var>. This method
+ * does not check to make sure your arrays are large enough to accommodate
+ * <var>srcOffset</var> + 4 for the <var>source</var> array or
+ * <var>destOffset</var> + 3 for the <var>destination</var> array. This
+ * method returns the actual number of bytes that were converted from the
+ * Base64 encoding.
+ * <p>
+ * This is the lowest level of the decoding methods with all possible
+ * parameters.
+ * </p>
+ *
+ * @param source the array to convert
+ * @param srcOffset the index where conversion begins
+ * @param destination the array to hold the conversion
+ * @param destOffset the index where output will be put
+ * @param options
+ * @see Base64#URL_SAFE
+ * @see Base64#ORDERED
+ * @return the number of decoded bytes converted
+ * @since 1.3
+ */
+ protected static int decode4to3(byte[] source, int srcOffset,
+ byte[] destination, int destOffset, int options) {
+ byte[] DECODABET = getDecodabet(options);
+
+ if (source[srcOffset + 2] == EQUALS_SIGN) { // Example: Dk==
+ // Two ways to do the same thing. Don't know which way I like best.
+ // int outBuff = ( ( DECODABET[ source[ srcOffset ] ] << 24 ) >>> 6 )
+ // | ( ( DECODABET[ source[ srcOffset + 1] ] << 24 ) >>> 12 );
+ int outBuff =
+ ((DECODABET[source[srcOffset]] & 0xFF) << 18)
+ | ((DECODABET[source[srcOffset + 1]] & 0xFF) << 12);
+
+ destination[destOffset] = (byte) (outBuff >>> 16);
+ return 1;
+
+ } else if (source[srcOffset + 3] == EQUALS_SIGN) { // Example: DkL=
+ // Two ways to do the same thing. Don't know which way I like best.
+ // int outBuff = ( ( DECODABET[ source[ srcOffset ] ] << 24 ) >>> 6 )
+ // | ( ( DECODABET[ source[ srcOffset + 1 ] ] << 24 ) >>> 12 )
+ // | ( ( DECODABET[ source[ srcOffset + 2 ] ] << 24 ) >>> 18 );
+ int outBuff =
+ ((DECODABET[source[srcOffset]] & 0xFF) << 18)
+ | ((DECODABET[source[srcOffset + 1]] & 0xFF) << 12)
+ | ((DECODABET[source[srcOffset + 2]] & 0xFF) << 6);
+
+ destination[destOffset] = (byte) (outBuff >>> 16);
+ destination[destOffset + 1] = (byte) (outBuff >>> 8);
+ return 2;
+
+ } else { // Example: DkLE
+ try {
+ // Two ways to do the same thing. Don't know which way I like best.
+ // int outBuff = ( ( DECODABET[ source[ srcOffset ] ] << 24 ) >>> 6 )
+ // | ( ( DECODABET[ source[ srcOffset + 1 ] ] << 24 ) >>> 12 )
+ // | ( ( DECODABET[ source[ srcOffset + 2 ] ] << 24 ) >>> 18 )
+ // | ( ( DECODABET[ source[ srcOffset + 3 ] ] << 24 ) >>> 24 );
+ int outBuff =
+ ((DECODABET[source[srcOffset]] & 0xFF) << 18)
+ | ((DECODABET[source[srcOffset + 1]] & 0xFF) << 12)
+ | ((DECODABET[source[srcOffset + 2]] & 0xFF) << 6)
+ | ((DECODABET[source[srcOffset + 3]] & 0xFF));
+
+ destination[destOffset] = (byte) (outBuff >> 16);
+ destination[destOffset + 1] = (byte) (outBuff >> 8);
+ destination[destOffset + 2] = (byte) (outBuff);
+
+ return 3;
+
+ } catch (Exception e) {
+ LOG.error("error decoding bytes at " + source[srcOffset] + ": " +
+ (DECODABET[source[srcOffset]]) + ", " + source[srcOffset + 1] +
+ ": " + (DECODABET[source[srcOffset + 1]]) + ", " +
+ source[srcOffset + 2] + ": " + (DECODABET[source[srcOffset + 2]]) +
+ ", " + source[srcOffset + 3] + ": " +
+ (DECODABET[source[srcOffset + 3]]), e);
+ return -1;
+ } // end catch
+ }
+ } // end decodeToBytes
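+
+ // Padding sketch (comment only): the number of '=' signs in the final quartet
+ // determines how many bytes decode4to3 emits. Decoding the quartet "TQ==" yields
+ // the single byte 0x4D ("M") and returns 1, "TWE=" yields 0x4D 0x61 ("Ma") and
+ // returns 2, and an unpadded quartet such as "TWFu" yields all three bytes of
+ // "Man" and returns 3.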
+
+ /**
+ * Very low-level access to decoding ASCII characters in the form of a byte
+ * array. Does not support automatic gunzipping or any other "fancy"
+ * features.
+ *
+ * @param source The Base64 encoded data
+ * @param off The offset of where to begin decoding
+ * @param len The length of characters to decode
+ * @param options
+ * @see Base64#URL_SAFE
+ * @see Base64#ORDERED
+ * @return decoded data
+ * @since 1.3
+ */
+ public static byte[] decode(byte[] source, int off, int len, int options) {
+ byte[] DECODABET = getDecodabet(options);
+
+ int len34 = len * 3 / 4;
+ byte[] outBuff = new byte[len34]; // Upper limit on size of output
+ int outBuffPosn = 0;
+
+ byte[] b4 = new byte[4];
+ int b4Posn = 0;
+ int i = 0;
+ byte sbiCrop = 0;
+ byte sbiDecode = 0;
+ for (i = off; i < off + len; i++) {
+ sbiCrop = (byte) (source[i] & 0x7f); // Only the low seven bits
+ sbiDecode = DECODABET[sbiCrop];
+
+ if (sbiDecode >= WHITE_SPACE_ENC) { // Whitespace, Equals or better
+ if (sbiDecode >= EQUALS_SIGN_ENC) { // Equals or better
+ b4[b4Posn++] = sbiCrop;
+ if (b4Posn > 3) {
+ outBuffPosn += decode4to3(b4, 0, outBuff, outBuffPosn, options);
+ b4Posn = 0;
+
+ // If that was the equals sign, break out of 'for' loop
+ if (sbiCrop == EQUALS_SIGN)
+ break;
+ } // end if: quartet built
+ } // end if: equals sign or better
+ } else {
+ LOG.error("Bad Base64 input character at " + i + ": " + source[i] +
+ "(decimal)");
+ return null;
+ } // end else:
+ } // each input character
+
+ byte[] out = new byte[outBuffPosn];
+ System.arraycopy(outBuff, 0, out, 0, outBuffPosn);
+ return out;
+ } // end decode
+
+ /**
+ * Decodes data from Base64 notation, automatically detecting gzip-compressed
+ * data and decompressing it.
+ *
+ * @param s the string to decode
+ * @return the decoded data
+ * @since 1.4
+ */
+ public static byte[] decode(String s) {
+ return decode(s, NO_OPTIONS);
+ }
+
+ /**
+ * Decodes data from Base64 notation, automatically detecting gzip-compressed
+ * data and decompressing it.
+ *
+ * @param s the string to decode
+ * @param options
+ * @see Base64#URL_SAFE
+ * @see Base64#ORDERED
+ * @return the decoded data
+ * @since 1.4
+ */
+ public static byte[] decode(String s, int options) {
+ byte[] bytes = null;
+ try {
+ bytes = s.getBytes(PREFERRED_ENCODING);
+
+ } catch (UnsupportedEncodingException uee) {
+ bytes = s.getBytes();
+ } // end catch
+
+ // Decode
+
+ bytes = decode(bytes, 0, bytes.length, options);
+
+ // Check to see if it's gzip-compressed
+ // GZIP Magic Two-Byte Number: 0x8b1f (35615)
+
+ if (bytes != null && bytes.length >= 4) {
+ int head = (bytes[0] & 0xff) | ((bytes[1] << 8) & 0xff00);
+ if (GZIPInputStream.GZIP_MAGIC == head) {
+ GZIPInputStream gzis = null;
+ ByteArrayOutputStream baos = new ByteArrayOutputStream();
+ try {
+ gzis = new GZIPInputStream(new ByteArrayInputStream(bytes));
+
+ byte[] buffer = new byte[2048];
+ for (int length = 0; (length = gzis.read(buffer)) >= 0; ) {
+ baos.write(buffer, 0, length);
+ } // end while: reading input
+
+ // No error? Get new bytes.
+ bytes = baos.toByteArray();
+
+ } catch (IOException e) {
+ // Just return originally-decoded bytes
+
+ } finally {
+ try {
+ baos.close();
+ } catch (Exception e) {
+ LOG.error("error closing ByteArrayOutputStream", e);
+ }
+ if (gzis != null) {
+ try {
+ gzis.close();
+ } catch (Exception e) {
+ LOG.error("error closing GZIPInputStream", e);
+ }
+ }
+ } // end finally
+ } // end if: gzipped
+ } // end if: bytes.length >= 4
+
+ return bytes;
+ } // end decode
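+
+ // Sketch of the gzip auto-detection above (illustrative values, not original code):
+ //
+ //   byte[] big = new byte[4096];                        // some compressible data (assumption)
+ //   String enc = Base64.encodeBytes(big, Base64.GZIP);  // gzip, then Base64
+ //   byte[] plain = Base64.decode(enc);                  // magic 0x8b1f detected,
+ //                                                       // bytes are gunzipped back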
+
+ /**
+ * Attempts to decode Base64 data and deserialize a Java Object within.
+ * Returns <tt>null</tt> if there was an error.
+ *
+ * @param encodedObject The Base64 data to decode
+ * @return The decoded and deserialized object
+ * @since 1.5
+ */
+ public static Object decodeToObject(String encodedObject) {
+ // Decode and gunzip if necessary
+ byte[] objBytes = decode(encodedObject);
+
+ Object obj = null;
+ ObjectInputStream ois = null;
+ try {
+ ois = new ObjectInputStream(new ByteArrayInputStream(objBytes));
+ obj = ois.readObject();
+
+ } catch (IOException e) {
+ LOG.error("error decoding object", e);
+
+ } catch (ClassNotFoundException e) {
+ LOG.error("error decoding object", e);
+
+ } finally {
+ if (ois != null) {
+ try {
+ ois.close();
+ } catch (Exception e) {
+ LOG.error("error closing ObjectInputStream", e);
+ }
+ }
+ } // end finally
+
+ return obj;
+ } // end decodeObject
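+
+ // Companion sketch (illustrative; "settings" is a hypothetical serializable map,
+ // e.g. the one shown in the encodeObject example above):
+ //
+ //   String encoded = Base64.encodeObject(settings);               // no compression
+ //   HashMap<String, String> copy =
+ //       (HashMap<String, String>) Base64.decodeToObject(encoded); // unchecked cast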
+
+ /**
+ * Convenience method for encoding data to a file.
+ *
+ * @param dataToEncode byte array of data to encode in base64 form
+ * @param filename Filename for saving encoded data
+ * @return <tt>true</tt> if successful, <tt>false</tt> otherwise
+ *
+ * @since 2.1
+ */
+ public static boolean encodeToFile(byte[] dataToEncode, String filename) {
+ boolean success = false;
+ Base64OutputStream bos = null;
+ try {
+ bos = new Base64OutputStream(new FileOutputStream(filename), ENCODE);
+ bos.write(dataToEncode);
+ success = true;
+
+ } catch (IOException e) {
+ LOG.error("error encoding file: " + filename, e);
+ success = false;
+
+ } finally {
+ if (bos != null) {
+ try {
+ bos.close();
+ } catch (Exception e) {
+ LOG.error("error closing Base64OutputStream", e);
+ }
+ }
+ } // end finally
+
+ return success;
+ } // end encodeToFile
+
+ /**
+ * Convenience method for decoding data to a file.
+ *
+ * @param dataToDecode Base64-encoded data as a string
+ * @param filename Filename for saving decoded data
+ * @return <tt>true</tt> if successful, <tt>false</tt> otherwise
+ *
+ * @since 2.1
+ */
+ public static boolean decodeToFile(String dataToDecode, String filename) {
+ boolean success = false;
+ Base64OutputStream bos = null;
+ try {
+ bos = new Base64OutputStream(new FileOutputStream(filename), DECODE);
+ bos.write(dataToDecode.getBytes(PREFERRED_ENCODING));
+ success = true;
+
+ } catch (IOException e) {
+ LOG.error("error decoding to file: " + filename, e);
+ success = false;
+
+ } finally {
+ if (bos != null) {
+ try {
+ bos.close();
+ } catch (Exception e) {
+ LOG.error("error closing Base64OutputStream", e);
+ }
+ }
+ } // end finally
+
+ return success;
+ } // end decodeToFile
+
+ /**
+ * Convenience method for reading a base64-encoded file and decoding it.
+ *
+ * @param filename Filename for reading encoded data
+ * @return decoded byte array or null if unsuccessful
+ *
+ * @since 2.1
+ */
+ public static byte[] decodeFromFile(String filename) {
+ byte[] decodedData = null;
+ Base64InputStream bis = null;
+ try {
+ File file = new File(filename);
+ byte[] buffer = null;
+
+ // Check the size of file
+ if (file.length() > Integer.MAX_VALUE) {
+ LOG.fatal("File is too big for this convenience method (" +
+ file.length() + " bytes).");
+ return null;
+ } // end if: file too big for int index
+
+ buffer = new byte[(int) file.length()];
+
+ // Open a stream
+
+ bis = new Base64InputStream(new BufferedInputStream(
+ new FileInputStream(file)), DECODE);
+
+ // Read until done
+
+ int length = 0;
+ for (int numBytes = 0; (numBytes = bis.read(buffer, length, 4096)) >= 0; ) {
+ length += numBytes;
+ }
+
+ // Save in a variable to return
+
+ decodedData = new byte[length];
+ System.arraycopy(buffer, 0, decodedData, 0, length);
+
+ } catch (IOException e) {
+ LOG.error("Error decoding from file " + filename, e);
+
+ } finally {
+ if (bis != null) {
+ try {
+ bis.close();
+ } catch (Exception e) {
+ LOG.error("error closing Base64InputStream", e);
+ }
+ }
+ } // end finally
+
+ return decodedData;
+ } // end decodeFromFile
+
+ /**
+ * Convenience method for reading a binary file and base64-encoding it.
+ *
+ * @param filename Filename for reading binary data
+ * @return base64-encoded string or null if unsuccessful
+ *
+ * @since 2.1
+ */
+ public static String encodeFromFile(String filename) {
+ String encodedData = null;
+ Base64InputStream bis = null;
+ try {
+ File file = new File(filename);
+
+ // Need max() for math on small files (v2.2.1)
+
+ byte[] buffer = new byte[Math.max((int) (file.length() * 1.4), 40)];
+
+ // Open a stream
+
+ bis = new Base64InputStream(new BufferedInputStream(
+ new FileInputStream(file)), ENCODE);
+
+ // Read until done
+ int length = 0;
+ for (int numBytes = 0; (numBytes = bis.read(buffer, length, 4096)) >= 0; ) {
+ length += numBytes;
+ }
+
+ // Save in a variable to return
+
+ encodedData = new String(buffer, 0, length, PREFERRED_ENCODING);
+
+ } catch (IOException e) {
+ LOG.error("Error encoding from file " + filename, e);
+
+ } finally {
+ if (bis != null) {
+ try {
+ bis.close();
+ } catch (Exception e) {
+ LOG.error("error closing Base64InputStream", e);
+ }
+ }
+ } // end finally
+
+ return encodedData;
+ } // end encodeFromFile
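+
+ // File round-trip sketch (the path below is made up for illustration):
+ //
+ //   Base64.encodeToFile(rawBytes, "/tmp/data.b64");       // write Base64 text
+ //   byte[] back = Base64.decodeFromFile("/tmp/data.b64"); // read and decode it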
+
+ /**
+ * Reads <tt>infile</tt> and encodes it to <tt>outfile</tt>.
+ *
+ * @param infile Input file
+ * @param outfile Output file
+ * @since 2.2
+ */
+ public static void encodeFileToFile(String infile, String outfile) {
+ String encoded = encodeFromFile(infile);
+ OutputStream out = null;
+ try {
+ out = new BufferedOutputStream(new FileOutputStream(outfile));
+ out.write(encoded.getBytes("US-ASCII")); // Strict, 7-bit output.
+
+ } catch (IOException e) {
+ LOG.error("error encoding from file " + infile + " to " + outfile, e);
+
+ } finally {
+ if (out != null) {
+ try {
+ out.close();
+ } catch (Exception e) {
+ LOG.error("error closing " + outfile, e);
+ }
+ }
+ } // end finally
+ } // end encodeFileToFile
+
+ /**
+ * Reads <tt>infile</tt> and decodes it to <tt>outfile</tt>.
+ *
+ * @param infile Input file
+ * @param outfile Output file
+ * @since 2.2
+ */
+ public static void decodeFileToFile(String infile, String outfile) {
+ byte[] decoded = decodeFromFile(infile);
+ OutputStream out = null;
+ try {
+ out = new BufferedOutputStream(new FileOutputStream(outfile));
+ out.write(decoded);
+
+ } catch (IOException e) {
+ LOG.error("error decoding from file " + infile + " to " + outfile, e);
+
+ } finally {
+ if (out != null) {
+ try {
+ out.close();
+ } catch (Exception e) {
+ LOG.error("error closing " + outfile, e);
+ }
+ }
+ } // end finally
+ } // end decodeFileToFile
+
+ /* ******** I N N E R C L A S S I N P U T S T R E A M ******** */
+
+ /**
+ * A {@link Base64.Base64InputStream} will read data from another
+ * <tt>InputStream</tt>, given in the constructor, and
+ * encode/decode to/from Base64 notation on the fly.
+ *
+ * @see Base64
+ * @since 1.3
+ */
+ public static class Base64InputStream extends FilterInputStream {
+ private boolean encode; // Encoding or decoding
+ private int position; // Current position in the buffer
+ private byte[] buffer; // Buffer holding converted data
+ private int bufferLength; // Length of buffer (3 or 4)
+ private int numSigBytes; // Meaningful bytes in the buffer
+ private int lineLength;
+ private boolean breakLines; // Break lines at MAX_LINE_LENGTH (76) characters
+ private int options; // Record options
+ private byte[] decodabet; // Local copy avoids method calls
+
+ /**
+ * Constructs a {@link Base64InputStream} in DECODE mode.
+ *
+ * @param in the <tt>InputStream</tt> from which to read data.
+ * @since 1.3
+ */
+ public Base64InputStream(InputStream in) {
+ this(in, DECODE);
+ } // end constructor
+
+ /**
+ * Constructs a {@link Base64.Base64InputStream} in either ENCODE or DECODE mode.
+ * <p>
+ * Valid options:
+ *
+ * <pre>
+ * ENCODE or DECODE: Encode or Decode as data is read.
+ * DONT_BREAK_LINES: don't break lines at 76 characters
+ * (only meaningful when encoding)
+ * <i>Note: Technically, this makes your encoding non-compliant.</i>
+ * </pre>
+ *
+ * <p>
+ * Example: <code>new Base64.Base64InputStream( in, Base64.DECODE )</code>
+ *
+ *
+ * @param in the <tt>InputStream</tt> from which to read data.
+ * @param options Specified options
+ * @see Base64#ENCODE
+ * @see Base64#DECODE
+ * @see Base64#DONT_BREAK_LINES
+ * @since 2.0
+ */
+ public Base64InputStream(InputStream in, int options) {
+ super(in);
+ this.breakLines = (options & DONT_BREAK_LINES) != DONT_BREAK_LINES;
+ this.encode = (options & ENCODE) == ENCODE;
+ this.bufferLength = encode ? 4 : 3;
+ this.buffer = new byte[bufferLength];
+ this.position = -1;
+ this.lineLength = 0;
+ this.options = options; // Record for later, mostly to determine which
+ // alphabet to use
+ this.decodabet = getDecodabet(options);
+ } // end constructor
+
+ /**
+ * Reads enough of the input stream to convert to/from Base64 and returns
+ * the next byte.
+ *
+ * @return next byte
+ * @since 1.3
+ */
+ @Override
+ public int read() throws IOException {
+ // Do we need to get data?
+ if (position < 0) {
+ if (encode) {
+ byte[] b3 = new byte[3];
+ int numBinaryBytes = 0;
+ for (int i = 0; i < 3; i++) {
+ try {
+ int b = in.read();
+
+ // If end of stream, b is -1.
+ if (b >= 0) {
+ b3[i] = (byte) b;
+ numBinaryBytes++;
+ } // end if: not end of stream
+
+ } catch (IOException e) {
+ // Only a problem if we got no data at all.
+ if (i == 0)
+ throw e;
+
+ } // end catch
+ } // end for: each needed input byte
+
+ if (numBinaryBytes > 0) {
+ encode3to4(b3, 0, numBinaryBytes, buffer, 0, options);
+ position = 0;
+ numSigBytes = 4;
+
+ } else {
+ return -1;
+ } // end else
+
+ } else {
+ byte[] b4 = new byte[4];
+ int i = 0;
+ for (i = 0; i < 4; i++) {
+ // Read four "meaningful" bytes:
+ int b = 0;
+ do {
+ b = in.read();
+ } while (b >= 0 && decodabet[b & 0x7f] <= WHITE_SPACE_ENC);
+
+ if (b < 0) {
+ break; // Reads a -1 if end of stream
+ }
+
+ b4[i] = (byte) b;
+ } // end for: each needed input byte
+
+ if (i == 4) {
+ numSigBytes = decode4to3(b4, 0, buffer, 0, options);
+ position = 0;
+
+ } else if (i == 0) {
+ return -1;
+
+ } else {
+ // Must have broken out from above.
+ throw new IOException("Improperly padded Base64 input.");
+ } // end
+ } // end else: decode
+ } // end else: get data
+
+ // Got data?
+ if (position >= 0) {
+ // End of relevant data?
+ if ( /* !encode && */position >= numSigBytes) {
+ return -1;
+ }
+
+ if (encode && breakLines && lineLength >= MAX_LINE_LENGTH) {
+ lineLength = 0;
+ return '\n';
+
+ }
+ lineLength++; // This isn't important when decoding
+ // but throwing an extra "if" seems
+ // just as wasteful.
+
+ int b = buffer[position++];
+
+ if (position >= bufferLength)
+ position = -1;
+
+ return b & 0xFF; // This is how you "cast" a byte that's
+ // intended to be unsigned.
+
+ }
+
+ // When JDK1.4 is more accepted, use an assertion here.
+ throw new IOException("Error in Base64 code reading stream.");
+
+ } // end read
+
+ /**
+ * Calls {@link #read()} repeatedly until the end of stream is reached or
+ * <var>len</var> bytes are read. Returns number of bytes read into array
+ * or -1 if end of stream is encountered.
+ *
+ * @param dest array to hold values
+ * @param off offset for array
+ * @param len max number of bytes to read into array
+ * @return bytes read into array or -1 if end of stream is encountered.
+ * @since 1.3
+ */
+ @Override
+ public int read(byte[] dest, int off, int len) throws IOException {
+ int i;
+ int b;
+ for (i = 0; i < len; i++) {
+ b = read();
+ if (b >= 0) {
+ dest[off + i] = (byte) b;
+ } else if (i == 0) {
+ return -1;
+ } else {
+ break; // Out of 'for' loop
+ }
+ } // end for: each byte read
+ return i;
+ } // end read
+
+ } // end inner class InputStream
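+
+ // Usage sketch for the stream above (illustrative; "encoded.b64" is hypothetical):
+ //
+ //   InputStream in = new Base64.Base64InputStream(
+ //       new FileInputStream("encoded.b64"));   // DECODE mode by default
+ //   int b;
+ //   while ((b = in.read()) >= 0) { /* consume decoded bytes */ }
+ //   in.close();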
+
+ /* ******** I N N E R C L A S S O U T P U T S T R E A M ******** */
+
+ /**
+ * A {@link Base64.Base64OutputStream} will write data to another
+ * <tt>OutputStream</tt>, given in the constructor, and
+ * encode/decode to/from Base64 notation on the fly.
+ *
+ * @see Base64
+ * @since 1.3
+ */
+ public static class Base64OutputStream extends FilterOutputStream {
+ private boolean encode;
+ private int position;
+ private byte[] buffer;
+ private int bufferLength;
+ private int lineLength;
+ private boolean breakLines;
+ private byte[] b4; // Scratch used in a few places
+ private boolean suspendEncoding;
+ private int options; // Record for later
+ private byte[] decodabet; // Local copy avoids method calls
+
+ /**
+ * Constructs a {@link Base64OutputStream} in ENCODE mode.
+ *
+ * @param out the <tt>OutputStream</tt> to which data will be written.
+ * @since 1.3
+ */
+ public Base64OutputStream(OutputStream out) {
+ this(out, ENCODE);
+ } // end constructor
+
+ /**
+ * Constructs a {@link Base64OutputStream} in either ENCODE or DECODE mode.
+ * <p>
+ * Valid options:
+ *
+ * <ul>
+ * <li>ENCODE or DECODE: Encode or Decode as data is read.</li>
+ * <li>DONT_BREAK_LINES: don't break lines at 76 characters (only
+ * meaningful when encoding) <i>Note: Technically, this makes your
+ * encoding non-compliant.</i></li>
+ * </ul>
+ *
+ * <p>
+ * Example: <code>new Base64.Base64OutputStream( out, Base64.ENCODE )</code>
+ *
+ * @param out the <tt>OutputStream</tt> to which data will be written.
+ * @param options Specified options.
+ * @see Base64#ENCODE
+ * @see Base64#DECODE
+ * @see Base64#DONT_BREAK_LINES
+ * @since 1.3
+ */
+ public Base64OutputStream(OutputStream out, int options) {
+ super(out);
+ this.breakLines = (options & DONT_BREAK_LINES) != DONT_BREAK_LINES;
+ this.encode = (options & ENCODE) == ENCODE;
+ this.bufferLength = encode ? 3 : 4;
+ this.buffer = new byte[bufferLength];
+ this.position = 0;
+ this.lineLength = 0;
+ this.suspendEncoding = false;
+ this.b4 = new byte[4];
+ this.options = options;
+ this.decodabet = getDecodabet(options);
+ } // end constructor
+
+ /**
+ * Writes the byte to the output stream after converting to/from Base64
+ * notation. When encoding, bytes are buffered three at a time before the
+ * output stream actually gets a write() call. When decoding, bytes are
+ * buffered four at a time.
+ *
+ * @param theByte the byte to write
+ * @since 1.3
+ */
+ @Override
+ public void write(int theByte) throws IOException {
+ // Encoding suspended?
+ if (suspendEncoding) {
+ super.out.write(theByte);
+ return;
+ } // end if: suspended
+
+ // Encode?
+ if (encode) {
+ buffer[position++] = (byte) theByte;
+ if (position >= bufferLength) { // Enough to encode.
+ out.write(encode3to4(b4, buffer, bufferLength, options));
+ lineLength += 4;
+ if (breakLines && lineLength >= MAX_LINE_LENGTH) {
+ out.write(NEW_LINE);
+ lineLength = 0;
+ } // end if: end of line
+
+ position = 0;
+ } // end if: enough to output
+
+ } else {
+ // Meaningful Base64 character?
+ if (decodabet[theByte & 0x7f] > WHITE_SPACE_ENC) {
+ buffer[position++] = (byte) theByte;
+ if (position >= bufferLength) { // Enough to output.
+ int len = decode4to3(buffer, 0, b4, 0, options);
+ out.write(b4, 0, len);
+ position = 0;
+ } // end if: enough to output
+
+ } else if (decodabet[theByte & 0x7f] != WHITE_SPACE_ENC) {
+ throw new IOException("Invalid character in Base64 data.");
+ } // end else: not white space either
+ } // end else: decoding
+ } // end write
+
+ /**
+ * Calls {@link #write(int)} repeatedly until <var>len</var> bytes are
+ * written.
+ *
+ * @param theBytes array from which to read bytes
+ * @param off offset for array
+ * @param len max number of bytes to read into array
+ * @since 1.3
+ */
+ @Override
+ public void write(byte[] theBytes, int off, int len) throws IOException {
+ // Encoding suspended?
+ if (suspendEncoding) {
+ super.out.write(theBytes, off, len);
+ return;
+ } // end if: suspended
+
+ for (int i = 0; i < len; i++) {
+ write(theBytes[off + i]);
+ } // end for: each byte written
+
+ } // end write
+
+ /**
+ * Method added by PHIL. [Thanks, PHIL. -Rob] This pads the buffer without
+ * closing the stream.
+ *
+ * @throws IOException
+ */
+ public void flushBase64() throws IOException {
+ if (position > 0) {
+ if (encode) {
+ out.write(encode3to4(b4, buffer, position, options));
+ position = 0;
+
+ } else {
+ throw new IOException("Base64 input not properly padded.");
+ } // end else: decoding
+ } // end if: buffer partially full
+
+ } // end flush
+
+ /**
+ * Flushes and closes (I think, in the superclass) the stream.
+ *
+ * @since 1.3
+ */
+ @Override
+ public void close() throws IOException {
+ // 1. Ensure that pending characters are written
+ flushBase64();
+
+ // 2. Actually close the stream
+ // Base class both flushes and closes.
+ super.close();
+
+ buffer = null;
+ out = null;
+ } // end close
+
+ /**
+ * Suspends encoding of the stream. May be helpful if you need to embed a
+ * piece of base64-encoded data in a stream.
+ *
+ * @throws IOException
+ * @since 1.5.1
+ */
+ public void suspendEncoding() throws IOException {
+ flushBase64();
+ this.suspendEncoding = true;
+ } // end suspendEncoding
+
+ /**
+ * Resumes encoding of the stream. May be helpful if you need to embed a
+ * piece of base64-encoded data in a stream.
+ *
+ * @since 1.5.1
+ */
+ public void resumeEncoding() {
+ this.suspendEncoding = false;
+ } // end resumeEncoding
+
+ } // end inner class OutputStream
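+
+ // Usage sketch (illustrative; "payload" is a hypothetical byte[]): encoding on
+ // the fly while writing, with suspendEncoding()/resumeEncoding() used to splice
+ // raw text into the output.
+ //
+ //   Base64.Base64OutputStream out =
+ //       new Base64.Base64OutputStream(new FileOutputStream("out.b64"));
+ //   out.write(payload);      // buffered three bytes at a time, then encoded
+ //   out.suspendEncoding();
+ //   out.write('\n');         // written through untouched
+ //   out.resumeEncoding();
+ //   out.close();             // flushBase64() pads any partial quartet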
+
+} // end class Base64
diff --git a/src/java/org/apache/hadoop/hbase/util/Bytes.java b/src/java/org/apache/hadoop/hbase/util/Bytes.java
new file mode 100644
index 0000000..e8d5f91
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/util/Bytes.java
@@ -0,0 +1,998 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.nio.ByteBuffer;
+import java.util.Comparator;
+import java.math.BigInteger;
+
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.io.RawComparator;
+import org.apache.hadoop.io.WritableComparator;
+import org.apache.hadoop.io.WritableUtils;
+
+/**
+ * Utility class that handles byte arrays, conversions to/from other types,
+ * comparisons, hash code generation, manufacturing keys for HashMaps or
+ * HashSets, etc.
+ */
+public class Bytes {
+ /**
+ * Size of long in bytes
+ */
+ public static final int SIZEOF_LONG = Long.SIZE/Byte.SIZE;
+
+ /**
+ * Size of int in bytes
+ */
+ public static final int SIZEOF_INT = Integer.SIZE/Byte.SIZE;
+
+ /**
+ * Size of short in bytes
+ */
+ public static final int SIZEOF_SHORT = Short.SIZE/Byte.SIZE;
+
+ /**
+ * Size of float in bytes
+ */
+ public static final int SIZEOF_FLOAT = Float.SIZE/Byte.SIZE;
+
+ /**
+ * Size of double in bytes
+ */
+ public static final int SIZEOF_DOUBLE = Double.SIZE/Byte.SIZE;
+
+ /**
+ * Size of byte in bytes
+ */
+ public static final int SIZEOF_BYTE = 1;
+
+ /**
+ * Estimate of the size cost, beyond the payload, of a byte [] instance in the JVM.
+ * Estimate based on a study of jhat and jprofiler numbers.
+ */
+ // JHat says BU is 56 bytes.
+ // SizeOf which uses java.lang.instrument says 24 bytes. (3 longs?)
+ public static final int ESTIMATED_HEAP_TAX = 16;
+
+ /**
+ * Byte array comparator class.
+ */
+ public static class ByteArrayComparator implements RawComparator<byte []> {
+ public ByteArrayComparator() {
+ super();
+ }
+ public int compare(byte [] left, byte [] right) {
+ return compareTo(left, right);
+ }
+ public int compare(byte [] b1, int s1, int l1, byte [] b2, int s2, int l2) {
+ return compareTo(b1, s1, l1, b2, s2, l2);
+ }
+ }
+
+ /**
+ * Pass this to TreeMaps where byte [] are keys.
+ */
+ public static Comparator<byte []> BYTES_COMPARATOR =
+ new ByteArrayComparator();
+
+ /**
+ * Use for comparing byte arrays, byte by byte.
+ */
+ public static RawComparator<byte []> BYTES_RAWCOMPARATOR =
+ new ByteArrayComparator();
+
+ /**
+ * Read byte-array written with a WritableUtils.vint prefix.
+ * @param in Input to read from.
+ * @return byte array read off <code>in</code>
+ * @throws IOException
+ */
+ public static byte [] readByteArray(final DataInput in)
+ throws IOException {
+ int len = WritableUtils.readVInt(in);
+ if (len < 0) {
+ throw new NegativeArraySizeException(Integer.toString(len));
+ }
+ byte [] result = new byte[len];
+ in.readFully(result, 0, len);
+ return result;
+ }
+
+ /**
+ * Read byte-array written with a WritableUtils.vint prefix.
+ * IOException is converted to a RuntimeException.
+ * @param in Input to read from.
+ * @return byte array read off <code>in</code>
+ */
+ public static byte [] readByteArrayThrowsRuntime(final DataInput in) {
+ try {
+ return readByteArray(in);
+ } catch (Exception e) {
+ throw new RuntimeException(e);
+ }
+ }
+
+ /**
+ * Write byte-array with a WritableUtils.vint prefix.
+ * @param out
+ * @param b
+ * @throws IOException
+ */
+ public static void writeByteArray(final DataOutput out, final byte [] b)
+ throws IOException {
+ writeByteArray(out, b, 0, b.length);
+ }
+
+ /**
+ * Write byte-array to out with a vint length prefix.
+ * @param out
+ * @param b
+ * @throws IOException
+ */
+ public static void writeByteArray(final DataOutput out, final byte [] b,
+ final int offset, final int length)
+ throws IOException {
+ WritableUtils.writeVInt(out, length);
+ out.write(b, offset, length);
+ }
+
+ /**
+ * Write byte-array from src to tgt with a vint length prefix.
+ * @param tgt
+ * @param tgtOffset
+ * @param src
+ * @param srcOffset
+ * @param srcLength
+ * @return New offset in tgt array.
+ */
+ public static int writeByteArray(final byte [] tgt, final int tgtOffset,
+ final byte [] src, final int srcOffset, final int srcLength) {
+ byte [] vint = vintToBytes(srcLength);
+ System.arraycopy(vint, 0, tgt, tgtOffset, vint.length);
+ int offset = tgtOffset + vint.length;
+ System.arraycopy(src, srcOffset, tgt, offset, srcLength);
+ return offset + srcLength;
+ }
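+
+ // Vint-prefix round trip sketch (the streams and "row-1" key below are
+ // assumptions added for illustration):
+ //
+ //   ByteArrayOutputStream baos = new ByteArrayOutputStream();
+ //   Bytes.writeByteArray(new DataOutputStream(baos), Bytes.toBytes("row-1"));
+ //   DataInput in = new DataInputStream(new ByteArrayInputStream(baos.toByteArray()));
+ //   byte[] back = Bytes.readByteArray(in);   // "row-1" again, length read from the vint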
+
+ /**
+ * Put bytes at the specified byte array position.
+ * @param tgtBytes the byte array
+ * @param tgtOffset position in the array
+ * @param srcBytes byte array to write out
+ * @param srcOffset offset in srcBytes at which to start
+ * @param srcLength number of bytes to write
+ * @return incremented offset
+ */
+ public static int putBytes(byte[] tgtBytes, int tgtOffset, byte[] srcBytes,
+ int srcOffset, int srcLength) {
+ System.arraycopy(srcBytes, srcOffset, tgtBytes, tgtOffset, srcLength);
+ return tgtOffset + srcLength;
+ }
+
+ /**
+ * Write a single byte out to the specified byte array position.
+ * @param bytes the byte array
+ * @param offset position in the array
+ * @param b byte to write out
+ * @return incremented offset
+ */
+ public static int putByte(byte[] bytes, int offset, byte b) {
+ bytes[offset] = b;
+ return offset + 1;
+ }
+
+ /**
+ * Returns a new byte array, copied from the passed ByteBuffer.
+ * @param bb A ByteBuffer
+ * @return the byte array
+ */
+ public static byte[] toBytes(ByteBuffer bb) {
+ int length = bb.limit();
+ byte [] result = new byte[length];
+ System.arraycopy(bb.array(), bb.arrayOffset(), result, 0, length);
+ return result;
+ }
+
+ /**
+ * @param b Presumed UTF-8 encoded byte array.
+ * @return String made from <code>b</code>
+ */
+ public static String toString(final byte [] b) {
+ return toString(b, 0, b.length);
+ }
+
+ /**
+ * @param b Presumed UTF-8 encoded byte array.
+ * @param off
+ * @param len
+ * @return String made from <code>b</code>
+ */
+ public static String toString(final byte [] b, int off, int len) {
+ String result = null;
+ try {
+ result = new String(b, off, len, HConstants.UTF8_ENCODING);
+ } catch (UnsupportedEncodingException e) {
+ e.printStackTrace();
+ }
+ return result;
+ }
+
+ /**
+ * Converts a string to a UTF-8 byte array.
+ * @param s
+ * @return the byte array
+ */
+ public static byte[] toBytes(String s) {
+ if (s == null) {
+ throw new IllegalArgumentException("string cannot be null");
+ }
+ byte [] result = null;
+ try {
+ result = s.getBytes(HConstants.UTF8_ENCODING);
+ } catch (UnsupportedEncodingException e) {
+ e.printStackTrace();
+ }
+ return result;
+ }
+
+ /**
+ * Convert a boolean to a byte array.
+ * @param b
+ * @return <code>b</code> encoded in a byte array.
+ */
+ public static byte [] toBytes(final boolean b) {
+ byte [] bb = new byte[1];
+ bb[0] = b? (byte)-1: (byte)0;
+ return bb;
+ }
+
+ /**
+ * @param b
+ * @return True or false.
+ */
+ public static boolean toBoolean(final byte [] b) {
+ if (b == null || b.length != 1) {
+ throw new IllegalArgumentException("Array is wrong size");
+ }
+ return b[0] != (byte)0;
+ }
+
+ /**
+ * Convert a long value to a byte array
+ * @param val
+ * @return the byte array
+ */
+ public static byte[] toBytes(long val) {
+ byte [] b = new byte[8];
+ for(int i=7;i>0;i--) {
+ b[i] = (byte)(val);
+ val >>>= 8;
+ }
+ b[0] = (byte)(val);
+ return b;
+ }
+
+ /**
+ * Converts a byte array to a long value
+ * @param bytes
+ * @return the long value
+ */
+ public static long toLong(byte[] bytes) {
+ return toLong(bytes, 0);
+ }
+
+ /**
+ * Converts a byte array to a long value
+ * @param bytes
+ * @param offset
+ * @return the long value
+ */
+ public static long toLong(byte[] bytes, int offset) {
+ return toLong(bytes, offset, SIZEOF_LONG);
+ }
+
+ /**
+ * Converts a byte array to a long value
+ * @param bytes
+ * @param offset
+ * @param length
+ * @return the long value
+ */
+ public static long toLong(byte[] bytes, int offset, final int length) {
+ if (bytes == null || length != SIZEOF_LONG ||
+ (offset + length > bytes.length)) {
+ return -1L;
+ }
+ long l = 0;
+ for(int i = offset; i < (offset + length); i++) {
+ l <<= 8;
+ l ^= (long)bytes[i] & 0xFF;
+ }
+ return l;
+ }
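+
+ // Encoding sketch (comment only): longs are laid out big-endian, most significant
+ // byte first, so the methods above invert toBytes(long). For example:
+ //
+ //   byte[] b = Bytes.toBytes(1024L);   // 00 00 00 00 00 00 04 00
+ //   long back = Bytes.toLong(b);       // 1024L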
+
+ /**
+ * Put a long value out to the specified byte array position.
+ * @param bytes the byte array
+ * @param offset position in the array
+ * @param val long to write out
+ * @return incremented offset
+ */
+ public static int putLong(byte[] bytes, int offset, long val) {
+ if (bytes == null || (bytes.length - offset < SIZEOF_LONG)) {
+ return offset;
+ }
+ for(int i=offset+7;i>offset;i--) {
+ bytes[i] = (byte)(val);
+ val >>>= 8;
+ }
+ bytes[offset] = (byte)(val);
+ return offset + SIZEOF_LONG;
+ }
+
+ /**
+ * Presumes float encoded as IEEE 754 floating-point "single format"
+ * @param bytes
+ * @return Float made from passed byte array.
+ */
+ public static float toFloat(byte [] bytes) {
+ return toFloat(bytes, 0);
+ }
+
+ /**
+ * Presumes float encoded as IEEE 754 floating-point "single format"
+ * @param bytes
+ * @param offset
+ * @return Float made from passed byte array.
+ */
+ private static float toFloat(byte [] bytes, int offset) {
+ int i = toInt(bytes, offset);
+ return Float.intBitsToFloat(i);
+ }
+
+ /**
+ * @param bytes
+ * @param offset
+ * @param f
+ * @return New offset in <code>bytes</code>
+ */
+ public static int putFloat(byte [] bytes, int offset, float f) {
+ int i = Float.floatToRawIntBits(f);
+ return putInt(bytes, offset, i);
+ }
+
+ public static byte [] toBytes(final float f) {
+ // Encode it as int
+ int i = Float.floatToRawIntBits(f);
+ return Bytes.toBytes(i);
+ }
+
+ /**
+ * @param bytes
+ * @return Return double made from passed bytes.
+ */
+ public static double toDouble(final byte [] bytes) {
+ return toDouble(bytes, 0);
+ }
+
+ /**
+ * @param bytes
+ * @param offset
+ * @return Return double made from passed bytes.
+ */
+ public static double toDouble(final byte [] bytes, final int offset) {
+ long l = toLong(bytes, offset);
+ return Double.longBitsToDouble(l);
+ }
+
+ /**
+ * @param bytes
+ * @param offset
+ * @param d
+ * @return New offset into array <code>bytes</code>
+ */
+ public static int putDouble(byte [] bytes, int offset, double d) {
+ long l = Double.doubleToLongBits(d);
+ return putLong(bytes, offset, l);
+ }
+
+ public static byte [] toBytes(final double d) {
+ // Encode it as a long
+ long l = Double.doubleToRawLongBits(d);
+ return Bytes.toBytes(l);
+ }
+
+ /**
+ * Convert an int value to a byte array
+ * @param val
+ * @return the byte array
+ */
+ public static byte[] toBytes(int val) {
+ byte [] b = new byte[4];
+ for(int i = 3; i > 0; i--) {
+ b[i] = (byte)(val);
+ val >>>= 8;
+ }
+ b[0] = (byte)(val);
+ return b;
+ }
+
+ /**
+ * Converts a byte array to an int value
+ * @param bytes
+ * @return the int value
+ */
+ public static int toInt(byte[] bytes) {
+ return toInt(bytes, 0);
+ }
+
+ /**
+ * Converts a byte array to an int value
+ * @param bytes
+ * @param offset
+ * @return the int value
+ */
+ public static int toInt(byte[] bytes, int offset) {
+ return toInt(bytes, offset, SIZEOF_INT);
+ }
+
+ /**
+ * Converts a byte array to an int value
+ * @param bytes
+ * @param offset
+ * @param length
+ * @return the int value
+ */
+ public static int toInt(byte[] bytes, int offset, final int length) {
+ if (bytes == null || length != SIZEOF_INT ||
+ (offset + length > bytes.length)) {
+ return -1;
+ }
+ int n = 0;
+ for(int i = offset; i < (offset + length); i++) {
+ n <<= 8;
+ n ^= bytes[i] & 0xFF;
+ }
+ return n;
+ }
+
+ /**
+ * Put an int value out to the specified byte array position.
+ * @param bytes the byte array
+ * @param offset position in the array
+ * @param val int to write out
+ * @return incremented offset
+ */
+ public static int putInt(byte[] bytes, int offset, int val) {
+ if (bytes == null || (bytes.length - offset < SIZEOF_INT)) {
+ return offset;
+ }
+ for(int i= offset+3; i > offset; i--) {
+ bytes[i] = (byte)(val);
+ val >>>= 8;
+ }
+ bytes[offset] = (byte)(val);
+ return offset + SIZEOF_INT;
+ }
+
+ /**
+ * Convert a short value to a byte array
+ * @param val
+ * @return the byte array
+ */
+ public static byte[] toBytes(short val) {
+ byte[] b = new byte[SIZEOF_SHORT];
+ b[1] = (byte)(val);
+ val >>>= 8;
+ b[0] = (byte)(val);
+ return b;
+ }
+
+ /**
+ * Converts a byte array to a short value
+ * @param bytes
+ * @return the short value
+ */
+ public static short toShort(byte[] bytes) {
+ return toShort(bytes, 0);
+ }
+
+ /**
+ * Converts a byte array to a short value
+ * @param bytes
+ * @return the short value
+ */
+ public static short toShort(byte[] bytes, int offset) {
+ return toShort(bytes, offset, SIZEOF_SHORT);
+ }
+
+ /**
+ * Converts a byte array to a short value
+ * @param bytes
+ * @return the short value
+ */
+ public static short toShort(byte[] bytes, int offset, final int length) {
+ if (bytes == null || length != SIZEOF_SHORT ||
+ (offset + length > bytes.length)) {
+ return -1;
+ }
+ short n = 0;
+ n ^= bytes[offset] & 0xFF;
+ n <<= 8;
+ n ^= bytes[offset+1] & 0xFF;
+ return n;
+ }
+
+ /**
+ * Put a short value out to the specified byte array position.
+ * @param bytes the byte array
+ * @param offset position in the array
+ * @param val short to write out
+ * @return incremented offset
+ */
+ public static int putShort(byte[] bytes, int offset, short val) {
+ if (bytes == null || (bytes.length - offset < SIZEOF_SHORT)) {
+ return offset;
+ }
+ bytes[offset+1] = (byte)(val);
+ val >>>= 8;
+ bytes[offset] = (byte)(val);
+ return offset + SIZEOF_SHORT;
+ }
+
+ /**
+ * @param vint Integer to make a vint of.
+ * @return Vint as bytes array.
+ */
+ public static byte [] vintToBytes(final long vint) {
+ long i = vint;
+ int size = WritableUtils.getVIntSize(i);
+ byte [] result = new byte[size];
+ int offset = 0;
+ if (i >= -112 && i <= 127) {
+ result[offset] = ((byte)i);
+ return result;
+ }
+
+ int len = -112;
+ if (i < 0) {
+ i ^= -1L; // take one's complement
+ len = -120;
+ }
+
+ long tmp = i;
+ while (tmp != 0) {
+ tmp = tmp >> 8;
+ len--;
+ }
+
+ result[offset++] = (byte)len;
+
+ len = (len < -120) ? -(len + 120) : -(len + 112);
+
+ for (int idx = len; idx != 0; idx--) {
+ int shiftbits = (idx - 1) * 8;
+ long mask = 0xFFL << shiftbits;
+ result[offset++] = (byte)((i & mask) >> shiftbits);
+ }
+ return result;
+ }
+
+ /**
+ * @param buffer
+ * @return vint bytes deserialized as a long.
+ */
+ public static long bytesToVint(final byte [] buffer) {
+ int offset = 0;
+ byte firstByte = buffer[offset++];
+ int len = WritableUtils.decodeVIntSize(firstByte);
+ if (len == 1) {
+ return firstByte;
+ }
+ long i = 0;
+ for (int idx = 0; idx < len-1; idx++) {
+ byte b = buffer[offset++];
+ i = i << 8;
+ i = i | (b & 0xFF);
+ }
+ return (WritableUtils.isNegativeVInt(firstByte) ? (i ^ -1L) : i);
+ }
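+
+ // Round-trip sketch for the two vint helpers above (the value is arbitrary):
+ //
+ //   byte[] v = Bytes.vintToBytes(300L);   // multi-byte zero-compressed encoding
+ //   long back = Bytes.bytesToVint(v);     // 300L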
+
+ /**
+ * Reads a zero-compressed encoded long from input stream and returns it.
+ * @param buffer Binary array
+ * @param offset Offset into array at which vint begins.
+ * @throws java.io.IOException
+ * @return deserialized long from stream.
+ */
+ public static long readVLong(final byte [] buffer, final int offset)
+ throws IOException {
+ byte firstByte = buffer[offset];
+ int len = WritableUtils.decodeVIntSize(firstByte);
+ if (len == 1) {
+ return firstByte;
+ }
+ long i = 0;
+ for (int idx = 0; idx < len-1; idx++) {
+ byte b = buffer[offset + 1 + idx];
+ i = i << 8;
+ i = i | (b & 0xFF);
+ }
+ return (WritableUtils.isNegativeVInt(firstByte) ? (i ^ -1L) : i);
+ }
+
+ /**
+ * @param left
+ * @param right
+ * @return 0 if equal, < 0 if left is less than right, etc.
+ */
+ public static int compareTo(final byte [] left, final byte [] right) {
+ return compareTo(left, 0, left.length, right, 0, right.length);
+ }
+
+ /**
+ * @param b1
+ * @param b2
+ * @param s1 Where to start comparing in the left buffer
+ * @param s2 Where to start comparing in the right buffer
+ * @param l1 How much to compare from the left buffer
+ * @param l2 How much to compare from the right buffer
+ * @return 0 if equal, < 0 if left is less than right, etc.
+ */
+ public static int compareTo(byte[] b1, int s1, int l1,
+ byte[] b2, int s2, int l2) {
+ // Bring WritableComparator code local
+ int end1 = s1 + l1;
+ int end2 = s2 + l2;
+ for (int i = s1, j = s2; i < end1 && j < end2; i++, j++) {
+ int a = (b1[i] & 0xff);
+ int b = (b2[j] & 0xff);
+ if (a != b) {
+ return a - b;
+ }
+ }
+ return l1 - l2;
+ }
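+
+ // Comparison sketch (illustrative): ordering is lexicographic on unsigned bytes,
+ // with the shorter array sorting first when one is a prefix of the other.
+ //
+ //   Bytes.compareTo(Bytes.toBytes("abc"), Bytes.toBytes("abd")) < 0   // 'c' < 'd'
+ //   Bytes.compareTo(Bytes.toBytes("abc"), Bytes.toBytes("ab"))  > 0   // longer wins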
+
+ /**
+ * @param left
+ * @param right
+ * @return True if equal
+ */
+ public static boolean equals(final byte [] left, final byte [] right) {
+ // Could use Arrays.equals?
+ return left == null && right == null? true:
+ (left == null || right == null || (left.length != right.length))? false:
+ compareTo(left, right) == 0;
+ }
+
+ /**
+ * @param b
+ * @return Runs {@link WritableComparator#hashBytes(byte[], int)} on the
+ * passed in array. This method is what {@link org.apache.hadoop.io.Text} and
+ * {@link ImmutableBytesWritable} use when calculating hash code.
+ */
+ public static int hashCode(final byte [] b) {
+ return hashCode(b, b.length);
+ }
+
+ /**
+ * @param b
+ * @param length
+ * @return Runs {@link WritableComparator#hashBytes(byte[], int)} on the
+ * passed in array. This method is what {@link org.apache.hadoop.io.Text} and
+ * {@link ImmutableBytesWritable} use when calculating hash code.
+ */
+ public static int hashCode(final byte [] b, final int length) {
+ return WritableComparator.hashBytes(b, length);
+ }
+
+ /**
+ * @param b
+ * @return A hash of <code>b</code> as an Integer that can be used as key in
+ * Maps.
+ */
+ public static Integer mapKey(final byte [] b) {
+ return Integer.valueOf(hashCode(b));
+ }
+
+ /**
+ * @param b
+ * @param length
+ * @return A hash of <code>b</code> as an Integer that can be used as key in
+ * Maps.
+ */
+ public static Integer mapKey(final byte [] b, final int length) {
+ return Integer.valueOf(hashCode(b, length));
+ }
+
+ /**
+ * @param a
+ * @param b
+ * @return New array that has a in lower half and b in upper half.
+ */
+ public static byte [] add(final byte [] a, final byte [] b) {
+ return add(a, b, HConstants.EMPTY_BYTE_ARRAY);
+ }
+
+ /**
+ * @param a
+ * @param b
+ * @param c
+ * @return New array made from a, b and c
+ */
+ public static byte [] add(final byte [] a, final byte [] b, final byte [] c) {
+ byte [] result = new byte[a.length + b.length + c.length];
+ System.arraycopy(a, 0, result, 0, a.length);
+ System.arraycopy(b, 0, result, a.length, b.length);
+ System.arraycopy(c, 0, result, a.length + b.length, c.length);
+ return result;
+ }
+
+ /**
+ * @param a
+ * @param length
+ * @return First <code>length</code> bytes from <code>a</code>
+ */
+ public static byte [] head(final byte [] a, final int length) {
+ if(a.length < length) return null;
+ byte [] result = new byte[length];
+ System.arraycopy(a, 0, result, 0, length);
+ return result;
+ }
+
+ /**
+ * @param a
+ * @param length
+ * @return Last <code>length</code> bytes from <code>a</code>
+ */
+ public static byte [] tail(final byte [] a, final int length) {
+ if(a.length < length) return null;
+ byte [] result = new byte[length];
+ System.arraycopy(a, a.length - length, result, 0, length);
+ return result;
+ }
+
+ /**
+ * @param a
+ * @param length
+ * @return Value in <code>a</code> plus <code>length</code> prepended 0 bytes
+ */
+ public static byte [] padHead(final byte [] a, final int length) {
+ byte [] padding = new byte[length];
+ for(int i=0;i<length;i++) padding[i] = 0;
+ return add(padding,a);
+ }
+
+ /**
+ * @param a
+ * @param length
+ * @return Value in <code>a</code> plus <code>length</code> appended 0 bytes
+ */
+ public static byte [] padTail(final byte [] a, final int length) {
+ byte [] padding = new byte[length];
+ for(int i=0;i<length;i++) padding[i] = 0;
+ return add(a,padding);
+ }
+
+ /**
+ * Split passed range. Relatively expensive operation; uses BigInteger math.
+ * Useful for splitting ranges for MapReduce jobs.
+ * @param a Beginning of range
+ * @param b End of range
+ * @param num Number of times to split range. Pass 1 if you want to split
+ * the range in two; i.e. one split.
+ * @return Array of dividing values
+ */
+ public static byte [][] split(final byte [] a, final byte [] b, final int num) {
+ byte [] aPadded = null;
+ byte [] bPadded = null;
+ if (a.length < b.length) {
+ aPadded = padTail(a,b.length-a.length);
+ bPadded = b;
+ } else if (b.length < a.length) {
+ aPadded = a;
+ bPadded = padTail(b,a.length-b.length);
+ } else {
+ aPadded = a;
+ bPadded = b;
+ }
+ if (compareTo(aPadded,bPadded) > 0) {
+ throw new IllegalArgumentException("a > b");
+ }
+ if (num <= 0) throw new IllegalArgumentException("num cannot be <= 0");
+ byte [] prependHeader = {1, 0};
+ BigInteger startBI = new BigInteger(add(prependHeader, aPadded));
+ BigInteger stopBI = new BigInteger(add(prependHeader, bPadded));
+ BigInteger diffBI = stopBI.subtract(startBI);
+ BigInteger splitsBI = BigInteger.valueOf(num + 1);
+ if(diffBI.compareTo(splitsBI) <= 0) return null;
+ BigInteger intervalBI = null;
+ try {
+ intervalBI = diffBI.divide(splitsBI);
+ } catch(Exception e) {
+ return null;
+ }
+
+ byte [][] result = new byte[num+2][];
+ result[0] = a;
+
+ for (int i = 1; i <= num; i++) {
+ BigInteger curBI = startBI.add(intervalBI.multiply(BigInteger.valueOf(i)));
+ byte [] padded = curBI.toByteArray();
+ if (padded[1] == 0)
+ padded = tail(padded,padded.length-2);
+ else
+ padded = tail(padded,padded.length-1);
+ result[i] = padded;
+ }
+ result[num+1] = b;
+ return result;
+ }
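+
+ // Split sketch (illustrative keys; real callers pass region start/end keys):
+ //
+ //   byte[][] points = Bytes.split(Bytes.toBytes("aaa"), Bytes.toBytes("zzz"), 9);
+ //   // points[0] == "aaa", points[10] == "zzz", with 9 evenly spaced keys between;
+ //   // null is returned if the range is too narrow to split that many times.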
+
+ /**
+ * @param t
+ * @return Array of byte arrays made from passed array of Strings
+ */
+ public static byte [][] toByteArrays(final String [] t) {
+ byte [][] result = new byte[t.length][];
+ for (int i = 0; i < t.length; i++) {
+ result[i] = Bytes.toBytes(t[i]);
+ }
+ return result;
+ }
+
+ /**
+ * @param column
+ * @return A byte array of a byte array where first and only entry is
+ * <code>column</code>
+ */
+ public static byte [][] toByteArrays(final String column) {
+ return toByteArrays(toBytes(column));
+ }
+
+ /**
+ * @param column
+ * @return A byte array of a byte array where first and only entry is
+ * <code>column</code>
+ */
+ public static byte [][] toByteArrays(final byte [] column) {
+ byte [][] result = new byte[1][];
+ result[0] = column;
+ return result;
+ }
+
+ /**
+ * Binary search for keys in indexes.
+ * @param arr array of byte arrays to search for
+ * @param key the key you want to find
+ * @param offset the offset in the key you want to find
+ * @param length the length of the key
+ * @param comparator a comparator to compare.
+ * @return index of key
+ */
+ public static int binarySearch(byte [][]arr, byte []key, int offset,
+ int length, RawComparator<byte []> comparator) {
+ int low = 0;
+ int high = arr.length - 1;
+
+ while (low <= high) {
+ int mid = (low+high) >>> 1;
+ int cmp = comparator.compare(arr[mid], 0, arr[mid].length, key, offset,
+ length);
+ if (cmp < 0)
+ low = mid + 1;
+ else if (cmp > 0)
+ high = mid - 1;
+ else
+ return mid;
+ }
+ return - (low+1);
+ }
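+
+ // Search sketch (assumes arr is already sorted with the same comparator):
+ //
+ //   byte[][] arr = { Bytes.toBytes("a"), Bytes.toBytes("c"), Bytes.toBytes("e") };
+ //   byte[] key = Bytes.toBytes("c");
+ //   int idx = Bytes.binarySearch(arr, key, 0, key.length,
+ //       Bytes.BYTES_RAWCOMPARATOR);   // 1; a miss returns -(insertionPoint + 1)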
+
+ /**
+ * Bytewise binary increment/decrement of a long contained in a byte array
+ * by the given amount.
+ *
+ * @param value - array of bytes containing long (length <= SIZEOF_LONG)
+ * @param amount value will be incremented by (decremented if negative)
+ * @return array of bytes containing incremented long (length == SIZEOF_LONG)
+ * @throws IOException declared on the signature; an IllegalArgumentException
+ * is actually thrown if value.length > SIZEOF_LONG
+ */
+ public static byte [] incrementBytes(byte[] value, long amount)
+ throws IOException {
+ byte[] val = value;
+ if (val.length < SIZEOF_LONG) {
+ // Hopefully this doesn't happen too often.
+ byte [] newvalue;
+ if (val[0] < 0) {
+ byte [] negativeValue = {-1, -1, -1, -1, -1, -1, -1, -1};
+ newvalue = negativeValue;
+ } else {
+ newvalue = new byte[SIZEOF_LONG];
+ }
+ System.arraycopy(val, 0, newvalue, newvalue.length - val.length,
+ val.length);
+ val = newvalue;
+ } else if (val.length > SIZEOF_LONG) {
+ throw new IllegalArgumentException("Increment Bytes - value too big: " +
+ val.length);
+ }
+ if(amount == 0) return val;
+ if(val[0] < 0){
+ return binaryIncrementNeg(val, amount);
+ }
+ return binaryIncrementPos(val, amount);
+ }
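+
+ // Usage sketch for incrementBytes above: the value is treated as a
+ // big-endian two's-complement long and is padded to SIZEOF_LONG on return.
+ //
+ // byte [] counter = Bytes.toBytes(41L);
+ // counter = Bytes.incrementBytes(counter, 1); // now encodes 42L
+ // long v = Bytes.toLong(counter);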
+
+ /* increment/decrement for positive value */
+ private static byte [] binaryIncrementPos(byte [] value, long amount) {
+ long amo = amount;
+ int sign = 1;
+ if (amount < 0) {
+ amo = -amount;
+ sign = -1;
+ }
+ for(int i=0;i<value.length;i++) {
+ int cur = ((int)amo % 256) * sign;
+ amo = (amo >> 8);
+ int val = value[value.length-i-1] & 0x0ff;
+ int total = val + cur;
+ if(total > 255) {
+ amo += sign;
+ total %= 256;
+ } else if (total < 0) {
+ amo -= sign;
+ }
+ value[value.length-i-1] = (byte)total;
+ if (amo == 0) return value;
+ }
+ return value;
+ }
+
+ /* increment/decrement for negative value */
+ private static byte [] binaryIncrementNeg(byte [] value, long amount) {
+ long amo = amount;
+ int sign = 1;
+ if (amount < 0) {
+ amo = -amount;
+ sign = -1;
+ }
+ for(int i=0;i<value.length;i++) {
+ int cur = ((int)amo % 256) * sign;
+ amo = (amo >> 8);
+ int val = ((~value[value.length-i-1]) & 0x0ff) + 1;
+ int total = cur - val;
+ if(total >= 0) {
+ amo += sign;
+ } else if (total < -256) {
+ amo -= sign;
+ total %= 256;
+ }
+ value[value.length-i-1] = (byte)total;
+ if (amo == 0) return value;
+ }
+ return value;
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/util/FSUtils.java b/src/java/org/apache/hadoop/hbase/util/FSUtils.java
new file mode 100644
index 0000000..080c1f7
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/util/FSUtils.java
@@ -0,0 +1,268 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import java.io.DataInputStream;
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.dfs.DistributedFileSystem;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.RemoteExceptionHandler;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+
+/**
+ * Utility methods for interacting with the underlying file system.
+ */
+public class FSUtils {
+ private static final Log LOG = LogFactory.getLog(FSUtils.class);
+
+ /**
+ * Not instantiable
+ */
+ private FSUtils() {
+ super();
+ }
+
+ /**
+ * Delete if exists.
+ * @param fs
+ * @param dir
+ * @return True if deleted <code>dir</code>
+ * @throws IOException
+ */
+ public static boolean deleteDirectory(final FileSystem fs, final Path dir)
+ throws IOException {
+ return fs.exists(dir)? fs.delete(dir, true): false;
+ }
+
+ /**
+ * Check if directory exists. If it does not, create it.
+ * @param fs
+ * @param dir
+ * @return Path
+ * @throws IOException
+ */
+ public Path checkdir(final FileSystem fs, final Path dir) throws IOException {
+ if (!fs.exists(dir)) {
+ fs.mkdirs(dir);
+ }
+ return dir;
+ }
+
+ /**
+ * Create file.
+ * @param fs
+ * @param p
+ * @return Path
+ * @throws IOException
+ */
+ public static Path create(final FileSystem fs, final Path p)
+ throws IOException {
+ if (fs.exists(p)) {
+ throw new IOException("File already exists " + p.toString());
+ }
+ if (!fs.createNewFile(p)) {
+ throw new IOException("Failed create of " + p);
+ }
+ return p;
+ }
+
+ /**
+ * Checks to see if the specified file system is available
+ *
+ * @param fs
+ * @throws IOException
+ */
+ public static void checkFileSystemAvailable(final FileSystem fs)
+ throws IOException {
+ if (!(fs instanceof DistributedFileSystem)) {
+ return;
+ }
+ IOException exception = null;
+ DistributedFileSystem dfs = (DistributedFileSystem) fs;
+ try {
+ if (dfs.exists(new Path("/"))) {
+ return;
+ }
+ } catch (IOException e) {
+ exception = RemoteExceptionHandler.checkIOException(e);
+ }
+ try {
+ fs.close();
+ } catch (Exception e) {
+ LOG.error("file system close failed: ", e);
+ }
+ IOException io = new IOException("File system is not available");
+ io.initCause(exception);
+ throw io;
+ }
+
+ /**
+ * Returns the current version of the file system, as recorded in the version file
+ *
+ * @param fs
+ * @param rootdir
+ * @return null if no version file exists, version string otherwise.
+ * @throws IOException
+ */
+ public static String getVersion(FileSystem fs, Path rootdir)
+ throws IOException {
+ Path versionFile = new Path(rootdir, HConstants.VERSION_FILE_NAME);
+ String version = null;
+ if (fs.exists(versionFile)) {
+ FSDataInputStream s =
+ fs.open(versionFile);
+ try {
+ version = DataInputStream.readUTF(s);
+ } finally {
+ s.close();
+ }
+ }
+ return version;
+ }
+
+ /**
+ * Verifies current version of file system
+ *
+ * @param fs file system
+ * @param rootdir root directory of HBase installation
+ * @param message if true, issues a message on System.out
+ *
+ * @throws IOException
+ */
+ public static void checkVersion(FileSystem fs, Path rootdir,
+ boolean message) throws IOException {
+ String version = getVersion(fs, rootdir);
+
+ if (version == null) {
+ if (!rootRegionExists(fs, rootdir)) {
+ // rootDir is empty (no version file and no root region)
+ // just create new version file (HBASE-1195)
+ FSUtils.setVersion(fs, rootdir);
+ return;
+ }
+ } else if (version.compareTo(HConstants.FILE_SYSTEM_VERSION) == 0)
+ return;
+
+ // Version is out of date; migration is required.
+ // Output on stdout so user sees it in terminal.
+ String msg = "File system needs to be upgraded. Run " +
+ "the '${HBASE_HOME}/bin/hbase migrate' script.";
+ if (message) {
+ System.out.println("WARNING! " + msg);
+ }
+ throw new FileSystemVersionException(msg);
+ }
+
+ /**
+ * Sets version of file system
+ *
+ * @param fs
+ * @param rootdir
+ * @throws IOException
+ */
+ public static void setVersion(FileSystem fs, Path rootdir)
+ throws IOException {
+ FSDataOutputStream s =
+ fs.create(new Path(rootdir, HConstants.VERSION_FILE_NAME));
+ s.writeUTF(HConstants.FILE_SYSTEM_VERSION);
+ s.close();
+ LOG.debug("Created version file to: " + rootdir.toString());
+ }
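+
+ // A sketch of the version-file round trip using getVersion/setVersion above;
+ // the configuration and root directory lookup mirror getRootDir below.
+ //
+ // HBaseConfiguration conf = new HBaseConfiguration();
+ // FileSystem fs = FileSystem.get(conf);
+ // Path rootdir = fs.makeQualified(new Path(conf.get(HConstants.HBASE_DIR)));
+ // if (FSUtils.getVersion(fs, rootdir) == null) {
+ // FSUtils.setVersion(fs, rootdir); // write hbase.version for a fresh layout
+ // }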
+
+ /**
+ * Verifies root directory path is a valid URI with a scheme
+ *
+ * @param root root directory path
+ * @throws IOException if not a valid URI with a scheme
+ */
+ public static void validateRootPath(Path root) throws IOException {
+ try {
+ URI rootURI = new URI(root.toString());
+ String scheme = rootURI.getScheme();
+ if (scheme == null) {
+ throw new IOException("Root directory does not contain a scheme");
+ }
+ } catch (URISyntaxException e) {
+ IOException io = new IOException("Root directory path is not a valid URI");
+ io.initCause(e);
+ throw io;
+ }
+ }
+
+ /**
+ * Return the 'path' component of a Path. In Hadoop, a Path is a URI. This
+ * method returns the 'path' component of a Path's URI: e.g. If a Path is
+ * <code>hdfs://example.org:9000/hbase_trunk/TestTable/compaction.dir</code>,
+ * this method returns <code>/hbase_trunk/TestTable/compaction.dir</code>.
+ * This method is useful if you want to print out a Path without the
+ * qualifying FileSystem instance.
+ * @param p Filesystem Path whose 'path' component we are to return.
+ * @return Path portion of the Filesystem
+ */
+ public static String getPath(Path p) {
+ return p.toUri().getPath();
+ }
+
+ /**
+ * @param c
+ * @return Path to hbase root directory: i.e. <code>hbase.rootdir</code> as a
+ * Path.
+ * @throws IOException
+ */
+ public static Path getRootDir(final HBaseConfiguration c) throws IOException {
+ FileSystem fs = FileSystem.get(c);
+ // Get root directory of HBase installation
+ Path rootdir = fs.makeQualified(new Path(c.get(HConstants.HBASE_DIR)));
+ if (!fs.exists(rootdir)) {
+ String message = "HBase root directory " + rootdir.toString() +
+ " does not exist.";
+ LOG.error(message);
+ throw new FileNotFoundException(message);
+ }
+ return rootdir;
+ }
+
+ /**
+ * Checks if root region exists
+ *
+ * @param fs file system
+ * @param rootdir root directory of HBase installation
+ * @return true if exists
+ * @throws IOException
+ */
+ public static boolean rootRegionExists(FileSystem fs, Path rootdir)
+ throws IOException {
+ Path rootRegionDir =
+ HRegion.getRegionDir(rootdir, HRegionInfo.ROOT_REGIONINFO);
+ return fs.exists(rootRegionDir);
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/util/FileSystemVersionException.java b/src/java/org/apache/hadoop/hbase/util/FileSystemVersionException.java
new file mode 100644
index 0000000..ce5f141
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/util/FileSystemVersionException.java
@@ -0,0 +1,39 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import java.io.IOException;
+
+/** Thrown when the file system needs to be upgraded */
+public class FileSystemVersionException extends IOException {
+ private static final long serialVersionUID = 1004053363L;
+
+ /** default constructor */
+ public FileSystemVersionException() {
+ super();
+ }
+
+ /** @param s message */
+ public FileSystemVersionException(String s) {
+ super(s);
+ }
+
+}
diff --git a/src/java/org/apache/hadoop/hbase/util/Hash.java b/src/java/org/apache/hadoop/hbase/util/Hash.java
new file mode 100644
index 0000000..d5a5e8a
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/util/Hash.java
@@ -0,0 +1,119 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import org.apache.hadoop.conf.Configuration;
+
+/**
+ * This class represents a common API for hashing functions.
+ */
+public abstract class Hash {
+ /** Constant to denote invalid hash type. */
+ public static final int INVALID_HASH = -1;
+ /** Constant to denote {@link JenkinsHash}. */
+ public static final int JENKINS_HASH = 0;
+ /** Constant to denote {@link MurmurHash}. */
+ public static final int MURMUR_HASH = 1;
+
+ /**
+ * This utility method converts String representation of hash function name
+ * to a symbolic constant. Currently two function types are supported,
+ * "jenkins" and "murmur".
+ * @param name hash function name
+ * @return one of the predefined constants
+ */
+ public static int parseHashType(String name) {
+ if ("jenkins".equalsIgnoreCase(name)) {
+ return JENKINS_HASH;
+ } else if ("murmur".equalsIgnoreCase(name)) {
+ return MURMUR_HASH;
+ } else {
+ return INVALID_HASH;
+ }
+ }
+
+ /**
+ * This utility method converts the name of the configured
+ * hash type to a symbolic constant.
+ * @param conf configuration
+ * @return one of the predefined constants
+ */
+ public static int getHashType(Configuration conf) {
+ String name = conf.get("hbase.hash.type", "murmur");
+ return parseHashType(name);
+ }
+
+ /**
+ * Get a singleton instance of hash function of a given type.
+ * @param type predefined hash type
+ * @return hash function instance, or null if type is invalid
+ */
+ public static Hash getInstance(int type) {
+ switch(type) {
+ case JENKINS_HASH:
+ return JenkinsHash.getInstance();
+ case MURMUR_HASH:
+ return MurmurHash.getInstance();
+ default:
+ return null;
+ }
+ }
+
+ /**
+ * Get a singleton instance of hash function of a type
+ * defined in the configuration.
+ * @param conf current configuration
+ * @return hash function instance for the configured type, or null if the type is invalid
+ */
+ public static Hash getInstance(Configuration conf) {
+ int type = getHashType(conf);
+ return getInstance(type);
+ }
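+
+ // Usage sketch: choose the hash implementation from configuration and hash a
+ // key. The configuration value set below is one of the two supported names;
+ // the key itself is illustrative.
+ //
+ // Configuration conf = new Configuration();
+ // conf.set("hbase.hash.type", "jenkins");
+ // Hash hasher = Hash.getInstance(conf); // null if the name is unrecognized
+ // int h = hasher.hash(Bytes.toBytes("row-key"));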
+
+ /**
+ * Calculate a hash using all bytes from the input argument, and
+ * a seed of -1.
+ * @param bytes input bytes
+ * @return hash value
+ */
+ public int hash(byte[] bytes) {
+ return hash(bytes, bytes.length, -1);
+ }
+
+ /**
+ * Calculate a hash using all bytes from the input argument,
+ * and a provided seed value.
+ * @param bytes input bytes
+ * @param initval seed value
+ * @return hash value
+ */
+ public int hash(byte[] bytes, int initval) {
+ return hash(bytes, bytes.length, initval);
+ }
+
+ /**
+ * Calculate a hash using bytes from 0 to <code>length</code>, and
+ * the provided seed value
+ * @param bytes input bytes
+ * @param length length of the valid bytes to consider
+ * @param initval seed value
+ * @return hash value
+ */
+ public abstract int hash(byte[] bytes, int length, int initval);
+}
diff --git a/src/java/org/apache/hadoop/hbase/util/InfoServer.java b/src/java/org/apache/hadoop/hbase/util/InfoServer.java
new file mode 100644
index 0000000..7ee264d
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/util/InfoServer.java
@@ -0,0 +1,236 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.net.URL;
+
+import javax.servlet.http.HttpServlet;
+
+import org.apache.hadoop.mapred.StatusHttpServer;
+import org.mortbay.http.HttpContext;
+import org.mortbay.http.SocketListener;
+import org.mortbay.http.handler.ResourceHandler;
+import org.mortbay.jetty.servlet.WebApplicationContext;
+
+/**
+ * Create a Jetty embedded server to answer http requests. The primary goal
+ * is to serve up status information for the server.
+ * There are three contexts:
+ * "/stacks/" -> points to stack trace
+ * "/static/" -> points to common static files (src/webapps/static)
+ * "/" -> the jsp server code from (src/webapps/<name>)
+ */
+public class InfoServer {
+ // Bulk of this class is copied from
+ // {@link org.apache.hadoop.mapred.StatusHttpServer}. StatusHttpServer
+ // is not amenable to subclassing. It keeps webAppContext inaccessible
+ // and will find webapps only in the jar the class StatusHttpServer was
+ // loaded from.
+ private org.mortbay.jetty.Server webServer;
+ private SocketListener listener;
+ private boolean findPort;
+ private WebApplicationContext webAppContext;
+
+ /**
+ * Create a status server on the given port.
+ * The jsp scripts are taken from src/webapps/<code>name</code>.
+ * @param name The name of the server
+ * @param bindAddress The address to bind to
+ * @param port The port to use on the server
+ * @param findPort whether the server should start at the given port and
+ * increment by 1 until it finds a free port.
+ * @throws IOException
+ */
+ public InfoServer(String name, String bindAddress, int port, boolean findPort)
+ throws IOException {
+ this.webServer = new org.mortbay.jetty.Server();
+ this.findPort = findPort;
+ this.listener = new SocketListener();
+ this.listener.setPort(port);
+ this.listener.setHost(bindAddress);
+ this.webServer.addListener(listener);
+
+ // Set up the context for "/static/*"
+ String appDir = getWebAppsPath();
+
+ // Set up the context for "/logs/" if "hadoop.log.dir" property is defined.
+ String logDir = System.getProperty("hbase.log.dir");
+ if (logDir != null) {
+ HttpContext logContext = new HttpContext();
+ logContext.setContextPath("/logs/*");
+ logContext.setResourceBase(logDir);
+ logContext.addHandler(new ResourceHandler());
+ webServer.addContext(logContext);
+ }
+
+ HttpContext staticContext = new HttpContext();
+ staticContext.setContextPath("/static/*");
+ staticContext.setResourceBase(appDir + "/static");
+ staticContext.addHandler(new ResourceHandler());
+ this.webServer.addContext(staticContext);
+
+ // set up the context for "/" jsp files
+ String webappDir = getWebAppDir(name);
+ this.webAppContext =
+ this.webServer.addWebApplication("/", webappDir);
+ if (name.equals("master")) {
+ // Put up the rest webapp.
+ this.webServer.addWebApplication("/api", getWebAppDir("rest"));
+ }
+ addServlet("stacks", "/stacks", StatusHttpServer.StackServlet.class);
+ addServlet("logLevel", "/logLevel", org.apache.hadoop.log.LogLevel.Servlet.class);
+ }
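+
+ // Typical startup sequence (a sketch; the server name, bind address, port,
+ // attribute key and value are illustrative, not fixed by this class):
+ //
+ // InfoServer infoServer = new InfoServer("regionserver", "0.0.0.0", 60030, true);
+ // infoServer.setAttribute("regionserver", regionServerInstance);
+ // infoServer.start();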
+
+ /**
+ * @param webappName Name of the webapp to find under src/webapps
+ * @return Path to the named webapp directory
+ * @throws IOException if the webapp cannot be found on the CLASSPATH
+ */
+ public static String getWebAppDir(final String webappName) throws IOException {
+ String webappDir = null;
+ try {
+ webappDir = getWebAppsPath("webapps" + File.separator + webappName);
+ } catch (FileNotFoundException e) {
+ // Retry. Resource may be inside jar on a windows machine.
+ webappDir = getWebAppsPath("webapps/" + webappName);
+ }
+ return webappDir;
+ }
+
+ /**
+ * Set a value in the webapp context. These values are available to the jsp
+ * pages as "application.getAttribute(name)".
+ * @param name The name of the attribute
+ * @param value The value of the attribute
+ */
+ public void setAttribute(String name, Object value) {
+ this.webAppContext.setAttribute(name, value);
+ }
+
+ /**
+ * Add a servlet in the server.
+ * @param name The name of the servlet (can be passed as null)
+ * @param pathSpec The path spec for the servlet
+ * @param servletClass The servlet class
+ */
+ public <T extends HttpServlet> void addServlet(String name, String pathSpec,
+ Class<T> servletClass) {
+ WebApplicationContext context = webAppContext;
+ try {
+ if (name == null) {
+ context.addServlet(pathSpec, servletClass.getName());
+ } else {
+ context.addServlet(name, pathSpec, servletClass.getName());
+ }
+ } catch (ClassNotFoundException ex) {
+ throw makeRuntimeException("Problem instantiating class", ex);
+ } catch (InstantiationException ex) {
+ throw makeRuntimeException("Problem instantiating class", ex);
+ } catch (IllegalAccessException ex) {
+ throw makeRuntimeException("Problem instantiating class", ex);
+ }
+ }
+
+ private static RuntimeException makeRuntimeException(String msg, Throwable cause) {
+ RuntimeException result = new RuntimeException(msg);
+ if (cause != null) {
+ result.initCause(cause);
+ }
+ return result;
+ }
+
+ /**
+ * Get the value in the webapp context.
+ * @param name The name of the attribute
+ * @return The value of the attribute
+ */
+ public Object getAttribute(String name) {
+ return this.webAppContext.getAttribute(name);
+ }
+
+ /**
+ * Get the pathname to the <code>webapps</code> files.
+ * @return the pathname as a URL
+ */
+ private static String getWebAppsPath() throws IOException {
+ return getWebAppsPath("webapps");
+ }
+
+ /**
+ * Get the pathname of the given <code>path</code> resource on the CLASSPATH.
+ * @param path Path to find.
+ * @return the pathname as a URL
+ */
+ private static String getWebAppsPath(final String path) throws IOException {
+ URL url = InfoServer.class.getClassLoader().getResource(path);
+ if (url == null)
+ throw new IOException("webapps not found in CLASSPATH: " + path);
+ return url.toString();
+ }
+
+ /**
+ * Get the port that the server is on
+ * @return the port
+ */
+ public int getPort() {
+ return this.listener.getPort();
+ }
+
+ /**
+ * Set the minimum and maximum number of listener threads.
+ */
+ public void setThreads(int min, int max) {
+ this.listener.setMinThreads(min);
+ this.listener.setMaxThreads(max);
+ }
+
+ /**
+ * Start the server. Does not wait for the server to start.
+ */
+ public void start() throws IOException {
+ try {
+ while (true) {
+ try {
+ this.webServer.start();
+ break;
+ } catch (org.mortbay.util.MultiException ex) {
+ // look for the multi exception containing a bind exception,
+ // in that case try the next port number.
+ boolean needNewPort = false;
+ for(int i=0; i < ex.size(); ++i) {
+ Exception sub = ex.getException(i);
+ if (sub instanceof java.net.BindException) {
+ needNewPort = true;
+ break;
+ }
+ }
+ if (!findPort || !needNewPort) {
+ throw ex;
+ }
+ this.listener.setPort(listener.getPort() + 1);
+ }
+ }
+ } catch (IOException ie) {
+ throw ie;
+ } catch (Exception e) {
+ IOException ie = new IOException("Problem starting http server");
+ ie.initCause(e);
+ throw ie;
+ }
+ }
+
+ /**
+ * Stop the server.
+ */
+ public void stop() throws InterruptedException {
+ this.webServer.stop();
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/util/JenkinsHash.java b/src/java/org/apache/hadoop/hbase/util/JenkinsHash.java
new file mode 100644
index 0000000..a34a9e8
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/util/JenkinsHash.java
@@ -0,0 +1,261 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import java.io.FileInputStream;
+import java.io.IOException;
+
+/**
+ * Produces 32-bit hash for hash table lookup.
+ *
+ * <pre>lookup3.c, by Bob Jenkins, May 2006, Public Domain.
+ *
+ * You can use this free for any purpose. It's in the public domain.
+ * It has no warranty.
+ * </pre>
+ *
+ * @see <a href="http://burtleburtle.net/bob/c/lookup3.c">lookup3.c</a>
+ * @see <a href="http://www.ddj.com/184410284">Hash Functions (and how this
+ * function compares to others such as CRC, MD?, etc</a>
+ * @see <a href="http://burtleburtle.net/bob/hash/doobs.html">Has update on the
+ * Dr. Dobbs Article</a>
+ */
+public class JenkinsHash extends Hash {
+ private static final long INT_MASK = 0x00000000ffffffffL;
+ private static final long BYTE_MASK = 0x00000000000000ffL;
+
+ private static JenkinsHash _instance = new JenkinsHash();
+
+ public static Hash getInstance() {
+ return _instance;
+ }
+
+ private static long rot(long val, int pos) {
+ return ((Integer.rotateLeft(
+ (int)(val & INT_MASK), pos)) & INT_MASK);
+ }
+
+ /**
+ * taken from hashlittle() -- hash a variable-length key into a 32-bit value
+ *
+ * @param key the key (the unaligned variable-length array of bytes)
+ * @param nbytes number of bytes to include in hash
+ * @param initval can be any integer value
+ * @return a 32-bit value. Every bit of the key affects every bit of the
+ * return value. Two keys differing by one or two bits will have totally
+ * different hash values.
+ *
+ * <p>The best hash table sizes are powers of 2. There is no need to do mod
+ * a prime (mod is sooo slow!). If you need less than 32 bits, use a bitmask.
+ * For example, if you need only 10 bits, do
+ * <code>h = (h & hashmask(10));</code>
+ * In which case, the hash table should have hashsize(10) elements.
+ *
+ * <p>If you are hashing n strings byte[][] k, do it like this:
+ * for (int i = 0, h = 0; i < n; ++i) h = hash( k[i], h);
+ *
+ * <p>By Bob Jenkins, 2006. bob_jenkins@burtleburtle.net. You may use this
+ * code any way you wish, private, educational, or commercial. It's free.
+ *
+ * <p>Use for hash table lookup, or anything where one collision in 2^^32 is
+ * acceptable. Do NOT use for cryptographic purposes.
+ */
+ @Override
+ @SuppressWarnings("fallthrough")
+ public int hash(byte[] key, int nbytes, int initval) {
+ int length = nbytes;
+ long a, b, c; // We use longs because we don't have unsigned ints
+ a = b = c = (0x00000000deadbeefL + length + initval) & INT_MASK;
+ int offset = 0;
+ for (; length > 12; offset += 12, length -= 12) {
+ a = (a + (key[offset + 0] & BYTE_MASK)) & INT_MASK;
+ a = (a + (((key[offset + 1] & BYTE_MASK) << 8) & INT_MASK)) & INT_MASK;
+ a = (a + (((key[offset + 2] & BYTE_MASK) << 16) & INT_MASK)) & INT_MASK;
+ a = (a + (((key[offset + 3] & BYTE_MASK) << 24) & INT_MASK)) & INT_MASK;
+ b = (b + (key[offset + 4] & BYTE_MASK)) & INT_MASK;
+ b = (b + (((key[offset + 5] & BYTE_MASK) << 8) & INT_MASK)) & INT_MASK;
+ b = (b + (((key[offset + 6] & BYTE_MASK) << 16) & INT_MASK)) & INT_MASK;
+ b = (b + (((key[offset + 7] & BYTE_MASK) << 24) & INT_MASK)) & INT_MASK;
+ c = (c + (key[offset + 8] & BYTE_MASK)) & INT_MASK;
+ c = (c + (((key[offset + 9] & BYTE_MASK) << 8) & INT_MASK)) & INT_MASK;
+ c = (c + (((key[offset + 10] & BYTE_MASK) << 16) & INT_MASK)) & INT_MASK;
+ c = (c + (((key[offset + 11] & BYTE_MASK) << 24) & INT_MASK)) & INT_MASK;
+
+ /*
+ * mix -- mix 3 32-bit values reversibly.
+ * This is reversible, so any information in (a,b,c) before mix() is
+ * still in (a,b,c) after mix().
+ *
+ * If four pairs of (a,b,c) inputs are run through mix(), or through
+ * mix() in reverse, there are at least 32 bits of the output that
+ * are sometimes the same for one pair and different for another pair.
+ *
+ * This was tested for:
+ * - pairs that differed by one bit, by two bits, in any combination
+ * of top bits of (a,b,c), or in any combination of bottom bits of
+ * (a,b,c).
+ * - "differ" is defined as +, -, ^, or ~^. For + and -, I transformed
+ * the output delta to a Gray code (a^(a>>1)) so a string of 1's (as
+ * is commonly produced by subtraction) look like a single 1-bit
+ * difference.
+ * - the base values were pseudorandom, all zero but one bit set, or
+ * all zero plus a counter that starts at zero.
+ *
+ * Some k values for my "a-=c; a^=rot(c,k); c+=b;" arrangement that
+ * satisfy this are
+ * 4 6 8 16 19 4
+ * 9 15 3 18 27 15
+ * 14 9 3 7 17 3
+ * Well, "9 15 3 18 27 15" didn't quite get 32 bits diffing for
+ * "differ" defined as + with a one-bit base and a two-bit delta. I
+ * used http://burtleburtle.net/bob/hash/avalanche.html to choose
+ * the operations, constants, and arrangements of the variables.
+ *
+ * This does not achieve avalanche. There are input bits of (a,b,c)
+ * that fail to affect some output bits of (a,b,c), especially of a.
+ * The most thoroughly mixed value is c, but it doesn't really even
+ * achieve avalanche in c.
+ *
+ * This allows some parallelism. Read-after-writes are good at doubling
+ * the number of bits affected, so the goal of mixing pulls in the
+ * opposite direction as the goal of parallelism. I did what I could.
+ * Rotates seem to cost as much as shifts on every machine I could lay
+ * my hands on, and rotates are much kinder to the top and bottom bits,
+ * so I used rotates.
+ *
+ * #define mix(a,b,c) \
+ * { \
+ * a -= c; a ^= rot(c, 4); c += b; \
+ * b -= a; b ^= rot(a, 6); a += c; \
+ * c -= b; c ^= rot(b, 8); b += a; \
+ * a -= c; a ^= rot(c,16); c += b; \
+ * b -= a; b ^= rot(a,19); a += c; \
+ * c -= b; c ^= rot(b, 4); b += a; \
+ * }
+ *
+ * mix(a,b,c);
+ */
+ a = (a - c) & INT_MASK; a ^= rot(c, 4); c = (c + b) & INT_MASK;
+ b = (b - a) & INT_MASK; b ^= rot(a, 6); a = (a + c) & INT_MASK;
+ c = (c - b) & INT_MASK; c ^= rot(b, 8); b = (b + a) & INT_MASK;
+ a = (a - c) & INT_MASK; a ^= rot(c,16); c = (c + b) & INT_MASK;
+ b = (b - a) & INT_MASK; b ^= rot(a,19); a = (a + c) & INT_MASK;
+ c = (c - b) & INT_MASK; c ^= rot(b, 4); b = (b + a) & INT_MASK;
+ }
+
+ //-------------------------------- last block: affect all 32 bits of (c)
+ switch (length) { // all the case statements fall through
+ case 12:
+ c = (c + (((key[offset + 11] & BYTE_MASK) << 24) & INT_MASK)) & INT_MASK;
+ case 11:
+ c = (c + (((key[offset + 10] & BYTE_MASK) << 16) & INT_MASK)) & INT_MASK;
+ case 10:
+ c = (c + (((key[offset + 9] & BYTE_MASK) << 8) & INT_MASK)) & INT_MASK;
+ case 9:
+ c = (c + (key[offset + 8] & BYTE_MASK)) & INT_MASK;
+ case 8:
+ b = (b + (((key[offset + 7] & BYTE_MASK) << 24) & INT_MASK)) & INT_MASK;
+ case 7:
+ b = (b + (((key[offset + 6] & BYTE_MASK) << 16) & INT_MASK)) & INT_MASK;
+ case 6:
+ b = (b + (((key[offset + 5] & BYTE_MASK) << 8) & INT_MASK)) & INT_MASK;
+ case 5:
+ b = (b + (key[offset + 4] & BYTE_MASK)) & INT_MASK;
+ case 4:
+ a = (a + (((key[offset + 3] & BYTE_MASK) << 24) & INT_MASK)) & INT_MASK;
+ case 3:
+ a = (a + (((key[offset + 2] & BYTE_MASK) << 16) & INT_MASK)) & INT_MASK;
+ case 2:
+ a = (a + (((key[offset + 1] & BYTE_MASK) << 8) & INT_MASK)) & INT_MASK;
+ case 1:
+ a = (a + (key[offset + 0] & BYTE_MASK)) & INT_MASK;
+ break;
+ case 0:
+ return (int)(c & INT_MASK);
+ }
+ /*
+ * final -- final mixing of 3 32-bit values (a,b,c) into c
+ *
+ * Pairs of (a,b,c) values differing in only a few bits will usually
+ * produce values of c that look totally different. This was tested for
+ * - pairs that differed by one bit, by two bits, in any combination
+ * of top bits of (a,b,c), or in any combination of bottom bits of
+ * (a,b,c).
+ *
+ * - "differ" is defined as +, -, ^, or ~^. For + and -, I transformed
+ * the output delta to a Gray code (a^(a>>1)) so a string of 1's (as
+ * is commonly produced by subtraction) look like a single 1-bit
+ * difference.
+ *
+ * - the base values were pseudorandom, all zero but one bit set, or
+ * all zero plus a counter that starts at zero.
+ *
+ * These constants passed:
+ * 14 11 25 16 4 14 24
+ * 12 14 25 16 4 14 24
+ * and these came close:
+ * 4 8 15 26 3 22 24
+ * 10 8 15 26 3 22 24
+ * 11 8 15 26 3 22 24
+ *
+ * #define final(a,b,c) \
+ * {
+ * c ^= b; c -= rot(b,14); \
+ * a ^= c; a -= rot(c,11); \
+ * b ^= a; b -= rot(a,25); \
+ * c ^= b; c -= rot(b,16); \
+ * a ^= c; a -= rot(c,4); \
+ * b ^= a; b -= rot(a,14); \
+ * c ^= b; c -= rot(b,24); \
+ * }
+ *
+ */
+ c ^= b; c = (c - rot(b,14)) & INT_MASK;
+ a ^= c; a = (a - rot(c,11)) & INT_MASK;
+ b ^= a; b = (b - rot(a,25)) & INT_MASK;
+ c ^= b; c = (c - rot(b,16)) & INT_MASK;
+ a ^= c; a = (a - rot(c,4)) & INT_MASK;
+ b ^= a; b = (b - rot(a,14)) & INT_MASK;
+ c ^= b; c = (c - rot(b,24)) & INT_MASK;
+
+ return (int)(c & INT_MASK);
+ }
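+
+ // Usage sketch following the advice in the javadoc above: hash a key with a
+ // -1 seed, then mask down to an index in a power-of-two sized table.
+ //
+ // byte [] key = Bytes.toBytes("row-key");
+ // int h = JenkinsHash.getInstance().hash(key, key.length, -1);
+ // int bucket = h & ((1 << 10) - 1); // hashmask(10): keep the low 10 bits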
+
+ /**
+ * Compute the hash of the specified file
+ * @param args name of file to compute hash of.
+ * @throws IOException
+ */
+ public static void main(String[] args) throws IOException {
+ if (args.length != 1) {
+ System.err.println("Usage: JenkinsHash filename");
+ System.exit(-1);
+ }
+ FileInputStream in = new FileInputStream(args[0]);
+ byte[] bytes = new byte[512];
+ int value = 0;
+ JenkinsHash hash = new JenkinsHash();
+ for (int length = in.read(bytes); length > 0 ; length = in.read(bytes)) {
+ value = hash.hash(bytes, length, value);
+ }
+ System.out.println(Math.abs(value));
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/util/Keying.java b/src/java/org/apache/hadoop/hbase/util/Keying.java
new file mode 100644
index 0000000..49ed739
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/util/Keying.java
@@ -0,0 +1,115 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import java.util.StringTokenizer;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+/**
+ * Utility creating hbase friendly keys.
+ * Use when fabricating row names or column qualifiers.
+ * <p>TODO: Add createSchemeless key, a key that doesn't care if scheme is
+ * http or https.
+ * @see Bytes#split(byte[], byte[], int)
+ */
+public class Keying {
+ private static final String SCHEME = "r:";
+ private static final Pattern URI_RE_PARSER =
+ Pattern.compile("^([^:/?#]+://(?:[^/?#@]+@)?)([^:/?#]+)(.*)$");
+
+ /**
+ * Makes a key out of passed URI for use as row name or column qualifier.
+ *
+ * This method runs transforms on the passed URI so it sits better
+ * as a key (or portion-of-a-key) in hbase. The <code>host</code> portion of
+ * the URI authority is reversed so subdomains sort under their parent
+ * domain. The returned String is an opaque URI of an artificial
+ * <code>r:</code> scheme to prevent the result being considered a URI of
+ * the original scheme. Here is an example of the transform: the URL
+ * <code>http://lucene.apache.org/index.html?query=something#middle</code> is
+ * returned as
+ * <code>r:http://org.apache.lucene/index.html?query=something#middle</code>
+ * The transforms are reversible. No transform is done if passed URI is
+ * not hierarchical.
+ *
+ * <p>If an authority <code>userinfo</code> component is present, it will mess
+ * up the sort (until we do more work).</p>
+ *
+ * @param u URL to transform.
+ * @return An opaque URI of artificial 'r' scheme with host portion of URI
+ * authority reversed (if present).
+ * @see #keyToUri(String)
+ * @see <a href="http://www.ietf.org/rfc/rfc2396.txt">RFC2396</a>
+ */
+ public static String createKey(final String u) {
+ if (u.startsWith(SCHEME)) {
+ throw new IllegalArgumentException("Starts with " + SCHEME);
+ }
+ Matcher m = getMatcher(u);
+ if (m == null || !m.matches()) {
+ // If no match, return original String.
+ return u;
+ }
+ return SCHEME + m.group(1) + reverseHostname(m.group(2)) + m.group(3);
+ }
+
+ /**
+ * Reverse the {@link #createKey(String)} transform.
+ *
+ * @param s <code>URI</code> made by {@link #createKey(String)}.
+ * @return 'Restored' URI made by reversing the {@link #createKey(String)}
+ * transform.
+ */
+ public static String keyToUri(final String s) {
+ if (!s.startsWith(SCHEME)) {
+ return s;
+ }
+ Matcher m = getMatcher(s.substring(SCHEME.length()));
+ if (m == null || !m.matches()) {
+ // If no match, return original String.
+ return s;
+ }
+ return m.group(1) + reverseHostname(m.group(2)) + m.group(3);
+ }
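+
+ // Round-trip sketch using the example URL from the createKey javadoc:
+ //
+ // String key = Keying.createKey(
+ // "http://lucene.apache.org/index.html?query=something#middle");
+ // // key == "r:http://org.apache.lucene/index.html?query=something#middle"
+ // String url = Keying.keyToUri(key); // restores the original URL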
+
+ private static Matcher getMatcher(final String u) {
+ if (u == null || u.length() <= 0) {
+ return null;
+ }
+ return URI_RE_PARSER.matcher(u);
+ }
+
+ private static String reverseHostname(final String hostname) {
+ if (hostname == null) {
+ return "";
+ }
+ StringBuilder sb = new StringBuilder(hostname.length());
+ for (StringTokenizer st = new StringTokenizer(hostname, ".", false);
+ st.hasMoreElements();) {
+ Object next = st.nextElement();
+ if (sb.length() > 0) {
+ sb.insert(0, ".");
+ }
+ sb.insert(0, next);
+ }
+ return sb.toString();
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/util/Merge.java b/src/java/org/apache/hadoop/hbase/util/Merge.java
new file mode 100644
index 0000000..7abb353
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/util/Merge.java
@@ -0,0 +1,377 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.MasterNotRunningException;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.regionserver.HLog;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.io.WritableComparator;
+import org.apache.hadoop.util.GenericOptionsParser;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+/**
+ * Utility that can merge any two regions in the same table: adjacent,
+ * overlapping or disjoint.
+ */
+public class Merge extends Configured implements Tool {
+ static final Log LOG = LogFactory.getLog(Merge.class);
+ private final HBaseConfiguration conf;
+ private Path rootdir;
+ private volatile MetaUtils utils;
+ private byte [] tableName; // Name of table
+ private volatile byte [] region1; // Name of region 1
+ private volatile byte [] region2; // Name of region 2
+ private volatile boolean isMetaTable;
+ private volatile HRegionInfo mergeInfo;
+
+ /** default constructor */
+ public Merge() {
+ this(new HBaseConfiguration());
+ }
+
+ /**
+ * @param conf
+ */
+ public Merge(HBaseConfiguration conf) {
+ super(conf);
+ this.conf = conf;
+ this.mergeInfo = null;
+ }
+
+ public int run(String[] args) throws Exception {
+ if (parseArgs(args) != 0) {
+ return -1;
+ }
+
+ // Verify file system is up.
+ FileSystem fs = FileSystem.get(this.conf); // get DFS handle
+ LOG.info("Verifying that file system is available...");
+ try {
+ FSUtils.checkFileSystemAvailable(fs);
+ } catch (IOException e) {
+ LOG.fatal("File system is not available", e);
+ return -1;
+ }
+
+ // Verify HBase is down
+ LOG.info("Verifying that HBase is not running...");
+ try {
+ HBaseAdmin.checkHBaseAvailable(conf);
+ LOG.fatal("HBase cluster must be off-line.");
+ return -1;
+ } catch (MasterNotRunningException e) {
+ // Expected. Ignore.
+ }
+
+ // Initialize MetaUtils and get the root of the HBase installation
+
+ this.utils = new MetaUtils(conf);
+ this.rootdir = FSUtils.getRootDir(this.conf);
+ try {
+ if (isMetaTable) {
+ mergeTwoMetaRegions();
+ } else {
+ mergeTwoRegions();
+ }
+ return 0;
+ } catch (Exception e) {
+ LOG.fatal("Merge failed", e);
+ utils.scanMetaRegion(HRegionInfo.FIRST_META_REGIONINFO,
+ new MetaUtils.ScannerListener() {
+ public boolean processRow(HRegionInfo info) {
+ System.err.println(info.toString());
+ return true;
+ }
+ }
+ );
+
+ return -1;
+
+ } finally {
+ if (this.utils != null) {
+ this.utils.shutdown();
+ }
+ }
+ }
+
+ /** @return HRegionInfo for merge result */
+ HRegionInfo getMergedHRegionInfo() {
+ return this.mergeInfo;
+ }
+
+ /*
+ * Merge two meta regions. This is unlikely to be needed soon as we have only
+ * seen the meta table split once and that was with 64MB regions. With 256MB
+ * regions, it will be some time before someone has enough data in HBase to
+ * split the meta region and even less likely that a merge of two meta
+ * regions will be needed, but it is included for completeness.
+ */
+ private void mergeTwoMetaRegions() throws IOException {
+ HRegion rootRegion = utils.getRootRegion();
+ List<KeyValue> cells1 =
+ rootRegion.get(region1, HConstants.COL_REGIONINFO, -1, -1);
+ HRegionInfo info1 = Writables.getHRegionInfo((cells1 == null)? null: cells1.get(0).getValue());
+ List<KeyValue> cells2 =
+ rootRegion.get(region2, HConstants.COL_REGIONINFO, -1, -1);
+ HRegionInfo info2 = Writables.getHRegionInfo((cells2 == null)? null: cells2.get(0).getValue());
+ HRegion merged = merge(info1, rootRegion, info2, rootRegion);
+ LOG.info("Adding " + merged.getRegionInfo() + " to " +
+ rootRegion.getRegionInfo());
+ HRegion.addRegionToMETA(rootRegion, merged);
+ merged.close();
+ }
+
+ private static class MetaScannerListener
+ implements MetaUtils.ScannerListener {
+ private final byte [] region1;
+ private final byte [] region2;
+ private HRegionInfo meta1 = null;
+ private HRegionInfo meta2 = null;
+
+ MetaScannerListener(final byte [] region1, final byte [] region2) {
+ this.region1 = region1;
+ this.region2 = region2;
+ }
+
+ public boolean processRow(HRegionInfo info) {
+ if (meta1 == null && HRegion.rowIsInRange(info, region1)) {
+ meta1 = info;
+ }
+ if (region2 != null && meta2 == null &&
+ HRegion.rowIsInRange(info, region2)) {
+ meta2 = info;
+ }
+ return meta1 == null || (region2 != null && meta2 == null);
+ }
+
+ HRegionInfo getMeta1() {
+ return meta1;
+ }
+
+ HRegionInfo getMeta2() {
+ return meta2;
+ }
+ }
+
+ /*
+ * Merges two regions from a user table.
+ */
+ private void mergeTwoRegions() throws IOException {
+ LOG.info("Merging regions " + Bytes.toString(this.region1) + " and " +
+ Bytes.toString(this.region2) + " in table " + Bytes.toString(this.tableName));
+ // Scan the root region for all the meta regions that contain the regions
+ // we're merging.
+ MetaScannerListener listener = new MetaScannerListener(region1, region2);
+ this.utils.scanRootRegion(listener);
+ HRegionInfo meta1 = listener.getMeta1();
+ if (meta1 == null) {
+ throw new IOException("Could not find meta region for " + Bytes.toString(region1));
+ }
+ HRegionInfo meta2 = listener.getMeta2();
+ if (meta2 == null) {
+ throw new IOException("Could not find meta region for " + Bytes.toString(region2));
+ }
+ LOG.info("Found meta for region1 " + Bytes.toString(meta1.getRegionName()) +
+ ", meta for region2 " + Bytes.toString(meta2.getRegionName()));
+ HRegion metaRegion1 = this.utils.getMetaRegion(meta1);
+ List<KeyValue> cells1 = metaRegion1.get(region1, HConstants.COL_REGIONINFO, -1, -1);
+ HRegionInfo info1 = Writables.getHRegionInfo((cells1 == null)? null: cells1.get(0).getValue());
+ if (info1 == null) {
+ throw new NullPointerException("info1 is null using key " +
+ Bytes.toString(region1) + " in " + meta1);
+ }
+
+ HRegion metaRegion2 = null;
+ if (Bytes.equals(meta1.getRegionName(), meta2.getRegionName())) {
+ metaRegion2 = metaRegion1;
+ } else {
+ metaRegion2 = utils.getMetaRegion(meta2);
+ }
+ List<KeyValue> cells2 = metaRegion2.get(region2, HConstants.COL_REGIONINFO, -1, -1);
+ HRegionInfo info2 = Writables.getHRegionInfo((cells2 == null)? null: cells2.get(0).getValue());
+ if (info2 == null) {
+ throw new NullPointerException("info2 is null using key " + meta2);
+ }
+ HRegion merged = merge(info1, metaRegion1, info2, metaRegion2);
+
+ // Now find the meta region which will contain the newly merged region
+
+ listener = new MetaScannerListener(merged.getRegionName(), null);
+ utils.scanRootRegion(listener);
+ HRegionInfo mergedInfo = listener.getMeta1();
+ if (mergedInfo == null) {
+ throw new IOException("Could not find meta region for " +
+ Bytes.toString(merged.getRegionName()));
+ }
+ HRegion mergeMeta = null;
+ if (Bytes.equals(mergedInfo.getRegionName(), meta1.getRegionName())) {
+ mergeMeta = metaRegion1;
+ } else if (Bytes.equals(mergedInfo.getRegionName(), meta2.getRegionName())) {
+ mergeMeta = metaRegion2;
+ } else {
+ mergeMeta = utils.getMetaRegion(mergedInfo);
+ }
+ LOG.info("Adding " + merged.getRegionInfo() + " to " +
+ mergeMeta.getRegionInfo());
+
+ HRegion.addRegionToMETA(mergeMeta, merged);
+ merged.close();
+ }
+
+ /*
+ * Actually merge two regions and update their info in the meta region(s)
+ * If the meta is split, meta1 may be different from meta2. (and we may have
+ * to scan the meta if the resulting merged region does not go in either)
+ * Returns HRegion object for newly merged region
+ */
+ private HRegion merge(HRegionInfo info1, HRegion meta1, HRegionInfo info2,
+ HRegion meta2)
+ throws IOException {
+ if (info1 == null) {
+ throw new IOException("Could not find " + Bytes.toString(region1) + " in " +
+ Bytes.toString(meta1.getRegionName()));
+ }
+ if (info2 == null) {
+ throw new IOException("Cound not find " + Bytes.toString(region2) + " in " +
+ Bytes.toString(meta2.getRegionName()));
+ }
+ HRegion merged = null;
+ HLog log = utils.getLog();
+ HRegion r1 = HRegion.openHRegion(info1, this.rootdir, log, this.conf);
+ try {
+ HRegion r2 = HRegion.openHRegion(info2, this.rootdir, log, this.conf);
+ try {
+ merged = HRegion.merge(r1, r2);
+ } finally {
+ if (!r2.isClosed()) {
+ r2.close();
+ }
+ }
+ } finally {
+ if (!r1.isClosed()) {
+ r1.close();
+ }
+ }
+
+ // Remove the old regions from meta.
+ // HRegion.merge has already deleted their files
+
+ removeRegionFromMeta(meta1, info1);
+ removeRegionFromMeta(meta2, info2);
+
+ this.mergeInfo = merged.getRegionInfo();
+ return merged;
+ }
+
+ /*
+ * Removes a region's meta information from the passed <code>meta</code>
+ * region.
+ *
+ * @param meta META HRegion to be updated
+ * @param regioninfo HRegionInfo of region to remove from <code>meta</code>
+ *
+ * @throws IOException
+ */
+ private void removeRegionFromMeta(HRegion meta, HRegionInfo regioninfo)
+ throws IOException {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Removing region: " + regioninfo + " from " + meta);
+ }
+ meta.deleteAll(regioninfo.getRegionName(), System.currentTimeMillis(), null);
+ }
+
+ /*
+ * Parses the command-line arguments passed after the generic Hadoop options:
+ * <table-name> <region-1> <region-2>.
+ *
+ * @param args command-line arguments
+ * @return 0 on success, -1 if the arguments are invalid
+ */
+ private int parseArgs(String[] args) {
+ GenericOptionsParser parser =
+ new GenericOptionsParser(this.getConf(), args);
+
+ String[] remainingArgs = parser.getRemainingArgs();
+ if (remainingArgs.length != 3) {
+ usage();
+ return -1;
+ }
+ tableName = Bytes.toBytes(remainingArgs[0]);
+ isMetaTable = Bytes.compareTo(tableName, HConstants.META_TABLE_NAME) == 0;
+
+ region1 = Bytes.toBytes(remainingArgs[1]);
+ region2 = Bytes.toBytes(remainingArgs[2]);
+ int status = 0;
+ if (notInTable(tableName, region1) || notInTable(tableName, region2)) {
+ status = -1;
+ } else if (Bytes.equals(region1, region2)) {
+ LOG.error("Can't merge a region with itself");
+ status = -1;
+ }
+ return status;
+ }
+
+ private boolean notInTable(final byte [] tn, final byte [] rn) {
+ if (WritableComparator.compareBytes(tn, 0, tn.length, rn, 0, tn.length) != 0) {
+ LOG.error("Region " + Bytes.toString(rn) + " does not belong to table " +
+ Bytes.toString(tn));
+ return true;
+ }
+ return false;
+ }
+
+ private void usage() {
+ System.err.println(
+ "Usage: bin/hbase merge <table-name> <region-1> <region-2>\n");
+ }
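+
+ // Programmatic equivalent of the "bin/hbase merge" command line above
+ // (a sketch; the table and region names are illustrative and the cluster
+ // must be off-line):
+ //
+ // String [] mergeArgs = { "TestTable", "TestTable,,1240952647849",
+ // "TestTable,row5000,1240952735245" };
+ // int exitCode = ToolRunner.run(new Merge(new HBaseConfiguration()), mergeArgs);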
+
+ /**
+ * Main program
+ *
+ * @param args
+ */
+ public static void main(String[] args) {
+ int status = 0;
+ try {
+ status = ToolRunner.run(new Merge(), args);
+ } catch (Exception e) {
+ LOG.error("exiting due to error", e);
+ status = -1;
+ }
+ System.exit(status);
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/util/MetaUtils.java b/src/java/org/apache/hadoop/hbase/util/MetaUtils.java
new file mode 100644
index 0000000..2fba461
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/util/MetaUtils.java
@@ -0,0 +1,462 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.regionserver.HLog;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.InternalScanner;
+import org.apache.hadoop.hbase.regionserver.Store;
+
+/**
+ * Contains utility methods for manipulating HBase meta tables.
+ * Be sure to call {@link #shutdown()} when done with this class so it closes
+ * resources opened during meta processing (ROOT, META, etc.). Use this class
+ * with care; if used during migrations, first check whether a migration is
+ * actually needed.
+ */
+public class MetaUtils {
+ private static final Log LOG = LogFactory.getLog(MetaUtils.class);
+ private final HBaseConfiguration conf;
+ private FileSystem fs;
+ private Path rootdir;
+ private HLog log;
+ private HRegion rootRegion;
+ private Map<byte [], HRegion> metaRegions = Collections.synchronizedSortedMap(
+ new TreeMap<byte [], HRegion>(Bytes.BYTES_COMPARATOR));
+
+ /** Default constructor
+ * @throws IOException */
+ public MetaUtils() throws IOException {
+ this(new HBaseConfiguration());
+ }
+
+ /** @param conf HBaseConfiguration
+ * @throws IOException */
+ public MetaUtils(HBaseConfiguration conf) throws IOException {
+ this.conf = conf;
+ conf.setInt("hbase.client.retries.number", 1);
+ this.rootRegion = null;
+ initialize();
+ }
+
+ /**
+ * Verifies that DFS is available and that HBase is off-line.
+ * @throws IOException
+ */
+ private void initialize() throws IOException {
+ this.fs = FileSystem.get(this.conf);
+ // Get root directory of HBase installation
+ this.rootdir = FSUtils.getRootDir(this.conf);
+ }
+
+ /** @return the HLog
+ * @throws IOException */
+ public synchronized HLog getLog() throws IOException {
+ if (this.log == null) {
+ Path logdir = new Path(this.fs.getHomeDirectory(),
+ HConstants.HREGION_LOGDIR_NAME + "_" + System.currentTimeMillis());
+ this.log = new HLog(this.fs, logdir, this.conf, null);
+ }
+ return this.log;
+ }
+
+ /**
+ * @return HRegion for root region
+ * @throws IOException
+ */
+ public HRegion getRootRegion() throws IOException {
+ if (this.rootRegion == null) {
+ openRootRegion();
+ }
+ return this.rootRegion;
+ }
+
+ /**
+ * Open or return cached opened meta region
+ *
+ * @param metaInfo HRegionInfo for meta region
+ * @return meta HRegion
+ * @throws IOException
+ */
+ public HRegion getMetaRegion(HRegionInfo metaInfo) throws IOException {
+ HRegion meta = metaRegions.get(metaInfo.getRegionName());
+ if (meta == null) {
+ meta = openMetaRegion(metaInfo);
+ this.metaRegions.put(metaInfo.getRegionName(), meta);
+ }
+ return meta;
+ }
+
+ /**
+ * Closes catalog regions if open. Also closes and deletes the HLog. You
+ * must call this method if you want to persist changes made during a
+ * MetaUtils edit session.
+ */
+ public void shutdown() {
+ if (this.rootRegion != null) {
+ try {
+ this.rootRegion.close();
+ } catch (IOException e) {
+ LOG.error("closing root region", e);
+ } finally {
+ this.rootRegion = null;
+ }
+ }
+ try {
+ for (HRegion r: metaRegions.values()) {
+ r.close();
+ }
+ } catch (IOException e) {
+ LOG.error("closing meta region", e);
+ } finally {
+ metaRegions.clear();
+ }
+ try {
+ if (this.log != null) {
+ this.log.rollWriter();
+ this.log.closeAndDelete();
+ }
+ } catch (IOException e) {
+ LOG.error("closing HLog", e);
+ } finally {
+ this.log = null;
+ }
+ }
+
+ /**
+ * Used by scanRootRegion and scanMetaRegion to call back the caller so it
+ * can process the data for a row.
+ */
+ public interface ScannerListener {
+ /**
+ * Callback so client of scanner can process row contents
+ *
+ * @param info HRegionInfo for row
+ * @return false to terminate the scan
+ * @throws IOException
+ */
+ public boolean processRow(HRegionInfo info) throws IOException;
+ }
+
+ /**
+ * Scans the root region. For every meta region found, calls the listener with
+ * the HRegionInfo of the meta region.
+ *
+ * @param listener method to be called for each meta region found
+ * @throws IOException
+ */
+ public void scanRootRegion(ScannerListener listener) throws IOException {
+ // Open root region so we can scan it
+ if (this.rootRegion == null) {
+ openRootRegion();
+ }
+
+ InternalScanner rootScanner = rootRegion.getScanner(
+ HConstants.COL_REGIONINFO_ARRAY, HConstants.EMPTY_START_ROW,
+ HConstants.LATEST_TIMESTAMP, null);
+
+ try {
+ List<KeyValue> results = new ArrayList<KeyValue>();
+ while (rootScanner.next(results)) {
+ HRegionInfo info = null;
+ for (KeyValue kv: results) {
+ info = Writables.getHRegionInfoOrNull(kv.getValue());
+ if (info == null) {
+ LOG.warn("region info is null for row " +
+ Bytes.toString(kv.getRow()) + " in table " +
+ HConstants.ROOT_TABLE_NAME);
+ continue;
+ }
+ }
+ if (!listener.processRow(info)) {
+ break;
+ }
+ results.clear();
+ }
+ } finally {
+ rootScanner.close();
+ }
+ }
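+
+ // Usage sketch for scanRootRegion above: print every meta region known to
+ // -ROOT-. The configuration instance is illustrative; shutdown() releases
+ // the regions and log opened along the way.
+ //
+ // MetaUtils utils = new MetaUtils(new HBaseConfiguration());
+ // try {
+ // utils.scanRootRegion(new MetaUtils.ScannerListener() {
+ // public boolean processRow(HRegionInfo info) {
+ // System.out.println(Bytes.toString(info.getRegionName()));
+ // return true; // keep scanning
+ // }
+ // });
+ // } finally {
+ // utils.shutdown();
+ // }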
+
+ /**
+ * Scans a meta region. For every region found, calls the listener with
+ * the HRegionInfo of the region.
+ * TODO: Use Visitor rather than Listener pattern. Allow multiple Visitors.
+ * Use this everywhere we scan meta regions: e.g. in metascanners, in close
+ * handling, etc. Have it pass in the whole row, not just HRegionInfo.
+ *
+ * @param metaRegionInfo HRegionInfo for meta region
+ * @param listener method to be called for each meta region found
+ * @throws IOException
+ */
+ public void scanMetaRegion(HRegionInfo metaRegionInfo,
+ ScannerListener listener)
+ throws IOException {
+ // Open meta region so we can scan it
+ HRegion metaRegion = openMetaRegion(metaRegionInfo);
+ scanMetaRegion(metaRegion, listener);
+ }
+
+ /**
+ * Scan the passed in metaregion <code>m</code> invoking the passed
+ * <code>listener</code> per row found.
+ * @param m
+ * @param listener
+ * @throws IOException
+ */
+ public void scanMetaRegion(final HRegion m, final ScannerListener listener)
+ throws IOException {
+ InternalScanner metaScanner = m.getScanner(HConstants.COL_REGIONINFO_ARRAY,
+ HConstants.EMPTY_START_ROW, HConstants.LATEST_TIMESTAMP, null);
+ try {
+ List<KeyValue> results = new ArrayList<KeyValue>();
+ while (metaScanner.next(results)) {
+ HRegionInfo info = null;
+ for (KeyValue kv: results) {
+ if (KeyValue.META_COMPARATOR.compareColumns(kv,
+ HConstants.COL_REGIONINFO, 0, HConstants.COL_REGIONINFO.length,
+ HConstants.COLUMN_FAMILY_STR.length()) == 0) {
+ info = Writables.getHRegionInfoOrNull(kv.getValue());
+ if (info == null) {
+ LOG.warn("region info is null for row " +
+ Bytes.toString(kv.getRow()) +
+ " in table " + HConstants.META_TABLE_NAME);
+ }
+ break;
+ }
+ }
+ if (!listener.processRow(info)) {
+ break;
+ }
+ results.clear();
+ }
+ } finally {
+ metaScanner.close();
+ }
+ }
+
+ private synchronized HRegion openRootRegion() throws IOException {
+ if (this.rootRegion != null) {
+ return this.rootRegion;
+ }
+ this.rootRegion = HRegion.openHRegion(HRegionInfo.ROOT_REGIONINFO,
+ this.rootdir, getLog(), this.conf);
+ this.rootRegion.compactStores();
+ return this.rootRegion;
+ }
+
+ private HRegion openMetaRegion(HRegionInfo metaInfo) throws IOException {
+ HRegion meta =
+ HRegion.openHRegion(metaInfo, this.rootdir, getLog(), this.conf);
+ meta.compactStores();
+ return meta;
+ }
+
+ /**
+ * Set a single region on/offline.
+ * This is a tool to repair tables that have offlined regions in their
+ * midst (this can happen on occasion). Use at your own risk. Call from a bit of java
+ * or jython script. This method is 'expensive' in that it creates a
+ * {@link HTable} instance per invocation to go against <code>.META.</code>
+ * @param c A configuration that has its <code>hbase.master</code>
+ * properly set.
+ * @param row Row in the catalog .META. table whose HRegionInfo's offline
+ * status we want to change.
+ * @param onlineOffline Pass <code>true</code> to OFFLINE the region.
+ * @throws IOException
+ */
+ public static void changeOnlineStatus (final HBaseConfiguration c,
+ final byte [] row, final boolean onlineOffline)
+ throws IOException {
+ HTable t = new HTable(c, HConstants.META_TABLE_NAME);
+ Cell cell = t.get(row, HConstants.COL_REGIONINFO);
+ if (cell == null) {
+ throw new IOException("no information for row " + Bytes.toString(row));
+ }
+ // Throws exception if null.
+ HRegionInfo info = Writables.getHRegionInfo(cell);
+ BatchUpdate b = new BatchUpdate(row);
+ info.setOffline(onlineOffline);
+ b.put(HConstants.COL_REGIONINFO, Writables.getBytes(info));
+ b.delete(HConstants.COL_SERVER);
+ b.delete(HConstants.COL_STARTCODE);
+ t.commit(b);
+ }
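+
+ // Hedged example (the catalog row name below is made up): to clear the
+ // offline flag on a region of table 'mytable' from a bit of java or jython:
+ //
+ //   MetaUtils.changeOnlineStatus(new HBaseConfiguration(),
+ //     Bytes.toBytes("mytable,,1239412160393"), false); // false == set online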
+
+ /**
+ * Offline version of the online TableOperation,
+ * org.apache.hadoop.hbase.master.AddColumn.
+ * @param tableName
+ * @param hcd Add this column to <code>tableName</code>
+ * @throws IOException
+ */
+ public void addColumn(final byte [] tableName,
+ final HColumnDescriptor hcd)
+ throws IOException {
+ List<HRegionInfo> metas = getMETARows(tableName);
+ for (HRegionInfo hri: metas) {
+ final HRegion m = getMetaRegion(hri);
+ scanMetaRegion(m, new ScannerListener() {
+ private boolean inTable = true;
+
+ @SuppressWarnings("synthetic-access")
+ public boolean processRow(HRegionInfo info) throws IOException {
+ LOG.debug("Testing " + Bytes.toString(tableName) + " against " +
+ Bytes.toString(info.getTableDesc().getName()));
+ if (Bytes.equals(info.getTableDesc().getName(), tableName)) {
+ this.inTable = false;
+ info.getTableDesc().addFamily(hcd);
+ updateMETARegionInfo(m, info);
+ return true;
+ }
+ // If we have not yet encountered the target table, inTable is still
+ // true and we keep scanning. Once we have seen the table and moved past
+ // its rows, inTable is false and we stop the scanner.
+ return this.inTable;
+ }});
+ }
+ }
+
+ /**
+ * Offline version of the online TableOperation,
+ * org.apache.hadoop.hbase.master.DeleteColumn.
+ * @param tableName
+ * @param columnFamily Name of the column family to remove.
+ * @throws IOException
+ */
+ public void deleteColumn(final byte [] tableName,
+ final byte [] columnFamily) throws IOException {
+ List<HRegionInfo> metas = getMETARows(tableName);
+ for (HRegionInfo hri: metas) {
+ final HRegion m = getMetaRegion(hri);
+ scanMetaRegion(m, new ScannerListener() {
+ private boolean inTable = true;
+
+ @SuppressWarnings("synthetic-access")
+ public boolean processRow(HRegionInfo info) throws IOException {
+ if (Bytes.equals(info.getTableDesc().getName(), tableName)) {
+ this.inTable = false;
+ info.getTableDesc().removeFamily(columnFamily);
+ updateMETARegionInfo(m, info);
+ Path tabledir = new Path(rootdir,
+ info.getTableDesc().getNameAsString());
+ Path p = Store.getStoreHomedir(tabledir, info.getEncodedName(),
+ columnFamily);
+ if (!fs.delete(p, true)) {
+ LOG.warn("Failed delete of " + p);
+ }
+ return false;
+ }
+ // If we have not yet encountered the target table, inTable is still
+ // true and we keep scanning. Once we have seen the table and moved past
+ // its rows, inTable is false and we stop the scanner.
+ return this.inTable;
+ }});
+ }
+ }
+
+ /**
+ * Update COL_REGIONINFO in meta region r with HRegionInfo hri
+ *
+ * @param r
+ * @param hri
+ * @throws IOException
+ */
+ public void updateMETARegionInfo(HRegion r, final HRegionInfo hri)
+ throws IOException {
+ if (LOG.isDebugEnabled()) {
+ HRegionInfo h = Writables.getHRegionInfoOrNull(
+ r.get(hri.getRegionName(), HConstants.COL_REGIONINFO, -1, -1).get(0).getValue());
+ LOG.debug("Old " + Bytes.toString(HConstants.COL_REGIONINFO) +
+ " for " + hri.toString() + " in " + r.toString() + " is: " +
+ h.toString());
+ }
+ BatchUpdate b = new BatchUpdate(hri.getRegionName());
+ b.put(HConstants.COL_REGIONINFO, Writables.getBytes(hri));
+ r.batchUpdate(b, null);
+ if (LOG.isDebugEnabled()) {
+ HRegionInfo h = Writables.getHRegionInfoOrNull(
+ r.get(hri.getRegionName(), HConstants.COL_REGIONINFO, -1, -1).get(0).getValue());
+ LOG.debug("New " + Bytes.toString(HConstants.COL_REGIONINFO) +
+ " for " + hri.toString() + " in " + r.toString() + " is: " +
+ h.toString());
+ }
+ }
+
+ /**
+ * @return List of {@link HRegionInfo} rows found in the ROOT or META
+ * catalog table.
+ * @param tableName Name of table to go looking for.
+ * @throws IOException
+ * @see #getMetaRegion(HRegionInfo)
+ */
+ public List<HRegionInfo> getMETARows(final byte [] tableName)
+ throws IOException {
+ final List<HRegionInfo> result = new ArrayList<HRegionInfo>();
+ // If passed table name is META, then return the root region.
+ if (Bytes.equals(HConstants.META_TABLE_NAME, tableName)) {
+ result.add(openRootRegion().getRegionInfo());
+ return result;
+ }
+ // Return all meta regions that contain the passed tablename.
+ scanRootRegion(new ScannerListener() {
+ private final Log SL_LOG = LogFactory.getLog(this.getClass());
+
+ public boolean processRow(HRegionInfo info) throws IOException {
+ SL_LOG.debug("Testing " + info);
+ if (Bytes.equals(info.getTableDesc().getName(),
+ HConstants.META_TABLE_NAME)) {
+ result.add(info);
+ return false;
+ }
+ return true;
+ }});
+ return result;
+ }
+
+ /**
+ * @param n Table name.
+ * @return True if a catalog table, -ROOT- or .META.
+ */
+ public static boolean isMetaTableName(final byte [] n) {
+ return Bytes.equals(n, HConstants.ROOT_TABLE_NAME) ||
+ Bytes.equals(n, HConstants.META_TABLE_NAME);
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/util/Migrate.java b/src/java/org/apache/hadoop/hbase/util/Migrate.java
new file mode 100644
index 0000000..e0c7417
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/util/Migrate.java
@@ -0,0 +1,364 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import java.io.IOException;
+
+import org.apache.commons.cli.Options;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.MasterNotRunningException;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.util.GenericOptionsParser;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+/**
+ * Perform a migration.
+ * HBase keeps a file in hdfs named hbase.version just under the hbase.rootdir.
+ * This file holds the version of the hbase data in the Filesystem. When the
+ * software changes in a manner incompatible with the data in the Filesystem,
+ * it updates its internal version number,
+ * {@link HConstants#FILE_SYSTEM_VERSION}. This wrapper script manages moving
+ * the filesystem across versions until there's a match with current software's
+ * version number. This script will only cross a particular version divide. You may
+ * need to install an earlier or a later hbase to migrate across other version divides.
+ *
+ * <p>This wrapper script comprises a set of migration steps. Which steps
+ * are run depends on the span between the version of the hbase data in the
+ * Filesystem and the version of the current software.
+ *
+ * <p>A migration script must accompany any patch that changes data formats.
+ *
+ * <p>This script has a 'check' and an 'execute' mode. When adding migration steps,
+ * it is important to keep this in mind. When testing whether migration needs to be
+ * run, be careful not to make presumptions about the current state of the data in
+ * the filesystem. It may have a format from many versions previous, with a
+ * layout not as expected or keys and values of an unexpected format. Tools
+ * such as {@link MetaUtils} may not work as expected when running against
+ * old formats -- or, worse, may fail in ways that are hard to figure out (one such
+ * case is edits made by previous migration steps not being apparent to later
+ * migration steps). The upshot: always verify your presumptions when migrating.
+ *
+ * <p>This script will only migrate an hbase 0.18.x filesystem.
+ *
+ * @see <a href="http://wiki.apache.org/hadoop/Hbase/HowToMigrate">How To Migrate</a>
+ */
+public class Migrate extends Configured implements Tool {
+ private static final Log LOG = LogFactory.getLog(Migrate.class);
+ private final HBaseConfiguration conf;
+ private FileSystem fs;
+
+ // Gets set by migration methods if we are in readOnly mode.
+ boolean migrationNeeded = false;
+
+ boolean readOnly = false;
+
+ // Filesystem version of hbase 0.1.x.
+ private static final float HBASE_0_1_VERSION = 0.1f;
+
+ // Filesystem version we can migrate from
+ private static final int PREVIOUS_VERSION = 4;
+
+ private static final String MIGRATION_LINK =
+ " See http://wiki.apache.org/hadoop/Hbase/HowToMigrate for more information.";
+
+ /** default constructor */
+ public Migrate() {
+ this(new HBaseConfiguration());
+ }
+
+ /**
+ * @param conf
+ */
+ public Migrate(HBaseConfiguration conf) {
+ super(conf);
+ this.conf = conf;
+ }
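+
+ // Hedged sketch of driving this tool programmatically; it is normally run
+ // from the shell as 'bin/hbase migrate {check | upgrade}' (see usage() below).
+ // This mirrors main():
+ //
+ //   int rc = ToolRunner.run(new Migrate(new HBaseConfiguration()),
+ //     new String[] {"check"}); // pass "upgrade" to actually modify hbase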
+
+ /*
+ * Sets the hbase rootdir as fs.default.name.
+ * @return True if succeeded.
+ */
+ private boolean setFsDefaultName() {
+ // Validate root directory path
+ Path rd = new Path(conf.get(HConstants.HBASE_DIR));
+ try {
+ // Validate root directory path
+ FSUtils.validateRootPath(rd);
+ } catch (IOException e) {
+ LOG.fatal("Not starting migration because the root directory path '" +
+ rd.toString() + "' is not valid. Check the setting of the" +
+ " configuration parameter '" + HConstants.HBASE_DIR + "'", e);
+ return false;
+ }
+ this.conf.set("fs.default.name", rd.toString());
+ return true;
+ }
+
+ /*
+ * @return True if succeeded verifying filesystem.
+ */
+ private boolean verifyFilesystem() {
+ try {
+ // Verify file system is up.
+ fs = FileSystem.get(conf); // get DFS handle
+ LOG.info("Verifying that file system is available..");
+ FSUtils.checkFileSystemAvailable(fs);
+ return true;
+ } catch (IOException e) {
+ LOG.fatal("File system is not available", e);
+ return false;
+ }
+ }
+
+ private boolean notRunning() {
+ // Verify HBase is down
+ LOG.info("Verifying that HBase is not running...." +
+ "Trys ten times to connect to running master");
+ try {
+ HBaseAdmin.checkHBaseAvailable(conf);
+ LOG.fatal("HBase cluster must be off-line.");
+ return false;
+ } catch (MasterNotRunningException e) {
+ return true;
+ }
+ }
+
+ public int run(String[] args) {
+ if (parseArgs(args) != 0) {
+ return -1;
+ }
+ if (!setFsDefaultName()) {
+ return -2;
+ }
+ if (!verifyFilesystem()) {
+ return -3;
+ }
+ if (!notRunning()) {
+ return -4;
+ }
+
+ try {
+ LOG.info("Starting upgrade" + (readOnly ? " check" : ""));
+
+ // See if there is a file system version file
+ String versionStr = FSUtils.getVersion(fs, FSUtils.getRootDir(this.conf));
+ if (versionStr == null) {
+ throw new IOException("File system version file " +
+ HConstants.VERSION_FILE_NAME +
+ " does not exist. No upgrade possible." + MIGRATION_LINK);
+ }
+ if (versionStr.compareTo(HConstants.FILE_SYSTEM_VERSION) == 0) {
+ LOG.info("No upgrade necessary.");
+ return 0;
+ }
+ float version = Float.parseFloat(versionStr);
+ if (version == HBASE_0_1_VERSION ||
+ Integer.valueOf(versionStr).intValue() < PREVIOUS_VERSION) {
+ String msg = "Cannot upgrade from " + versionStr + " to " +
+ HConstants.FILE_SYSTEM_VERSION + " you must install hbase-0.2.x, run " +
+ "the upgrade tool, reinstall this version and run this utility again." +
+ MIGRATION_LINK;
+ System.out.println(msg);
+ throw new IOException(msg);
+ }
+
+ migrate4To6();
+
+ if (!readOnly) {
+ // Set file system version
+ LOG.info("Setting file system version.");
+ FSUtils.setVersion(fs, FSUtils.getRootDir(this.conf));
+ LOG.info("Upgrade successful.");
+ } else if (this.migrationNeeded) {
+ LOG.info("Upgrade needed.");
+ }
+ return 0;
+ } catch (Exception e) {
+ LOG.fatal("Upgrade" + (readOnly ? " check" : "") + " failed", e);
+ return -1;
+ }
+ }
+
+ // Move the filesystem version from 4 to 6.
+ // In here we rewrite the catalog table regions so they keep 10 versions
+ // instead of 1.
+ private void migrate4To6() throws IOException {
+ if (this.readOnly && this.migrationNeeded) {
+ return;
+ }
+ final MetaUtils utils = new MetaUtils(this.conf);
+ try {
+ // These two operations are effectively useless. -ROOT- is hardcoded,
+ // at least until hbase 0.20.0 when we store it out in ZK.
+ updateVersions(utils.getRootRegion().getRegionInfo());
+ enableBlockCache(utils.getRootRegion().getRegionInfo());
+ // Scan the root region
+ utils.scanRootRegion(new MetaUtils.ScannerListener() {
+ public boolean processRow(HRegionInfo info)
+ throws IOException {
+ if (readOnly && !migrationNeeded) {
+ migrationNeeded = true;
+ return false;
+ }
+ updateVersions(utils.getRootRegion(), info);
+ enableBlockCache(utils.getRootRegion(), info);
+ return true;
+ }
+ });
+ } finally {
+ utils.shutdown();
+ }
+ }
+
+ /*
+ * Enable blockcaching on catalog tables.
+ * @param mr
+ * @param oldHri
+ */
+ void enableBlockCache(HRegion mr, HRegionInfo oldHri)
+ throws IOException {
+ if (!enableBlockCache(oldHri)) {
+ return;
+ }
+ BatchUpdate b = new BatchUpdate(oldHri.getRegionName());
+ b.put(HConstants.COL_REGIONINFO, Writables.getBytes(oldHri));
+ mr.batchUpdate(b);
+ LOG.info("Enabled blockcache on " + oldHri.getRegionNameAsString());
+ }
+
+ /*
+ * @param hri Region whose info family gets blockcache enabled.
+ * @return true if we changed a value
+ */
+ private boolean enableBlockCache(final HRegionInfo hri) {
+ boolean result = false;
+ HColumnDescriptor hcd =
+ hri.getTableDesc().getFamily(HConstants.COLUMN_FAMILY);
+ if (hcd == null) {
+ LOG.info("No info family in: " + hri.getRegionNameAsString());
+ return result;
+ }
+ // Set blockcache enabled.
+ hcd.setBlockCacheEnabled(true);
+ return true;
+ }
+
+
+ /*
+ * Update versions kept in historian.
+ * @param mr
+ * @param oldHri
+ */
+ void updateVersions(HRegion mr, HRegionInfo oldHri)
+ throws IOException {
+ if (!updateVersions(oldHri)) {
+ return;
+ }
+ BatchUpdate b = new BatchUpdate(oldHri.getRegionName());
+ b.put(HConstants.COL_REGIONINFO, Writables.getBytes(oldHri));
+ mr.batchUpdate(b);
+ LOG.info("Upped versions on " + oldHri.getRegionNameAsString());
+ }
+
+ /*
+ * @param hri Region whose catalog families get versions and TTL updated.
+ * @return true if we changed a value
+ */
+ private boolean updateVersions(final HRegionInfo hri) {
+ boolean result = false;
+ HColumnDescriptor hcd =
+ hri.getTableDesc().getFamily(HConstants.COLUMN_FAMILY_HISTORIAN);
+ if (hcd == null) {
+ LOG.info("No region historian family in: " + hri.getRegionNameAsString());
+ return result;
+ }
+ // Set historian records so they timeout after a week.
+ if (hcd.getTimeToLive() == HConstants.FOREVER) {
+ hcd.setTimeToLive(HConstants.WEEK_IN_SECONDS);
+ result = true;
+ }
+ // Set the versions up to 10 from old default of 1.
+ hcd = hri.getTableDesc().getFamily(HConstants.COLUMN_FAMILY);
+ if (hcd.getMaxVersions() == 1) {
+ // Set it to 10, an arbitrary high number
+ hcd.setMaxVersions(10);
+ result = true;
+ }
+ return result;
+ }
+
+ private int parseArgs(String[] args) {
+ Options opts = new Options();
+ GenericOptionsParser parser =
+ new GenericOptionsParser(this.getConf(), opts, args);
+ String[] remainingArgs = parser.getRemainingArgs();
+ if (remainingArgs.length != 1) {
+ usage();
+ return -1;
+ }
+ if (remainingArgs[0].compareTo("check") == 0) {
+ this.readOnly = true;
+ } else if (remainingArgs[0].compareTo("upgrade") != 0) {
+ usage();
+ return -1;
+ }
+ return 0;
+ }
+
+ private void usage() {
+ System.err.println("Usage: bin/hbase migrate {check | upgrade} [options]");
+ System.err.println();
+ System.err.println(" check perform upgrade checks only.");
+ System.err.println(" upgrade perform upgrade checks and modify hbase.");
+ System.err.println();
+ System.err.println(" Options are:");
+ System.err.println(" -conf <configuration file> specify an application configuration file");
+ System.err.println(" -D <property=value> use value for given property");
+ System.err.println(" -fs <local|namenode:port> specify a namenode");
+ }
+
+ /**
+ * Main program
+ *
+ * @param args command line arguments
+ */
+ public static void main(String[] args) {
+ int status = 0;
+ try {
+ status = ToolRunner.run(new Migrate(), args);
+ } catch (Exception e) {
+ LOG.error(e);
+ status = -1;
+ }
+ System.exit(status);
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/util/MurmurHash.java b/src/java/org/apache/hadoop/hbase/util/MurmurHash.java
new file mode 100644
index 0000000..72504af
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/util/MurmurHash.java
@@ -0,0 +1,84 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+/**
+ * This is a very fast, non-cryptographic hash suitable for general hash-based
+ * lookup. See http://murmurhash.googlepages.com/ for more details.
+ *
+ * <p>The C version of MurmurHash 2.0 found at that site was ported
+ * to Java by Andrzej Bialecki (ab at getopt org).</p>
+ */
+public class MurmurHash extends Hash {
+ private static MurmurHash _instance = new MurmurHash();
+
+ public static Hash getInstance() {
+ return _instance;
+ }
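+
+ // Hedged usage sketch (the array contents are illustrative): hash a whole
+ // byte array with a zero seed.
+ //
+ //   byte [] b = new byte [] {1, 2, 3};
+ //   int h = MurmurHash.getInstance().hash(b, b.length, 0);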
+
+ @Override
+ public int hash(byte[] data, int length, int seed) {
+ int m = 0x5bd1e995;
+ int r = 24;
+
+ int h = seed ^ length;
+
+ int len_4 = length >> 2;
+
+ for (int i = 0; i < len_4; i++) {
+ int i_4 = i << 2;
+ int k = data[i_4 + 3];
+ k = k << 8;
+ k = k | (data[i_4 + 2] & 0xff);
+ k = k << 8;
+ k = k | (data[i_4 + 1] & 0xff);
+ k = k << 8;
+ k = k | (data[i_4 + 0] & 0xff);
+ k *= m;
+ k ^= k >>> r;
+ k *= m;
+ h *= m;
+ h ^= k;
+ }
+
+ // avoid calculating modulo
+ int len_m = len_4 << 2;
+ int left = length - len_m;
+
+ if (left != 0) {
+ if (left >= 3) {
+ h ^= data[length - 3] << 16;
+ }
+ if (left >= 2) {
+ h ^= data[length - 2] << 8;
+ }
+ if (left >= 1) {
+ h ^= data[length - 1];
+ }
+
+ h *= m;
+ }
+
+ h ^= h >>> 13;
+ h *= m;
+ h ^= h >>> 15;
+
+ return h;
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/util/Pair.java b/src/java/org/apache/hadoop/hbase/util/Pair.java
new file mode 100644
index 0000000..f8f17fa
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/util/Pair.java
@@ -0,0 +1,99 @@
+package org.apache.hadoop.hbase.util;
+
+import java.io.Serializable;
+
+/**
+ * A generic class for pairs.
+ * @param <T1> type of the first element
+ * @param <T2> type of the second element
+ */
+public class Pair<T1, T2> implements Serializable
+{
+ private static final long serialVersionUID = -3986244606585552569L;
+ protected T1 first = null;
+ protected T2 second = null;
+
+ /**
+ * Default constructor.
+ */
+ public Pair()
+ {
+ }
+
+ /**
+ * Constructor
+ * @param a
+ * @param b
+ */
+ public Pair(T1 a, T2 b)
+ {
+ this.first = a;
+ this.second = b;
+ }
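+
+ // Hedged example: new Pair<String, Integer>("rows", 42) gives a pair where
+ // getFirst() returns "rows" and getSecond() returns 42.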
+
+ /**
+ * Replace the first element of the pair.
+ * @param a
+ */
+ public void setFirst(T1 a)
+ {
+ this.first = a;
+ }
+
+ /**
+ * Replace the second element of the pair.
+ * @param b
+ */
+ public void setSecond(T2 b)
+ {
+ this.second = b;
+ }
+
+ /**
+ * Return the first element stored in the pair.
+ * @return T1
+ */
+ public T1 getFirst()
+ {
+ return first;
+ }
+
+ /**
+ * Return the second element stored in the pair.
+ * @return T2
+ */
+ public T2 getSecond()
+ {
+ return second;
+ }
+
+ private static boolean equals(Object x, Object y)
+ {
+ return (x == null && y == null) || (x != null && x.equals(y));
+ }
+
+ @Override
+ @SuppressWarnings("unchecked")
+ public boolean equals(Object other)
+ {
+ return other instanceof Pair && equals(first, ((Pair)other).first) &&
+ equals(second, ((Pair)other).second);
+ }
+
+ @Override
+ public int hashCode()
+ {
+ if (first == null)
+ return (second == null) ? 0 : second.hashCode() + 1;
+ else if (second == null)
+ return first.hashCode() + 2;
+ else
+ return first.hashCode() * 17 + second.hashCode();
+ }
+
+ @Override
+ public String toString()
+ {
+ return "{" + getFirst() + "," + getSecond() + "}";
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/util/Sleeper.java b/src/java/org/apache/hadoop/hbase/util/Sleeper.java
new file mode 100644
index 0000000..7e2aca1
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/util/Sleeper.java
@@ -0,0 +1,93 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+/**
+ * Sleeper for current thread.
+ * Sleeps for the passed period. Also checks the passed stop flag: if we are
+ * interrupted and the flag is set, returns (rather than going back to sleep
+ * until its sleep time is up).
+ */
+public class Sleeper {
+ private final Log LOG = LogFactory.getLog(this.getClass().getName());
+ private final int period;
+ private AtomicBoolean stop;
+
+ /**
+ * @param sleep
+ * @param stop
+ */
+ public Sleeper(final int sleep, final AtomicBoolean stop) {
+ this.period = sleep;
+ this.stop = stop;
+ }
+
+ /**
+ * Sleep for period.
+ */
+ public void sleep() {
+ sleep(System.currentTimeMillis());
+ }
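+
+ // Hedged usage sketch (the stop flag and chore are illustrative): run a
+ // chore roughly every 'period' milliseconds, however long the chore takes:
+ //
+ //   Sleeper sleeper = new Sleeper(1000, stopRequested);
+ //   while (!stopRequested.get()) {
+ //     long start = System.currentTimeMillis();
+ //     doChore();
+ //     sleeper.sleep(start); // sleeps 1000ms minus the time doChore() took
+ //   }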
+
+ /**
+ * Sleep for the period, adjusted by the passed <code>startTime</code>.
+ * @param startTime Time some task started previous to now. The time to sleep
+ * is reduced by the current time minus the passed <code>startTime</code>.
+ */
+ public void sleep(final long startTime) {
+ if (this.stop.get()) {
+ return;
+ }
+ long now = System.currentTimeMillis();
+ long waitTime = this.period - (now - startTime);
+ if (waitTime > this.period) {
+ LOG.warn("Calculated wait time > " + this.period +
+ "; setting to this.period: " + System.currentTimeMillis() + ", " +
+ startTime);
+ waitTime = this.period;
+ }
+ while (waitTime > 0) {
+ long woke = -1;
+ try {
+ Thread.sleep(waitTime);
+ woke = System.currentTimeMillis();
+ long slept = woke - now;
+ if (slept > (10 * this.period)) {
+ LOG.warn("We slept " + slept + "ms, ten times longer than scheduled: " +
+ this.period);
+ }
+ } catch(InterruptedException iex) {
+ // Were we interrupted because we're meant to stop? If not, just
+ // continue, ignoring the interruption.
+ if (this.stop.get()) {
+ return;
+ }
+ }
+ // Recalculate waitTime.
+ woke = (woke == -1)? System.currentTimeMillis(): woke;
+ waitTime = this.period - (woke - startTime);
+ }
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/util/SoftSortedMap.java b/src/java/org/apache/hadoop/hbase/util/SoftSortedMap.java
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/util/SoftSortedMap.java
diff --git a/src/java/org/apache/hadoop/hbase/util/SoftValue.java b/src/java/org/apache/hadoop/hbase/util/SoftValue.java
new file mode 100644
index 0000000..0aaa82f
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/util/SoftValue.java
@@ -0,0 +1,49 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import java.lang.ref.ReferenceQueue;
+import java.lang.ref.SoftReference;
+import java.util.Map;
+
+/**
+ * A SoftReference derivative so that we can track down what keys to remove.
+ */
+class SoftValue<K, V> extends SoftReference<V> implements Map.Entry<K, V> {
+ private final K key;
+
+ @SuppressWarnings("unchecked")
+ SoftValue(K key, V value, ReferenceQueue queue) {
+ super(value, queue);
+ this.key = key;
+ }
+
+ public K getKey() {
+ return this.key;
+ }
+
+ public V getValue() {
+ return get();
+ }
+
+ public V setValue(V value) {
+ throw new RuntimeException("Not implemented");
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/util/SoftValueMap.java b/src/java/org/apache/hadoop/hbase/util/SoftValueMap.java
new file mode 100644
index 0000000..80a81b5
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/util/SoftValueMap.java
@@ -0,0 +1,144 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import java.lang.ref.ReferenceQueue;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Set;
+
+/**
+ * A Map that uses Soft Reference values internally. Use as a simple cache.
+ *
+ * @param <K> key class
+ * @param <V> value class
+ */
+public class SoftValueMap<K,V> implements Map<K,V> {
+ private final Map<K, SoftValue<K,V>> internalMap =
+ new HashMap<K, SoftValue<K,V>>();
+ private final ReferenceQueue<?> rq;
+
+ public SoftValueMap() {
+ this(new ReferenceQueue());
+ }
+
+ public SoftValueMap(final ReferenceQueue<?> rq) {
+ this.rq = rq;
+ }
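+
+ // Hedged usage sketch (key and value types are illustrative): a simple cache
+ // whose entries may vanish under memory pressure.
+ //
+ //   SoftValueMap<String, byte []> cache = new SoftValueMap<String, byte []>();
+ //   cache.put("row1", Bytes.toBytes("value1"));
+ //   byte [] v = cache.get("row1"); // may be null if the GC cleared the entry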
+
+ /**
+ * Checks soft references and cleans any that have been placed on
+ * ReferenceQueue.
+ * @return How many references cleared.
+ */
+ public int checkReferences() {
+ int i = 0;
+ for (Object obj = null; (obj = this.rq.poll()) != null;) {
+ i++;
+ this.internalMap.remove(((SoftValue<K,V>)obj).getKey());
+ }
+ return i;
+ }
+
+ public V put(K key, V value) {
+ checkReferences();
+ SoftValue<K,V> oldValue = this.internalMap.put(key,
+ new SoftValue<K,V>(key, value, this.rq));
+ return oldValue == null ? null : oldValue.get();
+ }
+
+ @SuppressWarnings("unchecked")
+ public void putAll(Map map) {
+ throw new RuntimeException("Not implemented");
+ }
+
+ public V get(Object key) {
+ checkReferences();
+ SoftValue<K,V> value = this.internalMap.get(key);
+ if (value == null) {
+ return null;
+ }
+ if (value.get() == null) {
+ this.internalMap.remove(key);
+ return null;
+ }
+ return value.get();
+ }
+
+ public V remove(Object key) {
+ checkReferences();
+ SoftValue<K,V> value = this.internalMap.remove(key);
+ return value == null ? null : value.get();
+ }
+
+ public boolean containsKey(Object key) {
+ checkReferences();
+ return this.internalMap.containsKey(key);
+ }
+
+ public boolean containsValue(Object value) {
+/* checkReferences();
+ return internalMap.containsValue(value);*/
+ throw new UnsupportedOperationException("Don't support containsValue!");
+ }
+
+ public boolean isEmpty() {
+ checkReferences();
+ return this.internalMap.isEmpty();
+ }
+
+ public int size() {
+ checkReferences();
+ return this.internalMap.size();
+ }
+
+ public void clear() {
+ checkReferences();
+ this.internalMap.clear();
+ }
+
+ public Set<K> keySet() {
+ checkReferences();
+ return this.internalMap.keySet();
+ }
+
+ public Set<Map.Entry<K,V>> entrySet() {
+ checkReferences();
+ Set<Map.Entry<K, SoftValue<K,V>>> entries = this.internalMap.entrySet();
+ Set<Map.Entry<K, V>> real_entries = new HashSet<Map.Entry<K,V>>();
+ for(Map.Entry<K, SoftValue<K,V>> entry : entries) {
+ real_entries.add(entry.getValue());
+ }
+ return real_entries;
+ }
+
+ public Collection<V> values() {
+ checkReferences();
+ Collection<SoftValue<K,V>> softValues = this.internalMap.values();
+ ArrayList<V> hardValues = new ArrayList<V>();
+ for(SoftValue<K,V> softValue : softValues) {
+ hardValues.add(softValue.get());
+ }
+ return hardValues;
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/util/SoftValueSortedMap.java b/src/java/org/apache/hadoop/hbase/util/SoftValueSortedMap.java
new file mode 100644
index 0000000..5bf586e
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/util/SoftValueSortedMap.java
@@ -0,0 +1,188 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import java.lang.ref.ReferenceQueue;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Comparator;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+import java.util.TreeSet;
+
+/**
+ * A SortedMap implementation that uses Soft Reference values
+ * internally to make it play well with the GC when in a low-memory
+ * situation. Use as a cache where you also need SortedMap functionality.
+ *
+ * @param <K> key class
+ * @param <V> value class
+ */
+public class SoftValueSortedMap<K,V> implements SortedMap<K,V> {
+ private final SortedMap<K, SoftValue<K,V>> internalMap;
+ private final ReferenceQueue rq = new ReferenceQueue();
+
+ /** Constructor */
+ public SoftValueSortedMap() {
+ this(new TreeMap<K, SoftValue<K,V>>());
+ }
+
+ /**
+ * Constructor
+ * @param c
+ */
+ public SoftValueSortedMap(final Comparator<K> c) {
+ this(new TreeMap<K, SoftValue<K,V>>(c));
+ }
+
+ /** For headMap and tailMap support */
+ private SoftValueSortedMap(SortedMap<K,SoftValue<K,V>> original) {
+ this.internalMap = original;
+ }
+
+ /**
+ * Checks soft references and cleans any that have been placed on
+ * ReferenceQueue. Call if get/put etc. are not called regularly.
+ * Internally these call checkReferences on each access.
+ * @return How many references cleared.
+ */
+ public int checkReferences() {
+ int i = 0;
+ for (Object obj = null; (obj = this.rq.poll()) != null;) {
+ i++;
+ this.internalMap.remove(((SoftValue<K,V>)obj).getKey());
+ }
+ return i;
+ }
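+
+ // Hedged usage sketch (keys and values are illustrative): same caching
+ // behaviour as SoftValueMap plus sorted-map operations such as headMap.
+ //
+ //   SoftValueSortedMap<String, byte []> cache =
+ //     new SoftValueSortedMap<String, byte []>();
+ //   cache.put("b", Bytes.toBytes("two"));
+ //   SortedMap<String, byte []> before = cache.headMap("c"); // holds "b",
+ //                                                           // unless GC'd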
+
+ public V put(K key, V value) {
+ checkReferences();
+ SoftValue<K,V> oldValue = this.internalMap.put(key,
+ new SoftValue<K,V>(key, value, this.rq));
+ return oldValue == null ? null : oldValue.get();
+ }
+
+ @SuppressWarnings("unchecked")
+ public void putAll(Map map) {
+ throw new RuntimeException("Not implemented");
+ }
+
+ public V get(Object key) {
+ checkReferences();
+ SoftValue<K,V> value = this.internalMap.get(key);
+ if (value == null) {
+ return null;
+ }
+ if (value.get() == null) {
+ this.internalMap.remove(key);
+ return null;
+ }
+ return value.get();
+ }
+
+ public V remove(Object key) {
+ checkReferences();
+ SoftValue<K,V> value = this.internalMap.remove(key);
+ return value == null ? null : value.get();
+ }
+
+ public boolean containsKey(Object key) {
+ checkReferences();
+ return this.internalMap.containsKey(key);
+ }
+
+ public boolean containsValue(Object value) {
+/* checkReferences();
+ return internalMap.containsValue(value);*/
+ throw new UnsupportedOperationException("Don't support containsValue!");
+ }
+
+ public K firstKey() {
+ checkReferences();
+ return internalMap.firstKey();
+ }
+
+ public K lastKey() {
+ checkReferences();
+ return internalMap.lastKey();
+ }
+
+ public SoftValueSortedMap<K,V> headMap(K key) {
+ checkReferences();
+ return new SoftValueSortedMap<K,V>(this.internalMap.headMap(key));
+ }
+
+ public SoftValueSortedMap<K,V> tailMap(K key) {
+ checkReferences();
+ return new SoftValueSortedMap<K,V>(this.internalMap.tailMap(key));
+ }
+
+ public SoftValueSortedMap<K,V> subMap(K fromKey, K toKey) {
+ checkReferences();
+ return new SoftValueSortedMap<K,V>(this.internalMap.subMap(fromKey, toKey));
+ }
+
+ public boolean isEmpty() {
+ checkReferences();
+ return this.internalMap.isEmpty();
+ }
+
+ public int size() {
+ checkReferences();
+ return this.internalMap.size();
+ }
+
+ public void clear() {
+ checkReferences();
+ this.internalMap.clear();
+ }
+
+ public Set<K> keySet() {
+ checkReferences();
+ return this.internalMap.keySet();
+ }
+
+ @SuppressWarnings("unchecked")
+ public Comparator comparator() {
+ return this.internalMap.comparator();
+ }
+
+ public Set<Map.Entry<K,V>> entrySet() {
+ checkReferences();
+ Set<Map.Entry<K, SoftValue<K,V>>> entries = this.internalMap.entrySet();
+ Set<Map.Entry<K, V>> real_entries = new TreeSet<Map.Entry<K,V>>();
+ for(Map.Entry<K, SoftValue<K,V>> entry : entries) {
+ real_entries.add(entry.getValue());
+ }
+ return real_entries;
+ }
+
+ public Collection<V> values() {
+ checkReferences();
+ Collection<SoftValue<K,V>> softValues = this.internalMap.values();
+ ArrayList<V> hardValues = new ArrayList<V>();
+ for(SoftValue<K,V> softValue : softValues) {
+ hardValues.add(softValue.get());
+ }
+ return hardValues;
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/util/Strings.java b/src/java/org/apache/hadoop/hbase/util/Strings.java
new file mode 100644
index 0000000..117312a
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/util/Strings.java
@@ -0,0 +1,61 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+/**
+ * Utility for Strings.
+ */
+public class Strings {
+ public final static String DEFAULT_SEPARATOR = "=";
+ public final static String DEFAULT_KEYVALUE_SEPARATOR = ", ";
+
+ /**
+ * Append to a StringBuilder a key/value.
+ * Uses default separators.
+ * @param sb StringBuilder to use
+ * @param key Key to append.
+ * @param value Value to append.
+ * @return Passed <code>sb</code> populated with key/value.
+ */
+ public static StringBuilder appendKeyValue(final StringBuilder sb,
+ final String key, final Object value) {
+ return appendKeyValue(sb, key, value, DEFAULT_SEPARATOR,
+ DEFAULT_KEYVALUE_SEPARATOR);
+ }
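+
+ // Hedged example of the resulting format:
+ //
+ //   StringBuilder sb = new StringBuilder();
+ //   Strings.appendKeyValue(sb, "requests", Integer.valueOf(10));
+ //   Strings.appendKeyValue(sb, "regions", Integer.valueOf(3));
+ //   sb.toString(); // "requests=10, regions=3"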
+
+ /**
+ * Append to a StringBuilder a key/value.
+ * Uses the passed separators.
+ * @param sb StringBuilder to use
+ * @param key Key to append.
+ * @param value Value to append.
+ * @param separator Value to use between key and value.
+ * @param keyValueSeparator Value to use between key/value sets.
+ * @return Passed <code>sb</code> populated with key/value.
+ */
+ public static StringBuilder appendKeyValue(final StringBuilder sb,
+ final String key, final Object value, final String separator,
+ final String keyValueSeparator) {
+ if (sb.length() > 0) {
+ sb.append(keyValueSeparator);
+ }
+ return sb.append(key).append(separator).append(value);
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/util/Threads.java b/src/java/org/apache/hadoop/hbase/util/Threads.java
new file mode 100644
index 0000000..39cdd0b
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/util/Threads.java
@@ -0,0 +1,83 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import java.lang.Thread.UncaughtExceptionHandler;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+/**
+ * Thread Utility
+ */
+public class Threads {
+ protected static final Log LOG = LogFactory.getLog(Threads.class);
+
+ /**
+ * Utility method that sets name, daemon status and starts passed thread.
+ * @param t
+ * @param name
+ * @return Returns the passed Thread <code>t</code>.
+ */
+ public static Thread setDaemonThreadRunning(final Thread t,
+ final String name) {
+ return setDaemonThreadRunning(t, name, null);
+ }
+
+ /**
+ * Utility method that sets name, daemon status and starts passed thread.
+ * @param t
+ * @param name
+ * @param handler A handler to set on the thread. Pass null if want to
+ * use default handler.
+ * @return Returns the passed Thread <code>t</code>.
+ */
+ public static Thread setDaemonThreadRunning(final Thread t,
+ final String name, final UncaughtExceptionHandler handler) {
+ t.setName(name);
+ if (handler != null) {
+ t.setUncaughtExceptionHandler(handler);
+ }
+ t.setDaemon(true);
+ t.start();
+ return t;
+ }
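+
+ // Hedged usage sketch (the Runnable and thread name are illustrative):
+ //
+ //   Thread t = Threads.setDaemonThreadRunning(new Thread(new MyChore()),
+ //     "regionserver.chore"); // later, stop it with Threads.shutdown(t)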
+
+ /**
+ * Shutdown passed thread using isAlive and join.
+ * @param t Thread to shutdown
+ */
+ public static void shutdown(final Thread t) {
+ shutdown(t, 0);
+ }
+
+ /**
+ * Shutdown passed thread using isAlive and join.
+ * @param t Thread to shutdown
+ * @param joinwait Pass 0 if we're to wait forever.
+ */
+ public static void shutdown(final Thread t, final long joinwait) {
+ while (t.isAlive()) {
+ try {
+ t.join(joinwait);
+ } catch (InterruptedException e) {
+ LOG.warn(t.getName() + "; joinwait=" + joinwait, e);
+ }
+ }
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/util/VersionInfo.java b/src/java/org/apache/hadoop/hbase/util/VersionInfo.java
new file mode 100644
index 0000000..63f75f0
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/util/VersionInfo.java
@@ -0,0 +1,92 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import org.apache.hadoop.hbase.VersionAnnotation;
+
+/**
+ * This class finds the package info for hbase and the VersionAnnotation
+ * information. Taken from hadoop. Only name of annotation is different.
+ */
+public class VersionInfo {
+ private static Package myPackage;
+ private static VersionAnnotation version;
+
+ static {
+ myPackage = VersionAnnotation.class.getPackage();
+ version = myPackage.getAnnotation(VersionAnnotation.class);
+ }
+
+ /**
+ * Get the meta-data for the hbase package.
+ * @return the hbase package object
+ */
+ static Package getPackage() {
+ return myPackage;
+ }
+
+ /**
+ * Get the hbase version.
+ * @return the hbase version string, eg. "0.6.3-dev"
+ */
+ public static String getVersion() {
+ return version != null ? version.version() : "Unknown";
+ }
+
+ /**
+ * Get the subversion revision number for the root directory
+ * @return the revision number, eg. "451451"
+ */
+ public static String getRevision() {
+ return version != null ? version.revision() : "Unknown";
+ }
+
+ /**
+ * The date that hbase was compiled.
+ * @return the compilation date in unix date format
+ */
+ public static String getDate() {
+ return version != null ? version.date() : "Unknown";
+ }
+
+ /**
+ * The user that compiled hbase.
+ * @return the username of the user
+ */
+ public static String getUser() {
+ return version != null ? version.user() : "Unknown";
+ }
+
+ /**
+ * Get the subversion URL for the root hbase directory.
+ * @return the url
+ */
+ public static String getUrl() {
+ return version != null ? version.url() : "Unknown";
+ }
+
+ /**
+ * @param args
+ */
+ public static void main(String[] args) {
+ System.out.println("HBase " + getVersion());
+ System.out.println("Subversion " + getUrl() + " -r " + getRevision());
+ System.out.println("Compiled by " + getUser() + " on " + getDate());
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/util/Writables.java b/src/java/org/apache/hadoop/hbase/util/Writables.java
new file mode 100644
index 0000000..7b37d0e
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/util/Writables.java
@@ -0,0 +1,196 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.io.DataInputBuffer;
+import org.apache.hadoop.io.Writable;
+
+/**
+ * Utility class with methods for manipulating Writable objects
+ */
+public class Writables {
+ /**
+ * @param w
+ * @return The bytes of <code>w</code> gotten by running its
+ * {@link Writable#write(java.io.DataOutput)} method.
+ * @throws IOException
+ * @see #getWritable(byte[], Writable)
+ */
+ public static byte [] getBytes(final Writable w) throws IOException {
+ if (w == null) {
+ throw new IllegalArgumentException("Writable cannot be null");
+ }
+ ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
+ DataOutputStream out = new DataOutputStream(byteStream);
+ try {
+ w.write(out);
+ out.close();
+ out = null;
+ return byteStream.toByteArray();
+ } finally {
+ if (out != null) {
+ out.close();
+ }
+ }
+ }
+
+ /**
+ * Set bytes into the passed Writable by calling its
+ * {@link Writable#readFields(java.io.DataInput)}.
+ * @param bytes
+ * @param w An empty Writable (usually made by calling the null-arg
+ * constructor).
+ * @return The passed Writable after its readFields has been called fed
+ * by the passed <code>bytes</code> array or IllegalArgumentException
+ * if passed null or an empty <code>bytes</code> array.
+ * @throws IOException
+ * @throws IllegalArgumentException
+ */
+ public static Writable getWritable(final byte [] bytes, final Writable w)
+ throws IOException {
+ return getWritable(bytes, 0, bytes.length, w);
+ }
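+
+ // Hedged round-trip sketch: serialize an HRegionInfo with getBytes and
+ // rebuild it with getWritable.
+ //
+ //   byte [] b = Writables.getBytes(HRegionInfo.ROOT_REGIONINFO);
+ //   HRegionInfo copy =
+ //     (HRegionInfo) Writables.getWritable(b, new HRegionInfo());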
+
+ /**
+ * Set bytes into the passed Writable by calling its
+ * {@link Writable#readFields(java.io.DataInput)}.
+ * @param bytes
+ * @param offset
+ * @param length
+ * @param w An empty Writable (usually made by calling the null-arg
+ * constructor).
+ * @return The passed Writable after its readFields has been called fed
+ * by the passed <code>bytes</code> array or IllegalArgumentException
+ * if passed null or an empty <code>bytes</code> array.
+ * @throws IOException
+ * @throws IllegalArgumentException
+ */
+ public static Writable getWritable(final byte [] bytes, final int offset,
+ final int length, final Writable w)
+ throws IOException {
+ if (bytes == null || length <=0) {
+ throw new IllegalArgumentException("Can't build a writable with empty " +
+ "bytes array");
+ }
+ if (w == null) {
+ throw new IllegalArgumentException("Writable cannot be null");
+ }
+ DataInputBuffer in = new DataInputBuffer();
+ try {
+ in.reset(bytes, offset, length);
+ w.readFields(in);
+ return w;
+ } finally {
+ in.close();
+ }
+ }
+
+ /**
+ * @param bytes
+ * @return A HRegionInfo instance built out of passed <code>bytes</code>.
+ * @throws IOException
+ */
+ public static HRegionInfo getHRegionInfo(final byte [] bytes)
+ throws IOException {
+ return (HRegionInfo)getWritable(bytes, new HRegionInfo());
+ }
+
+ /**
+ * @param bytes
+ * @return A HRegionInfo instance built out of passed <code>bytes</code>
+ * or <code>null</code> if passed bytes are null or an empty array.
+ * @throws IOException
+ */
+ public static HRegionInfo getHRegionInfoOrNull(final byte [] bytes)
+ throws IOException {
+ return (bytes == null || bytes.length <= 0)?
+ (HRegionInfo)null: getHRegionInfo(bytes);
+ }
+
+ /**
+ * @param cell Cell object containing the serialized HRegionInfo
+ * @return A HRegionInfo instance built out of passed <code>cell</code>.
+ * @throws IOException
+ */
+ public static HRegionInfo getHRegionInfo(final Cell cell) throws IOException {
+ if (cell == null) {
+ return null;
+ }
+ return getHRegionInfo(cell.getValue());
+ }
+
+ /**
+ * Copy one Writable to another. Copies bytes using data streams.
+ * @param src Source Writable
+ * @param tgt Target Writable
+ * @return The target Writable.
+ * @throws IOException
+ */
+ public static Writable copyWritable(final Writable src, final Writable tgt)
+ throws IOException {
+ return copyWritable(getBytes(src), tgt);
+ }
+
+ /**
+ * Copy one Writable to another. Copies bytes using data streams.
+ * @param bytes Source Writable
+ * @param tgt Target Writable
+ * @return The target Writable.
+ * @throws IOException
+ */
+ public static Writable copyWritable(final byte [] bytes, final Writable tgt)
+ throws IOException {
+ DataInputStream dis = new DataInputStream(new ByteArrayInputStream(bytes));
+ try {
+ tgt.readFields(dis);
+ } finally {
+ dis.close();
+ }
+ return tgt;
+ }
+
+ /**
+ * @param c
+ * @return Cell value as a UTF-8 String
+ */
+ public static String cellToString(Cell c) {
+ if (c == null) {
+ return "";
+ }
+ return Bytes.toString(c.getValue());
+ }
+
+ /**
+ * @param c
+ * @return Cell as a long.
+ */
+ public static long cellToLong(Cell c) {
+ if (c == null) {
+ return 0;
+ }
+ return Bytes.toLong(c.getValue());
+ }
+}
\ No newline at end of file
diff --git a/src/java/org/apache/hadoop/hbase/zookeeper/HQuorumPeer.java b/src/java/org/apache/hadoop/hbase/zookeeper/HQuorumPeer.java
new file mode 100644
index 0000000..9d17b36
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/zookeeper/HQuorumPeer.java
@@ -0,0 +1,143 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.zookeeper;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.Properties;
+import java.util.Map.Entry;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.zookeeper.server.ServerConfig;
+import org.apache.zookeeper.server.ZooKeeperServerMain;
+import org.apache.zookeeper.server.quorum.QuorumPeerConfig;
+import org.apache.zookeeper.server.quorum.QuorumPeerMain;
+
+/**
+ * HBase's version of ZooKeeper's QuorumPeer. When HBase is set to manage
+ * ZooKeeper, this class is used to start up QuorumPeer instances. By doing
+ * things in here rather than directly calling to ZooKeeper, we have more
+ * control over the process. Currently, this class allows us to parse the
+ * zoo.cfg and inject variables from HBase's site.xml configuration in.
+ */
+public class HQuorumPeer implements HConstants {
+ private static final Log LOG = LogFactory.getLog(HQuorumPeer.class);
+ private static final String VARIABLE_START = "${";
+ private static final int VARIABLE_START_LENGTH = VARIABLE_START.length();
+ private static final String VARIABLE_END = "}";
+ private static final int VARIABLE_END_LENGTH = VARIABLE_END.length();
+
+ /**
+ * Parse ZooKeeper configuration and run a QuorumPeer.
+ * While parsing the zoo.cfg, we substitute variables with values from
+ * hbase-site.xml.
+ * @param args String[] of command line arguments. Not used.
+ */
+ public static void main(String[] args) {
+ try {
+ Properties properties = parseZooKeeperConfig();
+ QuorumPeerConfig.parseProperties(properties);
+ } catch (Exception e) {
+ e.printStackTrace();
+ System.exit(-1);
+ }
+ if (ServerConfig.isStandalone()) {
+ ZooKeeperServerMain.main(args);
+ } else {
+ QuorumPeerMain.runPeerFromConfig();
+ }
+ }
+
+ /**
+ * Parse ZooKeeper's zoo.cfg, injecting HBase Configuration variables in.
+ * @return Properties parsed from config stream with variables substituted.
+ * @throws IOException if anything goes wrong parsing config
+ */
+ public static Properties parseZooKeeperConfig() throws IOException {
+ ClassLoader cl = HQuorumPeer.class.getClassLoader();
+ InputStream inputStream = cl.getResourceAsStream(ZOOKEEPER_CONFIG_NAME);
+ if (inputStream == null) {
+ throw new IOException(ZOOKEEPER_CONFIG_NAME + " not found");
+ }
+ return parseConfig(inputStream);
+ }
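+
+ // Hedged illustration of the substitution parseConfig does below (the
+ // property name is illustrative): a zoo.cfg line such as
+ //
+ //   clientPort=${hbase.zookeeper.property.clientPort}
+ //
+ // ends up with the ${...} variable replaced by the matching system property
+ // or hbase-site.xml value.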
+
+ /**
+ * Parse ZooKeeper's zoo.cfg, injecting HBase Configuration variables in.
+ * This method is used for testing so we can pass our own InputStream.
+ * @param inputStream InputStream to read from.
+ * @return Properties parsed from config stream with variables substituted.
+ * @throws IOException if anything goes wrong parsing config
+ */
+ public static Properties parseConfig(InputStream inputStream) throws IOException {
+ HBaseConfiguration conf = new HBaseConfiguration();
+ Properties properties = new Properties();
+ try {
+ properties.load(inputStream);
+ } catch (IOException e) {
+ String msg = "fail to read properties from " + ZOOKEEPER_CONFIG_NAME;
+ LOG.fatal(msg);
+ throw new IOException(msg);
+ }
+ for (Entry<Object, Object> entry : properties.entrySet()) {
+ String value = entry.getValue().toString().trim();
+ StringBuilder newValue = new StringBuilder();
+ int varStart = value.indexOf(VARIABLE_START);
+ int varEnd = 0;
+ while (varStart != -1) {
+ varEnd = value.indexOf(VARIABLE_END, varStart);
+ if (varEnd == -1) {
+ String msg = "variable at " + varStart + " has no end marker";
+ LOG.fatal(msg);
+ throw new IOException(msg);
+ }
+ String variable = value.substring(varStart + VARIABLE_START_LENGTH, varEnd);
+
+ String substituteValue = System.getProperty(variable);
+ if (substituteValue == null) {
+ substituteValue = conf.get(variable);
+ }
+ if (substituteValue == null) {
+ String msg = "variable " + variable + " not set in system property "
+ + "or hbase configs";
+ LOG.fatal(msg);
+ throw new IOException(msg);
+ }
+ // Special case for 'hbase.master.hostname' property being 'local'
+ if (variable.equals(HConstants.MASTER_HOST_NAME) && substituteValue.equals("local")) {
+ substituteValue = "localhost";
+ }
+ newValue.append(substituteValue);
+
+ varEnd += VARIABLE_END_LENGTH;
+ varStart = value.indexOf(VARIABLE_START, varEnd);
+ }
+
+ newValue.append(value.substring(varEnd));
+
+ String key = entry.getKey().toString().trim();
+ properties.setProperty(key, newValue.toString());
+ }
+ return properties;
+ }
+}
diff --git a/src/java/org/apache/hadoop/hbase/zookeeper/WatcherWrapper.java b/src/java/org/apache/hadoop/hbase/zookeeper/WatcherWrapper.java
new file mode 100644
index 0000000..f2c40e0
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/zookeeper/WatcherWrapper.java
@@ -0,0 +1,49 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.zookeeper;
+
+import org.apache.zookeeper.WatchedEvent;
+import org.apache.zookeeper.Watcher;
+
+/**
+ * Place-holder Watcher.
+ * Passes events on to a wrapped Watcher if one was supplied; otherwise it
+ * does nothing.
+ */
+public class WatcherWrapper implements Watcher {
+ private final Watcher otherWatcher;
+
+ /**
+ * Construct with a Watcher to pass events to.
+ * @param otherWatcher Watcher to pass events to.
+ */
+ public WatcherWrapper(Watcher otherWatcher) {
+ this.otherWatcher = otherWatcher;
+ }
+
+ /**
+ * @param event WatchedEvent from ZooKeeper.
+ */
+ public void process(WatchedEvent event) {
+ if (otherWatcher != null) {
+ otherWatcher.process(event);
+ }
+ }
+
+}
diff --git a/src/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperWrapper.java b/src/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperWrapper.java
new file mode 100644
index 0000000..a8e1f5e
--- /dev/null
+++ b/src/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperWrapper.java
@@ -0,0 +1,537 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.zookeeper;
+
+import java.io.IOException;
+import java.net.InetAddress;
+import java.net.UnknownHostException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Properties;
+import java.util.Map.Entry;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.util.StringUtils;
+import org.apache.zookeeper.CreateMode;
+import org.apache.zookeeper.KeeperException;
+import org.apache.zookeeper.Watcher;
+import org.apache.zookeeper.ZooDefs.Ids;
+import org.apache.zookeeper.ZooKeeper;
+import org.apache.zookeeper.ZooKeeper.States;
+import org.apache.zookeeper.data.Stat;
+
+/**
+ * Wraps a ZooKeeper instance and adds HBase specific functionality.
+ *
+ * This class provides methods to:
+ * - read/write/delete the root region location in ZooKeeper.
+ * - set/check out of safe mode flag.
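+ *
+ * A minimal usage sketch (illustrative only; error handling omitted):
+ * <pre>
+ *   HBaseConfiguration conf = new HBaseConfiguration();
+ *   ZooKeeperWrapper zkWrapper = new ZooKeeperWrapper(conf);
+ *   HServerAddress rootRegionServer = zkWrapper.readRootRegionLocation();
+ *   boolean outOfSafeMode = zkWrapper.checkOutOfSafeMode();
+ *   zkWrapper.close();
+ * </pre>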
+ */
+public class ZooKeeperWrapper implements HConstants {
+ protected static final Log LOG = LogFactory.getLog(ZooKeeperWrapper.class);
+
+ // TODO: Replace this with ZooKeeper constant when ZOOKEEPER-277 is resolved.
+ private static final char ZNODE_PATH_SEPARATOR = '/';
+
+ private static String quorumServers = null;
+ static {
+ loadZooKeeperConfig();
+ }
+
+ private final ZooKeeper zooKeeper;
+ private final WatcherWrapper watcher;
+
+ private final String rootRegionZNode;
+ private final String outOfSafeModeZNode;
+ private final String rsZNode;
+ private final String masterElectionZNode;
+
+ /**
+ * Create a ZooKeeperWrapper.
+ * @param conf HBaseConfiguration to read settings from.
+ * @throws IOException If a connection error occurs.
+ */
+ public ZooKeeperWrapper(HBaseConfiguration conf) throws IOException {
+ this(conf, null);
+ }
+
+ /**
+ * Create a ZooKeeperWrapper.
+ * @param conf HBaseConfiguration to read settings from.
+ * @param watcher ZooKeeper watcher to register.
+ * @throws IOException If a connection error occurs.
+ */
+ public ZooKeeperWrapper(HBaseConfiguration conf, Watcher watcher)
+ throws IOException {
+ if (quorumServers == null) {
+ throw new IOException("Could not read quorum servers from " +
+ ZOOKEEPER_CONFIG_NAME);
+ }
+
+ int sessionTimeout = conf.getInt("zookeeper.session.timeout", 10 * 1000);
+ this.watcher = new WatcherWrapper(watcher);
+ try {
+ zooKeeper = new ZooKeeper(quorumServers, sessionTimeout, this.watcher);
+ } catch (IOException e) {
+ LOG.error("Failed to create ZooKeeper object: " + e);
+ throw new IOException(e);
+ }
+
+ String parentZNode = conf.get("zookeeper.znode.parent", "/hbase");
+
+ String rootServerZNodeName = conf.get("zookeeper.znode.rootserver",
+ "root-region-server");
+ String outOfSafeModeZNodeName = conf.get("zookeeper.znode.safemode",
+ "safe-mode");
+ String rsZNodeName = conf.get("zookeeper.znode.rs", "rs");
+ String masterAddressZNodeName = conf.get("zookeeper.znode.master",
+ "master");
+
+ rootRegionZNode = getZNode(parentZNode, rootServerZNodeName);
+ outOfSafeModeZNode = getZNode(parentZNode, outOfSafeModeZNodeName);
+ rsZNode = getZNode(parentZNode, rsZNodeName);
+ masterElectionZNode = getZNode(parentZNode, masterAddressZNodeName);
+ }
+
+ /**
+   * This is for testing KeeperException.SessionExpiredException.
+ * See HBASE-1232.
+ * @return long session ID of this ZooKeeper session.
+ */
+ public long getSessionID() {
+ return zooKeeper.getSessionId();
+ }
+
+ /**
+   * This is for testing KeeperException.SessionExpiredException.
+ * See HBASE-1232.
+ * @return byte[] password of this ZooKeeper session.
+ */
+ public byte[] getSessionPassword() {
+ return zooKeeper.getSessionPasswd();
+ }
+
+ /**
+ * This is for tests to directly set the ZooKeeper quorum servers.
+ * @param servers comma separated host:port ZooKeeper quorum servers.
+ */
+ public static void setQuorumServers(String servers) {
+ quorumServers = servers;
+ }
+
+ /** @return comma separated host:port list of ZooKeeper quorum servers. */
+ public static String getQuorumServers() {
+ return quorumServers;
+ }
+
+ private static void loadZooKeeperConfig() {
+ Properties properties = null;
+ try {
+ properties = HQuorumPeer.parseZooKeeperConfig();
+ } catch (IOException e) {
+ LOG.error("fail to read properties from " + ZOOKEEPER_CONFIG_NAME);
+ return;
+ }
+
+ String clientPort = null;
+ List<String> servers = new ArrayList<String>();
+
+ // The clientPort option may come after the server.X hosts, so we need to
+ // grab everything and then create the final host:port comma separated list.
+ boolean anyValid = false;
+ for (Entry<Object,Object> property : properties.entrySet()) {
+ String key = property.getKey().toString().trim();
+ String value = property.getValue().toString().trim();
+ if (key.equals("clientPort")) {
+ clientPort = value;
+ }
+ else if (key.startsWith("server.")) {
+ String host = value.substring(0, value.indexOf(':'));
+ servers.add(host);
+ try {
+ InetAddress.getByName(host);
+ anyValid = true;
+ } catch (UnknownHostException e) {
+ LOG.warn(StringUtils.stringifyException(e));
+ }
+ }
+ }
+
+ if (!anyValid) {
+ LOG.error("no valid quorum servers found in " + ZOOKEEPER_CONFIG_NAME);
+ return;
+ }
+
+ if (clientPort == null) {
+ LOG.error("no clientPort found in " + ZOOKEEPER_CONFIG_NAME);
+ return;
+ }
+
+ if (servers.isEmpty()) {
+ LOG.fatal("No server.X lines found in conf/zoo.cfg. HBase must have a " +
+ "ZooKeeper cluster configured for its operation.");
+ System.exit(-1);
+ }
+
+ StringBuilder hostPortBuilder = new StringBuilder();
+ for (int i = 0; i < servers.size(); ++i) {
+ String host = servers.get(i);
+ if (i > 0) {
+ hostPortBuilder.append(',');
+ }
+ hostPortBuilder.append(host);
+ hostPortBuilder.append(':');
+ hostPortBuilder.append(clientPort);
+ }
+
+ quorumServers = hostPortBuilder.toString();
+ LOG.info("Quorum servers: " + quorumServers);
+ }
+
+ /** @return true if currently connected to ZooKeeper, false otherwise. */
+ public boolean isConnected() {
+ return zooKeeper.getState() == States.CONNECTED;
+ }
+
+ /**
+ * Read location of server storing root region.
+ * @return HServerAddress pointing to server serving root region or null if
+ * there was a problem reading the ZNode.
+ */
+ public HServerAddress readRootRegionLocation() {
+ return readAddress(rootRegionZNode, null);
+ }
+
+ /**
+ * Read address of master server.
+ * @return HServerAddress of master server.
+ * @throws IOException if there's a problem reading the ZNode.
+ */
+ public HServerAddress readMasterAddressOrThrow() throws IOException {
+ return readAddressOrThrow(masterElectionZNode, null);
+ }
+
+ /**
+ * Read master address and set a watch on it.
+ * @param watcher Watcher to set on master address ZNode if not null.
+ * @return HServerAddress of master or null if there was a problem reading the
+ * ZNode. The watcher is set only if the result is not null.
+ */
+ public HServerAddress readMasterAddress(Watcher watcher) {
+ return readAddress(masterElectionZNode, watcher);
+ }
+
+ /**
+ * Set a watcher on the master address ZNode. The watcher will be set unless
+ * an exception occurs with ZooKeeper.
+ * @param watcher Watcher to set on master address ZNode.
+ * @return true if watcher was set, false otherwise.
+ */
+ public boolean watchMasterAddress(Watcher watcher) {
+ try {
+ zooKeeper.exists(masterElectionZNode, watcher);
+ } catch (KeeperException e) {
+ LOG.warn("Failed to set watcher on ZNode " + masterElectionZNode, e);
+ return false;
+ } catch (InterruptedException e) {
+ LOG.warn("Failed to set watcher on ZNode " + masterElectionZNode, e);
+ return false;
+ }
+ LOG.debug("Set watcher on master address ZNode " + masterElectionZNode);
+ return true;
+ }
+
+ private HServerAddress readAddress(String znode, Watcher watcher) {
+ try {
+ return readAddressOrThrow(znode, watcher);
+ } catch (IOException e) {
+ return null;
+ }
+ }
+
+ private HServerAddress readAddressOrThrow(String znode, Watcher watcher) throws IOException {
+ byte[] data;
+ try {
+ data = zooKeeper.getData(znode, watcher, null);
+ } catch (InterruptedException e) {
+ throw new IOException(e);
+ } catch (KeeperException e) {
+ throw new IOException(e);
+ }
+
+ String addressString = Bytes.toString(data);
+ LOG.debug("Read ZNode " + znode + " got " + addressString);
+ HServerAddress address = new HServerAddress(addressString);
+ return address;
+ }
+
+ private boolean ensureExists(final String znode) {
+ try {
+ zooKeeper.create(znode, new byte[0],
+ Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
+ LOG.debug("Created ZNode " + znode);
+ return true;
+ } catch (KeeperException.NodeExistsException e) {
+ return true; // ok, move on.
+ } catch (KeeperException.NoNodeException e) {
+ return ensureParentExists(znode) && ensureExists(znode);
+ } catch (KeeperException e) {
+ LOG.warn("Failed to create " + znode + ":", e);
+ } catch (InterruptedException e) {
+ LOG.warn("Failed to create " + znode + ":", e);
+ }
+ return false;
+ }
+
+ private boolean ensureParentExists(final String znode) {
+ int index = znode.lastIndexOf(ZNODE_PATH_SEPARATOR);
+ if (index <= 0) { // Parent is root, which always exists.
+ return true;
+ }
+ return ensureExists(znode.substring(0, index));
+ }
+
+ /**
+ * Delete ZNode containing root region location.
+ * @return true if operation succeeded, false otherwise.
+ */
+ public boolean deleteRootRegionLocation() {
+ if (!ensureParentExists(rootRegionZNode)) {
+ return false;
+ }
+
+ try {
+ zooKeeper.delete(rootRegionZNode, -1);
+ LOG.debug("Deleted ZNode " + rootRegionZNode);
+ return true;
+ } catch (KeeperException.NoNodeException e) {
+ return true; // ok, move on.
+ } catch (KeeperException e) {
+ LOG.warn("Failed to delete " + rootRegionZNode + ": " + e);
+ } catch (InterruptedException e) {
+ LOG.warn("Failed to delete " + rootRegionZNode + ": " + e);
+ }
+
+ return false;
+ }
+
+ private boolean createRootRegionLocation(String address) {
+ byte[] data = Bytes.toBytes(address);
+ try {
+ zooKeeper.create(rootRegionZNode, data, Ids.OPEN_ACL_UNSAFE,
+ CreateMode.PERSISTENT);
+ LOG.debug("Created ZNode " + rootRegionZNode + " with data " + address);
+ return true;
+ } catch (KeeperException e) {
+ LOG.warn("Failed to create root region in ZooKeeper: " + e);
+ } catch (InterruptedException e) {
+ LOG.warn("Failed to create root region in ZooKeeper: " + e);
+ }
+
+ return false;
+ }
+
+ private boolean updateRootRegionLocation(String address) {
+ byte[] data = Bytes.toBytes(address);
+ try {
+ zooKeeper.setData(rootRegionZNode, data, -1);
+ LOG.debug("SetData of ZNode " + rootRegionZNode + " with " + address);
+ return true;
+ } catch (KeeperException e) {
+ LOG.warn("Failed to set root region location in ZooKeeper: " + e);
+ } catch (InterruptedException e) {
+ LOG.warn("Failed to set root region location in ZooKeeper: " + e);
+ }
+
+ return false;
+ }
+
+ /**
+   * Write root region location to ZooKeeper. If address is null, delete the
+   * ZNode containing the root region location.
+ * @param address HServerAddress to write to ZK.
+ * @return true if operation succeeded, false otherwise.
+ */
+ public boolean writeRootRegionLocation(HServerAddress address) {
+ if (address == null) {
+ return deleteRootRegionLocation();
+ }
+
+ if (!ensureParentExists(rootRegionZNode)) {
+ return false;
+ }
+
+ String addressString = address.toString();
+
+ if (checkExistenceOf(rootRegionZNode)) {
+ return updateRootRegionLocation(addressString);
+ }
+
+ return createRootRegionLocation(addressString);
+ }
+
+ /**
+ * Write address of master to ZooKeeper.
+ * @param address HServerAddress of master.
+ * @return true if operation succeeded, false otherwise.
+ */
+ public boolean writeMasterAddress(HServerAddress address) {
+ if (!ensureParentExists(masterElectionZNode)) {
+ return false;
+ }
+
+ String addressStr = address.toString();
+ byte[] data = Bytes.toBytes(addressStr);
+ try {
+ zooKeeper.create(masterElectionZNode, data, Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
+ LOG.debug("Wrote master address " + address + " to ZooKeeper");
+ return true;
+ } catch (InterruptedException e) {
+ LOG.warn("Failed to write master address " + address + " to ZooKeeper", e);
+ } catch (KeeperException e) {
+ LOG.warn("Failed to write master address " + address + " to ZooKeeper", e);
+ }
+
+ return false;
+ }
+
+ /**
+ * Check if we're out of safe mode. Being out of safe mode is signified by an
+ * ephemeral ZNode existing in ZooKeeper.
+ * @return true if we're out of safe mode, false otherwise.
+ */
+ public boolean checkOutOfSafeMode() {
+ if (!ensureParentExists(outOfSafeModeZNode)) {
+ return false;
+ }
+
+ return checkExistenceOf(outOfSafeModeZNode);
+ }
+
+ /**
+ * Create ephemeral ZNode signifying that we're out of safe mode.
+ * @return true if ephemeral ZNode created successfully, false otherwise.
+ */
+ public boolean writeOutOfSafeMode() {
+ if (!ensureParentExists(outOfSafeModeZNode)) {
+ return false;
+ }
+
+ try {
+ zooKeeper.create(outOfSafeModeZNode, new byte[0], Ids.OPEN_ACL_UNSAFE,
+ CreateMode.EPHEMERAL);
+ LOG.debug("Wrote out of safe mode");
+ return true;
+ } catch (InterruptedException e) {
+ LOG.warn("Failed to create out of safe mode in ZooKeeper: " + e);
+ } catch (KeeperException e) {
+ LOG.warn("Failed to create out of safe mode in ZooKeeper: " + e);
+ }
+
+ return false;
+ }
+
+ /**
+ * Write in ZK this RS startCode and address.
+ * Ensures that the full path exists.
+ * @param info The RS info
+ * @return true if the location was written, false if it failed
+ */
+ public boolean writeRSLocation(HServerInfo info) {
+ ensureExists(rsZNode);
+ byte[] data = Bytes.toBytes(info.getServerAddress().getBindAddress());
+ String znode = joinPath(rsZNode, Long.toString(info.getStartCode()));
+ try {
+ zooKeeper.create(znode, data, Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
+ LOG.debug("Created ZNode " + znode
+ + " with data " + info.getServerAddress().getBindAddress());
+ return true;
+ } catch (KeeperException e) {
+ LOG.warn("Failed to create " + znode + " znode in ZooKeeper: " + e);
+ } catch (InterruptedException e) {
+ LOG.warn("Failed to create " + znode + " znode in ZooKeeper: " + e);
+ }
+ return false;
+ }
+
+ /**
+ * Update the RS address and set a watcher on the znode
+ * @param info The RS info
+ * @param watcher The watcher to put on the znode
+ * @return true if the update is done, false if it failed
+ */
+ public boolean updateRSLocationGetWatch(HServerInfo info, Watcher watcher) {
+ byte[] data = Bytes.toBytes(info.getServerAddress().getBindAddress());
+ String znode = rsZNode + "/" + info.getStartCode();
+ try {
+ zooKeeper.setData(znode, data, -1);
+ LOG.debug("Updated ZNode " + znode
+ + " with data " + info.getServerAddress().getBindAddress());
+ zooKeeper.getData(znode, watcher, null);
+ return true;
+ } catch (KeeperException e) {
+ LOG.warn("Failed to update " + znode + " znode in ZooKeeper: " + e);
+ } catch (InterruptedException e) {
+ LOG.warn("Failed to update " + znode + " znode in ZooKeeper: " + e);
+ }
+
+ return false;
+ }
+
+ private boolean checkExistenceOf(String path) {
+ Stat stat = null;
+ try {
+ stat = zooKeeper.exists(path, false);
+ } catch (KeeperException e) {
+ LOG.warn("checking existence of " + path, e);
+ } catch (InterruptedException e) {
+ LOG.warn("checking existence of " + path, e);
+ }
+
+ return stat != null;
+ }
+
+ /**
+ * Close this ZooKeeper session.
+ */
+ public void close() {
+ try {
+ zooKeeper.close();
+ LOG.debug("Closed connection with ZooKeeper");
+ } catch (InterruptedException e) {
+ LOG.warn("Failed to close connection with ZooKeeper");
+ }
+ }
+
+ private String getZNode(String parentZNode, String znodeName) {
+ return znodeName.charAt(0) == ZNODE_PATH_SEPARATOR ?
+ znodeName : joinPath(parentZNode, znodeName);
+ }
+
+ private String joinPath(String parent, String child) {
+ return parent + ZNODE_PATH_SEPARATOR + child;
+ }
+}
diff --git a/src/java/overview.html b/src/java/overview.html
new file mode 100644
index 0000000..1652bb2
--- /dev/null
+++ b/src/java/overview.html
@@ -0,0 +1,333 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
+<html>
+
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<head>
+ <title>HBase</title>
+</head>
+<body bgcolor="white">
+<a href="http://hbase.org">HBase</a> is a scalable, distributed database built on <a href="http://hadoop.apache.org/core">Hadoop Core</a>.
+
+<h2><a name="requirements">Requirements</a></h2>
+<ul>
+ <li>Java 1.6.x, preferably from <a href="http://www.java.com/en/download/">Sun</a>.
+ </li>
+ <li><a href="http://hadoop.apache.org/core/releases.html">Hadoop 0.19.x</a>. This version of HBase will
+ only run on this version of Hadoop.
+ </li>
+ <li>
+ ssh must be installed and sshd must be running to use Hadoop's
+ scripts to manage remote Hadoop daemons.
+ </li>
+ <li>HBase currently is a file handle hog. The usual default of
+ 1024 on *nix systems is insufficient if you are loading any significant
+ amount of data into regionservers. See the
+ <a href="http://wiki.apache.org/hadoop/Hbase/FAQ#6">FAQ: Why do I see "java.io.IOException...(Too many open files)" in my logs?</a>
+  for how to up the limit. Also, as of 0.18.x hadoop, datanodes have an upper bound
+  on the number of threads they will support (<code>dfs.datanode.max.xcievers</code>).
+  The default is 256. If you are loading lots of data into hbase, raise this limit on your
+  hadoop cluster. Also consider raising the number of datanode handlers from
+  the default of 3; see <code>dfs.datanode.handler.count</code> and the example
+  snippet after this list.</li>
+  <li>The clocks on cluster members should be in basic alignment. Some skew is tolerable but
+ wild skew can generate odd behaviors. Run <a href="http://en.wikipedia.org/wiki/Network_Time_Protocol">NTP</a>
+ on your cluster, or an equivalent.</li>
+</ul>
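+<p>
+As a sketch only (the values below are illustrative, not recommendations), the two
+datanode settings mentioned in the list above would be raised in your hadoop
+cluster's <code>hadoop-site.xml</code> along these lines:
+</p>
+<pre>
+  <property>
+    <name>dfs.datanode.max.xcievers</name>
+    <value>2047</value>
+  </property>
+  <property>
+    <name>dfs.datanode.handler.count</name>
+    <value>10</value>
+  </property>
+</pre>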
+<h3>Windows</h3>
+If you are running HBase on Windows, you must install <a href="http://cygwin.com/">Cygwin</a>. Additionally, it is <em>strongly recommended</em> that you add or append to the following environment variables. If you install Cygwin in a location that is not C:\cygwin you should modify the following appropriately.
+<p>
+<pre>
+HOME=c:\cygwin\home\jim
+ANT_HOME=(wherever you installed ant)
+JAVA_HOME=(wherever you installed java)
+PATH=C:\cygwin\bin;%JAVA_HOME%\bin;%ANT_HOME%\bin; other windows stuff
+SHELL=/bin/bash
+</pre>
+For additional information, see the <a href="http://hadoop.apache.org/core/docs/current/quickstart.html">Hadoop Quick Start Guide</a>.
+</p>
+<h2><a name="getting_started" >Getting Started</a></h2>
+<p>
+What follows presumes you have obtained a copy of HBase and are installing
+for the first time. If upgrading your
+HBase instance, see <a href="#upgrading">Upgrading</a>.
+</p>
+<p>
+Define <code>${HBASE_HOME}</code> to be the location of the root of your HBase installation, e.g.
+<code>/usr/local/hbase</code>. Edit <code>${HBASE_HOME}/conf/hbase-env.sh</code>. In this file you can
+set the heapsize for HBase, etc. At a minimum, set <code>JAVA_HOME</code> to point at the root of
+your Java installation.
+</p>
+<p>
+If you are running a standalone operation, there should be nothing further to configure; proceed to
+<a href=#runandconfirm>Running and Confirming Your Installation</a>. If you are running a distributed
+operation, continue reading.
+</p>
+
+<h2><a name="distributed">Distributed Operation</a></h2>
+<p>Distributed mode requires an instance of the Hadoop Distributed File System (DFS) and a ZooKeeper cluster.
+See the Hadoop <a href="http://lucene.apache.org/hadoop/api/overview-summary.html#overview_description">
+requirements and instructions</a> for how to set up a DFS.
+See the ZooKeeper <a href="http://hadoop.apache.org/zookeeper/docs/current/zookeeperStarted.html">Getting Started Guide</a>
+for information about the ZooKeeper distributed coordination service.
+If you do not configure a ZooKeeper cluster, HBase will manage a single instance
+ZooKeeper service for you running on the master node.
+This is intended for development and local testing only.
+It SHOULD NOT be used in a fully-distributed production operation.
+</p>
+
+<h3><a name="pseudo-distrib">Pseudo-Distributed Operation</a></h3>
+<p>A pseudo-distributed operation is simply a distributed operation run on a single host.
+Once you have confirmed your DFS setup, configuring HBase for use on one host requires modification of
+<code>${HBASE_HOME}/conf/hbase-site.xml</code>, which needs to be pointed at the running Hadoop DFS instance.
+Use <code>hbase-site.xml</code> to override the properties defined in
+<code>${HBASE_HOME}/conf/hbase-default.xml</code> (<code>hbase-default.xml</code> itself
+should never be modified). At a minimum the <code>hbase.rootdir</code> property should be redefined
+in <code>hbase-site.xml</code> to point HBase at the Hadoop filesystem to use. For example, adding the property
+below to your <code>hbase-site.xml</code> says that HBase should use the <code>/hbase</code> directory in the
+HDFS whose namenode is at port 9000 on your local machine:
+</p>
+<pre>
+<configuration>
+ ...
+ <property>
+ <name>hbase.rootdir</name>
+ <value>hdfs://localhost:9000/hbase</value>
+ <description>The directory shared by region servers.
+ </description>
+ </property>
+ ...
+</configuration>
+</pre>
+<p>Note: Let hbase create the directory. If you don't, you'll get a warning saying hbase
+needs a migration run because the directory is missing files expected by hbase (it will
+create them if you let it).
+</p>
+
+<h3><a name="fully-distrib">Fully-Distributed Operation</a></h3>
+<p>
+For running a fully-distributed operation on more than one host, the following
+configurations must be made <i>in addition</i> to those described in the
+<a href="#pseudo-distrib">pseudo-distributed operation</a> section above.
+In <code>hbase-site.xml</code>, you must also configure
+<code>hbase.master.hostname</code> to the host on which the HBase master runs
+(<a href="http://wiki.apache.org/lucene-hadoop/Hbase/HbaseArchitecture">read
+about the HBase master, regionservers, etc</a>).
+For example, adding the below to your <code>hbase-site.xml</code> says the
+master is up on the host example.org:
+</p>
+<pre>
+<configuration>
+ ...
+ <property>
+ <name>hbase.master.hostname</name>
+ <value>example.org</value>
+ <description>The host that the HBase master runs at.
+ A value of 'local' runs the master and regionserver in a single process.
+ </description>
+ </property>
+ ...
+</configuration>
+</pre>
+<p>
+Keep in mind that for a fully-distributed operation, you may not want your <code>hbase.rootdir</code>
+to point to localhost (maybe, as in the configuration above, you will want to use
+<code>example.org</code>). In addition to <code>hbase-site.xml</code>, a fully-distributed
+operation requires that you also modify <code>${HBASE_HOME}/conf/regionservers</code>.
+<code>regionservers</code> lists all the hosts running HRegionServers, one host per line (this file
+in HBase is analogous to the hadoop slaves file at <code>${HADOOP_HOME}/conf/slaves</code>); see the
+example below.
+</p>
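+<p>
+For example (hostnames are placeholders), a three-node <code>regionservers</code>
+file is just a plain list, one host per line:
+</p>
+<pre>
+rs1.example.org
+rs2.example.org
+rs3.example.org
+</pre>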
+<p>
+Furthermore, you should configure a distributed ZooKeeper cluster.
+The ZooKeeper configuration file is stored at <code>${HBASE_HOME}/conf/zoo.cfg</code>.
+See the ZooKeeper <a href="http://hadoop.apache.org/zookeeper/docs/current/zookeeperStarted.html"> Getting Started Guide</a> for information about the format and options of that file.
+Specifically, look at the <a href="http://hadoop.apache.org/zookeeper/docs/current/zookeeperStarted.html#sc_RunningReplicatedZooKeeper">Running Replicated ZooKeeper</a> section.
+In <code>${HBASE_HOME}/conf/hbase-env.sh</code>, set <code>HBASE_MANAGES_ZK=false</code> to tell HBase not to manage its own single instance ZooKeeper service.
+</p>
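+<p>
+As a rough sketch only (hostnames, ports, and the data directory are placeholders;
+see the ZooKeeper documentation for the authoritative option list), a
+<code>zoo.cfg</code> for a three-server ensemble might look like:
+</p>
+<pre>
+tickTime=2000
+dataDir=/var/zookeeper
+clientPort=2181
+initLimit=10
+syncLimit=5
+server.0=zk1.example.org:2888:3888
+server.1=zk2.example.org:2888:3888
+server.2=zk3.example.org:2888:3888
+</pre>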
+
+<p>Of note, if you have made <i>HDFS client configuration</i> on your hadoop cluster, hbase will not
+see this configuration unless you do one of the following:
+<ul>
+ <li>Add a pointer to your <code>HADOOP_CONF_DIR</code> to <code>CLASSPATH</code> in <code>hbase-env.sh</code></li>
+ <li>Add a copy of <code>hadoop-site.xml</code> to <code>${HBASE_HOME}/conf</code>, or</li>
+  <li>If only a small set of HDFS client configurations is needed, add them to <code>hbase-site.xml</code></li>
+</ul>
+An example of such an HDFS client configuration is <code>dfs.replication</code>. If, for example,
+you want to run with a replication factor of 5, hbase will still create files with the default of 3 unless
+you do one of the above to make the configuration available to hbase.
+</p>
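+<p>
+For instance, taking the third option above, the <code>dfs.replication</code>
+override could be added to <code>hbase-site.xml</code> roughly as follows (the
+value 5 is just the example figure used above):
+</p>
+<pre>
+  <property>
+    <name>dfs.replication</name>
+    <value>5</value>
+  </property>
+</pre>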
+
+<h2><a name="runandconfirm">Running and Confirming Your Installation</a></h2>
+<p>If you are running in standalone, non-distributed mode, HBase by default uses
+the local filesystem.</p>
+
+<p>If you are running a distributed cluster you will need to start the Hadoop DFS daemons
+before starting HBase and stop the daemons after HBase has shut down. Start the
+Hadoop DFS daemons by running <code>${HADOOP_HOME}/bin/start-dfs.sh</code> and stop
+them with <code>${HADOOP_HOME}/bin/stop-dfs.sh</code>.
+You can ensure it started properly by testing the put and get of files into the Hadoop filesystem.
+HBase does not normally use the mapreduce daemons. These do not need to be started.</p>
+
+<p>Start HBase with the following command:
+</p>
+<pre>
+${HBASE_HOME}/bin/start-hbase.sh
+</pre>
+<p>
+Once HBase has started, enter <code>${HBASE_HOME}/bin/hbase shell</code> to obtain a
+shell against HBase from which you can execute commands.
+Test your installation by creating, viewing, and then dropping a table, for
+example with a session like the sketch below.
+</p>
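+<p>
+A session along the following lines (table and column family names are arbitrary)
+exercises the basics; consult the shell's <code>help</code> output for the exact
+command set in your version:
+</p>
+<pre>
+hbase(main):001:0> create 'testtable', 'testfamily'
+hbase(main):002:0> put 'testtable', 'row1', 'testfamily:q1', 'value1'
+hbase(main):003:0> scan 'testtable'
+hbase(main):004:0> disable 'testtable'
+hbase(main):005:0> drop 'testtable'
+</pre>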
+<p>
+To stop HBase, exit the HBase shell and enter:
+</p>
+<pre>
+${HBASE_HOME}/bin/stop-hbase.sh
+</pre>
+<p>
+If you are running a distributed operation, be sure to wait until HBase has shut down completely
+before stopping the Hadoop daemons.
+</p>
+<p>
+The default location for logs is <code>${HBASE_HOME}/logs</code>.
+</p>
+<p>HBase also puts up a UI listing vital attributes. By default it is deployed on the master host
+at port 60010 (HBase regionservers listen on port 60020 by default and put up an informational
+http server at 60030).</p>
+
+<h2><a name="upgrading" >Upgrading</a></h2>
+<p>After installing a new HBase on top of data written by a previous HBase version, before
+starting your cluster, run the <code>${HBASE_HOME}/bin/hbase migrate</code> migration script.
+It will make any adjustments to the filesystem data under <code>hbase.rootdir</code> necessary to run
+the new HBase version. It does not change your install unless you explicitly ask it to.
+</p>
+
+<h2><a name="client_example">Example API Usage</a></h2>
+<p>Once you have a running HBase, you probably want a way to hook your application up to it.
+ If your application is in Java, then you should use the Java API. Here's an example of what
+ a simple client might look like. This example assumes that you've created a table called
+ "myTable" with a column family called "myColumnFamily".
+</p>
+
+<div style="background-color: #cccccc; padding: 2px">
+<code><pre>
+import java.io.IOException;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Scanner;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.util.Bytes;
+
+public class MyClient {
+
+ public static void main(String args[]) throws IOException {
+ // You need a configuration object to tell the client where to connect.
+ // But don't worry, the defaults are pulled from the local config file.
+ HBaseConfiguration config = new HBaseConfiguration();
+
+ // This instantiates an HTable object that connects you to the "myTable"
+ // table.
+ HTable table = new HTable(config, "myTable");
+
+    // To do any sort of update on a row, you use an instance of the BatchUpdate
+    // class. A BatchUpdate takes a row and optionally a timestamp which your
+    // updates will affect. If no timestamp is given, the server applies the
+    // current time to the edits.
+ BatchUpdate batchUpdate = new BatchUpdate("myRow");
+
+ // The BatchUpdate#put method takes a byte [] (or String) that designates
+ // what cell you want to put a value into, and a byte array that is the
+ // value you want to store. Note that if you want to store Strings, you
+ // have to getBytes() from the String for HBase to store it since HBase is
+ // all about byte arrays. The same goes for primitives like ints and longs
+ // and user-defined classes - you must find a way to reduce it to bytes.
+    // The Bytes class from the hbase util package has utilities for going from
+    // String to UTF-8 bytes and back again, plus helpers for other base types.
+ batchUpdate.put("myColumnFamily:columnQualifier1",
+ Bytes.toBytes("columnQualifier1 value!"));
+
+ // Deletes are batch operations in HBase as well.
+ batchUpdate.delete("myColumnFamily:cellIWantDeleted");
+
+ // Once you've done all the puts you want, you need to commit the results.
+ // The HTable#commit method takes the BatchUpdate instance you've been
+ // building and pushes the batch of changes you made into HBase.
+ table.commit(batchUpdate);
+
+ // Now, to retrieve the data we just wrote. The values that come back are
+ // Cell instances. A Cell is a combination of the value as a byte array and
+ // the timestamp the value was stored with. If you happen to know that the
+ // value contained is a string and want an actual string, then you must
+ // convert it yourself.
+ Cell cell = table.get("myRow", "myColumnFamily:columnQualifier1");
+ // This could throw a NullPointerException if there was no value at the cell
+ // location.
+ String valueStr = Bytes.toString(cell.getValue());
+
+ // Sometimes, you won't know the row you're looking for. In this case, you
+    // use a Scanner. This will give you a cursor-like interface to the contents
+ // of the table.
+ Scanner scanner =
+ // we want to get back only "myColumnFamily:columnQualifier1" when we iterate
+ table.getScanner(new String[]{"myColumnFamily:columnQualifier1"});
+
+
+ // Scanners return RowResult instances. A RowResult is like the
+ // row key and the columns all wrapped up in a single Object.
+ // RowResult#getRow gives you the row key. RowResult also implements
+ // Map, so you can get to your column results easily.
+
+ // Now, for the actual iteration. One way is to use a while loop like so:
+ RowResult rowResult = scanner.next();
+
+ while (rowResult != null) {
+ // print out the row we found and the columns we were looking for
+ System.out.println("Found row: " + Bytes.toString(rowResult.getRow()) +
+ " with value: " + rowResult.get(Bytes.toBytes("myColumnFamily:columnQualifier1")));
+ rowResult = scanner.next();
+ }
+
+ // The other approach is to use a foreach loop. Scanners are iterable!
+ for (RowResult result : scanner) {
+ // print out the row we found and the columns we were looking for
+ System.out.println("Found row: " + Bytes.toString(rowResult.getRow()) +
+ " with value: " + rowResult.get(Bytes.toBytes("myColumnFamily:columnQualifier1")));
+ }
+
+ // Make sure you close your scanners when you are done!
+    // It's probably best to put the iteration into a try/finally with the call
+    // below inside the finally clause.
+ scanner.close();
+ }
+}
+</pre></code>
+</div>
+
+<p>There are many other methods for putting data into and getting data out of
+ HBase, but these examples should get you started. See the HTable javadoc for
+ more methods. Additionally, there are methods for managing tables in the
+ HBaseAdmin class.</p>
+
+<p>If your client is NOT Java, then you should consider the Thrift or REST
+ libraries.</p>
+
+<h2><a name="related" >Related Documentation</a></h2>
+<ul>
+ <li><a href="http://hbase.org">HBase Home Page</a>
+ <li><a href="http://wiki.apache.org/hadoop/Hbase">HBase Wiki</a>
+ <li><a href="http://hadoop.apache.org/">Hadoop Home Page</a>
+</ul>
+
+</body>
+</html>
diff --git a/src/saveVersion.sh b/src/saveVersion.sh
new file mode 100755
index 0000000..b9e1168
--- /dev/null
+++ b/src/saveVersion.sh
@@ -0,0 +1,59 @@
+#!/bin/sh
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+# This file is used to generate the annotation of package info that
+# records the user, url, revision and timestamp.
+
+# Copied from hadoop core r740386
+
+unset LANG
+unset LC_CTYPE
+version=$1
+user=`whoami`
+date=`date`
+cwd=`pwd`
+if [ -d .svn ]; then
+ revision=`svn info | sed -n -e 's/Last Changed Rev: \(.*\)/\1/p'`
+ url=`svn info | sed -n -e 's/URL: \(.*\)/\1/p'`
+ # Get canonical branch (branches/X, tags/X, or trunk)
+ branch=`echo $url | sed -n -e 's,.*\(branches/.*\)$,\1,p' \
+ -e 's,.*\(tags/.*\)$,\1,p' \
+ -e 's,.*trunk$,trunk,p'`
+elif [ -d .git ]; then
+ revision=`git log -1 --pretty=format:"%H"`
+ hostname=`hostname`
+ branch=`git branch | sed -n -e 's/^* //p'`
+ url="git://${hostname}${cwd}"
+else
+ revision="Unknown"
+ branch="Unknown"
+ url="file://$cwd"
+fi
+mkdir -p build/src/org/apache/hadoop/hbase
+cat << EOF | \
+ sed -e "s/VERSION/$version/" -e "s/USER/$user/" -e "s/DATE/$date/" \
+ -e "s|URL|$url|" -e "s/REV/$revision/" \
+ -e "s|BRANCH|$branch|" \
+ > build/src/org/apache/hadoop/hbase/package-info.java
+/*
+ * Generated by src/saveVersion.sh
+ */
+@VersionAnnotation(version="VERSION", revision="REV",
+ user="USER", date="DATE", url="URL")
+package org.apache.hadoop.hbase;
+EOF
diff --git a/src/test/hbase-site.xml b/src/test/hbase-site.xml
new file mode 100644
index 0000000..8482123
--- /dev/null
+++ b/src/test/hbase-site.xml
@@ -0,0 +1,130 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<configuration>
+ <property>
+ <name>hbase.regionserver.msginterval</name>
+ <value>1000</value>
+ <description>Interval between messages from the RegionServer to HMaster
+ in milliseconds. Default is 15. Set this value low if you want unit
+ tests to be responsive.
+ </description>
+ </property>
+ <property>
+ <name>hbase.client.pause</name>
+ <value>5000</value>
+ <description>General client pause value. Used mostly as value to wait
+ before running a retry of a failed get, region lookup, etc.</description>
+ </property>
+ <property>
+ <name>hbase.master.meta.thread.rescanfrequency</name>
+ <value>10000</value>
+ <description>How long the HMaster sleeps (in milliseconds) between scans of
+ the root and meta tables.
+ </description>
+ </property>
+ <property>
+ <name>hbase.server.thread.wakefrequency</name>
+ <value>1000</value>
+ <description>Time to sleep in between searches for work (in milliseconds).
+ Used as sleep interval by service threads such as META scanner and log roller.
+ </description>
+ </property>
+ <property>
+ <name>hbase.regionserver.handler.count</name>
+ <value>5</value>
+    <description>Count of RPC Server instances spun up on RegionServers.
+    The same property is used by the HMaster for the count of master handlers.
+    Default is 10.
+ </description>
+ </property>
+ <property>
+ <name>hbase.master.lease.period</name>
+ <value>6000</value>
+ <description>Length of time the master will wait before timing out a region
+ server lease. Since region servers report in every second (see above), this
+ value has been reduced so that the master will notice a dead region server
+ sooner. The default is 30 seconds.
+ </description>
+ </property>
+ <property>
+ <name>hbase.master.info.port</name>
+ <value>-1</value>
+ <description>The port for the hbase master web UI
+ Set to -1 if you do not want the info server to run.
+ </description>
+ </property>
+ <property>
+ <name>hbase.regionserver.info.port</name>
+ <value>-1</value>
+ <description>The port for the hbase regionserver web UI
+ Set to -1 if you do not want the info server to run.
+ </description>
+ </property>
+ <property>
+ <name>hbase.regionserver.info.port.auto</name>
+ <value>true</value>
+ <description>Info server auto port bind. Enables automatic port
+ search if hbase.regionserver.info.port is already in use.
+ Enabled for testing to run multiple tests on one machine.
+ </description>
+ </property>
+ <property>
+ <name>hbase.master.lease.thread.wakefrequency</name>
+ <value>3000</value>
+ <description>The interval between checks for expired region server leases.
+ This value has been reduced due to the other reduced values above so that
+ the master will notice a dead region server sooner. The default is 15 seconds.
+ </description>
+ </property>
+ <property>
+ <name>hbase.regionserver.optionalcacheflushinterval</name>
+ <value>10000</value>
+ <description>
+ Amount of time to wait since the last time a region was flushed before
+ invoking an optional cache flush. Default 60,000.
+ </description>
+ </property>
+ <property>
+ <name>hbase.regionserver.safemode</name>
+ <value>false</value>
+ <description>
+ Turn on/off safe mode in region server. Always on for production, always off
+ for tests.
+ </description>
+ </property>
+ <property>
+ <name>hbase.hregion.max.filesize</name>
+ <value>67108864</value>
+ <description>
+ Maximum desired file size for an HRegion. If filesize exceeds
+ value + (value / 2), the HRegion is split in two. Default: 256M.
+
+ Keep the maximum filesize small so we split more often in tests.
+ </description>
+ </property>
+ <property>
+ <name>hadoop.log.dir</name>
+ <value>${user.dir}/../logs</value>
+ </property>
+</configuration>
diff --git a/src/test/log4j.properties b/src/test/log4j.properties
new file mode 100644
index 0000000..4b8f2c4
--- /dev/null
+++ b/src/test/log4j.properties
@@ -0,0 +1,47 @@
+# Define some default values that can be overridden by system properties
+hbase.root.logger=INFO,console
+hbase.log.dir=.
+hbase.log.file=hbase.log
+
+# Define the root logger to the system property "hbase.root.logger".
+log4j.rootLogger=${hbase.root.logger}
+
+# Logging Threshold
+log4j.threshold=ALL
+
+#
+# Daily Rolling File Appender
+#
+log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
+log4j.appender.DRFA.File=${hbase.log.dir}/${hbase.log.file}
+
+# Rollover at midnight
+log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
+
+# 30-day backup
+#log4j.appender.DRFA.MaxBackupIndex=30
+log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
+
+# Pattern format: Date LogLevel LoggerName LogMessage
+#log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
+
+# Debugging Pattern format
+log4j.appender.DRFA.layout.ConversionPattern=%d %-5p [%t] %C{2}(%L): %m%n
+
+
+#
+# console
+# Add "console" to rootlogger above if you want to use this
+#
+log4j.appender.console=org.apache.log4j.ConsoleAppender
+log4j.appender.console.target=System.err
+log4j.appender.console.layout=org.apache.log4j.PatternLayout
+log4j.appender.console.layout.ConversionPattern=%d %-5p [%t] %C{2}(%L): %m%n
+
+# Custom Logging levels
+
+#log4j.logger.org.apache.hadoop.fs.FSNamesystem=DEBUG
+
+log4j.logger.org.apache.hadoop=WARN
+log4j.logger.org.apache.zookeeper=ERROR
+log4j.logger.org.apache.hadoop.hbase=DEBUG
diff --git a/src/test/org/apache/hadoop/hbase/AbstractMergeTestBase.java b/src/test/org/apache/hadoop/hbase/AbstractMergeTestBase.java
new file mode 100644
index 0000000..9f5ce23
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/AbstractMergeTestBase.java
@@ -0,0 +1,144 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.Random;
+
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+/** Abstract base class for merge tests */
+public abstract class AbstractMergeTestBase extends HBaseClusterTestCase {
+ static final Log LOG =
+ LogFactory.getLog(AbstractMergeTestBase.class.getName());
+ static final byte [] COLUMN_NAME = Bytes.toBytes("contents:");
+ protected final Random rand = new Random();
+ protected HTableDescriptor desc;
+ protected ImmutableBytesWritable value;
+ protected boolean startMiniHBase;
+
+ public AbstractMergeTestBase() {
+ this(true);
+ }
+
+  /** Constructor.
+   * @param startMiniHBase true if a mini HBase cluster should be started
+   */
+ public AbstractMergeTestBase(boolean startMiniHBase) {
+ super();
+
+ this.startMiniHBase = startMiniHBase;
+
+ // We will use the same value for the rows as that is not really important here
+
+ String partialValue = String.valueOf(System.currentTimeMillis());
+ StringBuilder val = new StringBuilder();
+ while(val.length() < 1024) {
+ val.append(partialValue);
+ }
+
+ try {
+ value = new ImmutableBytesWritable(
+ val.toString().getBytes(HConstants.UTF8_ENCODING));
+ } catch (UnsupportedEncodingException e) {
+ fail();
+ }
+ desc = new HTableDescriptor(Bytes.toBytes("test"));
+ desc.addFamily(new HColumnDescriptor(COLUMN_NAME));
+ }
+
+ @Override
+ protected void hBaseClusterSetup() throws Exception {
+ if (startMiniHBase) {
+ super.hBaseClusterSetup();
+ }
+ }
+
+ @Override
+ public void preHBaseClusterSetup() throws Exception {
+ conf.setLong("hbase.hregion.max.filesize", 64L * 1024L * 1024L);
+
+    // We create three data regions: the first is too large to merge since it
+    // will be > 64 MB in size. The other two will be smaller and will be
+    // selected for merging.
+
+ // To ensure that the first region is larger than 64MB we need to write at
+ // least 65536 rows. We will make certain by writing 70000
+
+ byte [] row_70001 = Bytes.toBytes("row_70001");
+ byte [] row_80001 = Bytes.toBytes("row_80001");
+
+ // XXX: Note that the number of rows we put in is different for each region
+ // because currently we don't have a good mechanism to handle merging two
+ // store files with the same sequence id. We can't just dumbly stick them
+ // in because it will screw up the order when the store files are loaded up.
+ // The sequence ids are used for arranging the store files, so if two files
+ // have the same id, one will overwrite the other one in our listing, which
+ // is very bad. See HBASE-1212 and HBASE-1274.
+ HRegion[] regions = {
+ createAregion(null, row_70001, 1, 70000),
+ createAregion(row_70001, row_80001, 70001, 10000),
+ createAregion(row_80001, null, 80001, 11000)
+ };
+
+ // Now create the root and meta regions and insert the data regions
+ // created above into the meta
+
+ createRootAndMetaRegions();
+
+ for(int i = 0; i < regions.length; i++) {
+ HRegion.addRegionToMETA(meta, regions[i]);
+ }
+
+ closeRootAndMeta();
+ }
+
+ private HRegion createAregion(byte [] startKey, byte [] endKey, int firstRow,
+ int nrows) throws IOException {
+
+ HRegion region = createNewHRegion(desc, startKey, endKey);
+
+ System.out.println("created region " +
+ Bytes.toString(region.getRegionName()));
+
+ HRegionIncommon r = new HRegionIncommon(region);
+ for(int i = firstRow; i < firstRow + nrows; i++) {
+ BatchUpdate batchUpdate = new BatchUpdate(Bytes.toBytes("row_"
+ + String.format("%1$05d", i)));
+
+ batchUpdate.put(COLUMN_NAME, value.get());
+ region.batchUpdate(batchUpdate, null);
+ if(i % 10000 == 0) {
+ System.out.println("Flushing write #" + i);
+ r.flushcache();
+ }
+ }
+ region.close();
+ region.getLog().closeAndDelete();
+ region.getRegionInfo().setOffline(true);
+ return region;
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/DFSAbort.java b/src/test/org/apache/hadoop/hbase/DFSAbort.java
new file mode 100644
index 0000000..c2a9d87
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/DFSAbort.java
@@ -0,0 +1,73 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import junit.framework.TestSuite;
+import junit.textui.TestRunner;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+
+/**
+ * Test ability of HBase to handle DFS failure
+ */
+public class DFSAbort extends HBaseClusterTestCase {
+ /** constructor */
+ public DFSAbort() {
+ super();
+
+ // For less frequently updated regions flush after every 2 flushes
+ conf.setInt("hbase.hregion.memcache.optionalflushcount", 2);
+ }
+
+ @Override
+ public void setUp() throws Exception {
+ try {
+ super.setUp();
+ HTableDescriptor desc = new HTableDescriptor(getName());
+ desc.addFamily(new HColumnDescriptor(HConstants.COLUMN_FAMILY_STR));
+ HBaseAdmin admin = new HBaseAdmin(conf);
+ admin.createTable(desc);
+ } catch (Exception e) {
+ e.printStackTrace();
+ throw e;
+ }
+ }
+
+ /**
+ * @throws Exception
+ */
+ public void testDFSAbort() throws Exception {
+ try {
+      // By now the Mini DFS is running, Mini HBase is running and we have
+      // created a table. Now let's yank the rug out from under HBase.
+ dfsCluster.shutdown();
+ threadDumpingJoin();
+ } catch (Exception e) {
+ e.printStackTrace();
+ throw e;
+ }
+ }
+
+ /**
+ * @param args unused
+ */
+ public static void main(String[] args) {
+ TestRunner.run(new TestSuite(DFSAbort.class));
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/HBaseClusterTestCase.java b/src/test/org/apache/hadoop/hbase/HBaseClusterTestCase.java
new file mode 100644
index 0000000..a974c27
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/HBaseClusterTestCase.java
@@ -0,0 +1,226 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.File;
+import java.io.IOException;
+import java.io.PrintWriter;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.dfs.MiniDFSCluster;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.client.HConnectionManager;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.util.ReflectionUtils;
+
+/**
+ * Abstract base class for HBase cluster junit tests. Spins up an hbase
+ * cluster in setup and tears it down again in tearDown.
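+ *
+ * A minimal sketch of a subclass (names are illustrative):
+ * <pre>
+ *   public class TestMyFeature extends HBaseClusterTestCase {
+ *     public void testClusterIsUp() throws Exception {
+ *       // 'conf' and the mini cluster are set up by this base class in setUp().
+ *       HTable meta = new HTable(conf, HConstants.META_TABLE_NAME);
+ *       // ... exercise the running cluster ...
+ *     }
+ *   }
+ * </pre>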
+ */
+public abstract class HBaseClusterTestCase extends HBaseTestCase {
+ private static final Log LOG = LogFactory.getLog(HBaseClusterTestCase.class);
+ public MiniHBaseCluster cluster;
+ protected MiniDFSCluster dfsCluster;
+ protected MiniZooKeeperCluster zooKeeperCluster;
+ protected int regionServers;
+ protected boolean startDfs;
+ private boolean openMetaTable = true;
+
+ /** default constructor */
+ public HBaseClusterTestCase() {
+ this(1);
+ }
+
+ /**
+ * Start a MiniHBaseCluster with regionServers region servers in-process to
+ * start with. Also, start a MiniDfsCluster before starting the hbase cluster.
+ * The configuration used will be edited so that this works correctly.
+ * @param regionServers number of region servers to start.
+ */
+ public HBaseClusterTestCase(int regionServers) {
+ this(regionServers, true);
+ }
+
+  /** Start a MiniHBaseCluster with regionServers region servers in-process to
+ * start with. Optionally, startDfs indicates if a MiniDFSCluster should be
+ * started. If startDfs is false, the assumption is that an external DFS is
+ * configured in hbase-site.xml and is already started, or you have started a
+ * MiniDFSCluster on your own and edited the configuration in memory. (You
+ * can modify the config used by overriding the preHBaseClusterSetup method.)
+ * @param regionServers number of region servers to start.
+ * @param startDfs set to true if MiniDFS should be started
+ */
+ public HBaseClusterTestCase(int regionServers, boolean startDfs) {
+ super();
+ this.startDfs = startDfs;
+ this.regionServers = regionServers;
+ }
+
+ protected void setOpenMetaTable(boolean val) {
+ openMetaTable = val;
+ }
+
+ /**
+ * Run after dfs is ready but before hbase cluster is started up.
+ */
+ protected void preHBaseClusterSetup() throws Exception {
+ // continue
+ }
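+ // For example (illustrative), a subclass that points the cluster at an
+ // externally managed DFS could override this hook roughly as follows:
+ //
+ // @Override
+ // protected void preHBaseClusterSetup() throws Exception {
+ // conf.set("fs.default.name", "hdfs://localhost:9000"); // hypothetical URI
+ // }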
+
+ /**
+ * Actually start the MiniHBase instance.
+ */
+ protected void hBaseClusterSetup() throws Exception {
+ File testDir = new File(getUnitTestdir(getName()).toString());
+
+ // Note that this is done before we create the MiniHBaseCluster because we
+ // need to edit the config to add the ZooKeeper servers.
+ this.zooKeeperCluster = new MiniZooKeeperCluster();
+ this.zooKeeperCluster.startup(testDir);
+
+ // start the mini cluster
+ this.cluster = new MiniHBaseCluster(conf, regionServers);
+
+ if (openMetaTable) {
+ // opening the META table ensures that cluster is running
+ new HTable(conf, HConstants.META_TABLE_NAME);
+ }
+ }
+
+ /**
+ * Run after hbase cluster is started up.
+ */
+ protected void postHBaseClusterSetup() throws Exception {
+ // continue
+ }
+
+ @Override
+ protected void setUp() throws Exception {
+ try {
+ if (startDfs) {
+ // start up the dfs
+ dfsCluster = new MiniDFSCluster(conf, 2, true, (String[])null);
+
+ // mangle the conf so that the fs parameter points to the minidfs we
+ // just started up
+ FileSystem filesystem = dfsCluster.getFileSystem();
+ conf.set("fs.default.name", filesystem.getUri().toString());
+ Path parentdir = filesystem.getHomeDirectory();
+ conf.set(HConstants.HBASE_DIR, parentdir.toString());
+ filesystem.mkdirs(parentdir);
+ FSUtils.setVersion(filesystem, parentdir);
+ }
+
+ // do the super setup now. if we had done it first, then we would have
+ // gotten our conf all mangled and a local fs started up.
+ super.setUp();
+
+ // run the pre-cluster setup
+ preHBaseClusterSetup();
+
+ // start the instance
+ hBaseClusterSetup();
+
+ // run post-cluster setup
+ postHBaseClusterSetup();
+ } catch (Exception e) {
+ LOG.error("Exception in setup!", e);
+ if (cluster != null) {
+ cluster.shutdown();
+ }
+ if (zooKeeperCluster != null) {
+ zooKeeperCluster.shutdown();
+ }
+ if (dfsCluster != null) {
+ shutdownDfs(dfsCluster);
+ }
+ throw e;
+ }
+ }
+
+ @Override
+ protected void tearDown() throws Exception {
+ if (!openMetaTable) {
+ // open the META table now to ensure cluster is running before shutdown.
+ new HTable(conf, HConstants.META_TABLE_NAME);
+ }
+ super.tearDown();
+ try {
+ HConnectionManager.deleteConnectionInfo(conf, true);
+ if (this.cluster != null) {
+ try {
+ this.cluster.shutdown();
+ } catch (Exception e) {
+ LOG.warn("Shutting down mini hbase cluster", e);
+ }
+ try {
+ this.zooKeeperCluster.shutdown();
+ } catch (IOException e) {
+ LOG.warn("Shutting down ZooKeeper cluster", e);
+ }
+ }
+ if (startDfs) {
+ shutdownDfs(dfsCluster);
+ }
+ } catch (Exception e) {
+ LOG.error(e);
+ }
+ // ReflectionUtils.printThreadInfo(new PrintWriter(System.out),
+ // "Temporary end-of-test thread dump debugging HADOOP-2040: " + getName());
+ }
+
+
+ /**
+ * Use this utility method to debug why the cluster won't go down. It
+ * periodically prints a thread dump. The method ends when all cluster
+ * regionserver and master threads are no longer alive.
+ */
+ public void threadDumpingJoin() {
+ if (this.cluster.getRegionThreads() != null) {
+ for(Thread t: this.cluster.getRegionThreads()) {
+ threadDumpingJoin(t);
+ }
+ }
+ threadDumpingJoin(this.cluster.getMaster());
+ }
+
+ protected void threadDumpingJoin(final Thread t) {
+ if (t == null) {
+ return;
+ }
+ long startTime = System.currentTimeMillis();
+ while (t.isAlive()) {
+ try {
+ Thread.sleep(1000);
+ } catch (InterruptedException e) {
+ LOG.info("Continuing...", e);
+ }
+ if (System.currentTimeMillis() - startTime > 60000) {
+ startTime = System.currentTimeMillis();
+ ReflectionUtils.printThreadInfo(new PrintWriter(System.out),
+ "Automatic Stack Trace every 60 seconds waiting on " +
+ t.getName());
+ }
+ }
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/HBaseTestCase.java b/src/test/org/apache/hadoop/hbase/HBaseTestCase.java
new file mode 100644
index 0000000..2f4b294
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/HBaseTestCase.java
@@ -0,0 +1,627 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.File;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.SortedMap;
+
+import junit.framework.TestCase;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.dfs.MiniDFSCluster;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Scanner;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.InternalScanner;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Abstract base class for test cases. Performs all static initialization
+ */
+public abstract class HBaseTestCase extends TestCase {
+ private static final Log LOG = LogFactory.getLog(HBaseTestCase.class);
+
+ /** configuration parameter name for test directory */
+ public static final String TEST_DIRECTORY_KEY = "test.build.data";
+
+ protected final static byte [] COLFAMILY_NAME1 = Bytes.toBytes("colfamily1:");
+ protected final static byte [] COLFAMILY_NAME2 = Bytes.toBytes("colfamily2:");
+ protected final static byte [] COLFAMILY_NAME3 = Bytes.toBytes("colfamily3:");
+ protected static final byte [][] COLUMNS = {COLFAMILY_NAME1,
+ COLFAMILY_NAME2, COLFAMILY_NAME3};
+
+ private boolean localfs = false;
+ protected Path testDir = null;
+ protected FileSystem fs = null;
+ protected HRegion root = null;
+ protected HRegion meta = null;
+ protected static final char FIRST_CHAR = 'a';
+ protected static final char LAST_CHAR = 'z';
+ protected static final String PUNCTUATION = "~`@#$%^&*()-_+=:;',.<>/?[]{}|";
+ protected static final byte [] START_KEY_BYTES = {FIRST_CHAR, FIRST_CHAR, FIRST_CHAR};
+ protected String START_KEY;
+ protected static final int MAXVERSIONS = 3;
+
+ static {
+ initialize();
+ }
+
+ public volatile HBaseConfiguration conf;
+
+ /** constructor */
+ public HBaseTestCase() {
+ super();
+ init();
+ }
+
+ /**
+ * @param name
+ */
+ public HBaseTestCase(String name) {
+ super(name);
+ init();
+ }
+
+ private void init() {
+ conf = new HBaseConfiguration();
+ try {
+ START_KEY = new String(START_KEY_BYTES, HConstants.UTF8_ENCODING);
+ } catch (UnsupportedEncodingException e) {
+ LOG.fatal("error during initialization", e);
+ fail();
+ }
+ }
+
+ /**
+ * Note that this method must be called after the mini hdfs cluster has
+ * started or we end up with a local file system.
+ */
+ @Override
+ protected void setUp() throws Exception {
+ super.setUp();
+ localfs =
+ (conf.get("fs.default.name", "file:///").compareTo("file:///") == 0);
+
+ if (fs == null) {
+ this.fs = FileSystem.get(conf);
+ }
+ try {
+ if (localfs) {
+ this.testDir = getUnitTestdir(getName());
+ if (fs.exists(testDir)) {
+ fs.delete(testDir, true);
+ }
+ } else {
+ this.testDir =
+ this.fs.makeQualified(new Path(conf.get(HConstants.HBASE_DIR)));
+ }
+ } catch (Exception e) {
+ LOG.fatal("error during setup", e);
+ throw e;
+ }
+ }
+
+ @Override
+ protected void tearDown() throws Exception {
+ try {
+ if (localfs) {
+ if (this.fs.exists(testDir)) {
+ this.fs.delete(testDir, true);
+ }
+ }
+ } catch (Exception e) {
+ LOG.fatal("error during tear down", e);
+ }
+ super.tearDown();
+ }
+
+ protected Path getUnitTestdir(String testName) {
+ return new Path(
+ conf.get(TEST_DIRECTORY_KEY, "test/build/data"), testName);
+ }
+
+ protected HRegion createNewHRegion(HTableDescriptor desc, byte [] startKey,
+ byte [] endKey)
+ throws IOException {
+ FileSystem filesystem = FileSystem.get(conf);
+ Path rootdir = filesystem.makeQualified(
+ new Path(conf.get(HConstants.HBASE_DIR)));
+ filesystem.mkdirs(rootdir);
+
+ return HRegion.createHRegion(new HRegionInfo(desc, startKey, endKey),
+ rootdir, conf);
+ }
+
+ protected HRegion openClosedRegion(final HRegion closedRegion)
+ throws IOException {
+ HRegion r = new HRegion(closedRegion.getBaseDir(), closedRegion.getLog(),
+ closedRegion.getFilesystem(), closedRegion.getConf(),
+ closedRegion.getRegionInfo(), null);
+ r.initialize(null, null);
+ return r;
+ }
+
+ /**
+ * Create a table of name <code>name</code> with {@link COLUMNS} for
+ * families.
+ * @param name Name to give table.
+ * @return Table descriptor.
+ */
+ protected HTableDescriptor createTableDescriptor(final String name) {
+ return createTableDescriptor(name, MAXVERSIONS);
+ }
+
+ /**
+ * Create a table of name <code>name</code> with {@link COLUMNS} for
+ * families.
+ * @param name Name to give table.
+ * @param versions How many versions to allow per column.
+ * @return Table descriptor.
+ */
+ protected HTableDescriptor createTableDescriptor(final String name,
+ final int versions) {
+ HTableDescriptor htd = new HTableDescriptor(name);
+ htd.addFamily(new HColumnDescriptor(COLFAMILY_NAME1, versions,
+ HColumnDescriptor.DEFAULT_COMPRESSION, false, false,
+ Integer.MAX_VALUE, HConstants.FOREVER, false));
+ htd.addFamily(new HColumnDescriptor(COLFAMILY_NAME2, versions,
+ HColumnDescriptor.DEFAULT_COMPRESSION, false, false,
+ Integer.MAX_VALUE, HConstants.FOREVER, false));
+ htd.addFamily(new HColumnDescriptor(COLFAMILY_NAME3, versions,
+ HColumnDescriptor.DEFAULT_COMPRESSION, false, false,
+ Integer.MAX_VALUE, HConstants.FOREVER, false));
+ return htd;
+ }
+
+ /**
+ * Add content to region <code>r</code> on the passed column
+ * <code>column</code>.
+ * Adds data of the form 'aaa', 'aab', etc., where key and value are the same.
+ * @param r
+ * @param column
+ * @throws IOException
+ * @return count of what we added.
+ */
+ protected static long addContent(final HRegion r, final byte [] column)
+ throws IOException {
+ byte [] startKey = r.getRegionInfo().getStartKey();
+ byte [] endKey = r.getRegionInfo().getEndKey();
+ byte [] startKeyBytes = startKey;
+ if (startKeyBytes == null || startKeyBytes.length == 0) {
+ startKeyBytes = START_KEY_BYTES;
+ }
+ return addContent(new HRegionIncommon(r), Bytes.toString(column),
+ startKeyBytes, endKey, -1);
+ }
+
+ /**
+ * Add content to region <code>r</code> on the passed column
+ * <code>column</code>.
+ * Adds data of the form 'aaa', 'aab', etc., where key and value are the same.
+ * @param updater An instance of {@link Incommon}.
+ * @param column
+ * @throws IOException
+ * @return count of what we added.
+ */
+ protected static long addContent(final Incommon updater, final String column)
+ throws IOException {
+ return addContent(updater, column, START_KEY_BYTES, null);
+ }
+
+ /**
+ * Add content to region <code>r</code> on the passed column
+ * <code>column</code>.
+ * Adds data of the form 'aaa', 'aab', etc., where key and value are the same.
+ * @param updater An instance of {@link Incommon}.
+ * @param column
+ * @param startKeyBytes Where to start the rows inserted
+ * @param endKey Where to stop inserting rows.
+ * @return count of what we added.
+ * @throws IOException
+ */
+ protected static long addContent(final Incommon updater, final String column,
+ final byte [] startKeyBytes, final byte [] endKey)
+ throws IOException {
+ return addContent(updater, column, startKeyBytes, endKey, -1);
+ }
+
+ /**
+ * Add content to region <code>r</code> on the passed column
+ * <code>column</code>.
+ * Adds data of the form 'aaa', 'aab', etc., where key and value are the same.
+ * @param updater An instance of {@link Incommon}.
+ * @param column
+ * @param startKeyBytes Where to start the rows inserted
+ * @param endKey Where to stop inserting rows.
+ * @param ts Timestamp to write the content with.
+ * @return count of what we added.
+ * @throws IOException
+ */
+ protected static long addContent(final Incommon updater, final String column,
+ final byte [] startKeyBytes, final byte [] endKey, final long ts)
+ throws IOException {
+ long count = 0;
+ // Add rows of three characters. The first character starts with the
+ // 'a' character and runs up to 'z'. Per first character, we run the
+ // second character over same range. And same for the third so rows
+ // (and values) look like this: 'aaa', 'aab', 'aac', etc.
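+ // For example, starting from 'aaa' the generated keys run 'aaa', 'aab',
+ // ..., 'aaz', 'aba', ... up to 'zzz' or until endKey is reached.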
+ char secondCharStart = (char)startKeyBytes[1];
+ char thirdCharStart = (char)startKeyBytes[2];
+ EXIT: for (char c = (char)startKeyBytes[0]; c <= LAST_CHAR; c++) {
+ for (char d = secondCharStart; d <= LAST_CHAR; d++) {
+ for (char e = thirdCharStart; e <= LAST_CHAR; e++) {
+ byte [] t = new byte [] {(byte)c, (byte)d, (byte)e};
+ if (endKey != null && endKey.length > 0
+ && Bytes.compareTo(endKey, t) <= 0) {
+ break EXIT;
+ }
+ try {
+ BatchUpdate batchUpdate = ts == -1 ?
+ new BatchUpdate(t) : new BatchUpdate(t, ts);
+ batchUpdate.put(column, t);
+ updater.commit(batchUpdate);
+ count++;
+ } catch (RuntimeException ex) {
+ ex.printStackTrace();
+ throw ex;
+ } catch (IOException ex) {
+ ex.printStackTrace();
+ throw ex;
+ }
+ }
+ // Set start character back to FIRST_CHAR after we've done first loop.
+ thirdCharStart = FIRST_CHAR;
+ }
+ secondCharStart = FIRST_CHAR;
+ }
+ return count;
+ }
+
+ /**
+ * Implementors can flushcache.
+ */
+ public static interface FlushCache {
+ /**
+ * @throws IOException
+ */
+ public void flushcache() throws IOException;
+ }
+
+ /**
+ * Interface used by tests so can do common operations against an HTable
+ * or an HRegion.
+ *
+ * TODO: Come up with a better name for this interface.
+ */
+ public static interface Incommon {
+ /**
+ * @param row
+ * @param column
+ * @return value for row/column pair
+ * @throws IOException
+ */
+ public Cell get(byte [] row, byte [] column) throws IOException;
+ /**
+ * @param row
+ * @param column
+ * @param versions
+ * @return value for row/column pair for number of versions requested
+ * @throws IOException
+ */
+ public Cell[] get(byte [] row, byte [] column, int versions) throws IOException;
+ /**
+ * @param row
+ * @param column
+ * @param ts
+ * @param versions
+ * @return value for row/column/timestamp tuple for number of versions
+ * @throws IOException
+ */
+ public Cell[] get(byte [] row, byte [] column, long ts, int versions)
+ throws IOException;
+ /**
+ * @param row
+ * @param column
+ * @param ts
+ * @throws IOException
+ */
+ public void deleteAll(byte [] row, byte [] column, long ts) throws IOException;
+
+ /**
+ * @param batchUpdate
+ * @throws IOException
+ */
+ public void commit(BatchUpdate batchUpdate) throws IOException;
+
+ /**
+ * @param columns
+ * @param firstRow
+ * @param ts
+ * @return scanner for specified columns, first row and timestamp
+ * @throws IOException
+ */
+ public ScannerIncommon getScanner(byte [] [] columns, byte [] firstRow,
+ long ts) throws IOException;
+ }
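+ // Illustrative sketch (hRegion and hTable are hypothetical locals):
+ //
+ // Incommon regionWrapper = new HRegionIncommon(hRegion);
+ // Incommon tableWrapper = new HTableIncommon(hTable);
+ // addContent(regionWrapper, "colfamily1:"); // same helper works for both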
+
+ /**
+ * A class that makes an {@link Incommon} out of an {@link HRegion}.
+ */
+ public static class HRegionIncommon implements Incommon, FlushCache {
+ final HRegion region;
+
+ /**
+ * @param HRegion
+ */
+ public HRegionIncommon(final HRegion HRegion) {
+ this.region = HRegion;
+ }
+
+ public void commit(BatchUpdate batchUpdate) throws IOException {
+ region.batchUpdate(batchUpdate, null);
+ }
+
+ public void deleteAll(byte [] row, byte [] column, long ts)
+ throws IOException {
+ this.region.deleteAll(row, column, ts, null);
+ }
+
+ public ScannerIncommon getScanner(byte [][] columns, byte [] firstRow,
+ long ts)
+ throws IOException {
+ return new
+ InternalScannerIncommon(region.getScanner(columns, firstRow, ts, null));
+ }
+
+ public Cell get(byte [] row, byte [] column) throws IOException {
+ // TODO: Fix profligacy converting from List to Cell [].
+ Cell[] result = Cell.createSingleCellArray(this.region.get(row, column, -1, -1));
+ return (result == null)? null : result[0];
+ }
+
+ public Cell[] get(byte [] row, byte [] column, int versions)
+ throws IOException {
+ // TODO: Fix profligacy converting from List to Cell [].
+ return Cell.createSingleCellArray(this.region.get(row, column, -1, versions));
+ }
+
+ public Cell[] get(byte [] row, byte [] column, long ts, int versions)
+ throws IOException {
+ // TODO: Fix profligacy converting from List to Cell [].
+ return Cell.createSingleCellArray(this.region.get(row, column, ts, versions));
+ }
+
+ /**
+ * @param row
+ * @return values for each column in the specified row
+ * @throws IOException
+ */
+ public Map<byte [], Cell> getFull(byte [] row) throws IOException {
+ return region.getFull(row, null, HConstants.LATEST_TIMESTAMP, 1, null);
+ }
+
+ public void flushcache() throws IOException {
+ this.region.flushcache();
+ }
+ }
+
+ /**
+ * A class that makes an {@link Incommon} out of an {@link HTable}.
+ */
+ public static class HTableIncommon implements Incommon {
+ final HTable table;
+
+ /**
+ * @param table
+ */
+ public HTableIncommon(final HTable table) {
+ super();
+ this.table = table;
+ }
+
+ public void commit(BatchUpdate batchUpdate) throws IOException {
+ table.commit(batchUpdate);
+ }
+
+ public void deleteAll(byte [] row, byte [] column, long ts)
+ throws IOException {
+ this.table.deleteAll(row, column, ts);
+ }
+
+ public ScannerIncommon getScanner(byte [][] columns, byte [] firstRow, long ts)
+ throws IOException {
+ return new
+ ClientScannerIncommon(table.getScanner(columns, firstRow, ts, null));
+ }
+
+ public Cell get(byte [] row, byte [] column) throws IOException {
+ return this.table.get(row, column);
+ }
+
+ public Cell[] get(byte [] row, byte [] column, int versions)
+ throws IOException {
+ return this.table.get(row, column, versions);
+ }
+
+ public Cell[] get(byte [] row, byte [] column, long ts, int versions)
+ throws IOException {
+ return this.table.get(row, column, ts, versions);
+ }
+ }
+
+ public interface ScannerIncommon
+ extends Iterable<Map.Entry<HStoreKey, SortedMap<byte [], Cell>>> {
+ public boolean next(List<KeyValue> values)
+ throws IOException;
+
+ public void close() throws IOException;
+ }
+
+ public static class ClientScannerIncommon implements ScannerIncommon {
+ Scanner scanner;
+ public ClientScannerIncommon(Scanner scanner) {
+ this.scanner = scanner;
+ }
+
+ public boolean next(List<KeyValue> values)
+ throws IOException {
+ RowResult results = scanner.next();
+ if (results == null) {
+ return false;
+ }
+ values.clear();
+ for (Map.Entry<byte [], Cell> entry : results.entrySet()) {
+ values.add(new KeyValue(results.getRow(), entry.getKey(),
+ entry.getValue().getTimestamp(), entry.getValue().getValue()));
+ }
+ return true;
+ }
+
+ public void close() throws IOException {
+ scanner.close();
+ }
+
+ @SuppressWarnings("unchecked")
+ public Iterator iterator() {
+ return scanner.iterator();
+ }
+ }
+
+ public static class InternalScannerIncommon implements ScannerIncommon {
+ InternalScanner scanner;
+
+ public InternalScannerIncommon(InternalScanner scanner) {
+ this.scanner = scanner;
+ }
+
+ public boolean next(List<KeyValue> results)
+ throws IOException {
+ return scanner.next(results);
+ }
+
+ public void close() throws IOException {
+ scanner.close();
+ }
+
+ public Iterator<Map.Entry<HStoreKey, SortedMap<byte [], Cell>>> iterator() {
+ throw new UnsupportedOperationException();
+ }
+ }
+
+ protected void assertCellEquals(final HRegion region, final byte [] row,
+ final byte [] column, final long timestamp, final String value)
+ throws IOException {
+ Map<byte [], Cell> result = region.getFull(row, null, timestamp, 1, null);
+ Cell cell_value = result.get(column);
+ if (value == null) {
+ assertEquals(Bytes.toString(column) + " at timestamp " + timestamp, null,
+ cell_value);
+ } else {
+ if (cell_value == null) {
+ fail(Bytes.toString(column) + " at timestamp " + timestamp +
+ " was expected to be \"" + value + "\" but was null");
+ }
+ if (cell_value != null) {
+ assertEquals(Bytes.toString(column) + " at timestamp "
+ + timestamp, value, new String(cell_value.getValue()));
+ }
+ }
+ }
+
+ /**
+ * Initializes parameters used in the test environment:
+ *
+ * Sets the configuration parameter TEST_DIRECTORY_KEY if it is not already
+ * set.
+ */
+ public static void initialize() {
+ if (System.getProperty(TEST_DIRECTORY_KEY) == null) {
+ System.setProperty(TEST_DIRECTORY_KEY, new File(
+ "build/hbase/test").getAbsolutePath());
+ }
+ }
+
+ /**
+ * Common method to close down a MiniDFSCluster and the associated file system
+ *
+ * @param cluster
+ */
+ public static void shutdownDfs(MiniDFSCluster cluster) {
+ if (cluster != null) {
+ try {
+ FileSystem fs = cluster.getFileSystem();
+ if (fs != null) {
+ LOG.info("Shutting down FileSystem");
+ fs.close();
+ }
+ } catch (IOException e) {
+ LOG.error("error closing file system", e);
+ }
+
+ LOG.info("Shutting down Mini DFS ");
+ try {
+ cluster.shutdown();
+ } catch (Exception e) {
+ // Can get a java.lang.reflect.UndeclaredThrowableException thrown
+ // here because of an InterruptedException. Don't let exceptions in
+ // here cause test failure.
+ }
+ }
+ }
+
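+ // Creates the -ROOT- and .META. regions under testDir and registers the new
+ // .META. region in -ROOT-; callers should pair this with closeRootAndMeta().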
+ protected void createRootAndMetaRegions() throws IOException {
+ root = HRegion.createHRegion(HRegionInfo.ROOT_REGIONINFO, testDir, conf);
+ meta = HRegion.createHRegion(HRegionInfo.FIRST_META_REGIONINFO, testDir,
+ conf);
+ HRegion.addRegionToMETA(root, meta);
+ }
+
+ protected void closeRootAndMeta() throws IOException {
+ if (meta != null) {
+ meta.close();
+ meta.getLog().closeAndDelete();
+ }
+ if (root != null) {
+ root.close();
+ root.getLog().closeAndDelete();
+ }
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/HFilePerformanceEvaluation.java b/src/test/org/apache/hadoop/hbase/HFilePerformanceEvaluation.java
new file mode 100644
index 0000000..54afb1f
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/HFilePerformanceEvaluation.java
@@ -0,0 +1,365 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.Random;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.commons.math.random.RandomData;
+import org.apache.commons.math.random.RandomDataImpl;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.io.hfile.HFile;
+import org.apache.hadoop.hbase.io.hfile.HFileScanner;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * <p>
+ * This class runs performance benchmarks for {@link HFile}.
+ * </p>
+ */
+public class HFilePerformanceEvaluation {
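+ // Rough usage sketch: running main() writes ROW_COUNT rows to an HFile named
+ // "performanceevaluation.mapfile" on the default FileSystem and then runs the
+ // sequential and random read benchmarks against it, logging elapsed times.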
+
+ private static final int ROW_LENGTH = 10;
+ private static final int ROW_COUNT = 1000000;
+ private static final int RFILE_BLOCKSIZE = 8 * 1024;
+
+ static final Log LOG =
+ LogFactory.getLog(HFilePerformanceEvaluation.class.getName());
+
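+ // Zero-pads i to a fixed-width 10 character row key, e.g. format(123)
+ // yields the bytes of "0000000123".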
+ static byte [] format(final int i) {
+ String v = Integer.toString(i);
+ return Bytes.toBytes("0000000000".substring(v.length()) + v);
+ }
+
+ static ImmutableBytesWritable format(final int i, ImmutableBytesWritable w) {
+ w.set(format(i));
+ return w;
+ }
+
+ private void runBenchmarks() throws Exception {
+ final Configuration conf = new Configuration();
+ final FileSystem fs = FileSystem.get(conf);
+ final Path mf = fs.makeQualified(new Path("performanceevaluation.mapfile"));
+ if (fs.exists(mf)) {
+ fs.delete(mf, true);
+ }
+
+ runBenchmark(new SequentialWriteBenchmark(conf, fs, mf, ROW_COUNT),
+ ROW_COUNT);
+ PerformanceEvaluationCommons.concurrentReads(new Runnable() {
+ public void run() {
+ try {
+ runBenchmark(new UniformRandomSmallScan(conf, fs, mf, ROW_COUNT),
+ ROW_COUNT);
+ } catch (Exception e) {
+ e.printStackTrace();
+ }
+ }
+ });
+ PerformanceEvaluationCommons.concurrentReads(new Runnable() {
+ public void run() {
+ try {
+ runBenchmark(new UniformRandomReadBenchmark(conf, fs, mf, ROW_COUNT),
+ ROW_COUNT);
+ } catch (Exception e) {
+ e.printStackTrace();
+ }
+ }
+ });
+ PerformanceEvaluationCommons.concurrentReads(new Runnable() {
+ public void run() {
+ try {
+ runBenchmark(new GaussianRandomReadBenchmark(conf, fs, mf, ROW_COUNT),
+ ROW_COUNT);
+ } catch (Exception e) {
+ e.printStackTrace();
+ }
+ }
+ });
+ PerformanceEvaluationCommons.concurrentReads(new Runnable() {
+ public void run() {
+ try {
+ runBenchmark(new SequentialReadBenchmark(conf, fs, mf, ROW_COUNT),
+ ROW_COUNT);
+ } catch (Exception e) {
+ e.printStackTrace();
+ }
+ }
+ });
+
+ }
+
+ protected void runBenchmark(RowOrientedBenchmark benchmark, int rowCount)
+ throws Exception {
+ LOG.info("Running " + benchmark.getClass().getSimpleName() + " for " +
+ rowCount + " rows.");
+ long elapsedTime = benchmark.run();
+ LOG.info("Running " + benchmark.getClass().getSimpleName() + " for " +
+ rowCount + " rows took " + elapsedTime + "ms.");
+ }
+
+ static abstract class RowOrientedBenchmark {
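+ // Template for a benchmark: run() calls setUp(), times the doRow() loop over
+ // totalRows rows (setUp and tearDown are excluded from the elapsed time),
+ // and finally calls tearDown().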
+
+ protected final Configuration conf;
+ protected final FileSystem fs;
+ protected final Path mf;
+ protected final int totalRows;
+
+ public RowOrientedBenchmark(Configuration conf, FileSystem fs, Path mf,
+ int totalRows) {
+ this.conf = conf;
+ this.fs = fs;
+ this.mf = mf;
+ this.totalRows = totalRows;
+ }
+
+ void setUp() throws Exception {
+ // do nothing
+ }
+
+ abstract void doRow(int i) throws Exception;
+
+ protected int getReportingPeriod() {
+ return this.totalRows / 10;
+ }
+
+ void tearDown() throws Exception {
+ // do nothing
+ }
+
+ /**
+ * Run benchmark
+ * @return elapsed time.
+ * @throws Exception
+ */
+ long run() throws Exception {
+ long elapsedTime;
+ setUp();
+ long startTime = System.currentTimeMillis();
+ try {
+ for (int i = 0; i < totalRows; i++) {
+ if (i > 0 && i % getReportingPeriod() == 0) {
+ LOG.info("Processed " + i + " rows.");
+ }
+ doRow(i);
+ }
+ elapsedTime = System.currentTimeMillis() - startTime;
+ } finally {
+ tearDown();
+ }
+ return elapsedTime;
+ }
+
+ }
+
+ static class SequentialWriteBenchmark extends RowOrientedBenchmark {
+ protected HFile.Writer writer;
+ private Random random = new Random();
+ private byte[] bytes = new byte[ROW_LENGTH];
+
+ public SequentialWriteBenchmark(Configuration conf, FileSystem fs, Path mf,
+ int totalRows) {
+ super(conf, fs, mf, totalRows);
+ }
+
+ @Override
+ void setUp() throws Exception {
+ writer = new HFile.Writer(this.fs, this.mf, RFILE_BLOCKSIZE, null, null);
+ }
+
+ @Override
+ void doRow(int i) throws Exception {
+ writer.append(format(i), generateValue());
+ }
+
+ private byte[] generateValue() {
+ random.nextBytes(bytes);
+ return bytes;
+ }
+
+ @Override
+ protected int getReportingPeriod() {
+ return this.totalRows; // don't report progress
+ }
+
+ @Override
+ void tearDown() throws Exception {
+ writer.close();
+ }
+
+ }
+
+ static abstract class ReadBenchmark extends RowOrientedBenchmark {
+ ImmutableBytesWritable key = new ImmutableBytesWritable();
+ ImmutableBytesWritable value = new ImmutableBytesWritable();
+
+ protected HFile.Reader reader;
+
+ public ReadBenchmark(Configuration conf, FileSystem fs, Path mf,
+ int totalRows) {
+ super(conf, fs, mf, totalRows);
+ }
+
+ @Override
+ void setUp() throws Exception {
+ reader = new HFile.Reader(this.fs, this.mf, null);
+ this.reader.loadFileInfo();
+ }
+
+ @Override
+ void tearDown() throws Exception {
+ reader.close();
+ }
+
+ }
+
+ static class SequentialReadBenchmark extends ReadBenchmark {
+ private HFileScanner scanner;
+
+ public SequentialReadBenchmark(Configuration conf, FileSystem fs,
+ Path mf, int totalRows) {
+ super(conf, fs, mf, totalRows);
+ }
+
+ @Override
+ void setUp() throws Exception {
+ super.setUp();
+ this.scanner = this.reader.getScanner();
+ this.scanner.seekTo();
+ }
+
+ @Override
+ void doRow(int i) throws Exception {
+ if (this.scanner.next()) {
+ ByteBuffer k = this.scanner.getKey();
+ PerformanceEvaluationCommons.assertKey(format(i + 1), k);
+ ByteBuffer v = scanner.getValue();
+ PerformanceEvaluationCommons.assertValueSize(v.limit(), ROW_LENGTH);
+ }
+ }
+
+ @Override
+ protected int getReportingPeriod() {
+ return this.totalRows; // don't report progress
+ }
+
+ }
+
+ static class UniformRandomReadBenchmark extends ReadBenchmark {
+
+ private Random random = new Random();
+
+ public UniformRandomReadBenchmark(Configuration conf, FileSystem fs,
+ Path mf, int totalRows) {
+ super(conf, fs, mf, totalRows);
+ }
+
+ @Override
+ void doRow(int i) throws Exception {
+ HFileScanner scanner = this.reader.getScanner();
+ byte [] b = getRandomRow();
+ scanner.seekTo(b);
+ ByteBuffer k = scanner.getKey();
+ PerformanceEvaluationCommons.assertKey(b, k);
+ ByteBuffer v = scanner.getValue();
+ PerformanceEvaluationCommons.assertValueSize(v.limit(), ROW_LENGTH);
+ }
+
+ private byte [] getRandomRow() {
+ return format(random.nextInt(totalRows));
+ }
+ }
+
+ static class UniformRandomSmallScan extends ReadBenchmark {
+ private Random random = new Random();
+
+ public UniformRandomSmallScan(Configuration conf, FileSystem fs,
+ Path mf, int totalRows) {
+ super(conf, fs, mf, totalRows/10);
+ }
+
+ @Override
+ void doRow(int i) throws Exception {
+ HFileScanner scanner = this.reader.getScanner();
+ byte [] b = getRandomRow();
+ if (scanner.seekTo(b) != 0) {
+ System.out.println("Nonexistent row: " + new String(b));
+ return;
+ }
+ ByteBuffer k = scanner.getKey();
+ PerformanceEvaluationCommons.assertKey(b, k);
+ // System.out.println("Found row: " + new String(b));
+ for (int ii = 0; ii < 30; ii++) {
+ if (!scanner.next()) {
+ System.out.println("NOTHING FOLLOWS");
+ }
+ ByteBuffer v = scanner.getValue();
+ PerformanceEvaluationCommons.assertValueSize(v.limit(), ROW_LENGTH);
+ }
+ }
+
+ private byte [] getRandomRow() {
+ return format(random.nextInt(totalRows));
+ }
+ }
+
+ static class GaussianRandomReadBenchmark extends ReadBenchmark {
+
+ private RandomData randomData = new RandomDataImpl();
+
+ public GaussianRandomReadBenchmark(Configuration conf, FileSystem fs,
+ Path mf, int totalRows) {
+ super(conf, fs, mf, totalRows);
+ }
+
+ @Override
+ void doRow(int i) throws Exception {
+ HFileScanner scanner = this.reader.getScanner();
+ scanner.seekTo(getGaussianRandomRowBytes());
+ for (int ii = 0; ii < 30; ii++) {
+ if (!scanner.next()) {
+ System.out.println("NOTHING FOLLOWS");
+ }
+ scanner.getKey();
+ scanner.getValue();
+ }
+ }
+
+ private byte [] getGaussianRandomRowBytes() {
+ int r = (int) randomData.nextGaussian(totalRows / 2, totalRows / 10);
+ return format(r);
+ }
+ }
+
+ /**
+ * @param args
+ * @throws Exception
+ * @throws IOException
+ */
+ public static void main(String[] args) throws Exception {
+ new HFilePerformanceEvaluation().runBenchmarks();
+ }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/MapFilePerformanceEvaluation.java b/src/test/org/apache/hadoop/hbase/MapFilePerformanceEvaluation.java
new file mode 100644
index 0000000..4c80c6f
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/MapFilePerformanceEvaluation.java
@@ -0,0 +1,348 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+import java.util.Random;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.commons.math.random.RandomData;
+import org.apache.commons.math.random.RandomDataImpl;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.io.MapFile;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.WritableComparable;
+
+/**
+ * <p>
+ * This class runs performance benchmarks for {@link MapFile}.
+ * </p>
+ */
+public class MapFilePerformanceEvaluation {
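+ // Rough usage sketch: main() builds a MapFile named
+ // "performanceevaluation.mapfile" with ROW_COUNT rows and then runs the same
+ // mix of sequential and random read benchmarks as HFilePerformanceEvaluation.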
+ protected final HBaseConfiguration conf;
+ private static final int ROW_LENGTH = 10;
+ private static final int ROW_COUNT = 100000;
+
+ static final Log LOG =
+ LogFactory.getLog(MapFilePerformanceEvaluation.class.getName());
+
+ /**
+ * @param c
+ */
+ public MapFilePerformanceEvaluation(final HBaseConfiguration c) {
+ super();
+ this.conf = c;
+ }
+
+ static ImmutableBytesWritable format(final int i, ImmutableBytesWritable w) {
+ String v = Integer.toString(i);
+ w.set(Bytes.toBytes("0000000000".substring(v.length()) + v));
+ return w;
+ }
+
+ private void runBenchmarks() throws Exception {
+ final FileSystem fs = FileSystem.get(this.conf);
+ final Path mf = fs.makeQualified(new Path("performanceevaluation.mapfile"));
+ if (fs.exists(mf)) {
+ fs.delete(mf, true);
+ }
+ runBenchmark(new SequentialWriteBenchmark(conf, fs, mf, ROW_COUNT),
+ ROW_COUNT);
+
+ PerformanceEvaluationCommons.concurrentReads(new Runnable() {
+ public void run() {
+ try {
+ runBenchmark(new UniformRandomSmallScan(conf, fs, mf, ROW_COUNT),
+ ROW_COUNT);
+ } catch (Exception e) {
+ e.printStackTrace();
+ }
+ }
+ });
+ PerformanceEvaluationCommons.concurrentReads(new Runnable() {
+ public void run() {
+ try {
+ runBenchmark(new UniformRandomReadBenchmark(conf, fs, mf, ROW_COUNT),
+ ROW_COUNT);
+ } catch (Exception e) {
+ e.printStackTrace();
+ }
+ }
+ });
+ PerformanceEvaluationCommons.concurrentReads(new Runnable() {
+ public void run() {
+ try {
+ runBenchmark(new GaussianRandomReadBenchmark(conf, fs, mf, ROW_COUNT),
+ ROW_COUNT);
+ } catch (Exception e) {
+ e.printStackTrace();
+ }
+ }
+ });
+ PerformanceEvaluationCommons.concurrentReads(new Runnable() {
+ public void run() {
+ try {
+ runBenchmark(new SequentialReadBenchmark(conf, fs, mf, ROW_COUNT),
+ ROW_COUNT);
+ } catch (Exception e) {
+ e.printStackTrace();
+ }
+ }
+ });
+ }
+
+ protected void runBenchmark(RowOrientedBenchmark benchmark, int rowCount)
+ throws Exception {
+ LOG.info("Running " + benchmark.getClass().getSimpleName() + " for " +
+ rowCount + " rows.");
+ long elapsedTime = benchmark.run();
+ LOG.info("Running " + benchmark.getClass().getSimpleName() + " for " +
+ rowCount + " rows took " + elapsedTime + "ms.");
+ }
+
+ static abstract class RowOrientedBenchmark {
+
+ protected final Configuration conf;
+ protected final FileSystem fs;
+ protected final Path mf;
+ protected final int totalRows;
+
+ public RowOrientedBenchmark(Configuration conf, FileSystem fs, Path mf,
+ int totalRows) {
+ this.conf = conf;
+ this.fs = fs;
+ this.mf = mf;
+ this.totalRows = totalRows;
+ }
+
+ void setUp() throws Exception {
+ // do nothing
+ }
+
+ abstract void doRow(int i) throws Exception;
+
+ protected int getReportingPeriod() {
+ return this.totalRows / 10;
+ }
+
+ void tearDown() throws Exception {
+ // do nothing
+ }
+
+ /**
+ * Run benchmark
+ * @return elapsed time.
+ * @throws Exception
+ */
+ long run() throws Exception {
+ long elapsedTime;
+ setUp();
+ long startTime = System.currentTimeMillis();
+ try {
+ for (int i = 0; i < totalRows; i++) {
+ if (i > 0 && i % getReportingPeriod() == 0) {
+ LOG.info("Processed " + i + " rows.");
+ }
+ doRow(i);
+ }
+ elapsedTime = System.currentTimeMillis() - startTime;
+ } finally {
+ tearDown();
+ }
+ return elapsedTime;
+ }
+
+ }
+
+ static class SequentialWriteBenchmark extends RowOrientedBenchmark {
+
+ protected MapFile.Writer writer;
+ private Random random = new Random();
+ private byte[] bytes = new byte[ROW_LENGTH];
+ private ImmutableBytesWritable key = new ImmutableBytesWritable();
+ private ImmutableBytesWritable value = new ImmutableBytesWritable();
+
+ public SequentialWriteBenchmark(Configuration conf, FileSystem fs, Path mf,
+ int totalRows) {
+ super(conf, fs, mf, totalRows);
+ }
+
+ @Override
+ void setUp() throws Exception {
+ writer = new MapFile.Writer(conf, fs, mf.toString(),
+ ImmutableBytesWritable.class, ImmutableBytesWritable.class);
+ }
+
+ @Override
+ void doRow(int i) throws Exception {
+ value.set(generateValue());
+ writer.append(format(i, key), value);
+ }
+
+ private byte[] generateValue() {
+ random.nextBytes(bytes);
+ return bytes;
+ }
+
+ @Override
+ protected int getReportingPeriod() {
+ return this.totalRows; // don't report progress
+ }
+
+ @Override
+ void tearDown() throws Exception {
+ writer.close();
+ }
+
+ }
+
+ static abstract class ReadBenchmark extends RowOrientedBenchmark {
+ ImmutableBytesWritable key = new ImmutableBytesWritable();
+ ImmutableBytesWritable value = new ImmutableBytesWritable();
+
+ protected MapFile.Reader reader;
+
+ public ReadBenchmark(Configuration conf, FileSystem fs, Path mf,
+ int totalRows) {
+ super(conf, fs, mf, totalRows);
+ }
+
+ @Override
+ void setUp() throws Exception {
+ reader = new MapFile.Reader(fs, mf.toString(), conf);
+ }
+
+ @Override
+ void tearDown() throws Exception {
+ reader.close();
+ }
+
+ }
+
+ static class SequentialReadBenchmark extends ReadBenchmark {
+ ImmutableBytesWritable verify = new ImmutableBytesWritable();
+
+ public SequentialReadBenchmark(Configuration conf, FileSystem fs,
+ Path mf, int totalRows) {
+ super(conf, fs, mf, totalRows);
+ }
+
+ @Override
+ void doRow(int i) throws Exception {
+ this.reader.next(key, value);
+ PerformanceEvaluationCommons.assertKey(this.key.get(),
+ format(i, this.verify).get());
+ PerformanceEvaluationCommons.assertValueSize(ROW_LENGTH, value.getSize());
+ }
+
+ @Override
+ protected int getReportingPeriod() {
+ return this.totalRows; // don't report progress
+ }
+
+ }
+
+ static class UniformRandomReadBenchmark extends ReadBenchmark {
+
+ private Random random = new Random();
+
+ public UniformRandomReadBenchmark(Configuration conf, FileSystem fs,
+ Path mf, int totalRows) {
+ super(conf, fs, mf, totalRows);
+ }
+
+ @Override
+ void doRow(int i) throws Exception {
+ ImmutableBytesWritable k = getRandomRow();
+ ImmutableBytesWritable r = (ImmutableBytesWritable)reader.get(k, value);
+ PerformanceEvaluationCommons.assertValueSize(r.getSize(), ROW_LENGTH);
+ }
+
+ private ImmutableBytesWritable getRandomRow() {
+ return format(random.nextInt(totalRows), key);
+ }
+
+ }
+
+ static class UniformRandomSmallScan extends ReadBenchmark {
+ private Random random = new Random();
+
+ public UniformRandomSmallScan(Configuration conf, FileSystem fs,
+ Path mf, int totalRows) {
+ super(conf, fs, mf, totalRows/10);
+ }
+
+ @Override
+ void doRow(int i) throws Exception {
+ ImmutableBytesWritable ibw = getRandomRow();
+ WritableComparable<?> wc = this.reader.getClosest(ibw, this.value);
+ if (wc == null) {
+ throw new NullPointerException();
+ }
+ PerformanceEvaluationCommons.assertKey(ibw.get(),
+ ((ImmutableBytesWritable)wc).get());
+ // TODO: Verify we're getting right values.
+ for (int ii = 0; ii < 29; ii++) {
+ this.reader.next(this.key, this.value);
+ PerformanceEvaluationCommons.assertValueSize(this.value.getSize(), ROW_LENGTH);
+ }
+ }
+
+ private ImmutableBytesWritable getRandomRow() {
+ return format(random.nextInt(totalRows), key);
+ }
+ }
+
+ static class GaussianRandomReadBenchmark extends ReadBenchmark {
+ private RandomData randomData = new RandomDataImpl();
+
+ public GaussianRandomReadBenchmark(Configuration conf, FileSystem fs,
+ Path mf, int totalRows) {
+ super(conf, fs, mf, totalRows);
+ }
+
+ @Override
+ void doRow(int i) throws Exception {
+ ImmutableBytesWritable k = getGaussianRandomRow();
+ ImmutableBytesWritable r = (ImmutableBytesWritable)reader.get(k, value);
+ PerformanceEvaluationCommons.assertValueSize(r.getSize(), ROW_LENGTH);
+ }
+
+ private ImmutableBytesWritable getGaussianRandomRow() {
+ int r = (int) randomData.nextGaussian(totalRows / 2, totalRows / 10);
+ return format(r, key);
+ }
+
+ }
+
+ /**
+ * @param args
+ * @throws Exception
+ * @throws IOException
+ */
+ public static void main(String[] args) throws Exception {
+ new MapFilePerformanceEvaluation(new HBaseConfiguration()).
+ runBenchmarks();
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/MiniHBaseCluster.java b/src/test/org/apache/hadoop/hbase/MiniHBaseCluster.java
new file mode 100644
index 0000000..8ccb6a8
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/MiniHBaseCluster.java
@@ -0,0 +1,205 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+import java.net.BindException;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+
+/**
+ * This class creates a single process HBase cluster. One thread is created for
+ * each server.
+ */
+public class MiniHBaseCluster implements HConstants {
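+ // Illustrative sketch (assumes an already configured HBaseConfiguration):
+ //
+ // MiniHBaseCluster cluster = new MiniHBaseCluster(conf, 2); // 2 regionservers
+ // try {
+ // // ... run client code against cluster.getHMasterAddress() ...
+ // } finally {
+ // cluster.shutdown();
+ // }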
+ static final Log LOG = LogFactory.getLog(MiniHBaseCluster.class.getName());
+
+ private HBaseConfiguration conf;
+ private LocalHBaseCluster hbaseCluster;
+
+ /**
+ * Start a MiniHBaseCluster.
+ * @param conf HBaseConfiguration to be used for cluster
+ * @param numRegionServers initial number of region servers to start.
+ * @throws IOException
+ */
+ public MiniHBaseCluster(HBaseConfiguration conf, int numRegionServers)
+ throws IOException {
+ this.conf = conf;
+ init(numRegionServers);
+ }
+
+ private void init(final int nRegionNodes) throws IOException {
+ try {
+ // start up a LocalHBaseCluster
+ while (true) {
+ try {
+ hbaseCluster = new LocalHBaseCluster(conf, nRegionNodes);
+ hbaseCluster.startup();
+ } catch (BindException e) {
+ // This port is already in use. Try another (useful when running multiple tests).
+ int port = conf.getInt("hbase.master.port", DEFAULT_MASTER_PORT);
+ LOG.info("MiniHBaseCluster: Failed binding Master to port: " + port);
+ port++;
+ conf.setInt("hbase.master.port", port);
+ continue;
+ }
+ break;
+ }
+ } catch(IOException e) {
+ shutdown();
+ throw e;
+ }
+ }
+
+ /**
+ * Starts a region server thread running
+ *
+ * @throws IOException
+ * @return Name of regionserver started.
+ */
+ public String startRegionServer() throws IOException {
+ LocalHBaseCluster.RegionServerThread t =
+ this.hbaseCluster.addRegionServer();
+ t.start();
+ t.waitForServerOnline();
+ return t.getName();
+ }
+
+ /**
+ * @return Returns the rpc address actually used by the master server, because
+ * the supplied port is not necessarily the actual port used.
+ */
+ public HServerAddress getHMasterAddress() {
+ return this.hbaseCluster.getMaster().getMasterAddress();
+ }
+
+ /**
+ * @return the HMaster
+ */
+ public HMaster getMaster() {
+ return this.hbaseCluster.getMaster();
+ }
+
+ /**
+ * Cause a region server to exit without cleaning up
+ *
+ * @param serverNumber Used as index into a list.
+ */
+ public void abortRegionServer(int serverNumber) {
+ HRegionServer server = getRegionServer(serverNumber);
+ LOG.info("Aborting " + server.getServerInfo().toString());
+ server.abort();
+ }
+
+ /**
+ * Shut down the specified region server cleanly
+ *
+ * @param serverNumber Used as index into a list.
+ * @return the region server that was stopped
+ */
+ public LocalHBaseCluster.RegionServerThread stopRegionServer(int serverNumber) {
+ return stopRegionServer(serverNumber, true);
+ }
+
+ /**
+ * Shut down the specified region server cleanly
+ *
+ * @param serverNumber Used as index into a list.
+ * @param shutdownFS True if we are to shut down the filesystem as part of this
+ * regionserver's shutdown. Usually we do, but you do not want to do this if
+ * you are running multiple regionservers in a test and you shut down one
+ * before the end of the test.
+ * @return the region server that was stopped
+ */
+ public LocalHBaseCluster.RegionServerThread stopRegionServer(int serverNumber,
+ final boolean shutdownFS) {
+ LocalHBaseCluster.RegionServerThread server =
+ hbaseCluster.getRegionServers().get(serverNumber);
+ LOG.info("Stopping " + server.toString());
+ if (!shutdownFS) {
+ // Stop the running of the hdfs shutdown thread in tests.
+ server.getRegionServer().setHDFSShutdownThreadOnExit(null);
+ }
+ server.getRegionServer().stop();
+ return server;
+ }
+
+ /**
+ * Wait for the specified region server to stop.
+ * Removes this thread from the list of running threads.
+ * @param serverNumber
+ * @return Name of region server that just went down.
+ */
+ public String waitOnRegionServer(final int serverNumber) {
+ return this.hbaseCluster.waitOnRegionServer(serverNumber);
+ }
+
+ /**
+ * Wait for Mini HBase Cluster to shut down.
+ */
+ public void join() {
+ this.hbaseCluster.join();
+ }
+
+ /**
+ * Shut down the mini HBase cluster
+ */
+ public void shutdown() {
+ if (this.hbaseCluster != null) {
+ this.hbaseCluster.shutdown();
+ }
+ }
+
+ /**
+ * Call flushCache on all regions on all participating regionservers.
+ * @throws IOException
+ */
+ public void flushcache() throws IOException {
+ for (LocalHBaseCluster.RegionServerThread t:
+ this.hbaseCluster.getRegionServers()) {
+ for(HRegion r: t.getRegionServer().getOnlineRegions()) {
+ r.flushcache();
+ }
+ }
+ }
+
+ /**
+ * @return List of region server threads.
+ */
+ public List<LocalHBaseCluster.RegionServerThread> getRegionThreads() {
+ return this.hbaseCluster.getRegionServers();
+ }
+
+ /**
+ * Grab a numbered region server of your choice.
+ * @param serverNumber
+ * @return region server
+ */
+ public HRegionServer getRegionServer(int serverNumber) {
+ return hbaseCluster.getRegionServer(serverNumber);
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/MiniZooKeeperCluster.java b/src/test/org/apache/hadoop/hbase/MiniZooKeeperCluster.java
new file mode 100644
index 0000000..082601d
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/MiniZooKeeperCluster.java
@@ -0,0 +1,205 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.BufferedReader;
+import java.io.File;
+import java.io.IOException;
+import java.io.InputStreamReader;
+import java.io.OutputStream;
+import java.io.Reader;
+import java.net.BindException;
+import java.net.Socket;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FileUtil;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper;
+import org.apache.zookeeper.server.NIOServerCnxn;
+import org.apache.zookeeper.server.ZooKeeperServer;
+import org.apache.zookeeper.server.persistence.FileTxnLog;
+
+/**
+ * TODO: Most of the code in this class is ripped from ZooKeeper tests. Instead
+ * of redoing it, we should contribute updates to their code which let us more
+ * easily access testing helper objects.
+ */
+public class MiniZooKeeperCluster {
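+ // Illustrative sketch (baseDir is a hypothetical scratch directory):
+ //
+ // MiniZooKeeperCluster zkCluster = new MiniZooKeeperCluster();
+ // zkCluster.startup(baseDir); // binds the first free client port from 21810 up
+ // // ... run tests against the quorum registered via ZooKeeperWrapper ...
+ // zkCluster.shutdown();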
+ private static final Log LOG = LogFactory.getLog(MiniZooKeeperCluster.class);
+
+ // TODO: make this more configurable?
+ private static final int TICK_TIME = 2000;
+ private static final int CONNECTION_TIMEOUT = 30000;
+
+ private boolean started;
+ private int clientPort = 21810; // use non-standard port
+
+ private NIOServerCnxn.Factory standaloneServerFactory;
+
+ /** Create mini ZooKeeper cluster. */
+ public MiniZooKeeperCluster() {
+ this.started = false;
+ }
+
+ // XXX: From o.a.zk.t.ClientBase
+ private static void setupTestEnv() {
+ // During the tests we run with 100K prealloc in the logs.
+ // On windows systems prealloc of 64M was seen to take ~15 seconds,
+ // resulting in test failure (client timeout on first session).
+ // Set the system property and the static field directly in order to
+ // handle static init/gc issues.
+ System.setProperty("zookeeper.preAllocSize", "100");
+ FileTxnLog.setPreallocSize(100);
+ }
+
+ /**
+ * @param baseDir
+ * @throws IOException
+ * @throws InterruptedException
+ */
+ public void startup(File baseDir) throws IOException,
+ InterruptedException {
+ setupTestEnv();
+
+ shutdown();
+
+ File dir = new File(baseDir, "zookeeper").getAbsoluteFile();
+ recreateDir(dir);
+
+ ZooKeeperServer server = new ZooKeeperServer(dir, dir, TICK_TIME);
+ while (true) {
+ try {
+ standaloneServerFactory = new NIOServerCnxn.Factory(clientPort);
+ } catch (BindException e) {
+ LOG.info("Failed binding ZK Server to client port: " + clientPort);
+ // This port is already in use. Try another.
+ clientPort++;
+ continue;
+ }
+ break;
+ }
+ standaloneServerFactory.startup(server);
+
+ String quorumServers = "localhost:" + clientPort;
+ ZooKeeperWrapper.setQuorumServers(quorumServers);
+
+ if (!waitForServerUp(clientPort, CONNECTION_TIMEOUT)) {
+ throw new IOException("Waiting for startup of standalone server");
+ }
+
+ started = true;
+ }
+
+ private void recreateDir(File dir) throws IOException {
+ if (dir.exists()) {
+ FileUtil.fullyDelete(dir);
+ }
+ try {
+ dir.mkdirs();
+ } catch (SecurityException e) {
+ throw new IOException("creating dir: " + dir, e);
+ }
+ }
+
+ /**
+ * @throws IOException
+ */
+ public void shutdown() throws IOException {
+ if (!started) {
+ return;
+ }
+
+ standaloneServerFactory.shutdown();
+ if (!waitForServerDown(clientPort, CONNECTION_TIMEOUT)) {
+ throw new IOException("Waiting for shutdown of standalone server");
+ }
+
+ started = false;
+ }
+
+ // XXX: From o.a.zk.t.ClientBase
+ private static boolean waitForServerDown(int port, long timeout) {
+ long start = System.currentTimeMillis();
+ while (true) {
+ try {
+ Socket sock = new Socket("localhost", port);
+ try {
+ OutputStream outstream = sock.getOutputStream();
+ outstream.write("stat".getBytes());
+ outstream.flush();
+ } finally {
+ sock.close();
+ }
+ } catch (IOException e) {
+ return true;
+ }
+
+ if (System.currentTimeMillis() > start + timeout) {
+ break;
+ }
+ try {
+ Thread.sleep(250);
+ } catch (InterruptedException e) {
+ // ignore
+ }
+ }
+ return false;
+ }
+
+ // XXX: From o.a.zk.t.ClientBase
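+ // Probes the server with ZooKeeper's four-letter "stat" command; a reply
+ // starting with "Zookeeper version:" means the server is serving requests.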
+ private static boolean waitForServerUp(int port, long timeout) {
+ long start = System.currentTimeMillis();
+ while (true) {
+ try {
+ Socket sock = new Socket("localhost", port);
+ BufferedReader reader = null;
+ try {
+ OutputStream outstream = sock.getOutputStream();
+ outstream.write("stat".getBytes());
+ outstream.flush();
+
+ Reader isr = new InputStreamReader(sock.getInputStream());
+ reader = new BufferedReader(isr);
+ String line = reader.readLine();
+ if (line != null && line.startsWith("Zookeeper version:")) {
+ return true;
+ }
+ } finally {
+ sock.close();
+ if (reader != null) {
+ reader.close();
+ }
+ }
+ } catch (IOException e) {
+ // ignore as this is expected
+ LOG.info("server localhost:" + port + " not up " + e);
+ }
+
+ if (System.currentTimeMillis() > start + timeout) {
+ break;
+ }
+ try {
+ Thread.sleep(250);
+ } catch (InterruptedException e) {
+ // ignore
+ }
+ }
+ return false;
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/MultiRegionTable.java b/src/test/org/apache/hadoop/hbase/MultiRegionTable.java
new file mode 100644
index 0000000..17d48dc
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/MultiRegionTable.java
@@ -0,0 +1,113 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Utility class to build a table of multiple regions.
+ */
+public class MultiRegionTable extends HBaseClusterTestCase {
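+ // Illustrative note: subclasses supply the column to populate and are
+ // expected to set the protected desc field (the table's HTableDescriptor),
+ // typically in their constructor; preHBaseClusterSetup() then creates one
+ // region per split point in KEYS and fills each via addContent().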
+ private static final byte [][] KEYS = {
+ HConstants.EMPTY_BYTE_ARRAY,
+ Bytes.toBytes("bbb"),
+ Bytes.toBytes("ccc"),
+ Bytes.toBytes("ddd"),
+ Bytes.toBytes("eee"),
+ Bytes.toBytes("fff"),
+ Bytes.toBytes("ggg"),
+ Bytes.toBytes("hhh"),
+ Bytes.toBytes("iii"),
+ Bytes.toBytes("jjj"),
+ Bytes.toBytes("kkk"),
+ Bytes.toBytes("lll"),
+ Bytes.toBytes("mmm"),
+ Bytes.toBytes("nnn"),
+ Bytes.toBytes("ooo"),
+ Bytes.toBytes("ppp"),
+ Bytes.toBytes("qqq"),
+ Bytes.toBytes("rrr"),
+ Bytes.toBytes("sss"),
+ Bytes.toBytes("ttt"),
+ Bytes.toBytes("uuu"),
+ Bytes.toBytes("vvv"),
+ Bytes.toBytes("www"),
+ Bytes.toBytes("xxx"),
+ Bytes.toBytes("yyy")
+ };
+
+ protected final byte [] columnName;
+ protected HTableDescriptor desc;
+
+ /**
+ * @param columnName the column to populate.
+ */
+ public MultiRegionTable(final String columnName) {
+ super();
+ this.columnName = Bytes.toBytes(columnName);
+ // These are needed for the new and improved Map/Reduce framework
+ System.setProperty("hadoop.log.dir", conf.get("hadoop.log.dir"));
+ conf.set("mapred.output.dir", conf.get("hadoop.tmp.dir"));
+ }
+
+ /**
+ * Run after dfs is ready but before hbase cluster is started up.
+ */
+ @Override
+ protected void preHBaseClusterSetup() throws Exception {
+ try {
+ // Create a bunch of regions
+ HRegion[] regions = new HRegion[KEYS.length];
+ for (int i = 0; i < regions.length; i++) {
+ int j = (i + 1) % regions.length;
+ regions[i] = createARegion(KEYS[i], KEYS[j]);
+ }
+
+ // Now create the root and meta regions and insert the data regions
+ // created above into the meta
+
+ createRootAndMetaRegions();
+
+ for(int i = 0; i < regions.length; i++) {
+ HRegion.addRegionToMETA(meta, regions[i]);
+ }
+
+ closeRootAndMeta();
+ } catch (Exception e) {
+ shutdownDfs(dfsCluster);
+ throw e;
+ }
+ }
+
+ private HRegion createARegion(byte [] startKey, byte [] endKey) throws IOException {
+ HRegion region = createNewHRegion(desc, startKey, endKey);
+ addContent(region, this.columnName);
+ closeRegionAndDeleteLog(region);
+ return region;
+ }
+
+ private void closeRegionAndDeleteLog(HRegion region) throws IOException {
+ region.close();
+ region.getLog().closeAndDelete();
+ }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/PerformanceEvaluation.java b/src/test/org/apache/hadoop/hbase/PerformanceEvaluation.java
new file mode 100644
index 0000000..d72b3e3
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/PerformanceEvaluation.java
@@ -0,0 +1,819 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+import java.io.PrintStream;
+import java.text.SimpleDateFormat;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+import java.util.Random;
+import java.util.TreeMap;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.dfs.MiniDFSCluster;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Scanner;
+import org.apache.hadoop.hbase.filter.PageRowFilter;
+import org.apache.hadoop.hbase.filter.WhileMatchRowFilter;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.hbase.util.Hash;
+import org.apache.hadoop.hbase.util.MurmurHash;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapred.FileInputFormat;
+import org.apache.hadoop.mapred.FileOutputFormat;
+import org.apache.hadoop.mapred.JobClient;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.MapReduceBase;
+import org.apache.hadoop.mapred.Mapper;
+import org.apache.hadoop.mapred.OutputCollector;
+import org.apache.hadoop.mapred.Reporter;
+import org.apache.hadoop.mapred.TextInputFormat;
+import org.apache.hadoop.mapred.TextOutputFormat;
+
+
+/**
+ * Script used to evaluate HBase performance and scalability. Runs an HBase
+ * client that steps through one of a set of hardcoded tests or 'experiments'
+ * (e.g. a random reads test, a random writes test, etc.). Pass on the
+ * command-line which test to run and how many clients are participating in
+ * this experiment. Run <code>java PerformanceEvaluation --help</code> to
+ * obtain usage.
+ *
+ * <p>This class sets up and runs the evaluation programs described in
+ * Section 7, <i>Performance Evaluation</i>, of the <a
+ * href="http://labs.google.com/papers/bigtable.html">Bigtable</a>
+ * paper, pages 8-10.
+ *
+ * <p>If the number of clients is greater than 1, we start up a MapReduce
+ * job. Each map task runs an individual client. Each client handles about
+ * 1GB of data.
+ */
+public class PerformanceEvaluation implements HConstants {
+ protected static final Log LOG = LogFactory.getLog(PerformanceEvaluation.class.getName());
+
+ private static final int ROW_LENGTH = 1000;
+ private static final int ONE_GB = 1024 * 1024 * 1000;
+ private static final int ROWS_PER_GB = ONE_GB / ROW_LENGTH;
+
+ static final byte [] COLUMN_NAME = Bytes.toBytes(COLUMN_FAMILY_STR + "data");
+
+ protected static final HTableDescriptor TABLE_DESCRIPTOR;
+ static {
+ TABLE_DESCRIPTOR = new HTableDescriptor("TestTable");
+ TABLE_DESCRIPTOR.addFamily(new HColumnDescriptor(COLUMN_FAMILY));
+ }
+
+ private static final String RANDOM_READ = "randomRead";
+ private static final String RANDOM_SEEK_SCAN = "randomSeekScan";
+ private static final String RANDOM_READ_MEM = "randomReadMem";
+ private static final String RANDOM_WRITE = "randomWrite";
+ private static final String SEQUENTIAL_READ = "sequentialRead";
+ private static final String SEQUENTIAL_WRITE = "sequentialWrite";
+ private static final String SCAN = "scan";
+
+ private static final List<String> COMMANDS =
+ Arrays.asList(new String [] {RANDOM_READ,
+ RANDOM_SEEK_SCAN,
+ RANDOM_READ_MEM,
+ RANDOM_WRITE,
+ SEQUENTIAL_READ,
+ SEQUENTIAL_WRITE,
+ SCAN});
+
+ volatile HBaseConfiguration conf;
+ private boolean miniCluster = false;
+ private boolean nomapred = false;
+ private int N = 1;
+ private int R = ROWS_PER_GB;
+ private static final Path PERF_EVAL_DIR = new Path("performance_evaluation");
+
+ /**
+ * Regex to parse lines in input file passed to mapreduce task.
+ */
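+ // For example, with ten clients and the default row count, writeInputFile
+ // below emits lines such as
+ //   startRow=0, perClientRunRows=104857, totalRows=10485760, clients=10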
+ public static final Pattern LINE_PATTERN =
+ Pattern.compile("startRow=(\\d+),\\s+" +
+ "perClientRunRows=(\\d+),\\s+totalRows=(\\d+),\\s+clients=(\\d+)");
+
+ /**
+ * Enum for map metrics. Keep it out here rather than inside the Map
+ * inner-class so we can find the associated properties.
+ */
+ protected static enum Counter {
+ /** elapsed time */
+ ELAPSED_TIME,
+ /** number of rows */
+ ROWS}
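+ // Hadoop resolves the display names for these counters from the
+ // accompanying PerformanceEvaluation_Counter.properties resource bundle.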
+
+
+ /**
+ * Constructor
+ * @param c Configuration object
+ */
+ public PerformanceEvaluation(final HBaseConfiguration c) {
+ this.conf = c;
+ }
+
+ /**
+ * Implementations can have their status set.
+ */
+ static interface Status {
+ /**
+ * Sets status
+ * @param msg status message
+ * @throws IOException
+ */
+ void setStatus(final String msg) throws IOException;
+ }
+
+ /**
+ * MapReduce job that runs a performance evaluation client in each map task.
+ */
+ @SuppressWarnings("unchecked")
+ public static class EvaluationMapTask extends MapReduceBase
+ implements Mapper {
+ /** configuration parameter name that contains the command */
+ public final static String CMD_KEY = "EvaluationMapTask.command";
+ private String cmd;
+ private PerformanceEvaluation pe;
+
+ @Override
+ public void configure(JobConf j) {
+ this.cmd = j.get(CMD_KEY);
+
+ this.pe = new PerformanceEvaluation(new HBaseConfiguration(j));
+ }
+
+ public void map(final Object key,
+ final Object value, final OutputCollector output,
+ final Reporter reporter)
+ throws IOException {
+ Matcher m = LINE_PATTERN.matcher(((Text)value).toString());
+ if (m != null && m.matches()) {
+ int startRow = Integer.parseInt(m.group(1));
+ int perClientRunRows = Integer.parseInt(m.group(2));
+ int totalRows = Integer.parseInt(m.group(3));
+ Status status = new Status() {
+ public void setStatus(String msg) {
+ reporter.setStatus(msg);
+ }
+ };
+ long elapsedTime = this.pe.runOneClient(this.cmd, startRow,
+ perClientRunRows, totalRows, status);
+ // Collect how much time the thing took. Report as map output and
+ // to the ELAPSED_TIME counter.
+ reporter.incrCounter(Counter.ELAPSED_TIME, elapsedTime);
+ reporter.incrCounter(Counter.ROWS, perClientRunRows);
+ output.collect(new LongWritable(startRow),
+ new Text(Long.toString(elapsedTime)));
+ }
+ }
+ }
+
+ /*
+ * If table does not already exist, create.
+ * @param admin Admin client used to check for (and create) the table.
+ * @return True if we created the table.
+ * @throws IOException
+ */
+ private boolean checkTable(HBaseAdmin admin) throws IOException {
+ boolean tableExists = admin.tableExists(TABLE_DESCRIPTOR.getName());
+ if (!tableExists) {
+ admin.createTable(TABLE_DESCRIPTOR);
+ LOG.info("Table " + TABLE_DESCRIPTOR + " created");
+ }
+ return !tableExists;
+ }
+
+ /*
+ * We're to run multiple clients concurrently. Set up a mapreduce job. Run
+ * one map per client. Then run a single reduce to sum the elapsed times.
+ * @param cmd Command to run.
+ * @throws IOException
+ */
+ private void runNIsMoreThanOne(final String cmd)
+ throws IOException {
+ checkTable(new HBaseAdmin(conf));
+ if (this.nomapred) {
+ doMultipleClients(cmd);
+ } else {
+ doMapReduce(cmd);
+ }
+ }
+
+ /*
+ * Run all clients in this VM, each in its own thread.
+ * @param cmd Command to run.
+ * @throws IOException
+ */
+ @SuppressWarnings("unused")
+ private void doMultipleClients(final String cmd) throws IOException {
+ final List<Thread> threads = new ArrayList<Thread>(this.N);
+ final int perClientRows = R/N;
+ for (int i = 0; i < this.N; i++) {
+ Thread t = new Thread (Integer.toString(i)) {
+ @Override
+ public void run() {
+ super.run();
+ PerformanceEvaluation pe = new PerformanceEvaluation(conf);
+ int index = Integer.parseInt(getName());
+ try {
+ long elapsedTime = pe.runOneClient(cmd, index * perClientRows,
+ perClientRows, perClientRows,
+ new Status() {
+ public void setStatus(final String msg) throws IOException {
+ LOG.info("client-" + getName() + " " + msg);
+ }
+ });
+ LOG.info("Finished " + getName() + " in " + elapsedTime +
+ "ms writing " + perClientRows + " rows");
+ } catch (IOException e) {
+ throw new RuntimeException(e);
+ }
+ }
+ };
+ threads.add(t);
+ }
+ for (Thread t: threads) {
+ t.start();
+ }
+ for (Thread t: threads) {
+ while(t.isAlive()) {
+ try {
+ t.join();
+ } catch (InterruptedException e) {
+ LOG.debug("Interrupted, continuing: " + e.toString());
+ }
+ }
+ }
+ }
+
+ /*
+ * Run a mapreduce job. Run as many maps as asked-for clients.
+ * Before we start up the job, write out an input file with an instruction
+ * per client regarding which row it is to start on.
+ * @param cmd Command to run.
+ * @throws IOException
+ */
+ private void doMapReduce(final String cmd) throws IOException {
+ Path inputDir = writeInputFile(this.conf);
+ this.conf.set(EvaluationMapTask.CMD_KEY, cmd);
+ JobConf job = new JobConf(this.conf, this.getClass());
+ FileInputFormat.setInputPaths(job, inputDir);
+ job.setInputFormat(TextInputFormat.class);
+ job.setJobName("HBase Performance Evaluation");
+ job.setMapperClass(EvaluationMapTask.class);
+ job.setMaxMapAttempts(1);
+ job.setMaxReduceAttempts(1);
+ job.setNumMapTasks(this.N * 10); // Ten maps per client.
+ job.setNumReduceTasks(1);
+ job.setOutputFormat(TextOutputFormat.class);
+ FileOutputFormat.setOutputPath(job, new Path(inputDir, "outputs"));
+ JobClient.runJob(job);
+ }
+
+ /*
+ * Write input file of offsets-per-client for the mapreduce job.
+ * @param c Configuration
+ * @return Directory that contains file written.
+ * @throws IOException
+ */
+ private Path writeInputFile(final Configuration c) throws IOException {
+ FileSystem fs = FileSystem.get(c);
+ if (!fs.exists(PERF_EVAL_DIR)) {
+ fs.mkdirs(PERF_EVAL_DIR);
+ }
+ SimpleDateFormat formatter = new SimpleDateFormat("yyyyMMddHHmmss");
+ Path subdir = new Path(PERF_EVAL_DIR, formatter.format(new Date()));
+ fs.mkdirs(subdir);
+ Path inputFile = new Path(subdir, "input.txt");
+ PrintStream out = new PrintStream(fs.create(inputFile));
+ // Make input random.
+ Map<Integer, String> m = new TreeMap<Integer, String>();
+ Hash h = MurmurHash.getInstance();
+ int perClientRows = (this.R / this.N);
+ try {
+ for (int i = 0; i < 10; i++) {
+ for (int j = 0; j < N; j++) {
+ String s = "startRow=" + ((j * perClientRows) + (i * (perClientRows/10))) +
+ ", perClientRunRows=" + (perClientRows / 10) +
+ ", totalRows=" + this.R +
+ ", clients=" + this.N;
+ int hash = h.hash(Bytes.toBytes(s));
+ m.put(hash, s);
+ }
+ }
+ for (Map.Entry<Integer, String> e: m.entrySet()) {
+ out.println(e.getValue());
+ }
+ } finally {
+ out.close();
+ }
+ return subdir;
+ }
+
+ /*
+ * A test.
+ * Subclass to particularize what happens per row.
+ */
+ static abstract class Test {
+ protected final Random rand = new Random(System.currentTimeMillis());
+ protected final int startRow;
+ protected final int perClientRunRows;
+ protected final int totalRows;
+ private final Status status;
+ protected HBaseAdmin admin;
+ protected HTable table;
+ protected volatile HBaseConfiguration conf;
+
+ Test(final HBaseConfiguration conf, final int startRow,
+ final int perClientRunRows, final int totalRows, final Status status) {
+ super();
+ this.startRow = startRow;
+ this.perClientRunRows = perClientRunRows;
+ this.totalRows = totalRows;
+ this.status = status;
+ this.table = null;
+ this.conf = conf;
+ }
+
+ private String generateStatus(final int sr, final int i, final int lr) {
+ return sr + "/" + i + "/" + lr;
+ }
+
+ protected int getReportingPeriod() {
+ return this.perClientRunRows / 10;
+ }
+
+ void testSetup() throws IOException {
+ this.admin = new HBaseAdmin(conf);
+ this.table = new HTable(conf, TABLE_DESCRIPTOR.getName());
+ this.table.setAutoFlush(false);
+ this.table.setWriteBufferSize(1024*1024*12);
+ }
+
+ void testTakedown() throws IOException {
+ this.table.flushCommits();
+ }
+
+ /*
+ * Run test
+ * @return Elapsed time.
+ * @throws IOException
+ */
+ long test() throws IOException {
+ long elapsedTime;
+ testSetup();
+ long startTime = System.currentTimeMillis();
+ try {
+ int lastRow = this.startRow + this.perClientRunRows;
+ // Report on completion of 1/10th of total.
+ for (int i = this.startRow; i < lastRow; i++) {
+ testRow(i);
+ if (status != null && i > 0 && (i % getReportingPeriod()) == 0) {
+ status.setStatus(generateStatus(this.startRow, i, lastRow));
+ }
+ }
+ elapsedTime = System.currentTimeMillis() - startTime;
+ } finally {
+ testTakedown();
+ }
+ return elapsedTime;
+ }
+
+ /*
+ * Test for individual row.
+ * @param i Row index.
+ */
+ abstract void testRow(final int i) throws IOException;
+
+ /*
+ * @return Test name.
+ */
+ abstract String getTestName();
+ }
+
+ class RandomSeekScanTest extends Test {
+ RandomSeekScanTest(final HBaseConfiguration conf, final int startRow,
+ final int perClientRunRows, final int totalRows, final Status status) {
+ super(conf, startRow, perClientRunRows, totalRows, status);
+ }
+
+ @Override
+ void testRow(final int i) throws IOException {
+ Scanner s = this.table.getScanner(new byte [][] {COLUMN_NAME},
+ getRandomRow(this.rand, this.totalRows),
+ new WhileMatchRowFilter(new PageRowFilter(120)));
+ //int count = 0;
+ for (RowResult rr = null; (rr = s.next()) != null;) {
+ // LOG.info("" + count++ + " " + rr.toString());
+ }
+ s.close();
+ }
+
+ @Override
+ protected int getReportingPeriod() {
+ //
+ return this.perClientRunRows / 100;
+ }
+
+ @Override
+ String getTestName() {
+ return "randomSeekScanTest";
+ }
+ }
+
+ class RandomReadTest extends Test {
+ RandomReadTest(final HBaseConfiguration conf, final int startRow,
+ final int perClientRunRows, final int totalRows, final Status status) {
+ super(conf, startRow, perClientRunRows, totalRows, status);
+ }
+
+ @Override
+ void testRow(final int i) throws IOException {
+ this.table.get(getRandomRow(this.rand, this.totalRows), COLUMN_NAME);
+ }
+
+ @Override
+ protected int getReportingPeriod() {
+ //
+ return this.perClientRunRows / 100;
+ }
+
+ @Override
+ String getTestName() {
+ return "randomRead";
+ }
+ }
+
+ class RandomWriteTest extends Test {
+ RandomWriteTest(final HBaseConfiguration conf, final int startRow,
+ final int perClientRunRows, final int totalRows, final Status status) {
+ super(conf, startRow, perClientRunRows, totalRows, status);
+ }
+
+ @Override
+ void testRow(final int i) throws IOException {
+ byte [] row = getRandomRow(this.rand, this.totalRows);
+ BatchUpdate b = new BatchUpdate(row);
+ b.put(COLUMN_NAME, generateValue(this.rand));
+ table.commit(b);
+ }
+
+ @Override
+ String getTestName() {
+ return "randomWrite";
+ }
+ }
+
+ class ScanTest extends Test {
+ private Scanner testScanner;
+
+ ScanTest(final HBaseConfiguration conf, final int startRow,
+ final int perClientRunRows, final int totalRows, final Status status) {
+ super(conf, startRow, perClientRunRows, totalRows, status);
+ }
+
+ @Override
+ void testSetup() throws IOException {
+ super.testSetup();
+ this.testScanner = table.getScanner(new byte [][] {COLUMN_NAME},
+ format(this.startRow));
+ }
+
+ @Override
+ void testTakedown() throws IOException {
+ if (this.testScanner != null) {
+ this.testScanner.close();
+ }
+ super.testTakedown();
+ }
+
+
+ @Override
+ void testRow(final int i) throws IOException {
+ testScanner.next();
+ }
+
+ @Override
+ String getTestName() {
+ return "scan";
+ }
+ }
+
+ class SequentialReadTest extends Test {
+ SequentialReadTest(final HBaseConfiguration conf, final int startRow,
+ final int perClientRunRows, final int totalRows, final Status status) {
+ super(conf, startRow, perClientRunRows, totalRows, status);
+ }
+
+ @Override
+ void testRow(final int i) throws IOException {
+ table.get(format(i), COLUMN_NAME);
+ }
+
+ @Override
+ String getTestName() {
+ return "sequentialRead";
+ }
+ }
+
+ class SequentialWriteTest extends Test {
+ SequentialWriteTest(final HBaseConfiguration conf, final int startRow,
+ final int perClientRunRows, final int totalRows, final Status status) {
+ super(conf, startRow, perClientRunRows, totalRows, status);
+ }
+
+ @Override
+ void testRow(final int i) throws IOException {
+ BatchUpdate b = new BatchUpdate(format(i));
+ b.put(COLUMN_NAME, generateValue(this.rand));
+ table.commit(b);
+ }
+
+ @Override
+ String getTestName() {
+ return "sequentialWrite";
+ }
+ }
+
+ /*
+ * Format passed integer.
+ * @param number
+ * @return Zero-prefixed, 10-byte wide decimal version of the passed
+ * number (the absolute value is taken if the number is negative).
+ */
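+ // e.g. format(1234) returns the bytes of "0000001234" and format(-42)
+ // returns the bytes of "0000000042".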
+ public static byte [] format(final int number) {
+ byte [] b = new byte[10];
+ int d = Math.abs(number);
+ for (int i = b.length - 1; i >= 0; i--) {
+ b[i] = (byte)((d % 10) + '0');
+ d /= 10;
+ }
+ return b;
+ }
+
+ /*
+ * This method takes some time and is run inline while uploading data. For
+ * example, in the mapfile test, generating the key and value consumes
+ * about 30% of CPU time.
+ * @return Generated random value to insert into a table cell.
+ */
+ static byte[] generateValue(final Random r) {
+ byte [] b = new byte [ROW_LENGTH];
+ r.nextBytes(b);
+ return b;
+ }
+
+ static byte [] getRandomRow(final Random random, final int totalRows) {
+ return format(random.nextInt(Integer.MAX_VALUE) % totalRows);
+ }
+
+ long runOneClient(final String cmd, final int startRow,
+ final int perClientRunRows, final int totalRows, final Status status)
+ throws IOException {
+ status.setStatus("Start " + cmd + " at offset " + startRow + " for " +
+ perClientRunRows + " rows");
+ long totalElapsedTime = 0;
+ if (cmd.equals(RANDOM_READ)) {
+ Test t = new RandomReadTest(this.conf, startRow, perClientRunRows,
+ totalRows, status);
+ totalElapsedTime = t.test();
+ } else if (cmd.equals(RANDOM_READ_MEM)) {
+ throw new UnsupportedOperationException("Not yet implemented");
+ } else if (cmd.equals(RANDOM_WRITE)) {
+ Test t = new RandomWriteTest(this.conf, startRow, perClientRunRows,
+ totalRows, status);
+ totalElapsedTime = t.test();
+ } else if (cmd.equals(SCAN)) {
+ Test t = new ScanTest(this.conf, startRow, perClientRunRows,
+ totalRows, status);
+ totalElapsedTime = t.test();
+ } else if (cmd.equals(SEQUENTIAL_READ)) {
+ Test t = new SequentialReadTest(this.conf, startRow, perClientRunRows,
+ totalRows, status);
+ totalElapsedTime = t.test();
+ } else if (cmd.equals(SEQUENTIAL_WRITE)) {
+ Test t = new SequentialWriteTest(this.conf, startRow, perClientRunRows,
+ totalRows, status);
+ totalElapsedTime = t.test();
+ } else if (cmd.equals(RANDOM_SEEK_SCAN)) {
+ Test t = new RandomSeekScanTest(this.conf, startRow, perClientRunRows,
+ totalRows, status);
+ totalElapsedTime = t.test();
+ } else {
+ throw new IllegalArgumentException("Invalid command value: " + cmd);
+ }
+ status.setStatus("Finished " + cmd + " in " + totalElapsedTime +
+ "ms at offset " + startRow + " for " + perClientRunRows + " rows");
+ return totalElapsedTime;
+ }
+
+ private void runNIsOne(final String cmd) {
+ Status status = new Status() {
+ public void setStatus(String msg) throws IOException {
+ LOG.info(msg);
+ }
+ };
+
+ HBaseAdmin admin = null;
+ try {
+ admin = new HBaseAdmin(this.conf);
+ checkTable(admin);
+ runOneClient(cmd, 0, this.R, this.R, status);
+ } catch (Exception e) {
+ LOG.error("Failed", e);
+ }
+ }
+
+ private void runTest(final String cmd) throws IOException {
+ if (cmd.equals(RANDOM_READ_MEM)) {
+ // For this one test, make R smaller so that everything fits in memory
+ // (see p. 9 of the Bigtable paper).
+ R = (this.R / 10) * N;
+ }
+
+ MiniHBaseCluster hbaseMiniCluster = null;
+ MiniDFSCluster dfsCluster = null;
+ if (this.miniCluster) {
+ dfsCluster = new MiniDFSCluster(conf, 2, true, (String[])null);
+ // mangle the conf so that the fs parameter points to the minidfs we
+ // just started up
+ FileSystem fs = dfsCluster.getFileSystem();
+ conf.set("fs.default.name", fs.getUri().toString());
+ Path parentdir = fs.getHomeDirectory();
+ conf.set(HConstants.HBASE_DIR, parentdir.toString());
+ fs.mkdirs(parentdir);
+ FSUtils.setVersion(fs, parentdir);
+ hbaseMiniCluster = new MiniHBaseCluster(this.conf, N);
+ }
+
+ try {
+ if (N == 1) {
+ // If there is only one client and one HRegionServer, we assume nothing
+ // has been set up at all.
+ runNIsOne(cmd);
+ } else {
+ // Else, run
+ runNIsMoreThanOne(cmd);
+ }
+ } finally {
+ if(this.miniCluster && hbaseMiniCluster != null) {
+ hbaseMiniCluster.shutdown();
+ HBaseTestCase.shutdownDfs(dfsCluster);
+ }
+ }
+ }
+
+ private void printUsage() {
+ printUsage(null);
+ }
+
+ private void printUsage(final String message) {
+ if (message != null && message.length() > 0) {
+ System.err.println(message);
+ }
+ System.err.println("Usage: java " + this.getClass().getName() +
+ " [--master=HOST:PORT] \\");
+ System.err.println(" [--miniCluster] [--nomapred] [--rows=ROWS] <command> <nclients>");
+ System.err.println();
+ System.err.println("Options:");
+ System.err.println(" master Specify host and port of HBase " +
+ "cluster master. If not present,");
+ System.err.println(" address is read from configuration");
+ System.err.println(" miniCluster Run the test on an HBaseMiniCluster");
+ System.err.println(" nomapred Run multiple clients using threads " +
+ "(rather than use mapreduce)");
+ System.err.println(" rows Rows each client runs. Default: One million");
+ System.err.println();
+ System.err.println("Command:");
+ System.err.println(" randomRead Run random read test");
+ System.err.println(" randomReadMem Run random read test where table " +
+ "is in memory");
+ System.err.println(" randomSeekScan Run random seek and scan 100 test");
+ System.err.println(" randomWrite Run random write test");
+ System.err.println(" sequentialRead Run sequential read test");
+ System.err.println(" sequentialWrite Run sequential write test");
+ System.err.println(" scan Run scan test");
+ System.err.println();
+ System.err.println("Args:");
+ System.err.println(" nclients Integer. Required. Total number of " +
+ "clients (and HRegionServers)");
+ System.err.println(" running: 1 <= value <= 500");
+ System.err.println("Examples:");
+ System.err.println(" To run a single evaluation client:");
+ System.err.println(" $ bin/hbase " +
+ "org.apache.hadoop.hbase.PerformanceEvaluation sequentialWrite 1");
+ }
+
+ private void getArgs(final int start, final String[] args) {
+ if(start + 1 > args.length) {
+ throw new IllegalArgumentException("must supply the number of clients");
+ }
+
+ N = Integer.parseInt(args[start]);
+ if (N < 1) {
+ throw new IllegalArgumentException("Number of clients must be at least 1");
+ }
+
+ // Set total number of rows to write.
+ this.R = this.R * N;
+ }
+
+ private int doCommandLine(final String[] args) {
+ // Process command-line args. TODO: Better cmd-line processing
+ // (but hopefully something not as painful as cli options).
+ int errCode = -1;
+ if (args.length < 1) {
+ printUsage();
+ return errCode;
+ }
+
+ try {
+ for (int i = 0; i < args.length; i++) {
+ String cmd = args[i];
+ if (cmd.equals("-h") || cmd.startsWith("--h")) {
+ printUsage();
+ errCode = 0;
+ break;
+ }
+
+ final String masterArgKey = "--master=";
+ if (cmd.startsWith(masterArgKey)) {
+ this.conf.set(MASTER_ADDRESS, cmd.substring(masterArgKey.length()));
+ continue;
+ }
+
+ final String miniClusterArgKey = "--miniCluster";
+ if (cmd.startsWith(miniClusterArgKey)) {
+ this.miniCluster = true;
+ continue;
+ }
+
+ final String nmr = "--nomapred";
+ if (cmd.startsWith(nmr)) {
+ this.nomapred = true;
+ continue;
+ }
+
+ final String rows = "--rows=";
+ if (cmd.startsWith(rows)) {
+ this.R = Integer.parseInt(cmd.substring(rows.length()));
+ continue;
+ }
+
+ if (COMMANDS.contains(cmd)) {
+ getArgs(i + 1, args);
+ runTest(cmd);
+ errCode = 0;
+ break;
+ }
+
+ printUsage();
+ break;
+ }
+ } catch (Exception e) {
+ e.printStackTrace();
+ }
+
+ return errCode;
+ }
+
+ /**
+ * @param args
+ */
+ public static void main(final String[] args) {
+ HBaseConfiguration c = new HBaseConfiguration();
+ System.exit(new PerformanceEvaluation(c).doCommandLine(args));
+ }
+}
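+// Example invocations (see printUsage above): a single evaluation client can
+// be run with
+//   bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation sequentialWrite 1
+// With nclients > 1 the clients are scheduled as map tasks unless --nomapred
+// is passed, in which case they run as threads inside this JVM.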
diff --git a/src/test/org/apache/hadoop/hbase/PerformanceEvaluationCommons.java b/src/test/org/apache/hadoop/hbase/PerformanceEvaluationCommons.java
new file mode 100644
index 0000000..78d984c
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/PerformanceEvaluationCommons.java
@@ -0,0 +1,78 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+
+/**
+ * Code shared by PE tests.
+ */
+public class PerformanceEvaluationCommons {
+ static final Log LOG =
+ LogFactory.getLog(PerformanceEvaluationCommons.class.getName());
+
+ public static void assertValueSize(final int expectedSize, final int got) {
+ if (got != expectedSize) {
+ throw new AssertionError("Expected " + expectedSize + " but got " + got);
+ }
+ }
+
+ public static void assertKey(final byte [] expected, final ByteBuffer got) {
+ byte [] b = new byte[got.limit()];
+ got.get(b, 0, got.limit());
+ assertKey(expected, b);
+ }
+
+ public static void assertKey(final byte [] expected, final byte [] got) {
+ if (!org.apache.hadoop.hbase.util.Bytes.equals(expected, got)) {
+ throw new AssertionError("Expected " +
+ org.apache.hadoop.hbase.util.Bytes.toString(expected) +
+ " but got " + org.apache.hadoop.hbase.util.Bytes.toString(got));
+ }
+ }
+
+ public static void concurrentReads(final Runnable r) {
+ final int count = 1;
+ long now = System.currentTimeMillis();
+ List<Thread> threads = new ArrayList<Thread>(count);
+ for (int i = 0; i < count; i++) {
+ Thread t = new Thread(r);
+ t.setName("" + i);
+ threads.add(t);
+ }
+ for (Thread t: threads) {
+ t.start();
+ }
+ for (Thread t: threads) {
+ try {
+ t.join();
+ } catch (InterruptedException e) {
+ e.printStackTrace();
+ }
+ }
+ LOG.info("Test took " + (System.currentTimeMillis() - now));
+ }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/PerformanceEvaluation_Counter.properties b/src/test/org/apache/hadoop/hbase/PerformanceEvaluation_Counter.properties
new file mode 100644
index 0000000..28493ff
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/PerformanceEvaluation_Counter.properties
@@ -0,0 +1,30 @@
+# ResourceBundle properties file for Map-Reduce counters
+
+#/**
+# * Copyright 2007 The Apache Software Foundation
+# *
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements. See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership. The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License. You may obtain a copy of the License at
+# *
+# * http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+
+CounterGroupName= HBase Performance Evaluation
+ELAPSED_TIME.name= Elapsed time in milliseconds
+ROWS.name= Row count
+# ResourceBundle properties file for Map-Reduce counters
+
+CounterGroupName= HBase Performance Evaluation
+ELAPSED_TIME.name= Elapsed time in milliseconds
+ROWS.name= Row count
diff --git a/src/test/org/apache/hadoop/hbase/TestClassMigration.java b/src/test/org/apache/hadoop/hbase/TestClassMigration.java
new file mode 100644
index 0000000..fb7f45b
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/TestClassMigration.java
@@ -0,0 +1,261 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Writables;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.WritableComparable;
+
+import junit.framework.TestCase;
+
+/**
+ * Test that individual classes can migrate themselves.
+ */
+public class TestClassMigration extends TestCase {
+
+ /**
+ * Test we can migrate a 0.1 version of HSK.
+ * @throws Exception
+ */
+ public void testMigrateHStoreKey() throws Exception {
+ long now = System.currentTimeMillis();
+ byte [] nameBytes = Bytes.toBytes(getName());
+ Text nameText = new Text(nameBytes);
+ HStoreKey01Branch hsk = new HStoreKey01Branch(nameText, nameText, now);
+ byte [] b = Writables.getBytes(hsk);
+ HStoreKey deserializedHsk =
+ (HStoreKey)Writables.getWritable(b, new HStoreKey());
+ assertEquals(deserializedHsk.getTimestamp(), hsk.getTimestamp());
+ assertTrue(Bytes.equals(nameBytes, deserializedHsk.getColumn()));
+ assertTrue(Bytes.equals(nameBytes, deserializedHsk.getRow()));
+ }
+
+ /**
+ * HBase 0.1 branch HStoreKey. Same in all regards except the utility
+ * methods have been removed.
+ * Used in the HStoreKey migration test above.
+ */
+ private static class HStoreKey01Branch implements WritableComparable {
+ /**
+ * Colon character in UTF-8
+ */
+ public static final char COLUMN_FAMILY_DELIMITER = ':';
+
+ private Text row;
+ private Text column;
+ private long timestamp;
+
+
+ /** Default constructor used in conjunction with Writable interface */
+ public HStoreKey01Branch() {
+ this(new Text());
+ }
+
+ /**
+ * Create an HStoreKey specifying only the row
+ * The column defaults to the empty string and the time stamp defaults to
+ * Long.MAX_VALUE
+ *
+ * @param row - row key
+ */
+ public HStoreKey01Branch(Text row) {
+ this(row, Long.MAX_VALUE);
+ }
+
+ /**
+ * Create an HStoreKey specifying the row and timestamp
+ * The column name defaults to the empty string
+ *
+ * @param row row key
+ * @param timestamp timestamp value
+ */
+ public HStoreKey01Branch(Text row, long timestamp) {
+ this(row, new Text(), timestamp);
+ }
+
+ /**
+ * Create an HStoreKey specifying the row and column names
+ * The timestamp defaults to LATEST_TIMESTAMP
+ *
+ * @param row row key
+ * @param column column key
+ */
+ public HStoreKey01Branch(Text row, Text column) {
+ this(row, column, HConstants.LATEST_TIMESTAMP);
+ }
+
+ /**
+ * Create an HStoreKey specifying all the fields
+ *
+ * @param row row key
+ * @param column column key
+ * @param timestamp timestamp value
+ */
+ public HStoreKey01Branch(Text row, Text column, long timestamp) {
+ // Make copies by doing 'new Text(arg)'.
+ this.row = new Text(row);
+ this.column = new Text(column);
+ this.timestamp = timestamp;
+ }
+
+ /** @return Approximate size in bytes of this key. */
+ public long getSize() {
+ return this.row.getLength() + this.column.getLength() +
+ 8 /* There is no sizeof in java. Presume long is 8 (64bit machine)*/;
+ }
+
+ /**
+ * Constructs a new HStoreKey from another
+ *
+ * @param other the source key
+ */
+ public HStoreKey01Branch(HStoreKey01Branch other) {
+ this(other.row, other.column, other.timestamp);
+ }
+
+ /**
+ * Change the value of the row key
+ *
+ * @param newrow new row key value
+ */
+ public void setRow(Text newrow) {
+ this.row.set(newrow);
+ }
+
+ /**
+ * Change the value of the column key
+ *
+ * @param newcol new column key value
+ */
+ public void setColumn(Text newcol) {
+ this.column.set(newcol);
+ }
+
+ /**
+ * Change the value of the timestamp field
+ *
+ * @param timestamp new timestamp value
+ */
+ public void setVersion(long timestamp) {
+ this.timestamp = timestamp;
+ }
+
+ /**
+ * Set the value of this HStoreKey from the supplied key
+ *
+ * @param k key value to copy
+ */
+ public void set(HStoreKey01Branch k) {
+ this.row = k.getRow();
+ this.column = k.getColumn();
+ this.timestamp = k.getTimestamp();
+ }
+
+ /** @return value of row key */
+ public Text getRow() {
+ return row;
+ }
+
+ /** @return value of column key */
+ public Text getColumn() {
+ return column;
+ }
+
+ /** @return value of timestamp */
+ public long getTimestamp() {
+ return timestamp;
+ }
+
+ @Override
+ public String toString() {
+ return row.toString() + "/" + column.toString() + "/" + timestamp;
+ }
+
+ @Override
+ public boolean equals(Object obj) {
+ return compareTo(obj) == 0;
+ }
+
+ @Override
+ public int hashCode() {
+ int result = this.row.hashCode();
+ result ^= this.column.hashCode();
+ result ^= this.timestamp;
+ return result;
+ }
+
+ // Comparable
+
+ public int compareTo(Object o) {
+ HStoreKey01Branch other = (HStoreKey01Branch)o;
+ int result = this.row.compareTo(other.row);
+ if (result != 0) {
+ return result;
+ }
+ result = this.column.compareTo(other.column);
+ if (result != 0) {
+ return result;
+ }
+ // Timestamps are compared in reverse: an older timestamp compares as
+ // greater than a newer one. This looks wrong but it is intentional. This
+ // way, newer timestamps are found first when we iterate over a memcache
+ // and newer versions are the first we trip over when reading from a
+ // store file.
+ if (this.timestamp < other.timestamp) {
+ result = 1;
+ } else if (this.timestamp > other.timestamp) {
+ result = -1;
+ }
+ return result;
+ }
+
+ // Writable
+
+ public void write(DataOutput out) throws IOException {
+ row.write(out);
+ column.write(out);
+ out.writeLong(timestamp);
+ }
+
+ public void readFields(DataInput in) throws IOException {
+ row.readFields(in);
+ column.readFields(in);
+ timestamp = in.readLong();
+ }
+
+ /**
+ * Returns row and column bytes out of an HStoreKey.
+ * @param hsk Store key.
+ * @return byte array encoding of HStoreKey
+ * @throws UnsupportedEncodingException
+ */
+ public static byte[] getBytes(final HStoreKey hsk)
+ throws UnsupportedEncodingException {
+ StringBuilder s = new StringBuilder(Bytes.toString(hsk.getRow()));
+ s.append(Bytes.toString(hsk.getColumn()));
+ return s.toString().getBytes(HConstants.UTF8_ENCODING);
+ }
+ }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/TestCompare.java b/src/test/org/apache/hadoop/hbase/TestCompare.java
new file mode 100644
index 0000000..a825a4e
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/TestCompare.java
@@ -0,0 +1,125 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase;
+
+import org.apache.hadoop.hbase.util.Bytes;
+
+import junit.framework.TestCase;
+
+/**
+ * Test comparing HBase objects.
+ */
+public class TestCompare extends TestCase {
+
+ /**
+ * HStoreKey sorts as you would expect in the row and column portions but
+ * for the timestamps, it sorts in reverse with the newest sorting before
+ * the oldest (This is intentional so we trip over the latest first when
+ * iterating or looking in store files).
+ */
+ public void testHStoreKey() {
+ long timestamp = System.currentTimeMillis();
+ byte [] a = Bytes.toBytes("a");
+ HStoreKey past = new HStoreKey(a, a, timestamp - 10);
+ HStoreKey now = new HStoreKey(a, a, timestamp);
+ HStoreKey future = new HStoreKey(a, a, timestamp + 10);
+ assertTrue(past.compareTo(now) > 0);
+ assertTrue(now.compareTo(now) == 0);
+ assertTrue(future.compareTo(now) < 0);
+ // Check that empty column comes before one with a column
+ HStoreKey nocolumn = new HStoreKey(a, timestamp);
+ HStoreKey withcolumn = new HStoreKey(a, a, timestamp);
+ assertTrue(nocolumn.compareTo(withcolumn) < 0);
+ // Check that an empty column with LATEST_TIMESTAMP comes before one with
+ // a column and an old timestamp.
+ nocolumn = new HStoreKey(a, HConstants.LATEST_TIMESTAMP);
+ withcolumn = new HStoreKey(a, a, timestamp);
+ assertTrue(nocolumn.compareTo(withcolumn) < 0);
+ // Test null keys.
+ HStoreKey normal = new HStoreKey("a", "b");
+ assertTrue(normal.compareTo(null) > 0);
+ assertTrue(HStoreKey.compareTo(null, null) == 0);
+ assertTrue(HStoreKey.compareTo(null, normal) < 0);
+ }
+
+ /**
+ * Tests cases where row keys have characters below the ','.
+ * See HBASE-832
+ */
+ public void testHStoreKeyBorderCases() {
+ /** TODO!!!!
+ HRegionInfo info = new HRegionInfo(new HTableDescriptor("testtable"),
+ HConstants.EMPTY_BYTE_ARRAY, HConstants.EMPTY_BYTE_ARRAY);
+ HStoreKey rowA = new HStoreKey("testtable,www.hbase.org/,1234",
+ "", Long.MAX_VALUE, info);
+ HStoreKey rowB = new HStoreKey("testtable,www.hbase.org/%20,99999",
+ "", Long.MAX_VALUE, info);
+
+ assertTrue(rowA.compareTo(rowB) > 0);
+
+ rowA = new HStoreKey("testtable,www.hbase.org/,1234",
+ "", Long.MAX_VALUE, HRegionInfo.FIRST_META_REGIONINFO);
+ rowB = new HStoreKey("testtable,www.hbase.org/%20,99999",
+ "", Long.MAX_VALUE, HRegionInfo.FIRST_META_REGIONINFO);
+
+ assertTrue(rowA.compareTo(rowB) < 0);
+
+ rowA = new HStoreKey("testtable,,1234",
+ "", Long.MAX_VALUE, HRegionInfo.FIRST_META_REGIONINFO);
+ rowB = new HStoreKey("testtable,$www.hbase.org/,99999",
+ "", Long.MAX_VALUE, HRegionInfo.FIRST_META_REGIONINFO);
+
+ assertTrue(rowA.compareTo(rowB) < 0);
+
+ rowA = new HStoreKey(".META.,testtable,www.hbase.org/,1234,4321",
+ "", Long.MAX_VALUE, HRegionInfo.ROOT_REGIONINFO);
+ rowB = new HStoreKey(".META.,testtable,www.hbase.org/%20,99999,99999",
+ "", Long.MAX_VALUE, HRegionInfo.ROOT_REGIONINFO);
+
+ assertTrue(rowA.compareTo(rowB) > 0);
+ */
+ }
+
+
+ /**
+ * Sort of HRegionInfo.
+ */
+ public void testHRegionInfo() {
+ HRegionInfo a = new HRegionInfo(new HTableDescriptor("a"), null, null);
+ HRegionInfo b = new HRegionInfo(new HTableDescriptor("b"), null, null);
+ assertTrue(a.compareTo(b) != 0);
+ HTableDescriptor t = new HTableDescriptor("t");
+ byte [] midway = Bytes.toBytes("midway");
+ a = new HRegionInfo(t, null, midway);
+ b = new HRegionInfo(t, midway, null);
+ assertTrue(a.compareTo(b) < 0);
+ assertTrue(b.compareTo(a) > 0);
+ assertEquals(a, a);
+ assertTrue(a.compareTo(a) == 0);
+ a = new HRegionInfo(t, Bytes.toBytes("a"), Bytes.toBytes("d"));
+ b = new HRegionInfo(t, Bytes.toBytes("e"), Bytes.toBytes("g"));
+ assertTrue(a.compareTo(b) < 0);
+ a = new HRegionInfo(t, Bytes.toBytes("aaaa"), Bytes.toBytes("dddd"));
+ b = new HRegionInfo(t, Bytes.toBytes("e"), Bytes.toBytes("g"));
+ assertTrue(a.compareTo(b) < 0);
+ a = new HRegionInfo(t, Bytes.toBytes("aaaa"), Bytes.toBytes("dddd"));
+ b = new HRegionInfo(t, Bytes.toBytes("aaaa"), Bytes.toBytes("eeee"));
+ assertTrue(a.compareTo(b) < 0);
+ }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/TestEmptyMetaInfo.java b/src/test/org/apache/hadoop/hbase/TestEmptyMetaInfo.java
new file mode 100644
index 0000000..122d278
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/TestEmptyMetaInfo.java
@@ -0,0 +1,77 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Scanner;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Tests master cleanup of rows in meta table where there is no HRegionInfo
+ */
+public class TestEmptyMetaInfo extends HBaseClusterTestCase {
+ /**
+ * Insert some bogus rows in meta. Master should clean them up.
+ * @throws IOException
+ */
+ public void testEmptyMetaInfo() throws IOException {
+ HTable t = new HTable(conf, HConstants.META_TABLE_NAME);
+ final int COUNT = 5;
+ final byte [] tableName = Bytes.toBytes(getName());
+ for (int i = 0; i < COUNT; i++) {
+ byte [] regionName = HRegionInfo.createRegionName(tableName,
+ Bytes.toBytes(i == 0? "": Integer.toString(i)),
+ Long.toString(System.currentTimeMillis()));
+ BatchUpdate b = new BatchUpdate(regionName);
+ b.put(HConstants.COL_SERVER, Bytes.toBytes("localhost:1234"));
+ t.commit(b);
+ }
+ long sleepTime =
+ conf.getLong("hbase.master.meta.thread.rescanfrequency", 10000);
+ int tries = conf.getInt("hbase.client.retries.number", 5);
+ int count = 0;
+ do {
+ tries -= 1;
+ try {
+ Thread.sleep(sleepTime);
+ } catch (InterruptedException e) {
+ // ignore
+ }
+ Scanner scanner = t.getScanner(HConstants.ALL_META_COLUMNS, tableName);
+ try {
+ count = 0;
+ for (RowResult r: scanner) {
+ if (r.size() > 0) {
+ count += 1;
+ }
+ }
+ } finally {
+ scanner.close();
+ }
+ } while (count != 0 && tries >= 0);
+ assertTrue(tries >= 0);
+ assertEquals(0, count);
+ }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/TestHBaseCluster.java b/src/test/org/apache/hadoop/hbase/TestHBaseCluster.java
new file mode 100644
index 0000000..2c46619
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/TestHBaseCluster.java
@@ -0,0 +1,207 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Iterator;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Scanner;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Test HBase Master and Region servers, client API
+ */
+public class TestHBaseCluster extends HBaseClusterTestCase {
+ private static final Log LOG = LogFactory.getLog(TestHBaseCluster.class);
+
+ private HTableDescriptor desc;
+ private HBaseAdmin admin;
+ private HTable table;
+
+ /** constructor */
+ public TestHBaseCluster() {
+ super();
+ this.desc = null;
+ this.admin = null;
+ this.table = null;
+
+ // Make the thread wake frequency a little slower so other threads
+ // can run
+ conf.setInt("hbase.server.thread.wakefrequency", 2000);
+
+ // Make lease timeout longer, lease checks less frequent
+ conf.setInt("hbase.master.lease.period", 10 * 1000);
+
+ // Increase the amount of time between client retries
+ conf.setLong("hbase.client.pause", 15 * 1000);
+ }
+
+ /**
+ * Since all the "tests" depend on the results of the previous test, they are
+ * not JUnit tests that can stand alone. Consequently we have a single JUnit
+ * test that runs the "sub-tests" as private methods.
+ * @throws IOException
+ */
+ public void testHBaseCluster() throws IOException {
+ setup();
+ basic();
+ scanner();
+ listTables();
+ }
+
+ private static final int FIRST_ROW = 1;
+ private static final int NUM_VALS = 1000;
+ private static final byte [] CONTENTS = Bytes.toBytes("contents:");
+ private static final String CONTENTS_BASIC_STR = "contents:basic";
+ private static final byte [] CONTENTS_BASIC = Bytes.toBytes(CONTENTS_BASIC_STR);
+ private static final String CONTENTSTR = "contentstr";
+ private static final byte [] ANCHOR = Bytes.toBytes("anchor:");
+ private static final String ANCHORNUM = "anchor:anchornum-";
+ private static final String ANCHORSTR = "anchorstr";
+
+ private void setup() throws IOException {
+ desc = new HTableDescriptor("test");
+ desc.addFamily(new HColumnDescriptor(CONTENTS));
+ desc.addFamily(new HColumnDescriptor(ANCHOR));
+ admin = new HBaseAdmin(conf);
+ admin.createTable(desc);
+ table = new HTable(conf, desc.getName());
+ }
+
+ // Test basic functionality. Writes to contents:basic and anchor:anchornum-*
+
+ private void basic() throws IOException {
+ long startTime = System.currentTimeMillis();
+
+ // Write out a bunch of values
+
+ for (int k = FIRST_ROW; k <= NUM_VALS; k++) {
+ BatchUpdate b = new BatchUpdate("row_" + k);
+ b.put(CONTENTS_BASIC, Bytes.toBytes(CONTENTSTR + k));
+ b.put(ANCHORNUM + k, Bytes.toBytes(ANCHORSTR + k));
+ table.commit(b);
+ }
+ LOG.info("Write " + NUM_VALS + " rows. Elapsed time: "
+ + ((System.currentTimeMillis() - startTime) / 1000.0));
+
+ // Read them back in
+
+ startTime = System.currentTimeMillis();
+
+ byte [] collabel = null;
+ for (int k = FIRST_ROW; k <= NUM_VALS; k++) {
+ String rowlabelStr = "row_" + k;
+ byte [] rowlabel = Bytes.toBytes(rowlabelStr);
+
+ byte bodydata[] = table.get(rowlabel, CONTENTS_BASIC).getValue();
+ assertNotNull("no data for row " + rowlabelStr + "/" + CONTENTS_BASIC_STR,
+ bodydata);
+ String bodystr = new String(bodydata, HConstants.UTF8_ENCODING);
+ String teststr = CONTENTSTR + k;
+ assertTrue("Incorrect value for key: (" + rowlabelStr + "/" +
+ CONTENTS_BASIC_STR + "), expected: '" + teststr + "' got: '" +
+ bodystr + "'", teststr.compareTo(bodystr) == 0);
+
+ String collabelStr = ANCHORNUM + k;
+ collabel = Bytes.toBytes(collabelStr);
+ bodydata = table.get(rowlabel, collabel).getValue();
+ assertNotNull("no data for row " + rowlabelStr + "/" + collabelStr, bodydata);
+ bodystr = new String(bodydata, HConstants.UTF8_ENCODING);
+ teststr = ANCHORSTR + k;
+ assertTrue("Incorrect value for key: (" + rowlabelStr + "/" + collabelStr +
+ "), expected: '" + teststr + "' got: '" + bodystr + "'",
+ teststr.compareTo(bodystr) == 0);
+ }
+
+ LOG.info("Read " + NUM_VALS + " rows. Elapsed time: "
+ + ((System.currentTimeMillis() - startTime) / 1000.0));
+ }
+
+ private void scanner() throws IOException {
+ byte [][] cols = new byte [][] {Bytes.toBytes(ANCHORNUM + "[0-9]+"),
+ CONTENTS_BASIC};
+
+ long startTime = System.currentTimeMillis();
+
+ Scanner s = table.getScanner(cols, HConstants.EMPTY_BYTE_ARRAY);
+ try {
+
+ int contentsFetched = 0;
+ int anchorFetched = 0;
+ int k = 0;
+ for (RowResult curVals : s) {
+ for (Iterator<byte []> it = curVals.keySet().iterator(); it.hasNext(); ) {
+ byte [] col = it.next();
+ byte val[] = curVals.get(col).getValue();
+ String curval = Bytes.toString(val);
+ if (Bytes.compareTo(col, CONTENTS_BASIC) == 0) {
+ assertTrue("Error at:" + Bytes.toString(curVals.getRow())
+ + ", Value for " + Bytes.toString(col) + " should start with: " + CONTENTSTR
+ + ", but was fetched as: " + curval,
+ curval.startsWith(CONTENTSTR));
+ contentsFetched++;
+
+ } else if (Bytes.toString(col).startsWith(ANCHORNUM)) {
+ assertTrue("Error at:" + Bytes.toString(curVals.getRow())
+ + ", Value for " + Bytes.toString(col) + " should start with: " + ANCHORSTR
+ + ", but was fetched as: " + curval,
+ curval.startsWith(ANCHORSTR));
+ anchorFetched++;
+
+ } else {
+ LOG.info(Bytes.toString(col));
+ }
+ }
+ k++;
+ }
+ assertEquals("Expected " + NUM_VALS + " " +
+ Bytes.toString(CONTENTS_BASIC) + " values, but fetched " +
+ contentsFetched,
+ NUM_VALS, contentsFetched);
+ assertEquals("Expected " + NUM_VALS + " " + ANCHORNUM +
+ " values, but fetched " + anchorFetched,
+ NUM_VALS, anchorFetched);
+
+ LOG.info("Scanned " + NUM_VALS
+ + " rows. Elapsed time: "
+ + ((System.currentTimeMillis() - startTime) / 1000.0));
+
+ } finally {
+ s.close();
+ }
+ }
+
+ private void listTables() throws IOException {
+ HTableDescriptor[] tables = admin.listTables();
+ assertEquals(1, tables.length);
+ assertTrue(Bytes.equals(desc.getName(), tables[0].getName()));
+ Collection<HColumnDescriptor> families = tables[0].getFamilies();
+ assertEquals(2, families.size());
+ assertTrue(tables[0].hasFamily(CONTENTS));
+ assertTrue(tables[0].hasFamily(ANCHOR));
+ }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/TestHStoreKey.java b/src/test/org/apache/hadoop/hbase/TestHStoreKey.java
new file mode 100644
index 0000000..50f765b
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/TestHStoreKey.java
@@ -0,0 +1,270 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.Set;
+import java.util.TreeSet;
+
+import junit.framework.TestCase;
+
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Writables;
+
+/**
+ * Tests for the HStoreKey Plain and Meta RawComparators.
+ */
+public class TestHStoreKey extends TestCase {
+ @Override
+ protected void setUp() throws Exception {
+ super.setUp();
+ }
+
+ @Override
+ protected void tearDown() throws Exception {
+ super.tearDown();
+ }
+
+ public void testMoreComparisons() throws Exception {
+ // Root compares
+ HStoreKey a = new HStoreKey(".META.,,99999999999999");
+ HStoreKey b = new HStoreKey(".META.,,1");
+ HStoreKey.StoreKeyComparator c = new HStoreKey.RootStoreKeyComparator();
+ assertTrue(c.compare(b.getBytes(), a.getBytes()) < 0);
+ HStoreKey aa = new HStoreKey(".META.,,1");
+ HStoreKey bb = new HStoreKey(".META.,,1", "info:regioninfo", 1235943454602L);
+ assertTrue(c.compare(aa.getBytes(), bb.getBytes()) < 0);
+
+ // Meta compares
+ HStoreKey aaa = new HStoreKey("TestScanMultipleVersions,row_0500,1236020145502");
+ HStoreKey bbb = new HStoreKey("TestScanMultipleVersions,,99999999999999");
+ c = new HStoreKey.MetaStoreKeyComparator();
+ assertTrue(c.compare(bbb.getBytes(), aaa.getBytes()) < 0);
+
+ HStoreKey aaaa = new HStoreKey("TestScanMultipleVersions,,1236023996656",
+ "info:regioninfo", 1236024396271L);
+ assertTrue(c.compare(aaaa.getBytes(), bbb.getBytes()) < 0);
+
+ HStoreKey x = new HStoreKey("TestScanMultipleVersions,row_0500,1236034574162",
+ "", 9223372036854775807L);
+ HStoreKey y = new HStoreKey("TestScanMultipleVersions,row_0500,1236034574162",
+ "info:regioninfo", 1236034574912L);
+ assertTrue(c.compare(x.getBytes(), y.getBytes()) < 0);
+
+ comparisons(new HStoreKey.HStoreKeyRootComparator());
+ comparisons(new HStoreKey.HStoreKeyMetaComparator());
+ comparisons(new HStoreKey.HStoreKeyComparator());
+ metacomparisons(new HStoreKey.HStoreKeyRootComparator());
+ metacomparisons(new HStoreKey.HStoreKeyMetaComparator());
+ }
+
+ /**
+ * Tests cases where row keys have characters below the ','.
+ * See HBASE-832
+ * @throws IOException
+ */
+ public void testHStoreKeyBorderCases() throws IOException {
+ HStoreKey rowA = new HStoreKey("testtable,www.hbase.org/,1234",
+ "", Long.MAX_VALUE);
+ byte [] rowABytes = Writables.getBytes(rowA);
+ HStoreKey rowB = new HStoreKey("testtable,www.hbase.org/%20,99999",
+ "", Long.MAX_VALUE);
+ byte [] rowBBytes = Writables.getBytes(rowB);
+ // This is a plain compare on the row. It gives wrong answer for meta table
+ // row entry.
+ assertTrue(rowA.compareTo(rowB) > 0);
+ HStoreKey.MetaStoreKeyComparator c =
+ new HStoreKey.MetaStoreKeyComparator();
+ assertTrue(c.compare(rowABytes, rowBBytes) < 0);
+
+ rowA = new HStoreKey("testtable,,1234", "", Long.MAX_VALUE);
+ rowB = new HStoreKey("testtable,$www.hbase.org/,99999", "", Long.MAX_VALUE);
+ assertTrue(rowA.compareTo(rowB) > 0);
+ assertTrue(c.compare( Writables.getBytes(rowA), Writables.getBytes(rowB)) < 0);
+
+ rowA = new HStoreKey(".META.,testtable,www.hbase.org/,1234,4321", "",
+ Long.MAX_VALUE);
+ rowB = new HStoreKey(".META.,testtable,www.hbase.org/%20,99999,99999", "",
+ Long.MAX_VALUE);
+ assertTrue(rowA.compareTo(rowB) > 0);
+ HStoreKey.RootStoreKeyComparator rootComparator =
+ new HStoreKey.RootStoreKeyComparator();
+ assertTrue(rootComparator.compare( Writables.getBytes(rowA),
+ Writables.getBytes(rowB)) < 0);
+ }
+
+ private void metacomparisons(final HStoreKey.HStoreKeyComparator c) {
+ assertTrue(c.compare(new HStoreKey(".META.,a,,0,1"),
+ new HStoreKey(".META.,a,,0,1")) == 0);
+ assertTrue(c.compare(new HStoreKey(".META.,a,,0,1"),
+ new HStoreKey(".META.,a,,0,2")) < 0);
+ assertTrue(c.compare(new HStoreKey(".META.,a,,0,2"),
+ new HStoreKey(".META.,a,,0,1")) > 0);
+ }
+
+ private void comparisons(final HStoreKey.HStoreKeyComparator c) {
+ assertTrue(c.compare(new HStoreKey(".META.,,1"),
+ new HStoreKey(".META.,,1")) == 0);
+ assertTrue(c.compare(new HStoreKey(".META.,,1"),
+ new HStoreKey(".META.,,2")) < 0);
+ assertTrue(c.compare(new HStoreKey(".META.,,2"),
+ new HStoreKey(".META.,,1")) > 0);
+ }
+
+ @SuppressWarnings("unchecked")
+ public void testBinaryKeys() throws Exception {
+ Set<HStoreKey> set = new TreeSet<HStoreKey>(new HStoreKey.HStoreKeyComparator());
+ HStoreKey [] keys = {new HStoreKey("aaaaa,\u0000\u0000,2", getName(), 2),
+ new HStoreKey("aaaaa,\u0001,3", getName(), 3),
+ new HStoreKey("aaaaa,,1", getName(), 1),
+ new HStoreKey("aaaaa,\u1000,5", getName(), 5),
+ new HStoreKey("aaaaa,a,4", getName(), 4),
+ new HStoreKey("a,a,0", getName(), 0),
+ };
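+ // Each key's timestamp encodes its expected position under the correct sort
+ // order, so iterating the set against a running counter checks the ordering.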
+ // Add to set with bad comparator
+ for (int i = 0; i < keys.length; i++) {
+ set.add(keys[i]);
+ }
+ // This will output the keys incorrectly.
+ boolean assertion = false;
+ int count = 0;
+ try {
+ for (HStoreKey k: set) {
+ assertTrue(count++ == k.getTimestamp());
+ }
+ } catch (junit.framework.AssertionFailedError e) {
+ // Expected
+ assertion = true;
+ }
+ assertTrue(assertion);
+ // Make set with good comparator
+ set = new TreeSet<HStoreKey>(new HStoreKey.HStoreKeyMetaComparator());
+ for (int i = 0; i < keys.length; i++) {
+ set.add(keys[i]);
+ }
+ count = 0;
+ for (HStoreKey k: set) {
+ assertTrue(count++ == k.getTimestamp());
+ }
+ // Make up -ROOT- table keys.
+ HStoreKey [] rootKeys = {
+ new HStoreKey(".META.,aaaaa,\u0000\u0000,0,2", getName(), 2),
+ new HStoreKey(".META.,aaaaa,\u0001,0,3", getName(), 3),
+ new HStoreKey(".META.,aaaaa,,0,1", getName(), 1),
+ new HStoreKey(".META.,aaaaa,\u1000,0,5", getName(), 5),
+ new HStoreKey(".META.,aaaaa,a,0,4", getName(), 4),
+ new HStoreKey(".META.,,0", getName(), 0),
+ };
+ // This will output the keys incorrectly.
+ set = new TreeSet<HStoreKey>(new HStoreKey.HStoreKeyMetaComparator());
+ // Add to set with bad comparator
+ for (int i = 0; i < rootKeys.length; i++) {
+ set.add(rootKeys[i]);
+ }
+ assertion = false;
+ count = 0;
+ try {
+ for (HStoreKey k: set) {
+ assertTrue(count++ == k.getTimestamp());
+ }
+ } catch (junit.framework.AssertionFailedError e) {
+ // Expected
+ assertion = true;
+ }
+ // Now with right comparator
+ set = new TreeSet<HStoreKey>(new HStoreKey.HStoreKeyRootComparator());
+ // Add to set with the right comparator this time
+ for (int i = 0; i < rootKeys.length; i++) {
+ set.add(rootKeys[i]);
+ }
+ count = 0;
+ for (HStoreKey k: set) {
+ assertTrue(count++ == k.getTimestamp());
+ }
+ }
+
+ public void testSerialization() throws IOException {
+ HStoreKey hsk = new HStoreKey(getName(), getName(), 123);
+ byte [] b = hsk.getBytes();
+ HStoreKey hsk2 = HStoreKey.create(b);
+ assertTrue(hsk.equals(hsk2));
+ // Test getBytes with empty column
+ hsk = new HStoreKey(getName());
+ assertTrue(Bytes.equals(hsk.getBytes(),
+ HStoreKey.getBytes(Bytes.toBytes(getName()), null,
+ HConstants.LATEST_TIMESTAMP)));
+ }
+
+ public void testGetBytes() throws IOException {
+ long now = System.currentTimeMillis();
+ HStoreKey hsk = new HStoreKey("one", "two", now);
+ byte [] writablesBytes = Writables.getBytes(hsk);
+ byte [] selfSerializationBytes = hsk.getBytes();
+ assertTrue(Bytes.equals(writablesBytes, selfSerializationBytes));
+ }
+
+ public void testByteBuffer() throws Exception {
+ final long ts = 123;
+ final byte [] row = Bytes.toBytes("row");
+ final byte [] column = Bytes.toBytes("column");
+ HStoreKey hsk = new HStoreKey(row, column, ts);
+ ByteBuffer bb = ByteBuffer.wrap(hsk.getBytes());
+ assertTrue(Bytes.equals(row, HStoreKey.getRow(bb)));
+ assertTrue(Bytes.equals(column, HStoreKey.getColumn(bb)));
+ assertEquals(ts, HStoreKey.getTimestamp(bb));
+ }
+
+ /**
+ * Test the byte comparator works same as the object comparator.
+ * @throws IOException
+ */
+ public void testRawComparator() throws IOException {
+ long timestamp = System.currentTimeMillis();
+ byte [] a = Bytes.toBytes("a");
+ HStoreKey past = new HStoreKey(a, a, timestamp - 10);
+ byte [] pastBytes = Writables.getBytes(past);
+ HStoreKey now = new HStoreKey(a, a, timestamp);
+ byte [] nowBytes = Writables.getBytes(now);
+ HStoreKey future = new HStoreKey(a, a, timestamp + 10);
+ byte [] futureBytes = Writables.getBytes(future);
+ HStoreKey.StoreKeyComparator comparator =
+ new HStoreKey.StoreKeyComparator();
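+ // HStoreKey timestamps sort in descending order, so the older 'past' key
+ // compares greater than 'now'.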
+ assertTrue(past.compareTo(now) > 0);
+ assertTrue(comparator.compare(pastBytes, nowBytes) > 0);
+ assertTrue(now.compareTo(now) == 0);
+ assertTrue(comparator.compare(nowBytes, nowBytes) == 0);
+ assertTrue(future.compareTo(now) < 0);
+ assertTrue(comparator.compare(futureBytes, nowBytes) < 0);
+ // Check that empty column comes before one with a column
+ HStoreKey nocolumn = new HStoreKey(a, timestamp);
+ byte [] nocolumnBytes = Writables.getBytes(nocolumn);
+ HStoreKey withcolumn = new HStoreKey(a, a, timestamp);
+ byte [] withcolumnBytes = Writables.getBytes(withcolumn);
+ assertTrue(nocolumn.compareTo(withcolumn) < 0);
+ assertTrue(comparator.compare(nocolumnBytes, withcolumnBytes) < 0);
+ // Check that an empty column with LATEST_TIMESTAMP comes before one with
+ // a column and an older timestamp.
+ nocolumn = new HStoreKey(a, HConstants.LATEST_TIMESTAMP);
+ nocolumnBytes = Writables.getBytes(nocolumn);
+ withcolumn = new HStoreKey(a, a, timestamp);
+ withcolumnBytes = Writables.getBytes(withcolumn);
+ assertTrue(nocolumn.compareTo(withcolumn) < 0);
+ assertTrue(comparator.compare(nocolumnBytes, withcolumnBytes) < 0);
+ }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/TestInfoServers.java b/src/test/org/apache/hadoop/hbase/TestInfoServers.java
new file mode 100644
index 0000000..911ac44
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/TestInfoServers.java
@@ -0,0 +1,76 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.BufferedInputStream;
+import java.io.IOException;
+import java.net.URL;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.client.HTable;
+
+/**
+ * In testing, the info servers are disabled. This test enables them and
+ * checks that they serve pages.
+ */
+public class TestInfoServers extends HBaseClusterTestCase {
+ static final Log LOG = LogFactory.getLog(TestInfoServers.class);
+
+ @Override
+ protected void preHBaseClusterSetup() {
+ // Bring up info servers on 'odd' port numbers in case the test is not
+ // sourcing the src/test/hbase-default.xml.
+ conf.setInt("hbase.master.info.port", 60011);
+ conf.setInt("hbase.regionserver.info.port", 60031);
+ }
+
+ /**
+ * @throws Exception
+ */
+ public void testInfoServersAreUp() throws Exception {
+ // give the cluster time to start up
+ new HTable(conf, ".META.");
+ int port = cluster.getMaster().getInfoServer().getPort();
+ assertHasExpectedContent(new URL("http://localhost:" + port +
+ "/index.html"), "master");
+ port = cluster.getRegionThreads().get(0).getRegionServer().
+ getInfoServer().getPort();
+ assertHasExpectedContent(new URL("http://localhost:" + port +
+ "/index.html"), "regionserver");
+ }
+
+ private void assertHasExpectedContent(final URL u, final String expected)
+ throws IOException {
+ LOG.info("Testing " + u.toString() + " has " + expected);
+ java.net.URLConnection c = u.openConnection();
+ c.connect();
+ assertTrue(c.getContentLength() > 0);
+ StringBuilder sb = new StringBuilder(c.getContentLength());
+ BufferedInputStream bis = new BufferedInputStream(c.getInputStream());
+ byte [] bytes = new byte[1024];
+ for (int read = -1; (read = bis.read(bytes)) != -1;) {
+ sb.append(new String(bytes, 0, read));
+ }
+ bis.close();
+ String content = sb.toString();
+ assertTrue(content.contains(expected));
+ }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/TestKeyValue.java b/src/test/org/apache/hadoop/hbase/TestKeyValue.java
new file mode 100644
index 0000000..861f4f7
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/TestKeyValue.java
@@ -0,0 +1,263 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+import java.util.Set;
+import java.util.TreeSet;
+
+import junit.framework.TestCase;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.KeyValue.KVComparator;
+import org.apache.hadoop.hbase.util.Bytes;
+
+public class TestKeyValue extends TestCase {
+ private final Log LOG = LogFactory.getLog(this.getClass().getName());
+
+ public void testColumnCompare() throws Exception {
+ final byte [] a = Bytes.toBytes("aaa");
+ byte [] column1 = Bytes.toBytes("abc:def");
+ byte [] column2 = Bytes.toBytes("abcd:ef");
+ KeyValue aaa = new KeyValue(a, column1, a);
+ assertFalse(KeyValue.COMPARATOR.
+ compareColumns(aaa, column2, 0, column2.length, 4) == 0);
+ column1 = Bytes.toBytes("abcd:");
+ aaa = new KeyValue(a, column1, a);
+ assertFalse(KeyValue.COMPARATOR.
+ compareColumns(aaa, column1, 0, column1.length, 4) == 0);
+ }
+
+ public void testBasics() throws Exception {
+ LOG.info("LOWKEY: " + KeyValue.LOWESTKEY.toString());
+ check(Bytes.toBytes(getName()),
+ Bytes.toBytes(getName() + ":" + getName()), 1,
+ Bytes.toBytes(getName()));
+ // Test empty value and empty column -- both should work.
+ check(Bytes.toBytes(getName()), null, 1, null);
+ check(HConstants.EMPTY_BYTE_ARRAY, null, 1, null);
+ }
+
+ private void check(final byte [] row, final byte [] column,
+ final long timestamp, final byte [] value) {
+ KeyValue kv = new KeyValue(row, column, timestamp, value);
+ assertTrue(Bytes.compareTo(kv.getRow(), row) == 0);
+ if (column != null && column.length > 0) {
+ int index = KeyValue.getFamilyDelimiterIndex(column, 0, column.length);
+ byte [] family = new byte [index];
+ System.arraycopy(column, 0, family, 0, family.length);
+ assertTrue(kv.matchingFamily(family));
+ }
+ // Call toString to make sure it works.
+ LOG.info(kv.toString());
+ }
+
+ public void testPlainCompare() throws Exception {
+ final byte [] a = Bytes.toBytes("aaa");
+ final byte [] b = Bytes.toBytes("bbb");
+ final byte [] column = Bytes.toBytes("col:umn");
+ KeyValue aaa = new KeyValue(a, column, a);
+ KeyValue bbb = new KeyValue(b, column, b);
+ byte [] keyabb = aaa.getKey();
+ byte [] keybbb = bbb.getKey();
+ assertTrue(KeyValue.COMPARATOR.compare(aaa, bbb) < 0);
+ assertTrue(KeyValue.KEY_COMPARATOR.compare(keyabb, 0, keyabb.length, keybbb,
+ 0, keybbb.length) < 0);
+ assertTrue(KeyValue.COMPARATOR.compare(bbb, aaa) > 0);
+ assertTrue(KeyValue.KEY_COMPARATOR.compare(keybbb, 0, keybbb.length, keyabb,
+ 0, keyabb.length) > 0);
+ // Compare breaks if passed same ByteBuffer as both left and right arguments.
+ assertTrue(KeyValue.COMPARATOR.compare(bbb, bbb) == 0);
+ assertTrue(KeyValue.KEY_COMPARATOR.compare(keybbb, 0, keybbb.length, keybbb,
+ 0, keybbb.length) == 0);
+ assertTrue(KeyValue.COMPARATOR.compare(aaa, aaa) == 0);
+ assertTrue(KeyValue.KEY_COMPARATOR.compare(keyabb, 0, keyabb.length, keyabb,
+ 0, keyabb.length) == 0);
+ // Do compare with different timestamps.
+ aaa = new KeyValue(a, column, 1, a);
+ bbb = new KeyValue(a, column, 2, a);
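+ // Later timestamps sort first, so the ts=1 KeyValue compares greater than
+ // the ts=2 one.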
+ assertTrue(KeyValue.COMPARATOR.compare(aaa, bbb) > 0);
+ assertTrue(KeyValue.COMPARATOR.compare(bbb, aaa) < 0);
+ assertTrue(KeyValue.COMPARATOR.compare(aaa, aaa) == 0);
+ // Do compare with different types. Higher numbered types -- Delete
+ // should sort ahead of lower numbers; i.e. Put
+ aaa = new KeyValue(a, column, 1, KeyValue.Type.Delete, a);
+ bbb = new KeyValue(a, column, 1, a);
+ assertTrue(KeyValue.COMPARATOR.compare(aaa, bbb) < 0);
+ assertTrue(KeyValue.COMPARATOR.compare(bbb, aaa) > 0);
+ assertTrue(KeyValue.COMPARATOR.compare(aaa, aaa) == 0);
+ }
+
+ public void testMoreComparisons() throws Exception {
+ // Root compares
+ long now = System.currentTimeMillis();
+ KeyValue a = new KeyValue(".META.,,99999999999999", now);
+ KeyValue b = new KeyValue(".META.,,1", now);
+ KVComparator c = new KeyValue.RootComparator();
+ assertTrue(c.compare(b, a) < 0);
+ KeyValue aa = new KeyValue(".META.,,1", now);
+ KeyValue bb = new KeyValue(".META.,,1", "info:regioninfo",
+ 1235943454602L);
+ assertTrue(c.compare(aa, bb) < 0);
+
+ // Meta compares
+ KeyValue aaa =
+ new KeyValue("TestScanMultipleVersions,row_0500,1236020145502", now);
+ KeyValue bbb = new KeyValue("TestScanMultipleVersions,,99999999999999",
+ now);
+ c = new KeyValue.MetaComparator();
+ assertTrue(c.compare(bbb, aaa) < 0);
+
+ KeyValue aaaa = new KeyValue("TestScanMultipleVersions,,1236023996656",
+ "info:regioninfo", 1236024396271L);
+ assertTrue(c.compare(aaaa, bbb) < 0);
+
+ KeyValue x = new KeyValue("TestScanMultipleVersions,row_0500,1236034574162",
+ "", 9223372036854775807L);
+ KeyValue y = new KeyValue("TestScanMultipleVersions,row_0500,1236034574162",
+ "info:regioninfo", 1236034574912L);
+ assertTrue(c.compare(x, y) < 0);
+ comparisons(new KeyValue.MetaComparator());
+ comparisons(new KeyValue.KVComparator());
+ metacomparisons(new KeyValue.RootComparator());
+ metacomparisons(new KeyValue.MetaComparator());
+ }
+
+ /**
+ * Tests cases where row keys have characters that sort below the ','.
+ * See HBASE-832
+ * @throws IOException
+ */
+ public void testKeyValueBorderCases() throws IOException {
+ // % sorts before , so if we don't do special comparator, rowB would
+ // come before rowA.
+ KeyValue rowA = new KeyValue("testtable,www.hbase.org/,1234",
+ "", Long.MAX_VALUE);
+ KeyValue rowB = new KeyValue("testtable,www.hbase.org/%20,99999",
+ "", Long.MAX_VALUE);
+ assertTrue(KeyValue.META_COMPARATOR.compare(rowA, rowB) < 0);
+
+ rowA = new KeyValue("testtable,,1234", "", Long.MAX_VALUE);
+ rowB = new KeyValue("testtable,$www.hbase.org/,99999", "", Long.MAX_VALUE);
+ assertTrue(KeyValue.META_COMPARATOR.compare(rowA, rowB) < 0);
+
+ rowA = new KeyValue(".META.,testtable,www.hbase.org/,1234,4321", "",
+ Long.MAX_VALUE);
+ rowB = new KeyValue(".META.,testtable,www.hbase.org/%20,99999,99999", "",
+ Long.MAX_VALUE);
+ assertTrue(KeyValue.ROOT_COMPARATOR.compare(rowA, rowB) < 0);
+ }
+
+ private void metacomparisons(final KeyValue.MetaComparator c) {
+ long now = System.currentTimeMillis();
+ assertTrue(c.compare(new KeyValue(".META.,a,,0,1", now),
+ new KeyValue(".META.,a,,0,1", now)) == 0);
+ KeyValue a = new KeyValue(".META.,a,,0,1", now);
+ KeyValue b = new KeyValue(".META.,a,,0,2", now);
+ assertTrue(c.compare(a, b) < 0);
+ assertTrue(c.compare(new KeyValue(".META.,a,,0,2", now),
+ new KeyValue(".META.,a,,0,1", now)) > 0);
+ }
+
+ private void comparisons(final KeyValue.KVComparator c) {
+ long now = System.currentTimeMillis();
+ assertTrue(c.compare(new KeyValue(".META.,,1", now),
+ new KeyValue(".META.,,1", now)) == 0);
+ assertTrue(c.compare(new KeyValue(".META.,,1", now),
+ new KeyValue(".META.,,2", now)) < 0);
+ assertTrue(c.compare(new KeyValue(".META.,,2", now),
+ new KeyValue(".META.,,1", now)) > 0);
+ }
+
+ public void testBinaryKeys() throws Exception {
+ Set<KeyValue> set = new TreeSet<KeyValue>(KeyValue.COMPARATOR);
+ String column = "col:umn";
+ KeyValue [] keys = {new KeyValue("aaaaa,\u0000\u0000,2", column, 2),
+ new KeyValue("aaaaa,\u0001,3", column, 3),
+ new KeyValue("aaaaa,,1", column, 1),
+ new KeyValue("aaaaa,\u1000,5", column, 5),
+ new KeyValue("aaaaa,a,4", column, 4),
+ new KeyValue("a,a,0", column, 0),
+ };
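+ // Timestamps 0-5 mark each key's expected rank under the proper comparator;
+ // the loops below iterate the sorted set and compare against a counter.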
+ // Add to set with bad comparator
+ for (int i = 0; i < keys.length; i++) {
+ set.add(keys[i]);
+ }
+ // This will output the keys incorrectly.
+ boolean assertion = false;
+ int count = 0;
+ try {
+ for (KeyValue k: set) {
+ assertTrue(count++ == k.getTimestamp());
+ }
+ } catch (junit.framework.AssertionFailedError e) {
+ // Expected
+ assertion = true;
+ }
+ assertTrue(assertion);
+ // Make set with good comparator
+ set = new TreeSet<KeyValue>(new KeyValue.MetaComparator());
+ for (int i = 0; i < keys.length; i++) {
+ set.add(keys[i]);
+ }
+ count = 0;
+ for (KeyValue k: set) {
+ assertTrue(count++ == k.getTimestamp());
+ }
+ // Make up -ROOT- table keys.
+ KeyValue [] rootKeys = {
+ new KeyValue(".META.,aaaaa,\u0000\u0000,0,2", column, 2),
+ new KeyValue(".META.,aaaaa,\u0001,0,3", column, 3),
+ new KeyValue(".META.,aaaaa,,0,1", column, 1),
+ new KeyValue(".META.,aaaaa,\u1000,0,5", column, 5),
+ new KeyValue(".META.,aaaaa,a,0,4", column, 4),
+ new KeyValue(".META.,,0", column, 0),
+ };
+ // This will output the keys incorrectly.
+ set = new TreeSet<KeyValue>(new KeyValue.MetaComparator());
+ // Add to set with bad comparator
+ for (int i = 0; i < rootKeys.length; i++) {
+ set.add(rootKeys[i]);
+ }
+ assertion = false;
+ count = 0;
+ try {
+ for (KeyValue k: set) {
+ assertTrue(count++ == k.getTimestamp());
+ }
+ } catch (junit.framework.AssertionFailedError e) {
+ // Expected
+ assertion = true;
+ }
+ // Now with right comparator
+ set = new TreeSet<KeyValue>(new KeyValue.RootComparator());
+ // Add to set with the right comparator this time
+ for (int i = 0; i < rootKeys.length; i++) {
+ set.add(rootKeys[i]);
+ }
+ count = 0;
+ for (KeyValue k: set) {
+ assertTrue(count++ == k.getTimestamp());
+ }
+ }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/TestMasterAdmin.java b/src/test/org/apache/hadoop/hbase/TestMasterAdmin.java
new file mode 100644
index 0000000..19a6e6d
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/TestMasterAdmin.java
@@ -0,0 +1,86 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/** tests administrative functions */
+public class TestMasterAdmin extends HBaseClusterTestCase {
+ private final Log LOG = LogFactory.getLog(this.getClass().getName());
+
+ private static final byte [] COLUMN_NAME = Bytes.toBytes("col1:");
+ private static HTableDescriptor testDesc;
+ static {
+ testDesc = new HTableDescriptor("testadmin1");
+ testDesc.addFamily(new HColumnDescriptor(COLUMN_NAME));
+ }
+
+ private HBaseAdmin admin;
+
+ /** constructor */
+ public TestMasterAdmin() {
+ super();
+ admin = null;
+
+ // Make the thread wake frequency a little slower so other threads
+ // can run
+ conf.setInt("hbase.server.thread.wakefrequency", 2000);
+ }
+
+ /** @throws Exception */
+ public void testMasterAdmin() throws Exception {
+ admin = new HBaseAdmin(conf);
+ // Check that an exception is thrown if the descriptor has no table name.
+ // HADOOP-2156.
+ boolean exception = false;
+ try {
+ admin.createTable(new HTableDescriptor());
+ } catch (IllegalArgumentException e) {
+ exception = true;
+ }
+ assertTrue(exception);
+ admin.createTable(testDesc);
+ LOG.info("Table " + testDesc.getNameAsString() + " created");
+ admin.disableTable(testDesc.getName());
+ LOG.info("Table " + testDesc.getNameAsString() + " disabled");
+ try {
+ @SuppressWarnings("unused")
+ HTable table = new HTable(conf, testDesc.getName());
+ } catch (org.apache.hadoop.hbase.client.RegionOfflineException e) {
+ // Expected
+ }
+
+ admin.addColumn(testDesc.getName(), new HColumnDescriptor("col2:"));
+ admin.enableTable(testDesc.getName());
+ try {
+ admin.deleteColumn(testDesc.getName(), Bytes.toBytes("col2:"));
+ } catch(TableNotDisabledException e) {
+ // Expected
+ }
+
+ admin.disableTable(testDesc.getName());
+ admin.deleteColumn(testDesc.getName(), Bytes.toBytes("col2:"));
+ admin.deleteTable(testDesc.getName());
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/TestMergeMeta.java b/src/test/org/apache/hadoop/hbase/TestMergeMeta.java
new file mode 100644
index 0000000..9ee211a
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/TestMergeMeta.java
@@ -0,0 +1,44 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+
+/** Tests region merging */
+public class TestMergeMeta extends AbstractMergeTestBase {
+
+ /**
+ * Constructor.
+ * @throws Exception
+ */
+ public TestMergeMeta() throws Exception {
+ super(false);
+ conf.setLong("hbase.client.pause", 1 * 1000);
+ conf.setInt("hbase.client.retries.number", 2);
+ }
+
+ /**
+ * test case
+ * @throws IOException
+ */
+ public void testMergeMeta() throws IOException {
+ assertNotNull(dfsCluster);
+ HMerge.merge(conf, dfsCluster.getFileSystem(), HConstants.META_TABLE_NAME);
+ }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/TestMergeTable.java b/src/test/org/apache/hadoop/hbase/TestMergeTable.java
new file mode 100644
index 0000000..ec37f9b
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/TestMergeTable.java
@@ -0,0 +1,37 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+
+/**
+ * Tests merging a normal table's regions
+ */
+public class TestMergeTable extends AbstractMergeTestBase {
+
+ /**
+ * Test case
+ * @throws IOException
+ */
+ public void testMergeTable() throws IOException {
+ assertNotNull(dfsCluster);
+ HMerge.merge(conf, dfsCluster.getFileSystem(), desc.getName());
+ }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/TestRegionRebalancing.java b/src/test/org/apache/hadoop/hbase/TestRegionRebalancing.java
new file mode 100644
index 0000000..9aabda7
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/TestRegionRebalancing.java
@@ -0,0 +1,219 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.ArrayList;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.client.HTable;
+
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Test whether region rebalancing works. (HBASE-71)
+ */
+public class TestRegionRebalancing extends HBaseClusterTestCase {
+ final Log LOG = LogFactory.getLog(this.getClass().getName());
+ HTable table;
+
+ HTableDescriptor desc;
+
+ final byte[] FIVE_HUNDRED_KBYTES;
+
+ final byte [] COLUMN_NAME = Bytes.toBytes("col:");
+
+ /** constructor */
+ public TestRegionRebalancing() {
+ super(1);
+ FIVE_HUNDRED_KBYTES = new byte[500 * 1024];
+ for (int i = 0; i < 500 * 1024; i++) {
+ FIVE_HUNDRED_KBYTES[i] = 'x';
+ }
+
+ desc = new HTableDescriptor("test");
+ desc.addFamily(new HColumnDescriptor(COLUMN_NAME));
+ }
+
+ /**
+ * Before the hbase cluster starts up, create some dummy regions.
+ */
+ @Override
+ public void preHBaseClusterSetup() throws IOException {
+ // create a 20-region table by writing directly to disk
+ List<byte []> startKeys = new ArrayList<byte []>();
+ startKeys.add(null);
+ for (int i = 10; i < 29; i++) {
+ startKeys.add(Bytes.toBytes("row_" + i));
+ }
+ startKeys.add(null);
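+ // 21 boundaries (null, row_10 through row_28, null) yield 20 regions.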
+ LOG.info(startKeys.size() + " start keys generated");
+
+ List<HRegion> regions = new ArrayList<HRegion>();
+ for (int i = 0; i < 20; i++) {
+ regions.add(createAregion(startKeys.get(i), startKeys.get(i+1)));
+ }
+
+ // Now create the root and meta regions and insert the data regions
+ // created above into the meta
+
+ createRootAndMetaRegions();
+ for (HRegion region : regions) {
+ HRegion.addRegionToMETA(meta, region);
+ }
+ closeRootAndMeta();
+ }
+
+ /**
+ * For HBASE-71. Try a few different configurations of starting and stopping
+ * region servers to see if the assignment of regions stays reasonably balanced.
+ * @throws IOException
+ */
+ public void testRebalancing() throws IOException {
+ table = new HTable(conf, "test");
+ assertEquals("Test table should have 20 regions",
+ 20, table.getStartKeys().length);
+
+ // verify that the region assignments are balanced to start out
+ assertRegionsAreBalanced();
+
+ LOG.debug("Adding 2nd region server.");
+ // add a region server - total of 2
+ cluster.startRegionServer();
+ assertRegionsAreBalanced();
+
+ // add a region server - total of 3
+ LOG.debug("Adding 3rd region server.");
+ cluster.startRegionServer();
+ assertRegionsAreBalanced();
+
+ // kill a region server - total of 2
+ LOG.debug("Killing the 3rd region server.");
+ cluster.stopRegionServer(2, false);
+ cluster.waitOnRegionServer(2);
+ assertRegionsAreBalanced();
+
+ // start two more region servers - total of 4
+ LOG.debug("Adding 3rd region server");
+ cluster.startRegionServer();
+ LOG.debug("Adding 4th region server");
+ cluster.startRegionServer();
+ assertRegionsAreBalanced();
+ }
+
+ /** figure out how many regions are currently being served. */
+ private int getRegionCount() {
+ int total = 0;
+ for (HRegionServer server : getOnlineRegionServers()) {
+ total += server.getOnlineRegions().size();
+ }
+ return total;
+ }
+
+ /**
+ * Determine if regions are balanced. Figure out the total, divide by the
+ * number of online servers, then test if each server is within +/- 2 of the
+ * average, rounded up.
+ */
+ private void assertRegionsAreBalanced() {
+ boolean success = false;
+
+ for (int i = 0; i < 5; i++) {
+ success = true;
+ // make sure all the regions are reassigned before we test balance
+ waitForAllRegionsAssigned();
+
+ int regionCount = getRegionCount();
+ List<HRegionServer> servers = getOnlineRegionServers();
+ double avg = Math.ceil((double)regionCount / (double)servers.size());
+ LOG.debug("There are " + servers.size() + " servers and " + regionCount
+ + " regions. Load Average: " + avg);
+
+ for (HRegionServer server : servers) {
+ int serverLoad = server.getOnlineRegions().size();
+ LOG.debug(server.hashCode() + " Avg: " + avg + " actual: " + serverLoad);
+ if (!(serverLoad <= avg + 2 && serverLoad >= avg - 2)) {
+ success = false;
+ }
+ }
+
+ if (!success) {
+ // one or more servers are not balanced. sleep a little to give it a
+ // chance to catch up. then, go back to the retry loop.
+ try {
+ Thread.sleep(10000);
+ } catch (InterruptedException e) {}
+
+ continue;
+ }
+
+ // if we get here, all servers were balanced, so we should just return.
+ return;
+ }
+ // if we get here, we tried 5 times and never got to short circuit out of
+ // the retry loop, so this is a failure.
+ fail("After 5 attempts, region assignments were not balanced.");
+ }
+
+ private List<HRegionServer> getOnlineRegionServers() {
+ List<HRegionServer> list = new ArrayList<HRegionServer>();
+ for (LocalHBaseCluster.RegionServerThread rst : cluster.getRegionThreads()) {
+ if (rst.getRegionServer().isOnline()) {
+ list.add(rst.getRegionServer());
+ }
+ }
+ return list;
+ }
+
+ /**
+ * Wait until all the regions are assigned.
+ */
+ private void waitForAllRegionsAssigned() {
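+ // 20 user regions plus -ROOT- and .META. makes 22.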
+ while (getRegionCount() < 22) {
+ // while (!cluster.getMaster().allRegionsAssigned()) {
+ LOG.debug("Waiting for there to be 22 regions, but there are " + getRegionCount() + " right now.");
+ try {
+ Thread.sleep(1000);
+ } catch (InterruptedException e) {}
+ }
+ }
+
+ /**
+ * create a region with the specified start and end key and exactly one row
+ * inside.
+ */
+ private HRegion createAregion(byte [] startKey, byte [] endKey)
+ throws IOException {
+ HRegion region = createNewHRegion(desc, startKey, endKey);
+ byte [] keyToWrite = startKey == null ? Bytes.toBytes("row_000") : startKey;
+ BatchUpdate bu = new BatchUpdate(keyToWrite);
+ bu.put(COLUMN_NAME, "test".getBytes());
+ region.batchUpdate(bu, null);
+ region.close();
+ region.getLog().closeAndDelete();
+ return region;
+ }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/TestScanMultipleVersions.java b/src/test/org/apache/hadoop/hbase/TestScanMultipleVersions.java
new file mode 100644
index 0000000..37a89ce
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/TestScanMultipleVersions.java
@@ -0,0 +1,177 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Scanner;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Regression test for HBASE-613
+ */
+public class TestScanMultipleVersions extends HBaseClusterTestCase {
+ private final byte[] TABLE_NAME = Bytes.toBytes("TestScanMultipleVersions");
+ private final HRegionInfo[] INFOS = new HRegionInfo[2];
+ private final HRegion[] REGIONS = new HRegion[2];
+ private final byte[][] ROWS = new byte[][] {
+ Bytes.toBytes("row_0200"),
+ Bytes.toBytes("row_0800")
+ };
+ private final long[] TIMESTAMPS = new long[] {
+ 100L,
+ 1000L
+ };
+ private HTableDescriptor desc = null;
+
+ @Override
+ protected void preHBaseClusterSetup() throws Exception {
+ testDir = new Path(conf.get(HConstants.HBASE_DIR));
+
+ // Create table description
+
+ this.desc = new HTableDescriptor(TABLE_NAME);
+ this.desc.addFamily(new HColumnDescriptor(HConstants.COLUMN_FAMILY));
+
+ // Region 0 will contain the key range [,row_0500)
+ INFOS[0] = new HRegionInfo(this.desc, HConstants.EMPTY_START_ROW,
+ Bytes.toBytes("row_0500"));
+ // Region 1 will contain the key range [row_0500,)
+ INFOS[1] = new HRegionInfo(this.desc, Bytes.toBytes("row_0500"),
+ HConstants.EMPTY_END_ROW);
+
+ // Create root and meta regions
+ createRootAndMetaRegions();
+ // Create the regions
+ for (int i = 0; i < REGIONS.length; i++) {
+ REGIONS[i] =
+ HRegion.createHRegion(this.INFOS[i], this.testDir, this.conf);
+ // Insert data
+ for (int j = 0; j < TIMESTAMPS.length; j++) {
+ BatchUpdate b = new BatchUpdate(ROWS[i], TIMESTAMPS[j]);
+ b.put(HConstants.COLUMN_FAMILY, Bytes.toBytes(TIMESTAMPS[j]));
+ REGIONS[i].batchUpdate(b, null);
+ }
+ // Insert the region we created into the meta
+ HRegion.addRegionToMETA(meta, REGIONS[i]);
+ // Close region
+ REGIONS[i].close();
+ REGIONS[i].getLog().closeAndDelete();
+ }
+ // Close root and meta regions
+ closeRootAndMeta();
+ }
+
+ /**
+ * @throws Exception
+ */
+ public void testScanMultipleVersions() throws Exception {
+ // At this point we have created multiple regions and both HDFS and HBase
+ // are running. There are 5 cases we have to test. Each is described below.
+ HTable t = new HTable(conf, TABLE_NAME);
+ for (int i = 0; i < ROWS.length; i++) {
+ for (int j = 0; j < TIMESTAMPS.length; j++) {
+ Cell [] cells =
+ t.get(ROWS[i], HConstants.COLUMN_FAMILY, TIMESTAMPS[j], 1);
+ assertTrue(cells != null && cells.length == 1);
+ System.out.println("Row=" + Bytes.toString(ROWS[i]) + ", cell=" +
+ cells[0]);
+ }
+ }
+
+ // Case 1: scan with LATEST_TIMESTAMP. Should get two rows
+ int count = 0;
+ Scanner s = t.getScanner(HConstants.COLUMN_FAMILY_ARRAY);
+ try {
+ for (RowResult rr = null; (rr = s.next()) != null;) {
+ System.out.println(rr.toString());
+ count += 1;
+ }
+ assertEquals("Number of rows should be 2", 2, count);
+ } finally {
+ s.close();
+ }
+
+ // Case 2: Scan with a timestamp greater than most recent timestamp
+ // (in this case > 1000 and < LATEST_TIMESTAMP). Should get 2 rows.
+
+ count = 0;
+ s = t.getScanner(HConstants.COLUMN_FAMILY_ARRAY, HConstants.EMPTY_START_ROW,
+ 10000L);
+ try {
+ while (s.next() != null) {
+ count += 1;
+ }
+ assertEquals("Number of rows should be 2", 2, count);
+ } finally {
+ s.close();
+ }
+
+ // Case 3: scan with timestamp equal to most recent timestamp
+ // (in this case == 1000). Should get 2 rows.
+
+ count = 0;
+ s = t.getScanner(HConstants.COLUMN_FAMILY_ARRAY, HConstants.EMPTY_START_ROW,
+ 1000L);
+ try {
+ while (s.next() != null) {
+ count += 1;
+ }
+ assertEquals("Number of rows should be 2", 2, count);
+ } finally {
+ s.close();
+ }
+
+ // Case 4: scan with timestamp greater than first timestamp but less than
+ // second timestamp (100 < timestamp < 1000). Should get 2 rows.
+
+ count = 0;
+ s = t.getScanner(HConstants.COLUMN_FAMILY_ARRAY, HConstants.EMPTY_START_ROW,
+ 500L);
+ try {
+ while (s.next() != null) {
+ count += 1;
+ }
+ assertEquals("Number of rows should be 2", 2, count);
+ } finally {
+ s.close();
+ }
+
+ // Case 5: scan with timestamp equal to first timestamp (100)
+ // Should get 2 rows.
+
+ count = 0;
+ s = t.getScanner(HConstants.COLUMN_FAMILY_ARRAY, HConstants.EMPTY_START_ROW,
+ 100L);
+ try {
+ while (s.next() != null) {
+ count += 1;
+ }
+ assertEquals("Number of rows should be 2", 2, count);
+ } finally {
+ s.close();
+ }
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/TestScannerAPI.java b/src/test/org/apache/hadoop/hbase/TestScannerAPI.java
new file mode 100644
index 0000000..85972a6
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/TestScannerAPI.java
@@ -0,0 +1,166 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.List;
+import java.util.Map;
+import java.util.SortedMap;
+import java.util.TreeMap;
+
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Scanner;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/** test the scanner API at all levels */
+public class TestScannerAPI extends HBaseClusterTestCase {
+ private final byte [][] columns = Bytes.toByteArrays(new String[] {
+ "a:", "b:"
+ });
+ private final byte [] startRow = Bytes.toBytes("0");
+
+ private final TreeMap<byte [], SortedMap<byte [], Cell>> values =
+ new TreeMap<byte [], SortedMap<byte [], Cell>>(Bytes.BYTES_COMPARATOR);
+
+ /**
+ * @throws Exception
+ */
+ public TestScannerAPI() throws Exception {
+ super();
+ try {
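+ // Expected contents: row "1" carries column a:1; row "2" carries a:2 and b:2.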
+ TreeMap<byte [], Cell> columns =
+ new TreeMap<byte [], Cell>(Bytes.BYTES_COMPARATOR);
+ columns.put(Bytes.toBytes("a:1"),
+ new Cell(Bytes.toBytes("1"), HConstants.LATEST_TIMESTAMP));
+ values.put(Bytes.toBytes("1"), columns);
+ columns = new TreeMap<byte [], Cell>(Bytes.BYTES_COMPARATOR);
+ columns.put(Bytes.toBytes("a:2"),
+ new Cell(Bytes.toBytes("2"), HConstants.LATEST_TIMESTAMP));
+ columns.put(Bytes.toBytes("b:2"),
+ new Cell(Bytes.toBytes("2"), HConstants.LATEST_TIMESTAMP));
+ } catch (Exception e) {
+ e.printStackTrace();
+ throw e;
+ }
+ }
+
+ /**
+ * @throws IOException
+ */
+ public void testApi() throws IOException {
+ final String tableName = getName();
+
+ // Create table
+
+ HBaseAdmin admin = new HBaseAdmin(conf);
+ HTableDescriptor tableDesc = new HTableDescriptor(tableName);
+ for (int i = 0; i < columns.length; i++) {
+ tableDesc.addFamily(new HColumnDescriptor(columns[i]));
+ }
+ admin.createTable(tableDesc);
+
+ // Insert values
+
+ HTable table = new HTable(conf, getName());
+
+ for (Map.Entry<byte [], SortedMap<byte [], Cell>> row: values.entrySet()) {
+ BatchUpdate b = new BatchUpdate(row.getKey());
+ for (Map.Entry<byte [], Cell> val: row.getValue().entrySet()) {
+ b.put(val.getKey(), val.getValue().getValue());
+ }
+ table.commit(b);
+ }
+
+ HRegion region = null;
+ try {
+ Collection<HRegion> regions =
+ cluster.getRegionThreads().get(0).getRegionServer().getOnlineRegions();
+ for (HRegion r: regions) {
+ if (!r.getRegionInfo().isMetaRegion()) {
+ region = r;
+ }
+ }
+ } catch (Exception e) {
+ e.printStackTrace();
+ IOException iox = new IOException("error finding region");
+ iox.initCause(e);
+ throw iox;
+ }
+ @SuppressWarnings("null")
+ ScannerIncommon scanner = new InternalScannerIncommon(
+ region.getScanner(columns, startRow, System.currentTimeMillis(), null));
+ try {
+ verify(scanner);
+ } finally {
+ scanner.close();
+ }
+
+ scanner = new ClientScannerIncommon(table.getScanner(columns, startRow));
+ try {
+ verify(scanner);
+ } finally {
+ scanner.close();
+ }
+ Scanner scanner2 = table.getScanner(columns, startRow);
+ try {
+ for (RowResult r : scanner2) {
+ assertTrue("row key", values.containsKey(r.getRow()));
+
+ SortedMap<byte [], Cell> columnValues = values.get(r.getRow());
+ assertEquals(columnValues.size(), r.size());
+ for (Map.Entry<byte [], Cell> e: columnValues.entrySet()) {
+ byte [] column = e.getKey();
+ assertTrue("column", r.containsKey(column));
+ assertTrue("value", Arrays.equals(columnValues.get(column).getValue(),
+ r.get(column).getValue()));
+ }
+ }
+ } finally {
+ scanner2.close();
+ }
+ }
+
+ private void verify(ScannerIncommon scanner) throws IOException {
+ List<KeyValue> results = new ArrayList<KeyValue>();
+ while (scanner.next(results)) {
+ assertTrue("row key", values.containsKey(results.get(0).getRow()));
+ // TODO FIX.
+// SortedMap<byte [], Cell> columnValues = values.get(row);
+// assertEquals(columnValues.size(), results.size());
+// for (Map.Entry<byte [], Cell> e: columnValues.entrySet()) {
+// byte [] column = e.getKey();
+// assertTrue("column", results.containsKey(column));
+// assertTrue("value", Arrays.equals(columnValues.get(column).getValue(),
+// results.get(column).getValue()));
+// }
+//
+ results.clear();
+ }
+ }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/TestSerialization.java b/src/test/org/apache/hadoop/hbase/TestSerialization.java
new file mode 100644
index 0000000..3d66b43
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/TestSerialization.java
@@ -0,0 +1,174 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+
+import org.apache.hadoop.hbase.io.BatchOperation;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.HbaseMapWritable;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Writables;
+
+/**
+ * Test HBase Writables serializations
+ */
+public class TestSerialization extends HBaseTestCase {
+
+ @Override
+ protected void setUp() throws Exception {
+ super.setUp();
+ }
+
+ @Override
+ protected void tearDown() throws Exception {
+ super.tearDown();
+ }
+
+ public void testKeyValue() throws Exception {
+ byte [] row = Bytes.toBytes(getName());
+ byte [] column = Bytes.toBytes(getName() + ":" + getName());
+ KeyValue original = new KeyValue(row, column);
+ byte [] bytes = Writables.getBytes(original);
+ KeyValue newone = (KeyValue)Writables.getWritable(bytes, new KeyValue());
+ assertTrue(KeyValue.COMPARATOR.compare(original, newone) == 0);
+ }
+
+ public void testHbaseMapWritable() throws Exception {
+ HbaseMapWritable<byte [], byte []> hmw =
+ new HbaseMapWritable<byte[], byte[]>();
+ hmw.put("key".getBytes(), "value".getBytes());
+ byte [] bytes = Writables.getBytes(hmw);
+ hmw = (HbaseMapWritable<byte[], byte[]>)
+ Writables.getWritable(bytes, new HbaseMapWritable<byte [], byte []>());
+ assertTrue(hmw.size() == 1);
+ assertTrue(Bytes.equals("value".getBytes(), hmw.get("key".getBytes())));
+ }
+
+ public void testHMsg() throws Exception {
+ HMsg m = new HMsg(HMsg.Type.MSG_REGIONSERVER_QUIESCE);
+ byte [] mb = Writables.getBytes(m);
+ HMsg deserializedHMsg = (HMsg)Writables.getWritable(mb, new HMsg());
+ assertTrue(m.equals(deserializedHMsg));
+ m = new HMsg(HMsg.Type.MSG_REGIONSERVER_QUIESCE,
+ new HRegionInfo(new HTableDescriptor(getName()),
+ HConstants.EMPTY_BYTE_ARRAY, HConstants.EMPTY_BYTE_ARRAY),
+ "Some message".getBytes());
+ mb = Writables.getBytes(m);
+ deserializedHMsg = (HMsg)Writables.getWritable(mb, new HMsg());
+ assertTrue(m.equals(deserializedHMsg));
+ }
+
+ public void testTableDescriptor() throws Exception {
+ HTableDescriptor htd = createTableDescriptor(getName());
+ byte [] mb = Writables.getBytes(htd);
+ HTableDescriptor deserializedHtd =
+ (HTableDescriptor)Writables.getWritable(mb, new HTableDescriptor());
+ assertEquals(htd.getNameAsString(), deserializedHtd.getNameAsString());
+ }
+
+ /**
+ * Test RowResult serialization
+ * @throws Exception
+ */
+ public void testRowResult() throws Exception {
+ HbaseMapWritable<byte [], Cell> m = new HbaseMapWritable<byte [], Cell>();
+ byte [] b = Bytes.toBytes(getName());
+ m.put(b, new Cell(b, System.currentTimeMillis()));
+ RowResult rr = new RowResult(b, m);
+ byte [] mb = Writables.getBytes(rr);
+ RowResult deserializedRr =
+ (RowResult)Writables.getWritable(mb, new RowResult());
+ assertTrue(Bytes.equals(rr.getRow(), deserializedRr.getRow()));
+ byte [] one = rr.get(b).getValue();
+ byte [] two = deserializedRr.get(b).getValue();
+ assertTrue(Bytes.equals(one, two));
+ Writables.copyWritable(rr, deserializedRr);
+ one = rr.get(b).getValue();
+ two = deserializedRr.get(b).getValue();
+ assertTrue(Bytes.equals(one, two));
+
+ }
+
+ /**
+ * Test RegionInfo serialization
+ * @throws Exception
+ */
+ public void testRegionInfo() throws Exception {
+ HTableDescriptor htd = new HTableDescriptor(getName());
+ String [] families = new String [] {"info:", "anchor:"};
+ for (int i = 0; i < families.length; i++) {
+ htd.addFamily(new HColumnDescriptor(families[i]));
+ }
+ HRegionInfo hri = new HRegionInfo(htd,
+ HConstants.EMPTY_START_ROW, HConstants.EMPTY_END_ROW);
+ byte [] hrib = Writables.getBytes(hri);
+ HRegionInfo deserializedHri =
+ (HRegionInfo)Writables.getWritable(hrib, new HRegionInfo());
+ assertEquals(hri.getEncodedName(), deserializedHri.getEncodedName());
+ assertEquals(hri.getTableDesc().getFamilies().size(),
+ deserializedHri.getTableDesc().getFamilies().size());
+ }
+
+ /**
+ * Test ServerInfo serialization
+ * @throws Exception
+ */
+ public void testServerInfo() throws Exception {
+ HServerInfo hsi = new HServerInfo(new HServerAddress("0.0.0.0:123"), -1,
+ 1245, "default name");
+ byte [] b = Writables.getBytes(hsi);
+ HServerInfo deserializedHsi =
+ (HServerInfo)Writables.getWritable(b, new HServerInfo());
+ assertTrue(hsi.equals(deserializedHsi));
+ }
+
+ /**
+ * Test BatchUpdate serialization
+ * @throws Exception
+ */
+ public void testBatchUpdate() throws Exception {
+ // Add row named 'testName'.
+ BatchUpdate bu = new BatchUpdate(getName());
+ // Add a column named same as row.
+ bu.put(getName(), getName().getBytes());
+ byte [] b = Writables.getBytes(bu);
+ BatchUpdate bubu =
+ (BatchUpdate)Writables.getWritable(b, new BatchUpdate());
+ // Assert rows are same.
+ assertTrue(Bytes.equals(bu.getRow(), bubu.getRow()));
+ // Assert has same number of BatchOperations.
+ int firstCount = 0;
+ for (BatchOperation bo: bubu) {
+ firstCount++;
+ }
+ // Now deserialize again into same instance to ensure we're not
+ // accumulating BatchOperations on each deserialization.
+ BatchUpdate bububu = (BatchUpdate)Writables.getWritable(b, bubu);
+ // Assert rows are same again.
+ assertTrue(Bytes.equals(bu.getRow(), bububu.getRow()));
+ int secondCount = 0;
+ for (BatchOperation bo: bububu) {
+ secondCount++;
+ }
+ assertEquals(firstCount, secondCount);
+ }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/TestTable.java b/src/test/org/apache/hadoop/hbase/TestTable.java
new file mode 100644
index 0000000..196c069
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/TestTable.java
@@ -0,0 +1,180 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+import java.io.IOException;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/** Tests table creation restrictions. */
+public class TestTable extends HBaseClusterTestCase {
+ /**
+ * the test
+ * @throws IOException
+ */
+ public void testCreateTable() throws IOException {
+ final HBaseAdmin admin = new HBaseAdmin(conf);
+ String msg = null;
+ try {
+ admin.createTable(HTableDescriptor.ROOT_TABLEDESC);
+ } catch (IllegalArgumentException e) {
+ msg = e.toString();
+ }
+ assertTrue("Unexcepted exception message " + msg, msg != null &&
+ msg.startsWith(IllegalArgumentException.class.getName()) &&
+ msg.contains(HTableDescriptor.ROOT_TABLEDESC.getNameAsString()));
+
+ msg = null;
+ try {
+ admin.createTable(HTableDescriptor.META_TABLEDESC);
+ } catch(IllegalArgumentException e) {
+ msg = e.toString();
+ }
+ assertTrue("Unexcepted exception message " + msg, msg != null &&
+ msg.startsWith(IllegalArgumentException.class.getName()) &&
+ msg.contains(HTableDescriptor.META_TABLEDESC.getNameAsString()));
+
+ // Try doing a duplicate database create.
+ msg = null;
+ HTableDescriptor desc = new HTableDescriptor(getName());
+ desc.addFamily(new HColumnDescriptor(HConstants.COLUMN_FAMILY));
+ admin.createTable(desc);
+ assertTrue("First table creation completed", admin.listTables().length == 1);
+ boolean gotException = false;
+ try {
+ admin.createTable(desc);
+ } catch (TableExistsException e) {
+ gotException = true;
+ msg = e.getMessage();
+ }
+ assertTrue("Didn't get a TableExistsException!", gotException);
+ assertTrue("Unexpected exception message " + msg, msg != null &&
+ msg.contains(getName()));
+
+ // Now try and do concurrent creation with a bunch of threads.
+ final HTableDescriptor threadDesc =
+ new HTableDescriptor("threaded_" + getName());
+ threadDesc.addFamily(new HColumnDescriptor(HConstants.COLUMN_FAMILY));
+ int count = 10;
+ Thread [] threads = new Thread [count];
+ final AtomicInteger successes = new AtomicInteger(0);
+ final AtomicInteger failures = new AtomicInteger(0);
+ for (int i = 0; i < count; i++) {
+ threads[i] = new Thread(Integer.toString(i)) {
+ @Override
+ public void run() {
+ try {
+ admin.createTable(threadDesc);
+ successes.incrementAndGet();
+ } catch (TableExistsException e) {
+ failures.incrementAndGet();
+ } catch (IOException e) {
+ System.out.println("Got an IOException... " + e);
+ fail();
+ }
+ }
+ };
+ }
+ for (int i = 0; i < count; i++) {
+ threads[i].start();
+ }
+ for (int i = 0; i < count; i++) {
+ while(threads[i].isAlive()) {
+ try {
+ Thread.sleep(1000);
+ } catch (InterruptedException e) {
+ // continue
+ }
+ }
+ }
+ // All threads are now dead. Count up how many tables were created and
+ // how many failed w/ appropriate exception.
+ assertTrue(successes.get() == 1);
+ assertTrue(failures.get() == (count - 1));
+ }
+
+ /**
+ * Test for hadoop-1581 'HBASE: Unopenable tablename bug'.
+ * @throws Exception
+ */
+ public void testTableNameClash() throws Exception {
+ HBaseAdmin admin = new HBaseAdmin(conf);
+ admin.createTable(new HTableDescriptor(getName() + "SOMEUPPERCASE"));
+ admin.createTable(new HTableDescriptor(getName()));
+ // Before fix, below would fail throwing a NoServerForRegionException.
+ @SuppressWarnings("unused")
+ HTable table = new HTable(conf, getName());
+ }
+
+ /**
+ * Test read only tables
+ * @throws Exception
+ */
+ public void testReadOnlyTable() throws Exception {
+ HBaseAdmin admin = new HBaseAdmin(conf);
+ HTableDescriptor desc = new HTableDescriptor(getName());
+ byte[] colName = Bytes.toBytes("test:");
+ desc.addFamily(new HColumnDescriptor(colName));
+ desc.setReadOnly(true);
+ admin.createTable(desc);
+ HTable table = new HTable(conf, getName());
+ try {
+ byte[] value = Bytes.toBytes("somedata");
+ BatchUpdate update = new BatchUpdate();
+ update.put(colName, value);
+ table.commit(update);
+ fail("BatchUpdate on read only table succeeded");
+ } catch (Exception e) {
+ // expected
+ }
+ }
+
+ /**
+ * Test that user table names can contain '-' and '.' so long as they do not
+ * start with either character. HBASE-771
+ */
+ public void testTableNames() {
+ byte[][] illegalNames = new byte[][] {
+ Bytes.toBytes("-bad"),
+ Bytes.toBytes(".bad"),
+ HConstants.ROOT_TABLE_NAME,
+ HConstants.META_TABLE_NAME
+ };
+ for (int i = 0; i < illegalNames.length; i++) {
+ try {
+ new HTableDescriptor(illegalNames[i]);
+ fail("Did not detect '" + Bytes.toString(illegalNames[i]) +
+ "' as an illegal user table name");
+ } catch (IllegalArgumentException e) {
+ // expected
+ }
+ }
+ byte[] legalName = Bytes.toBytes("g-oo.d");
+ try {
+ new HTableDescriptor(legalName);
+ } catch (IllegalArgumentException e) {
+ fail("Legal user table name: '" + Bytes.toString(legalName) +
+ "' caused IllegalArgumentException: " + e.getMessage());
+ }
+ }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/TestZooKeeper.java b/src/test/org/apache/hadoop/hbase/TestZooKeeper.java
new file mode 100644
index 0000000..da49ebf
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/TestZooKeeper.java
@@ -0,0 +1,152 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HConnection;
+import org.apache.hadoop.hbase.client.HConnectionManager;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper;
+import org.apache.zookeeper.WatchedEvent;
+import org.apache.zookeeper.Watcher;
+import org.apache.zookeeper.ZooKeeper;
+
+/**
+ * Tests HBase's ZooKeeper integration: safe mode, root region location and
+ * session expiry handling.
+ */
+public class TestZooKeeper extends HBaseClusterTestCase {
+ private static class EmptyWatcher implements Watcher {
+ public EmptyWatcher() {}
+ public void process(WatchedEvent event) {}
+ }
+
+ @Override
+ protected void setUp() throws Exception {
+ setOpenMetaTable(false);
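+ // Skip opening .META. during setup; the tests below expect the cluster to
+ // still be in safe mode when they start.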
+ super.setUp();
+ }
+
+ /**
+ * @throws IOException
+ */
+ public void testWritesRootRegionLocation() throws IOException {
+ ZooKeeperWrapper zooKeeper = new ZooKeeperWrapper(conf);
+
+ boolean outOfSafeMode = zooKeeper.checkOutOfSafeMode();
+ assertFalse(outOfSafeMode);
+
+ HServerAddress zooKeeperRootAddress = zooKeeper.readRootRegionLocation();
+ assertNull(zooKeeperRootAddress);
+
+ HMaster master = cluster.getMaster();
+ HServerAddress masterRootAddress = master.getRootRegionLocation();
+ assertNull(masterRootAddress);
+
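+ // Opening a table forces root and meta to be assigned; afterwards the cluster
+ // should be out of safe mode and the root location published in ZooKeeper.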
+ new HTable(conf, HConstants.META_TABLE_NAME);
+
+ outOfSafeMode = zooKeeper.checkOutOfSafeMode();
+ assertTrue(outOfSafeMode);
+
+ zooKeeperRootAddress = zooKeeper.readRootRegionLocation();
+ assertNotNull(zooKeeperRootAddress);
+
+ masterRootAddress = master.getRootRegionLocation();
+ assertEquals(masterRootAddress, zooKeeperRootAddress);
+ }
+
+ /**
+ * @throws IOException
+ */
+ public void testParentExists() throws IOException {
+ conf.set("zookeeper.znode.safemode", "/a/b/c/d/e");
+ ZooKeeperWrapper zooKeeper = new ZooKeeperWrapper(conf);
+ assertTrue(zooKeeper.writeOutOfSafeMode());
+ }
+
+ /**
+ * See HBASE-1232 and http://wiki.apache.org/hadoop/ZooKeeper/FAQ#4.
+ * @throws IOException
+ * @throws InterruptedException
+ */
+ public void testClientSessionExpired() throws IOException, InterruptedException {
+ new HTable(conf, HConstants.META_TABLE_NAME);
+
+ String quorumServers = ZooKeeperWrapper.getQuorumServers();
+ int sessionTimeout = conf.getInt("zookeeper.session.timeout", 2 * 1000);
+ Watcher watcher = new EmptyWatcher();
+ HConnection connection = HConnectionManager.getConnection(conf);
+ ZooKeeperWrapper connectionZK = connection.getZooKeeperWrapper();
+ long sessionID = connectionZK.getSessionID();
+ byte[] password = connectionZK.getSessionPassword();
+
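+ // Open a second handle with the connection's session id and password, then
+ // close it; this expires the original session on the server side.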
+ ZooKeeper zk = new ZooKeeper(quorumServers, sessionTimeout, watcher, sessionID, password);
+ zk.close();
+
+ Thread.sleep(sessionTimeout * 3);
+
+ System.err.println("ZooKeeper should have timed out");
+ connection.relocateRegion(HConstants.ROOT_TABLE_NAME, HConstants.EMPTY_BYTE_ARRAY);
+ }
+
+ /**
+ *
+ */
+ public void testRegionServerSessionExpired() {
+ try {
+ new HTable(conf, HConstants.META_TABLE_NAME);
+
+ String quorumServers = ZooKeeperWrapper.getQuorumServers();
+ int sessionTimeout = conf.getInt("zookeeper.session.timeout", 2 * 1000);
+
+ Watcher watcher = new EmptyWatcher();
+ HRegionServer rs = cluster.getRegionServer(0);
+ ZooKeeperWrapper rsZK = rs.getZooKeeperWrapper();
+ long sessionID = rsZK.getSessionID();
+ byte[] password = rsZK.getSessionPassword();
+
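+ // Expire the region server's ZooKeeper session the same way, then verify the
+ // cluster still serves table creates, reads and writes.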
+ ZooKeeper zk = new ZooKeeper(quorumServers, sessionTimeout, watcher, sessionID, password);
+ zk.close();
+
+ Thread.sleep(sessionTimeout * 3);
+
+ new HTable(conf, HConstants.META_TABLE_NAME);
+
+ HBaseAdmin admin = new HBaseAdmin(conf);
+ HTableDescriptor desc = new HTableDescriptor("test");
+ HColumnDescriptor family = new HColumnDescriptor("fam:");
+ desc.addFamily(family);
+ admin.createTable(desc);
+
+ HTable table = new HTable("test");
+ BatchUpdate batchUpdate = new BatchUpdate("testrow");
+ batchUpdate.put("fam:col", Bytes.toBytes("testdata"));
+ table.commit(batchUpdate);
+ } catch (Exception e) {
+ e.printStackTrace();
+ fail();
+ }
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/TimestampTestBase.java b/src/test/org/apache/hadoop/hbase/TimestampTestBase.java
new file mode 100644
index 0000000..2ceca09
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/TimestampTestBase.java
@@ -0,0 +1,230 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Tests puts, gets and scans with user-specifiable timestamps, and does the
+ * same in the presence of deletes. The test cores are written so they can be
+ * run against an HRegion and against an HTable: i.e. both locally and remotely.
+ */
+public class TimestampTestBase extends HBaseTestCase {
+ private static final long T0 = 10L;
+ private static final long T1 = 100L;
+ private static final long T2 = 200L;
+
+ private static final String COLUMN_NAME = "contents:";
+
+ private static final byte [] COLUMN = Bytes.toBytes(COLUMN_NAME);
+ private static final byte [] ROW = Bytes.toBytes("row");
+
+ /*
+ * Run test that delete works according to description in <a
+ * href="https://issues.apache.org/jira/browse/HADOOP-1784">hadoop-1784</a>.
+ * @param incommon
+ * @param flusher
+ * @throws IOException
+ */
+ public static void doTestDelete(final Incommon incommon, FlushCache flusher)
+ throws IOException {
+ // Add values at various timestamps (values are the timestamps themselves, as bytes).
+ put(incommon, T0);
+ put(incommon, T1);
+ put(incommon, T2);
+ put(incommon);
+ // Verify that returned versions match passed timestamps.
+ assertVersions(incommon, new long [] {HConstants.LATEST_TIMESTAMP, T2, T1});
+ // If I delete w/o specifying a timestamp, this means I'm deleting the
+ // latest.
+ delete(incommon);
+ // Verify that I get back T2 through T1 -- that the latest version has
+ // been deleted.
+ assertVersions(incommon, new long [] {T2, T1, T0});
+
+ // Flush everything out to disk and then retry
+ flusher.flushcache();
+ assertVersions(incommon, new long [] {T2, T1, T0});
+
+ // Now add back a latest value so we can test removing a version other than the latest.
+ put(incommon);
+ assertVersions(incommon, new long [] {HConstants.LATEST_TIMESTAMP, T2, T1});
+ delete(incommon, T2);
+ assertVersions(incommon, new long [] {HConstants.LATEST_TIMESTAMP, T1, T0});
+ // Flush everything out to disk and then retry
+ flusher.flushcache();
+ assertVersions(incommon, new long [] {HConstants.LATEST_TIMESTAMP, T1, T0});
+
+ // Now try deleting everything from T2 back, inclusive. (We first need to add
+ // T2 back into the mix and, to make things a little more interesting, delete
+ // and then re-add T1.)
+ put(incommon, T2);
+ delete(incommon, T1);
+ put(incommon, T1);
+ incommon.deleteAll(ROW, COLUMN, T2);
+ // Only the current value should remain. Assert that this is so.
+ assertOnlyLatest(incommon, HConstants.LATEST_TIMESTAMP);
+
+ // Flush everything out to disk and then redo above tests
+ flusher.flushcache();
+ assertOnlyLatest(incommon, HConstants.LATEST_TIMESTAMP);
+ }
+
+ private static void assertOnlyLatest(final Incommon incommon,
+ final long currentTime)
+ throws IOException {
+ Cell [] cellValues = incommon.get(ROW, COLUMN, 3/*Ask for too much*/);
+ assertEquals(1, cellValues.length);
+ long time = Bytes.toLong(cellValues[0].getValue());
+ assertEquals(time, currentTime);
+ assertNull(incommon.get(ROW, COLUMN, T1, 3 /*Too many*/));
+ assertTrue(assertScanContentTimestamp(incommon, T1) == 0);
+ }
+
+ /*
+ * Assert that returned versions match passed in timestamps and that results
+ * are returned in the right order. Assert that values when converted to
+ * longs match the corresponding passed timestamp.
+ * @param r
+ * @param tss
+ * @throws IOException
+ */
+ public static void assertVersions(final Incommon incommon, final long [] tss)
+ throws IOException {
+ // Assert that 'latest' is what we expect.
+ byte [] bytes = incommon.get(ROW, COLUMN).getValue();
+ assertEquals(Bytes.toLong(bytes), tss[0]);
+ // Now assert that if we ask for multiple versions, they come out in
+ // order.
+ Cell[] cellValues = incommon.get(ROW, COLUMN, tss.length);
+ assertEquals(tss.length, cellValues.length);
+ for (int i = 0; i < cellValues.length; i++) {
+ long ts = Bytes.toLong(cellValues[i].getValue());
+ assertEquals(ts, tss[i]);
+ }
+ // Specify a timestamp and get multiple versions.
+ cellValues = incommon.get(ROW, COLUMN, tss[0], cellValues.length - 1);
+ for (int i = 1; i < cellValues.length; i++) {
+ long ts = Bytes.toLong(cellValues[i].getValue());
+ assertEquals(ts, tss[i]);
+ }
+ // Test scanner returns expected version
+ assertScanContentTimestamp(incommon, tss[0]);
+ }
+
+ /*
+ * Run test scanning different timestamps.
+ * @param incommon
+ * @param flusher
+ * @throws IOException
+ */
+ public static void doTestTimestampScanning(final Incommon incommon,
+ final FlushCache flusher)
+ throws IOException {
+ // Add a couple of values for three different timestamps.
+ put(incommon, T0);
+ put(incommon, T1);
+ put(incommon, HConstants.LATEST_TIMESTAMP);
+ // Get count of latest items.
+ int count = assertScanContentTimestamp(incommon,
+ HConstants.LATEST_TIMESTAMP);
+ // Assert I get same count when I scan at each timestamp.
+ assertEquals(count, assertScanContentTimestamp(incommon, T0));
+ assertEquals(count, assertScanContentTimestamp(incommon, T1));
+ // Flush everything out to disk and then retry
+ flusher.flushcache();
+ assertEquals(count, assertScanContentTimestamp(incommon, T0));
+ assertEquals(count, assertScanContentTimestamp(incommon, T1));
+ }
+
+ /*
+ * Assert that the scan returns only values with timestamps <= ts.
+ * @param r
+ * @param ts
+ * @return Count of items scanned.
+ * @throws IOException
+ */
+ public static int assertScanContentTimestamp(final Incommon in, final long ts)
+ throws IOException {
+ ScannerIncommon scanner =
+ in.getScanner(COLUMNS, HConstants.EMPTY_START_ROW, ts);
+ int count = 0;
+ try {
+ // TODO FIX
+// HStoreKey key = new HStoreKey();
+// TreeMap<byte [], Cell>value =
+// new TreeMap<byte [], Cell>(Bytes.BYTES_COMPARATOR);
+// while (scanner.next(key, value)) {
+// assertTrue(key.getTimestamp() <= ts);
+// // Content matches the key or HConstants.LATEST_TIMESTAMP.
+// // (Key does not match content if we 'put' with LATEST_TIMESTAMP).
+// long l = Bytes.toLong(value.get(COLUMN).getValue());
+// assertTrue(key.getTimestamp() == l ||
+// HConstants.LATEST_TIMESTAMP == l);
+// count++;
+// value.clear();
+// }
+ } finally {
+ scanner.close();
+ }
+ return count;
+ }
+
+ public static void put(final Incommon loader, final long ts)
+ throws IOException {
+ put(loader, Bytes.toBytes(ts), ts);
+ }
+
+ public static void put(final Incommon loader)
+ throws IOException {
+ long ts = HConstants.LATEST_TIMESTAMP;
+ put(loader, Bytes.toBytes(ts), ts);
+ }
+
+ /*
+ * Put values.
+ * @param loader
+ * @param bytes
+ * @param ts
+ * @throws IOException
+ */
+ public static void put(final Incommon loader, final byte [] bytes,
+ final long ts)
+ throws IOException {
+ BatchUpdate batchUpdate = ts == HConstants.LATEST_TIMESTAMP ?
+ new BatchUpdate(ROW) : new BatchUpdate(ROW, ts);
+ batchUpdate.put(COLUMN, bytes);
+ loader.commit(batchUpdate);
+ }
+
+ public static void delete(final Incommon loader) throws IOException {
+ delete(loader, HConstants.LATEST_TIMESTAMP);
+ }
+
+ public static void delete(final Incommon loader, final long ts) throws IOException {
+ BatchUpdate batchUpdate = ts == HConstants.LATEST_TIMESTAMP ?
+ new BatchUpdate(ROW) : new BatchUpdate(ROW, ts);
+ batchUpdate.delete(COLUMN);
+ loader.commit(batchUpdate);
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/client/TestBatchUpdate.java b/src/test/org/apache/hadoop/hbase/client/TestBatchUpdate.java
new file mode 100644
index 0000000..461bdc3
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/client/TestBatchUpdate.java
@@ -0,0 +1,214 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Map;
+
+import org.apache.hadoop.hbase.HBaseClusterTestCase;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Test batch updates
+ */
+public class TestBatchUpdate extends HBaseClusterTestCase {
+ private static final String CONTENTS_STR = "contents:";
+ private static final byte [] CONTENTS = Bytes.toBytes(CONTENTS_STR);
+ private static final String SMALLFAM_STR = "smallfam:";
+ private static final byte [] SMALLFAM = Bytes.toBytes(SMALLFAM_STR);
+ private static final int SMALL_LENGTH = 1;
+ private static final int NB_BATCH_ROWS = 10;
+ private byte[] value;
+ private byte[] smallValue;
+
+ private HTableDescriptor desc = null;
+ private HTable table = null;
+
+ /**
+ * @throws UnsupportedEncodingException
+ */
+ public TestBatchUpdate() throws UnsupportedEncodingException {
+ super();
+ value = Bytes.toBytes("abcd");
+ smallValue = Bytes.toBytes("a");
+ }
+
+ @Override
+ public void setUp() throws Exception {
+ super.setUp();
+ this.desc = new HTableDescriptor("test");
+ desc.addFamily(new HColumnDescriptor(CONTENTS_STR));
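+ // The SMALLFAM family below caps value length at SMALL_LENGTH bytes;
+ // testBatchUpdateMaxLength relies on this.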
+ desc.addFamily(new HColumnDescriptor(SMALLFAM,
+ HColumnDescriptor.DEFAULT_VERSIONS,
+ HColumnDescriptor.DEFAULT_COMPRESSION,
+ HColumnDescriptor.DEFAULT_IN_MEMORY,
+ HColumnDescriptor.DEFAULT_BLOCKCACHE, SMALL_LENGTH,
+ HColumnDescriptor.DEFAULT_TTL, HColumnDescriptor.DEFAULT_BLOOMFILTER));
+ HBaseAdmin admin = new HBaseAdmin(conf);
+ admin.createTable(desc);
+ table = new HTable(conf, desc.getName());
+ }
+
+ /**
+ * @throws IOException
+ */
+ public void testBatchUpdate() throws IOException {
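+ // Put and then delete the same column within a single batch before committing it.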
+ BatchUpdate bu = new BatchUpdate("row1");
+ bu.put(CONTENTS, value);
+ bu.delete(CONTENTS);
+ table.commit(bu);
+
+ bu = new BatchUpdate("row2");
+ bu.put(CONTENTS, value);
+ byte[][] getColumns = bu.getColumns();
+ assertEquals(getColumns.length, 1);
+ assertTrue(Arrays.equals(getColumns[0], CONTENTS));
+ assertTrue(bu.hasColumn(CONTENTS));
+ assertFalse(bu.hasColumn(new byte[] {}));
+ byte[] getValue = bu.get(getColumns[0]);
+ assertTrue(Arrays.equals(getValue, value));
+ table.commit(bu);
+
+ byte [][] columns = { CONTENTS };
+ Scanner scanner = table.getScanner(columns, HConstants.EMPTY_START_ROW);
+ for (RowResult r : scanner) {
+ for(Map.Entry<byte [], Cell> e: r.entrySet()) {
+ System.out.println(Bytes.toString(r.getRow()) + ": row: " + e.getKey() + " value: " +
+ new String(e.getValue().getValue(), HConstants.UTF8_ENCODING));
+ }
+ }
+ }
+
+ public void testBatchUpdateMaxLength() {
+ // Try a value that exceeds SMALLFAM's maximum length; the commit should fail.
+ BatchUpdate batchUpdate = new BatchUpdate("row1");
+ batchUpdate.put(SMALLFAM, value);
+ try {
+ table.commit(batchUpdate);
+ fail("Value is too long, should throw exception");
+ } catch (IOException e) {
+ // This is expected
+ }
+ // Verify that the rejected value was not inserted
+ try {
+ Cell cell = table.get("row1", SMALLFAM_STR);
+ assertNull(cell);
+ } catch (IOException e) {
+ e.printStackTrace();
+ fail("This is unexpected");
+ }
+ // Try to put a good value
+ batchUpdate = new BatchUpdate("row1");
+ batchUpdate.put(SMALLFAM, smallValue);
+ try {
+ table.commit(batchUpdate);
+ } catch (IOException e) {
+ fail("Value is long enough, should not throw exception");
+ }
+ }
+
+ public void testRowsBatchUpdate() {
+ ArrayList<BatchUpdate> rowsUpdate = new ArrayList<BatchUpdate>();
+ for(int i = 0; i < NB_BATCH_ROWS; i++) {
+ BatchUpdate batchUpdate = new BatchUpdate("row"+i);
+ batchUpdate.put(CONTENTS, value);
+ rowsUpdate.add(batchUpdate);
+ }
+ try {
+ table.commit(rowsUpdate);
+
+ byte [][] columns = { CONTENTS };
+ Scanner scanner = table.getScanner(columns, HConstants.EMPTY_START_ROW);
+ int nbRows = 0;
+ for(@SuppressWarnings("unused") RowResult row : scanner)
+ nbRows++;
+ assertEquals(NB_BATCH_ROWS, nbRows);
+ } catch (IOException e) {
+ fail("This is unexpected : " + e);
+ }
+ }
+
+ public void testRowsBatchUpdateBufferedOneFlush() {
+ table.setAutoFlush(false);
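+ // With auto-flush disabled, commits are buffered client-side until
+ // flushCommits() is called.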
+ ArrayList<BatchUpdate> rowsUpdate = new ArrayList<BatchUpdate>();
+ for(int i = 0; i < NB_BATCH_ROWS*10; i++) {
+ BatchUpdate batchUpdate = new BatchUpdate("row"+i);
+ batchUpdate.put(CONTENTS, value);
+ rowsUpdate.add(batchUpdate);
+ }
+ try {
+ table.commit(rowsUpdate);
+
+ byte [][] columns = { CONTENTS };
+ Scanner scanner = table.getScanner(columns, HConstants.EMPTY_START_ROW);
+ int nbRows = 0;
+ for(@SuppressWarnings("unused") RowResult row : scanner)
+ nbRows++;
+ assertEquals(0, nbRows);
+ scanner.close();
+
+ table.flushCommits();
+
+ scanner = table.getScanner(columns, HConstants.EMPTY_START_ROW);
+ nbRows = 0;
+ for(@SuppressWarnings("unused") RowResult row : scanner)
+ nbRows++;
+ assertEquals(NB_BATCH_ROWS*10, nbRows);
+ } catch (IOException e) {
+ fail("This is unexpected : " + e);
+ }
+ }
+
+ public void testRowsBatchUpdateBufferedManyManyFlushes() {
+ table.setAutoFlush(false);
+ table.setWriteBufferSize(10);
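+ // A tiny write buffer (10 bytes) forces the client to flush after almost
+ // every buffered commit.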
+ ArrayList<BatchUpdate> rowsUpdate = new ArrayList<BatchUpdate>();
+ for(int i = 0; i < NB_BATCH_ROWS*10; i++) {
+ BatchUpdate batchUpdate = new BatchUpdate("row"+i);
+ batchUpdate.put(CONTENTS, value);
+ rowsUpdate.add(batchUpdate);
+ }
+ try {
+ table.commit(rowsUpdate);
+
+ table.flushCommits();
+
+ byte [][] columns = { CONTENTS };
+ Scanner scanner = table.getScanner(columns, HConstants.EMPTY_START_ROW);
+ int nbRows = 0;
+ for(@SuppressWarnings("unused") RowResult row : scanner)
+ nbRows++;
+ assertEquals(NB_BATCH_ROWS*10, nbRows);
+ } catch (IOException e) {
+ fail("This is unexpected : " + e);
+ }
+ }
+
+
+}
diff --git a/src/test/org/apache/hadoop/hbase/client/TestForceSplit.java b/src/test/org/apache/hadoop/hbase/client/TestForceSplit.java
new file mode 100644
index 0000000..28d915e
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/client/TestForceSplit.java
@@ -0,0 +1,89 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.io.IOException;
+import java.util.Map;
+
+import org.apache.hadoop.hbase.HBaseClusterTestCase;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Tests forced splitting of HTable
+ */
+public class TestForceSplit extends HBaseClusterTestCase {
+ private static final byte[] tableName = Bytes.toBytes("test");
+ private static final byte[] columnName = Bytes.toBytes("a:");
+
+ @Override
+ protected void setUp() throws Exception {
+ super.setUp();
+ this.conf.setInt("hbase.io.index.interval", 32);
+ }
+
+ /**
+ * Tests that an explicit split request turns a single-region table into two regions.
+ * @throws Exception
+ */
+ public void testForceSplit() throws Exception {
+ // create the test table
+ HTableDescriptor htd = new HTableDescriptor(tableName);
+ htd.addFamily(new HColumnDescriptor(columnName));
+ HBaseAdmin admin = new HBaseAdmin(conf);
+ admin.createTable(htd);
+ HTable table = new HTable(conf, tableName);
+ byte[] k = new byte[3];
+ for (byte b1 = 'a'; b1 < 'z'; b1++) {
+ for (byte b2 = 'a'; b2 < 'z'; b2++) {
+ for (byte b3 = 'a'; b3 < 'z'; b3++) {
+ k[0] = b1;
+ k[1] = b2;
+ k[2] = b3;
+ BatchUpdate update = new BatchUpdate(k);
+ update.put(columnName, k);
+ table.commit(update);
+ }
+ }
+ }
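+ // 25 * 25 * 25 = 15,625 three-letter row keys have now been written.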
+
+ // get the initial layout (should just be one region)
+ Map<HRegionInfo,HServerAddress> m = table.getRegionsInfo();
+ System.out.println("Initial regions (" + m.size() + "): " + m);
+ assertTrue(m.size() == 1);
+
+ // tell the master to split the table
+ admin.split(Bytes.toString(tableName));
+
+ // give some time for the split to happen
+ Thread.sleep(15 * 1000);
+
+ // check the table's region layout again
+ m = table.getRegionsInfo();
+ System.out.println("Regions after split (" + m.size() + "): " + m);
+ // should have two regions now
+ assertTrue(m.size() == 2);
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/client/TestGetRowVersions.java b/src/test/org/apache/hadoop/hbase/client/TestGetRowVersions.java
new file mode 100644
index 0000000..df48bb3
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/client/TestGetRowVersions.java
@@ -0,0 +1,104 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import java.util.Iterator;
+import java.util.Map;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseClusterTestCase;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Tests that getRow returns the latest value and all versions correctly,
+ * including across a cluster restart.
+ */
+public class TestGetRowVersions extends HBaseClusterTestCase {
+ private static final Log LOG = LogFactory.getLog(TestGetRowVersions.class);
+ private static final String TABLE_NAME = "test";
+ private static final String CONTENTS_STR = "contents:";
+ private static final String ROW = "row";
+ private static final String COLUMN = "contents:contents";
+ private static final long TIMESTAMP = System.currentTimeMillis();
+ private static final String VALUE1 = "value1";
+ private static final String VALUE2 = "value2";
+ private HBaseAdmin admin = null;
+ private HTable table = null;
+
+ @Override
+ public void setUp() throws Exception {
+ super.setUp();
+ HTableDescriptor desc = new HTableDescriptor(TABLE_NAME);
+ desc.addFamily(new HColumnDescriptor(CONTENTS_STR));
+ this.admin = new HBaseAdmin(conf);
+ this.admin.createTable(desc);
+ this.table = new HTable(conf, TABLE_NAME);
+ }
+
+ /** @throws Exception */
+ public void testGetRowMultipleVersions() throws Exception {
+ BatchUpdate b = new BatchUpdate(ROW, TIMESTAMP);
+ b.put(COLUMN, Bytes.toBytes(VALUE1));
+ this.table.commit(b);
+ // Shut down and restart the HBase cluster
+ this.cluster.shutdown();
+ this.zooKeeperCluster.shutdown();
+ LOG.debug("HBase cluster shut down -- restarting");
+ this.hBaseClusterSetup();
+ // Make a new connection
+ this.table = new HTable(conf, TABLE_NAME);
+ // Overwrite previous value
+ b = new BatchUpdate(ROW, TIMESTAMP);
+ b.put(COLUMN, Bytes.toBytes(VALUE2));
+ this.table.commit(b);
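+ // Both puts use the same explicit timestamp, so the second overwrites the
+ // first; only VALUE2 should be returned, even when asking for all versions.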
+ // Now verify that getRow(row, column, latest) works
+ RowResult r = table.getRow(ROW);
+ assertNotNull(r);
+ assertTrue(r.size() != 0);
+ Cell c = r.get(COLUMN);
+ assertNotNull(c);
+ assertTrue(c.getValue().length != 0);
+ String value = Bytes.toString(c.getValue());
+ assertTrue(value.compareTo(VALUE2) == 0);
+ // Now check getRow with multiple versions
+ r = table.getRow(ROW, HConstants.ALL_VERSIONS);
+ for (Map.Entry<byte[], Cell> e: r.entrySet()) {
+ // Column name
+// System.err.print(" " + Bytes.toString(e.getKey()));
+ c = e.getValue();
+
+ // Need to iterate since there may be multiple versions
+ for (Iterator<Map.Entry<Long, byte[]>> it = c.iterator();
+ it.hasNext(); ) {
+ Map.Entry<Long, byte[]> v = it.next();
+ value = Bytes.toString(v.getValue());
+// System.err.println(" = " + value);
+ assertTrue(VALUE2.compareTo(Bytes.toString(v.getValue())) == 0);
+ }
+ }
+ }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/client/TestHTable.java b/src/test/org/apache/hadoop/hbase/client/TestHTable.java
new file mode 100644
index 0000000..2ac3c39
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/client/TestHTable.java
@@ -0,0 +1,377 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.io.IOException;
+import java.util.Map;
+
+import org.apache.hadoop.hbase.HBaseClusterTestCase;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.TableNotFoundException;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.io.HbaseMapWritable;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Tests HTable
+ */
+public class TestHTable extends HBaseClusterTestCase implements HConstants {
+ private static final HColumnDescriptor column =
+ new HColumnDescriptor(COLUMN_FAMILY);
+
+ private static final byte [] nosuchTable = Bytes.toBytes("nosuchTable");
+ private static final byte [] tableAname = Bytes.toBytes("tableA");
+ private static final byte [] tableBname = Bytes.toBytes("tableB");
+
+ private static final byte [] row = Bytes.toBytes("row");
+
+ private static final byte [] attrName = Bytes.toBytes("TESTATTR");
+ private static final byte [] attrValue = Bytes.toBytes("somevalue");
+
+
+ public void testGetRow() {
+ HTable table = null;
+ try {
+ HColumnDescriptor column2 =
+ new HColumnDescriptor(Bytes.toBytes("info2:"));
+ HBaseAdmin admin = new HBaseAdmin(conf);
+ HTableDescriptor testTableADesc =
+ new HTableDescriptor(tableAname);
+ testTableADesc.addFamily(column);
+ testTableADesc.addFamily(column2);
+ admin.createTable(testTableADesc);
+
+ table = new HTable(conf, tableAname);
+ BatchUpdate batchUpdate = new BatchUpdate(row);
+
+ for(int i = 0; i < 5; i++)
+ batchUpdate.put(COLUMN_FAMILY_STR+i, Bytes.toBytes(i));
+
+ table.commit(batchUpdate);
+
+ assertTrue(table.exists(row));
+ for(int i = 0; i < 5; i++)
+ assertTrue(table.exists(row, Bytes.toBytes(COLUMN_FAMILY_STR+i)));
+
+ RowResult result = null;
+ result = table.getRow(row, new byte[][] {COLUMN_FAMILY});
+ for(int i = 0; i < 5; i++)
+ assertTrue(result.containsKey(Bytes.toBytes(COLUMN_FAMILY_STR+i)));
+
+ result = table.getRow(row);
+ for(int i = 0; i < 5; i++)
+ assertTrue(result.containsKey(Bytes.toBytes(COLUMN_FAMILY_STR+i)));
+
+ batchUpdate = new BatchUpdate(row);
+ batchUpdate.put("info2:a", Bytes.toBytes("a"));
+ table.commit(batchUpdate);
+
+ result = table.getRow(row, new byte[][] { COLUMN_FAMILY,
+ Bytes.toBytes("info2:a") });
+ for(int i = 0; i < 5; i++)
+ assertTrue(result.containsKey(Bytes.toBytes(COLUMN_FAMILY_STR+i)));
+ assertTrue(result.containsKey(Bytes.toBytes("info2:a")));
+ } catch (IOException e) {
+ e.printStackTrace();
+ fail("Should not have any exception " +
+ e.getClass());
+ }
+ }
+
+ /**
+ * Tests basic HTable operation: table creation, writes, a scan-based copy
+ * between tables, and table/column metadata updates.
+ * @throws IOException
+ */
+ public void testHTable() throws IOException {
+ byte[] value = "value".getBytes(UTF8_ENCODING);
+
+ try {
+ new HTable(conf, nosuchTable);
+
+ } catch (TableNotFoundException e) {
+ // expected
+
+ } catch (IOException e) {
+ e.printStackTrace();
+ fail();
+ }
+
+ HTableDescriptor tableAdesc = new HTableDescriptor(tableAname);
+ tableAdesc.addFamily(column);
+
+ HTableDescriptor tableBdesc = new HTableDescriptor(tableBname);
+ tableBdesc.addFamily(column);
+
+ // create a couple of tables
+
+ HBaseAdmin admin = new HBaseAdmin(conf);
+ admin.createTable(tableAdesc);
+ admin.createTable(tableBdesc);
+
+ // put some data into table A
+
+ HTable a = new HTable(conf, tableAname);
+
+ // Assert the metadata is good.
+ HTableDescriptor meta =
+ a.getConnection().getHTableDescriptor(tableAdesc.getName());
+ assertTrue(meta.equals(tableAdesc));
+
+ BatchUpdate batchUpdate = new BatchUpdate(row);
+ batchUpdate.put(COLUMN_FAMILY, value);
+ a.commit(batchUpdate);
+
+ // open a new connection to A and a connection to b
+
+ HTable newA = new HTable(conf, tableAname);
+ HTable b = new HTable(conf, tableBname);
+
+ // copy data from A to B
+
+ Scanner s =
+ newA.getScanner(COLUMN_FAMILY_ARRAY, EMPTY_START_ROW);
+
+ try {
+ for (RowResult r : s) {
+ batchUpdate = new BatchUpdate(r.getRow());
+ for(Map.Entry<byte [], Cell> e: r.entrySet()) {
+ batchUpdate.put(e.getKey(), e.getValue().getValue());
+ }
+ b.commit(batchUpdate);
+ }
+ } finally {
+ s.close();
+ }
+
+ // Opening a new connection to A will cause the tables to be reloaded
+
+ try {
+ HTable anotherA = new HTable(conf, tableAname);
+ anotherA.get(row, COLUMN_FAMILY);
+ } catch (Exception e) {
+ e.printStackTrace();
+ fail();
+ }
+
+ // We can still access A through newA because it has the table information
+ // cached. And if it needs to re-locate a region, the cached information
+ // will be reloaded.
+
+ // Test user metadata
+
+ try {
+ // make a modifiable descriptor
+ HTableDescriptor desc = new HTableDescriptor(a.getTableDescriptor());
+ // offline the table
+ admin.disableTable(tableAname);
+ // add a user attribute to HTD
+ desc.setValue(attrName, attrValue);
+ // add a user attribute to HCD
+ for (HColumnDescriptor c: desc.getFamilies())
+ c.setValue(attrName, attrValue);
+ // update metadata for all regions of this table
+ admin.modifyTable(tableAname, HConstants.MODIFY_TABLE_SET_HTD, desc);
+ // enable the table
+ admin.enableTable(tableAname);
+
+ // test that attribute changes were applied
+ desc = a.getTableDescriptor();
+ if (Bytes.compareTo(desc.getName(), tableAname) != 0)
+ fail("wrong table descriptor returned");
+ // check HTD attribute
+ value = desc.getValue(attrName);
+ if (value == null)
+ fail("missing HTD attribute value");
+ if (Bytes.compareTo(value, attrValue) != 0)
+ fail("HTD attribute value is incorrect");
+ // check HCD attribute
+ for (HColumnDescriptor c: desc.getFamilies()) {
+ value = c.getValue(attrName);
+ if (value == null)
+ fail("missing HCD attribute value");
+ if (Bytes.compareTo(value, attrValue) != 0)
+ fail("HCD attribute value is incorrect");
+ }
+
+ } catch (Exception e) {
+ e.printStackTrace();
+ fail();
+ }
+ }
+
+ public void testCheckAndSave() throws IOException {
+ HTable table = null;
+ HColumnDescriptor column2 =
+ new HColumnDescriptor(Bytes.toBytes("info2:"));
+ HBaseAdmin admin = new HBaseAdmin(conf);
+ HTableDescriptor testTableADesc =
+ new HTableDescriptor(tableAname);
+ testTableADesc.addFamily(column);
+ testTableADesc.addFamily(column2);
+ admin.createTable(testTableADesc);
+
+ table = new HTable(conf, tableAname);
+ BatchUpdate batchUpdate = new BatchUpdate(row);
+ BatchUpdate batchUpdate2 = new BatchUpdate(row);
+ BatchUpdate batchUpdate3 = new BatchUpdate(row);
+
+ HbaseMapWritable<byte[],byte[]> expectedValues =
+ new HbaseMapWritable<byte[],byte[]>();
+ HbaseMapWritable<byte[],byte[]> badExpectedValues =
+ new HbaseMapWritable<byte[],byte[]>();
+
+ for(int i = 0; i < 5; i++) {
+ // This batch update is our initial batch update. We also set our
+ // expected values to the same values since we will be comparing
+ // the two.
+ batchUpdate.put(COLUMN_FAMILY_STR+i, Bytes.toBytes(i));
+ expectedValues.put(Bytes.toBytes(COLUMN_FAMILY_STR+i), Bytes.toBytes(i));
+
+ badExpectedValues.put(Bytes.toBytes(COLUMN_FAMILY_STR+i),
+ Bytes.toBytes(500));
+
+ // This is our second batchupdate that we will use to update the initial
+ // batchupdate
+ batchUpdate2.put(COLUMN_FAMILY_STR+i, Bytes.toBytes(i+1));
+
+ // This final batch update is used to check that our expected values
+ // (which are now stale) cause checkAndSave to fail.
+ batchUpdate3.put(COLUMN_FAMILY_STR+i, Bytes.toBytes(i+2));
+ }
+
+ // Initialize rows
+ table.commit(batchUpdate);
+
+ // check if incorrect values are returned false
+ assertFalse(table.checkAndSave(batchUpdate2,badExpectedValues,null));
+
+ // make sure first expected values are correct
+ assertTrue(table.checkAndSave(batchUpdate2, expectedValues,null));
+
+ // make sure check and save truly saves the data after checking the expected
+ // values
+ RowResult r = table.getRow(row);
+ byte[][] columns = batchUpdate2.getColumns();
+ for(int i = 0;i < columns.length;i++) {
+ assertTrue(Bytes.equals(r.get(columns[i]).getValue(),batchUpdate2.get(columns[i])));
+ }
+
+ // make sure that the old expected values fail
+ assertFalse(table.checkAndSave(batchUpdate3, expectedValues,null));
+ }
+
+ /**
+ * For HADOOP-2579
+ */
+ public void testTableNotFoundExceptionWithoutAnyTables() {
+ try {
+ new HTable(conf, "notATable");
+ fail("Should have thrown a TableNotFoundException");
+ } catch (TableNotFoundException e) {
+ // expected
+ } catch (IOException e) {
+ e.printStackTrace();
+ fail("Should have thrown a TableNotFoundException instead of a " +
+ e.getClass());
+ }
+ }
+
+ public void testGetClosestRowBefore() throws IOException {
+ HColumnDescriptor column2 =
+ new HColumnDescriptor(Bytes.toBytes("info2:"));
+ HBaseAdmin admin = new HBaseAdmin(conf);
+ HTableDescriptor testTableADesc =
+ new HTableDescriptor(tableAname);
+ testTableADesc.addFamily(column);
+ testTableADesc.addFamily(column2);
+ admin.createTable(testTableADesc);
+
+ byte[] firstRow = Bytes.toBytes("ro");
+ byte[] beforeFirstRow = Bytes.toBytes("rn");
+ byte[] beforeSecondRow = Bytes.toBytes("rov");
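+ // Row keys in sort order: "rn" < "ro" < "rov" < "row".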
+
+ HTable table = new HTable(conf, tableAname);
+ BatchUpdate batchUpdate = new BatchUpdate(firstRow);
+ BatchUpdate batchUpdate2 = new BatchUpdate(row);
+ byte[] zero = new byte[]{0};
+ byte[] one = new byte[]{1};
+ byte[] columnFamilyBytes = Bytes.toBytes(COLUMN_FAMILY_STR);
+
+ batchUpdate.put(COLUMN_FAMILY_STR,zero);
+ batchUpdate2.put(COLUMN_FAMILY_STR,one);
+
+ table.commit(batchUpdate);
+ table.commit(batchUpdate2);
+
+ RowResult result = null;
+
+ // Test before first that null is returned
+ result = table.getClosestRowBefore(beforeFirstRow, columnFamilyBytes);
+ assertTrue(result == null);
+
+ // Test at first that first is returned
+ result = table.getClosestRowBefore(firstRow, columnFamilyBytes);
+ assertTrue(result.containsKey(COLUMN_FAMILY_STR));
+ assertTrue(Bytes.equals(result.get(COLUMN_FAMILY_STR).getValue(), zero));
+
+ // Test in between first and second that first is returned
+ result = table.getClosestRowBefore(beforeSecondRow, columnFamilyBytes);
+ assertTrue(result.containsKey(COLUMN_FAMILY_STR));
+ assertTrue(Bytes.equals(result.get(COLUMN_FAMILY_STR).getValue(), zero));
+
+ // Test at second make sure second is returned
+ result = table.getClosestRowBefore(row, columnFamilyBytes);
+ assertTrue(result.containsKey(COLUMN_FAMILY_STR));
+ assertTrue(Bytes.equals(result.get(COLUMN_FAMILY_STR).getValue(), one));
+
+ // Test after second, make sure second is returned
+ result = table.getClosestRowBefore(Bytes.add(row,one), columnFamilyBytes);
+ assertTrue(result.containsKey(COLUMN_FAMILY_STR));
+ assertTrue(Bytes.equals(result.get(COLUMN_FAMILY_STR).getValue(), one));
+ }
+
+ /**
+ * For HADOOP-2579
+ */
+ public void testTableNotFoundExceptionWithATable() {
+ try {
+ HBaseAdmin admin = new HBaseAdmin(conf);
+ HTableDescriptor testTableADesc =
+ new HTableDescriptor("table");
+ testTableADesc.addFamily(column);
+ admin.createTable(testTableADesc);
+
+ // This should throw a TableNotFoundException, it has not been created
+ new HTable(conf, "notATable");
+
+ fail("Should have thrown a TableNotFoundException");
+ } catch (TableNotFoundException e) {
+ // expected
+ } catch (IOException e) {
+ e.printStackTrace();
+ fail("Should have thrown a TableNotFoundException instead of a " +
+ e.getClass());
+ }
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/client/TestListTables.java b/src/test/org/apache/hadoop/hbase/client/TestListTables.java
new file mode 100644
index 0000000..9da5ebe
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/client/TestListTables.java
@@ -0,0 +1,70 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.io.IOException;
+import java.util.HashSet;
+
+import org.apache.hadoop.hbase.HBaseClusterTestCase;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+
+/**
+ * Tests the listTables client API
+ */
+public class TestListTables extends HBaseClusterTestCase {
+ HBaseAdmin admin = null;
+
+ private static final HTableDescriptor[] TABLES = {
+ new HTableDescriptor("table1"),
+ new HTableDescriptor("table2"),
+ new HTableDescriptor("table3")
+ };
+
+ @Override
+ public void setUp() throws Exception {
+ super.setUp();
+ admin = new HBaseAdmin(conf);
+ HColumnDescriptor family =
+ new HColumnDescriptor(HConstants.COLUMN_FAMILY_STR);
+ for (int i = 0; i < TABLES.length; i++) {
+ TABLES[i].addFamily(family);
+ admin.createTable(TABLES[i]);
+ }
+ }
+
+ /**
+ * Tests that listTables() returns exactly the tables created in setUp().
+ * @throws IOException
+ */
+ public void testListTables() throws IOException {
+ HTableDescriptor [] ts = admin.listTables();
+ HashSet<HTableDescriptor> result = new HashSet<HTableDescriptor>(ts.length);
+ for (int i = 0; i < ts.length; i++) {
+ result.add(ts[i]);
+ }
+ int size = result.size();
+ assertEquals(TABLES.length, size);
+ for (int i = 0; i < TABLES.length && i < size; i++) {
+ assertTrue(result.contains(TABLES[i]));
+ }
+ }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/client/TestScannerTimes.java b/src/test/org/apache/hadoop/hbase/client/TestScannerTimes.java
new file mode 100644
index 0000000..5f3407a
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/client/TestScannerTimes.java
@@ -0,0 +1,139 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HBaseClusterTestCase;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.RowResult;
+
+/**
+ * Test that verifies that scanners return a different timestamp for values that
+ * are not stored at the same time. (HBASE-737)
+ */
+public class TestScannerTimes extends HBaseClusterTestCase {
+ private static final String TABLE_NAME = "hbase737";
+ private static final String FAM1 = "fam1:";
+ private static final String FAM2 = "fam2:";
+ private static final String ROW = "row";
+
+ /**
+ * test for HBASE-737
+ * @throws IOException
+ */
+ public void testHBase737 () throws IOException {
+ HTableDescriptor desc = new HTableDescriptor(TABLE_NAME);
+ desc.addFamily(new HColumnDescriptor(FAM1));
+ desc.addFamily(new HColumnDescriptor(FAM2));
+
+ // Create table
+ HBaseAdmin admin = new HBaseAdmin(conf);
+ admin.createTable(desc);
+
+ // Open table
+ HTable table = new HTable(conf, TABLE_NAME);
+
+ // Insert some values
+ BatchUpdate b = new BatchUpdate(ROW);
+ b.put(FAM1 + "letters", "abcdefg".getBytes(HConstants.UTF8_ENCODING));
+ table.commit(b);
+
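+ // Sleep between commits so each cell is stored with a strictly later timestamp.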
+ try {
+ Thread.sleep(1000);
+ } catch (InterruptedException i) {
+ //ignore
+ }
+
+ b = new BatchUpdate(ROW);
+ b.put(FAM1 + "numbers", "123456".getBytes(HConstants.UTF8_ENCODING));
+ table.commit(b);
+
+ try {
+ Thread.sleep(1000);
+ } catch (InterruptedException i) {
+ //ignore
+ }
+
+ b = new BatchUpdate(ROW);
+ b.put(FAM2 + "letters", "hijklmnop".getBytes(HConstants.UTF8_ENCODING));
+ table.commit(b);
+
+ long times[] = new long[3];
+ byte[][] columns = new byte[][] {
+ FAM1.getBytes(HConstants.UTF8_ENCODING),
+ FAM2.getBytes(HConstants.UTF8_ENCODING)
+ };
+
+ // First scan the memcache
+
+ Scanner s = table.getScanner(columns);
+ try {
+ int index = 0;
+ RowResult r = null;
+ while ((r = s.next()) != null) {
+ for (Cell c: r.values()) {
+ times[index++] = c.getTimestamp();
+ }
+ }
+ } finally {
+ s.close();
+ }
+ for (int i = 0; i < times.length - 1; i++) {
+ for (int j = i + 1; j < times.length; j++) {
+ assertTrue(times[j] > times[i]);
+ }
+ }
+
+ // Flush data to disk and try again
+
+ cluster.flushcache();
+
+ try {
+ Thread.sleep(1000);
+ } catch (InterruptedException i) {
+ //ignore
+ }
+
+ s = table.getScanner(columns);
+ try {
+ int index = 0;
+ RowResult r = null;
+ while ((r = s.next()) != null) {
+ for (Cell c: r.values()) {
+ times[index++] = c.getTimestamp();
+ }
+ }
+ } finally {
+ s.close();
+ }
+ for (int i = 0; i < times.length - 1; i++) {
+ for (int j = i + 1; j < times.length; j++) {
+ assertTrue(times[j] > times[i]);
+ }
+ }
+
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/client/TestTimestamp.java b/src/test/org/apache/hadoop/hbase/client/TestTimestamp.java
new file mode 100644
index 0000000..3e68bdc
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/client/TestTimestamp.java
@@ -0,0 +1,76 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.HBaseClusterTestCase;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.TimestampTestBase;
+
+/**
+ * Tests puts, gets and scans with user-specifiable timestamps, and does the
+ * same in the presence of deletes. The test cores are written so they can be
+ * run against an HRegion and against an HTable: i.e. both locally and remotely.
+ */
+public class TestTimestamp extends HBaseClusterTestCase {
+ private static final String COLUMN_NAME = "contents:";
+
+ /** constructor */
+ public TestTimestamp() {
+ super();
+ }
+
+ /**
+ * Basic test of timestamps.
+ * Do the above tests from client side.
+ * @throws IOException
+ */
+ public void testTimestamps() throws IOException {
+ HTable t = createTable();
+ Incommon incommon = new HTableIncommon(t);
+ TimestampTestBase.doTestDelete(incommon, new FlushCache() {
+ public void flushcache() throws IOException {
+ cluster.flushcache();
+ }
+ });
+
+ // Perhaps drop and re-add the table between tests so the former does
+ // not pollute the latter? Or put them into separate tests.
+ TimestampTestBase.doTestTimestampScanning(incommon, new FlushCache() {
+ public void flushcache() throws IOException {
+ cluster.flushcache();
+ }
+ });
+ }
+
+ /*
+ * Create a table named after the running test.
+ * @return An instance of an HTable connected to the created table.
+ * @throws IOException
+ */
+ private HTable createTable() throws IOException {
+ HTableDescriptor desc = new HTableDescriptor(getName());
+ desc.addFamily(new HColumnDescriptor(COLUMN_NAME));
+ HBaseAdmin admin = new HBaseAdmin(conf);
+ admin.createTable(desc);
+ return new HTable(conf, getName());
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/client/tableindexed/TestIndexedTable.java b/src/test/org/apache/hadoop/hbase/client/tableindexed/TestIndexedTable.java
new file mode 100644
index 0000000..c949d45
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/client/tableindexed/TestIndexedTable.java
@@ -0,0 +1,131 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client.tableindexed;
+
+import java.io.IOException;
+import java.util.Random;
+
+import junit.framework.Assert;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseClusterTestCase;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.PerformanceEvaluation;
+import org.apache.hadoop.hbase.client.Scanner;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.regionserver.tableindexed.IndexedRegionServer;
+import org.apache.hadoop.hbase.util.Bytes;
+
+public class TestIndexedTable extends HBaseClusterTestCase {
+
+ private static final Log LOG = LogFactory.getLog(TestIndexedTable.class);
+
+ private static final String TABLE_NAME = "table1";
+
+ private static final byte[] FAMILY = Bytes.toBytes("family:");
+ private static final byte[] COL_A = Bytes.toBytes("family:a");
+ private static final String INDEX_COL_A = "A";
+
+ private static final int NUM_ROWS = 10;
+ private static final int MAX_VAL = 10000;
+
+ private IndexedTableAdmin admin;
+ private IndexedTable table;
+ private Random random = new Random();
+
+ /** constructor */
+ public TestIndexedTable() {
+ conf.set(HConstants.REGION_SERVER_IMPL, IndexedRegionServer.class.getName());
+ conf.setInt("hbase.master.info.port", -1);
+ conf.setInt("hbase.regionserver.info.port", -1);
+ }
+
+ @Override
+ protected void setUp() throws Exception {
+ super.setUp();
+
+ HTableDescriptor desc = new HTableDescriptor(TABLE_NAME);
+ desc.addFamily(new HColumnDescriptor(FAMILY));
+
+ // Create a new index that does lexicographic ordering on COL_A
+ IndexSpecification colAIndex = new IndexSpecification(INDEX_COL_A,
+ COL_A);
+ desc.addIndex(colAIndex);
+
+ admin = new IndexedTableAdmin(conf);
+ admin.createTable(desc);
+ table = new IndexedTable(conf, desc.getName());
+ }
+
+ private void writeInitalRows() throws IOException {
+ for (int i = 0; i < NUM_ROWS; i++) {
+ BatchUpdate update = new BatchUpdate(PerformanceEvaluation.format(i));
+ byte[] colA = PerformanceEvaluation.format(random.nextInt(MAX_VAL));
+ update.put(COL_A, colA);
+ table.commit(update);
+ LOG.info("Inserted row [" + Bytes.toString(update.getRow()) + "] val: ["
+ + Bytes.toString(colA) + "]");
+ }
+ }
+
+
+ public void testInitialWrites() throws IOException {
+ writeInitalRows();
+ assertRowsInOrder(NUM_ROWS);
+ }
+
+ private void assertRowsInOrder(int numRowsExpected) throws IndexNotFoundException, IOException {
+ Scanner scanner = table.getIndexedScanner(INDEX_COL_A,
+ HConstants.EMPTY_START_ROW, null, null, null);
+ int numRows = 0;
+ byte[] lastColA = null;
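+ // Rows scanned through the index must come back in non-decreasing order of COL_A.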
+ for (RowResult rowResult : scanner) {
+ byte[] colA = rowResult.get(COL_A).getValue();
+ LOG.info("index scan : row [" + Bytes.toString(rowResult.getRow())
+ + "] value [" + Bytes.toString(colA) + "]");
+ if (lastColA != null) {
+ Assert.assertTrue(Bytes.compareTo(lastColA, colA) <= 0);
+ }
+ lastColA = colA;
+ numRows++;
+ }
+ Assert.assertEquals(numRowsExpected, numRows);
+ }
+
+ public void testMultipleWrites() throws IOException {
+ writeInitalRows();
+ writeInitalRows(); // Update the rows.
+ assertRowsInOrder(NUM_ROWS);
+ }
+
+ public void testDelete() throws IOException {
+ writeInitalRows();
+ // Delete the first row;
+ table.deleteAll(PerformanceEvaluation.format(0));
+
+ assertRowsInOrder(NUM_ROWS - 1);
+ }
+
+}
diff --git a/src/test/org/apache/hadoop/hbase/client/transactional/DisabledTestTransactions.java b/src/test/org/apache/hadoop/hbase/client/transactional/DisabledTestTransactions.java
new file mode 100644
index 0000000..0b453e0
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/client/transactional/DisabledTestTransactions.java
@@ -0,0 +1,143 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client.transactional;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.HBaseClusterTestCase;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.ipc.TransactionalRegionInterface;
+import org.apache.hadoop.hbase.regionserver.transactional.TransactionalRegionServer;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Tests the transaction functionality. This requires running a
+ * {@link TransactionalRegionServer}.
+ */
+public class DisabledTestTransactions extends HBaseClusterTestCase {
+
+ private static final String TABLE_NAME = "table1";
+
+ private static final byte[] FAMILY = Bytes.toBytes("family:");
+ private static final byte[] COL_A = Bytes.toBytes("family:a");
+
+ private static final byte[] ROW1 = Bytes.toBytes("row1");
+ private static final byte[] ROW2 = Bytes.toBytes("row2");
+ private static final byte[] ROW3 = Bytes.toBytes("row3");
+
+ private HBaseAdmin admin;
+ private TransactionalTable table;
+ private TransactionManager transactionManager;
+
+ /** constructor */
+ public DisabledTestTransactions() {
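+    // Point the mini cluster at the transactional region server and its RPC interface.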
+ conf.set(HConstants.REGION_SERVER_CLASS, TransactionalRegionInterface.class
+ .getName());
+ conf.set(HConstants.REGION_SERVER_IMPL, TransactionalRegionServer.class
+ .getName());
+ }
+
+ @Override
+ protected void setUp() throws Exception {
+ super.setUp();
+
+ HTableDescriptor desc = new HTableDescriptor(TABLE_NAME);
+ desc.addFamily(new HColumnDescriptor(FAMILY));
+ admin = new HBaseAdmin(conf);
+ admin.createTable(desc);
+ table = new TransactionalTable(conf, desc.getName());
+
+ transactionManager = new TransactionManager(conf);
+    writeInitialRow();
+ }
+
+  private void writeInitialRow() throws IOException {
+ BatchUpdate update = new BatchUpdate(ROW1);
+ update.put(COL_A, Bytes.toBytes(1));
+ table.commit(update);
+ }
+
+ public void testSimpleTransaction() throws IOException,
+ CommitUnsuccessfulException {
+ TransactionState transactionState = makeTransaction1();
+ transactionManager.tryCommit(transactionState);
+ }
+
+ public void testTwoTransactionsWithoutConflict() throws IOException,
+ CommitUnsuccessfulException {
+ TransactionState transactionState1 = makeTransaction1();
+ TransactionState transactionState2 = makeTransaction2();
+
+ transactionManager.tryCommit(transactionState1);
+ transactionManager.tryCommit(transactionState2);
+ }
+
+  public void testTwoTransactionsWithConflict() throws IOException,
+ CommitUnsuccessfulException {
+ TransactionState transactionState1 = makeTransaction1();
+ TransactionState transactionState2 = makeTransaction2();
+
+ transactionManager.tryCommit(transactionState2);
+
+ try {
+ transactionManager.tryCommit(transactionState1);
+ fail();
+ } catch (CommitUnsuccessfulException e) {
+ // Good
+ }
+ }
+
+  // Read ROW1:COL_A and write its value to ROW2:COL_A and ROW3:COL_A
+ private TransactionState makeTransaction1() throws IOException {
+ TransactionState transactionState = transactionManager.beginTransaction();
+
+ Cell row1_A = table.get(transactionState, ROW1, COL_A);
+
+ BatchUpdate write1 = new BatchUpdate(ROW2);
+ write1.put(COL_A, row1_A.getValue());
+ table.commit(transactionState, write1);
+
+ BatchUpdate write2 = new BatchUpdate(ROW3);
+ write2.put(COL_A, row1_A.getValue());
+ table.commit(transactionState, write2);
+
+ return transactionState;
+ }
+
+  // Read ROW1:COL_A, increment its (integer) value, and write it back
+ private TransactionState makeTransaction2() throws IOException {
+ TransactionState transactionState = transactionManager.beginTransaction();
+
+ Cell row1_A = table.get(transactionState, ROW1, COL_A);
+
+ int value = Bytes.toInt(row1_A.getValue());
+
+ BatchUpdate write = new BatchUpdate(ROW1);
+ write.put(COL_A, Bytes.toBytes(value + 1));
+ table.commit(transactionState, write);
+
+ return transactionState;
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/client/transactional/StressTestTransactions.java b/src/test/org/apache/hadoop/hbase/client/transactional/StressTestTransactions.java
new file mode 100644
index 0000000..92fec03
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/client/transactional/StressTestTransactions.java
@@ -0,0 +1,420 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client.transactional;
+
+import java.io.IOException;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Random;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import junit.framework.Assert;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseClusterTestCase;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.ipc.TransactionalRegionInterface;
+import org.apache.hadoop.hbase.regionserver.transactional.TransactionalRegionServer;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Stress-tests the transaction functionality. This requires running a
+ * {@link TransactionalRegionServer}. Many threads issue reads and writes
+ * that may conflict with each other. There are two types of transactions:
+ * those that operate on rows of a single table, and those that operate on
+ * rows across multiple tables. Each transaction type has a modification
+ * operation that changes two values while maintaining the sum. Each type
+ * also has a consistency-check operation that sums all rows and verifies
+ * that the sum is as expected.
+ */
+public class StressTestTransactions extends HBaseClusterTestCase {
+ protected static final Log LOG = LogFactory
+ .getLog(StressTestTransactions.class);
+
+ private static final int NUM_TABLES = 3;
+ private static final int NUM_ST_ROWS = 3;
+ private static final int NUM_MT_ROWS = 3;
+ private static final int NUM_TRANSACTIONS_PER_THREAD = 100;
+ private static final int NUM_SINGLE_TABLE_THREADS = 6;
+ private static final int NUM_MULTI_TABLE_THREADS = 6;
+ private static final int PRE_COMMIT_SLEEP = 10;
+ protected static final Random RAND = new Random();
+
+ private static final byte[] FAMILY = Bytes.toBytes("family:");
+ static final byte[] COL = Bytes.toBytes("family:a");
+
+ private HBaseAdmin admin;
+ protected TransactionalTable[] tables;
+ protected TransactionManager transactionManager;
+
+ /** constructor */
+ public StressTestTransactions() {
+ conf.set(HConstants.REGION_SERVER_CLASS, TransactionalRegionInterface.class
+ .getName());
+ conf.set(HConstants.REGION_SERVER_IMPL, TransactionalRegionServer.class
+ .getName());
+ }
+
+ @Override
+ protected void setUp() throws Exception {
+ super.setUp();
+
+ tables = new TransactionalTable[NUM_TABLES];
+
+ for (int i = 0; i < tables.length; i++) {
+ HTableDescriptor desc = new HTableDescriptor(makeTableName(i));
+ desc.addFamily(new HColumnDescriptor(FAMILY));
+ admin = new HBaseAdmin(conf);
+ admin.createTable(desc);
+ tables[i] = new TransactionalTable(conf, desc.getName());
+ }
+
+ transactionManager = new TransactionManager(conf);
+ }
+
+ private String makeTableName(final int i) {
+ return "table" + i;
+ }
+
+  private void writeInitialValues() throws IOException {
+ for (TransactionalTable table : tables) {
+ for (int i = 0; i < NUM_ST_ROWS; i++) {
+ byte[] row = makeSTRow(i);
+ BatchUpdate b = new BatchUpdate(row);
+ b.put(COL, Bytes.toBytes(SingleTableTransactionThread.INITIAL_VALUE));
+ table.commit(b);
+ }
+ for (int i = 0; i < NUM_MT_ROWS; i++) {
+ byte[] row = makeMTRow(i);
+ BatchUpdate b = new BatchUpdate(row);
+ b.put(COL, Bytes.toBytes(MultiTableTransactionThread.INITIAL_VALUE));
+ table.commit(b);
+ }
+ }
+ }
+
+ protected byte[] makeSTRow(final int i) {
+ return Bytes.toBytes("st" + i);
+ }
+
+ protected byte[] makeMTRow(final int i) {
+ return Bytes.toBytes("mt" + i);
+ }
+
+ static int nextThreadNum = 1;
+ protected static final AtomicBoolean stopRequest = new AtomicBoolean(false);
+ static final AtomicBoolean consistencyFailure = new AtomicBoolean(false);
+
+ // Thread which runs transactions
+ abstract class TransactionThread extends Thread {
+ private int numRuns = 0;
+ private int numAborts = 0;
+ private int numUnknowns = 0;
+
+ public TransactionThread(final String namePrefix) {
+ super.setName(namePrefix + "transaction " + nextThreadNum++);
+ }
+
+ @Override
+ public void run() {
+ for (int i = 0; i < NUM_TRANSACTIONS_PER_THREAD; i++) {
+ if (stopRequest.get()) {
+ return;
+ }
+ try {
+ numRuns++;
+ transaction();
+ } catch (UnknownTransactionException e) {
+ numUnknowns++;
+ } catch (IOException e) {
+ throw new RuntimeException(e);
+ } catch (CommitUnsuccessfulException e) {
+ numAborts++;
+ }
+ }
+ }
+
+ protected abstract void transaction() throws IOException,
+ CommitUnsuccessfulException;
+
+ public int getNumAborts() {
+ return numAborts;
+ }
+
+ public int getNumUnknowns() {
+ return numUnknowns;
+ }
+
+ protected void preCommitSleep() {
+ try {
+ Thread.sleep(PRE_COMMIT_SLEEP);
+ } catch (InterruptedException e) {
+ throw new RuntimeException(e);
+ }
+ }
+
+ protected void consistencyFailure() {
+ LOG.fatal("Consistency failure");
+ stopRequest.set(true);
+ consistencyFailure.set(true);
+ }
+
+ /**
+ * Get the numRuns.
+ *
+ * @return Return the numRuns.
+ */
+ public int getNumRuns() {
+ return numRuns;
+ }
+
+ }
+
+  // Atomically change the values of two rows while maintaining the sum.
+ // This should preserve the global sum of the rows, which is also checked
+ // with a transaction.
+ private class SingleTableTransactionThread extends TransactionThread {
+ private static final int INITIAL_VALUE = 10;
+ public static final int TOTAL_SUM = INITIAL_VALUE * NUM_ST_ROWS;
+ private static final int MAX_TRANSFER_AMT = 100;
+
+ private TransactionalTable table;
+ boolean doCheck = false;
+
+ public SingleTableTransactionThread() {
+ super("single table ");
+ }
+
+ @Override
+ protected void transaction() throws IOException,
+ CommitUnsuccessfulException {
+ if (doCheck) {
+ checkTotalSum();
+ } else {
+ doSingleRowChange();
+ }
+ doCheck = !doCheck;
+ }
+
+ private void doSingleRowChange() throws IOException,
+ CommitUnsuccessfulException {
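+      // Transfer a random amount between two distinct rows of a random table
+      // inside one transaction, leaving that table's total sum unchanged.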
+ table = tables[RAND.nextInt(NUM_TABLES)];
+ int transferAmount = RAND.nextInt(MAX_TRANSFER_AMT * 2)
+ - MAX_TRANSFER_AMT;
+ int row1Index = RAND.nextInt(NUM_ST_ROWS);
+ int row2Index;
+ do {
+ row2Index = RAND.nextInt(NUM_ST_ROWS);
+ } while (row2Index == row1Index);
+ byte[] row1 = makeSTRow(row1Index);
+ byte[] row2 = makeSTRow(row2Index);
+
+ TransactionState transactionState = transactionManager.beginTransaction();
+ int row1Amount = Bytes.toInt(table.get(transactionState, row1, COL)
+ .getValue());
+ int row2Amount = Bytes.toInt(table.get(transactionState, row2, COL)
+ .getValue());
+
+ row1Amount -= transferAmount;
+ row2Amount += transferAmount;
+
+ BatchUpdate update = new BatchUpdate(row1);
+ update.put(COL, Bytes.toBytes(row1Amount));
+ table.commit(transactionState, update);
+ update = new BatchUpdate(row2);
+ update.put(COL, Bytes.toBytes(row2Amount));
+ table.commit(transactionState, update);
+
+ super.preCommitSleep();
+
+ transactionManager.tryCommit(transactionState);
+      LOG.debug("Committed");
+ }
+
+ // Check the table we last mutated
+ private void checkTotalSum() throws IOException,
+ CommitUnsuccessfulException {
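+      // Sum every single-table row of the last-touched table in one transaction
+      // and flag a consistency failure if the total has drifted from TOTAL_SUM.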
+ TransactionState transactionState = transactionManager.beginTransaction();
+ int totalSum = 0;
+ for (int i = 0; i < NUM_ST_ROWS; i++) {
+ totalSum += Bytes.toInt(table.get(transactionState, makeSTRow(i), COL)
+ .getValue());
+ }
+
+ transactionManager.tryCommit(transactionState);
+ if (TOTAL_SUM != totalSum) {
+ super.consistencyFailure();
+ }
+ }
+
+ }
+
+ // Similar to SingleTable, but this time we maintain consistency across tables
+ // rather than rows
+ private class MultiTableTransactionThread extends TransactionThread {
+ private static final int INITIAL_VALUE = 1000;
+ public static final int TOTAL_SUM = INITIAL_VALUE * NUM_TABLES;
+ private static final int MAX_TRANSFER_AMT = 100;
+
+ private byte[] row;
+ boolean doCheck = false;
+
+ public MultiTableTransactionThread() {
+      super("multi table ");
+ }
+
+ @Override
+ protected void transaction() throws IOException,
+ CommitUnsuccessfulException {
+ if (doCheck) {
+ checkTotalSum();
+ } else {
+ doSingleRowChange();
+ }
+ doCheck = !doCheck;
+ }
+
+ private void doSingleRowChange() throws IOException,
+ CommitUnsuccessfulException {
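+      // Transfer a random amount for one multi-table row between two distinct
+      // tables inside one transaction, keeping that row's cross-table sum constant.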
+ row = makeMTRow(RAND.nextInt(NUM_MT_ROWS));
+ int transferAmount = RAND.nextInt(MAX_TRANSFER_AMT * 2)
+ - MAX_TRANSFER_AMT;
+ int table1Index = RAND.nextInt(tables.length);
+ int table2Index;
+ do {
+ table2Index = RAND.nextInt(tables.length);
+ } while (table2Index == table1Index);
+
+ TransactionalTable table1 = tables[table1Index];
+ TransactionalTable table2 = tables[table2Index];
+
+ TransactionState transactionState = transactionManager.beginTransaction();
+ int table1Amount = Bytes.toInt(table1.get(transactionState, row, COL)
+ .getValue());
+ int table2Amount = Bytes.toInt(table2.get(transactionState, row, COL)
+ .getValue());
+
+ table1Amount -= transferAmount;
+ table2Amount += transferAmount;
+
+ BatchUpdate update = new BatchUpdate(row);
+ update.put(COL, Bytes.toBytes(table1Amount));
+ table1.commit(transactionState, update);
+
+ update = new BatchUpdate(row);
+ update.put(COL, Bytes.toBytes(table2Amount));
+ table2.commit(transactionState, update);
+
+ super.preCommitSleep();
+
+ transactionManager.tryCommit(transactionState);
+
+ LOG.trace(Bytes.toString(table1.getTableName()) + ": " + table1Amount);
+ LOG.trace(Bytes.toString(table2.getTableName()) + ": " + table2Amount);
+
+ }
+
+ private void checkTotalSum() throws IOException,
+ CommitUnsuccessfulException {
+ TransactionState transactionState = transactionManager.beginTransaction();
+ int totalSum = 0;
+ int[] amounts = new int[tables.length];
+ for (int i = 0; i < tables.length; i++) {
+ int amount = Bytes.toInt(tables[i].get(transactionState, row, COL)
+ .getValue());
+ amounts[i] = amount;
+ totalSum += amount;
+ }
+
+ transactionManager.tryCommit(transactionState);
+
+ for (int i = 0; i < tables.length; i++) {
+ LOG.trace(Bytes.toString(tables[i].getTableName()) + ": " + amounts[i]);
+ }
+
+ if (TOTAL_SUM != totalSum) {
+ super.consistencyFailure();
+ }
+ }
+
+ }
+
+ public void testStressTransactions() throws IOException, InterruptedException {
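+    // Seed the tables, run the single-table and multi-table transaction threads
+    // to completion, then verify the invariants outside of any transaction.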
+    writeInitialValues();
+
+ List<TransactionThread> transactionThreads = new LinkedList<TransactionThread>();
+
+ for (int i = 0; i < NUM_SINGLE_TABLE_THREADS; i++) {
+ TransactionThread transactionThread = new SingleTableTransactionThread();
+ transactionThread.start();
+ transactionThreads.add(transactionThread);
+ }
+
+ for (int i = 0; i < NUM_MULTI_TABLE_THREADS; i++) {
+ TransactionThread transactionThread = new MultiTableTransactionThread();
+ transactionThread.start();
+ transactionThreads.add(transactionThread);
+ }
+
+ for (TransactionThread transactionThread : transactionThreads) {
+ transactionThread.join();
+ }
+
+ for (TransactionThread transactionThread : transactionThreads) {
+ LOG.info(transactionThread.getName() + " done with "
+ + transactionThread.getNumAborts() + " aborts, and "
+ + transactionThread.getNumUnknowns() + " unknown transactions of "
+ + transactionThread.getNumRuns());
+ }
+
+ doFinalConsistencyChecks();
+ }
+
+ private void doFinalConsistencyChecks() throws IOException {
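+    // Non-transactional verification: each table's single-table rows must sum to
+    // SingleTableTransactionThread.TOTAL_SUM, and each multi-table row must sum
+    // to MultiTableTransactionThread.TOTAL_SUM across all tables.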
+
+ int[] mtSums = new int[NUM_MT_ROWS];
+ for (int i = 0; i < mtSums.length; i++) {
+ mtSums[i] = 0;
+ }
+
+ for (TransactionalTable table : tables) {
+ int thisTableSum = 0;
+ for (int i = 0; i < NUM_ST_ROWS; i++) {
+ byte[] row = makeSTRow(i);
+ thisTableSum += Bytes.toInt(table.get(row, COL).getValue());
+ }
+ Assert.assertEquals(SingleTableTransactionThread.TOTAL_SUM, thisTableSum);
+
+ for (int i = 0; i < NUM_MT_ROWS; i++) {
+ byte[] row = makeMTRow(i);
+ mtSums[i] += Bytes.toInt(table.get(row, COL).getValue());
+ }
+ }
+
+ for (int mtSum : mtSums) {
+ Assert.assertEquals(MultiTableTransactionThread.TOTAL_SUM, mtSum);
+ }
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/filter/DisabledTestInclusiveStopRowFilter.java b/src/test/org/apache/hadoop/hbase/filter/DisabledTestInclusiveStopRowFilter.java
new file mode 100644
index 0000000..7a3656f
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/filter/DisabledTestInclusiveStopRowFilter.java
@@ -0,0 +1,94 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.util.List;
+
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import junit.framework.TestCase;
+
+/**
+ * Tests the inclusive stop row filter
+ */
+public class DisabledTestInclusiveStopRowFilter extends TestCase {
+ private final byte [] STOP_ROW = Bytes.toBytes("stop_row");
+ private final byte [] GOOD_ROW = Bytes.toBytes("good_row");
+ private final byte [] PAST_STOP_ROW = Bytes.toBytes("zzzzzz");
+
+ RowFilterInterface mainFilter;
+
+ @Override
+ protected void setUp() throws Exception {
+ super.setUp();
+ mainFilter = new InclusiveStopRowFilter(STOP_ROW);
+ }
+
+ /**
+ * Tests identification of the stop row
+ * @throws Exception
+ */
+ public void testStopRowIdentification() throws Exception {
+ stopRowTests(mainFilter);
+ }
+
+ /**
+ * Tests serialization
+ * @throws Exception
+ */
+ public void testSerialization() throws Exception {
+ // Decompose mainFilter to bytes.
+ ByteArrayOutputStream stream = new ByteArrayOutputStream();
+ DataOutputStream out = new DataOutputStream(stream);
+ mainFilter.write(out);
+ out.close();
+ byte[] buffer = stream.toByteArray();
+
+ // Recompose mainFilter.
+ DataInputStream in = new DataInputStream(new ByteArrayInputStream(buffer));
+ RowFilterInterface newFilter = new InclusiveStopRowFilter();
+ newFilter.readFields(in);
+
+ // Ensure the serialization preserved the filter by running a full test.
+ stopRowTests(newFilter);
+ }
+
+ private void stopRowTests(RowFilterInterface filter) throws Exception {
+ assertFalse("Filtering on " + Bytes.toString(GOOD_ROW), filter.filterRowKey(GOOD_ROW));
+ assertFalse("Filtering on " + Bytes.toString(STOP_ROW), filter.filterRowKey(STOP_ROW));
+ assertTrue("Filtering on " + Bytes.toString(PAST_STOP_ROW), filter.filterRowKey(PAST_STOP_ROW));
+
+ assertFalse("Filtering on " + Bytes.toString(GOOD_ROW), filter.filterColumn(GOOD_ROW, null,
+ null));
+ assertFalse("Filtering on " + Bytes.toString(STOP_ROW), filter.filterColumn(STOP_ROW, null, null));
+ assertTrue("Filtering on " + Bytes.toString(PAST_STOP_ROW), filter.filterColumn(PAST_STOP_ROW,
+ null, null));
+
+ assertFalse("FilterAllRemaining", filter.filterAllRemaining());
+ assertFalse("FilterNotNull", filter.filterRow((List<KeyValue>)null));
+
+ assertFalse("Filter a null", filter.filterRowKey(null));
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/filter/DisabledTestPageRowFilter.java b/src/test/org/apache/hadoop/hbase/filter/DisabledTestPageRowFilter.java
new file mode 100644
index 0000000..3c0fdfb
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/filter/DisabledTestPageRowFilter.java
@@ -0,0 +1,98 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+
+import org.apache.hadoop.hbase.util.Bytes;
+
+
+import junit.framework.TestCase;
+
+/**
+ * Tests for the page row filter
+ */
+public class DisabledTestPageRowFilter extends TestCase {
+
+ RowFilterInterface mainFilter;
+ static final int ROW_LIMIT = 3;
+
+ @Override
+ protected void setUp() throws Exception {
+ super.setUp();
+ mainFilter = new PageRowFilter(ROW_LIMIT);
+ }
+
+ /**
+   * Tests the page size filter
+ * @throws Exception
+ */
+ public void testPageSize() throws Exception {
+ pageSizeTests(mainFilter);
+ }
+
+ /**
+ * Test filter serialization
+ * @throws Exception
+ */
+ public void testSerialization() throws Exception {
+ // Decompose mainFilter to bytes.
+ ByteArrayOutputStream stream = new ByteArrayOutputStream();
+ DataOutputStream out = new DataOutputStream(stream);
+ mainFilter.write(out);
+ out.close();
+ byte[] buffer = stream.toByteArray();
+
+ // Recompose mainFilter.
+ DataInputStream in = new DataInputStream(new ByteArrayInputStream(buffer));
+ RowFilterInterface newFilter = new PageRowFilter();
+ newFilter.readFields(in);
+
+ // Ensure the serialization preserved the filter by running a full test.
+ pageSizeTests(newFilter);
+ }
+
+ private void pageSizeTests(RowFilterInterface filter) throws Exception {
+ testFiltersBeyondPageSize(filter, ROW_LIMIT);
+ // Test reset works by going in again.
+ filter.reset();
+ testFiltersBeyondPageSize(filter, ROW_LIMIT);
+ }
+
+ private void testFiltersBeyondPageSize(final RowFilterInterface filter,
+ final int pageSize) {
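+    // Feed twice the page limit: rows within the limit must pass, and once the
+    // limit is reached the filter must reject the rest and filterAllRemaining()
+    // must report true.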
+ for (int i = 0; i < (pageSize * 2); i++) {
+ byte [] row = Bytes.toBytes(Integer.toString(i));
+ boolean filterOut = filter.filterRowKey(row);
+ if (!filterOut) {
+ assertFalse("Disagrees with 'filter'", filter.filterAllRemaining());
+ } else {
+ // Once we have all for a page, calls to filterAllRemaining should
+ // stay true.
+ assertTrue("Disagrees with 'filter'", filter.filterAllRemaining());
+ assertTrue(i >= pageSize);
+ }
+ filter.rowProcessed(filterOut, row);
+ }
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/filter/DisabledTestPrefixRowFilter.java b/src/test/org/apache/hadoop/hbase/filter/DisabledTestPrefixRowFilter.java
new file mode 100644
index 0000000..b6af8f2
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/filter/DisabledTestPrefixRowFilter.java
@@ -0,0 +1,99 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.UnsupportedEncodingException;
+
+import junit.framework.TestCase;
+
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Tests for a prefix row filter
+ */
+public class DisabledTestPrefixRowFilter extends TestCase {
+ RowFilterInterface mainFilter;
+ static final char FIRST_CHAR = 'a';
+ static final char LAST_CHAR = 'e';
+ static final String HOST_PREFIX = "org.apache.site-";
+ static byte [] GOOD_BYTES = null;
+
+ static {
+ try {
+ GOOD_BYTES = "abc".getBytes(HConstants.UTF8_ENCODING);
+ } catch (UnsupportedEncodingException e) {
+ fail();
+ }
+ }
+
+ @Override
+ protected void setUp() throws Exception {
+ super.setUp();
+ this.mainFilter = new PrefixRowFilter(Bytes.toBytes(HOST_PREFIX));
+ }
+
+ /**
+   * Tests filtering using a prefix on the row key
+ * @throws Exception
+ */
+ public void testPrefixOnRow() throws Exception {
+ prefixRowTests(mainFilter);
+ }
+
+ /**
+ * Test serialization
+ * @throws Exception
+ */
+ public void testSerialization() throws Exception {
+ // Decompose mainFilter to bytes.
+ ByteArrayOutputStream stream = new ByteArrayOutputStream();
+ DataOutputStream out = new DataOutputStream(stream);
+ mainFilter.write(out);
+ out.close();
+ byte[] buffer = stream.toByteArray();
+
+ // Recompose filter.
+ DataInputStream in = new DataInputStream(new ByteArrayInputStream(buffer));
+ RowFilterInterface newFilter = new PrefixRowFilter();
+ newFilter.readFields(in);
+
+ // Ensure the serialization preserved the filter by running all test.
+ prefixRowTests(newFilter);
+ }
+
+ private void prefixRowTests(RowFilterInterface filter) throws Exception {
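+    // Rows that start with HOST_PREFIX must pass; a row without the prefix must
+    // be filtered out.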
+ for (char c = FIRST_CHAR; c <= LAST_CHAR; c++) {
+ byte [] t = createRow(c);
+      assertFalse("Failed with character " + c, filter.filterRowKey(t));
+ }
+ String yahooSite = "com.yahoo.www";
+ assertTrue("Failed with character " +
+ yahooSite, filter.filterRowKey(Bytes.toBytes(yahooSite)));
+ }
+
+ private byte [] createRow(final char c) {
+ return Bytes.toBytes(HOST_PREFIX + Character.toString(c));
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/filter/DisabledTestRegExpRowFilter.java b/src/test/org/apache/hadoop/hbase/filter/DisabledTestRegExpRowFilter.java
new file mode 100644
index 0000000..7a9a115
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/filter/DisabledTestRegExpRowFilter.java
@@ -0,0 +1,199 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.UnsupportedEncodingException;
+import java.util.Map;
+import java.util.TreeMap;
+
+import junit.framework.TestCase;
+
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Tests for regular expression row filter
+ */
+public class DisabledTestRegExpRowFilter extends TestCase {
+ TreeMap<byte [], Cell> colvalues;
+ RowFilterInterface mainFilter;
+ static final char FIRST_CHAR = 'a';
+ static final char LAST_CHAR = 'e';
+ static final String HOST_PREFIX = "org.apache.site-";
+ static byte [] GOOD_BYTES = null;
+
+ static {
+ try {
+ GOOD_BYTES = "abc".getBytes(HConstants.UTF8_ENCODING);
+ } catch (UnsupportedEncodingException e) {
+ fail();
+ }
+ }
+
+ @Override
+ protected void setUp() throws Exception {
+ super.setUp();
+ this.colvalues = new TreeMap<byte [], Cell>(Bytes.BYTES_COMPARATOR);
+ for (char c = FIRST_CHAR; c < LAST_CHAR; c++) {
+ colvalues.put(Bytes.toBytes(new String(new char [] {c})),
+ new Cell(GOOD_BYTES, HConstants.LATEST_TIMESTAMP));
+ }
+ this.mainFilter = new RegExpRowFilter(HOST_PREFIX + ".*", colvalues);
+ }
+
+ /**
+ * Tests filtering using a regex on the row key
+ * @throws Exception
+ */
+ public void testRegexOnRow() throws Exception {
+ regexRowTests(mainFilter);
+ }
+
+ /**
+   * Tests filtering using a regex on row and column
+ * @throws Exception
+ */
+ public void testRegexOnRowAndColumn() throws Exception {
+ regexRowColumnTests(mainFilter);
+ }
+
+ /**
+ * Only return values that are not null
+ * @throws Exception
+ */
+ public void testFilterNotNull() throws Exception {
+ filterNotNullTests(mainFilter);
+ }
+
+ /**
+ * Test serialization
+ * @throws Exception
+ */
+ public void testSerialization() throws Exception {
+ // Decompose mainFilter to bytes.
+ ByteArrayOutputStream stream = new ByteArrayOutputStream();
+ DataOutputStream out = new DataOutputStream(stream);
+ mainFilter.write(out);
+ out.close();
+ byte[] buffer = stream.toByteArray();
+
+ // Recompose filter.
+ DataInputStream in = new DataInputStream(new ByteArrayInputStream(buffer));
+ RowFilterInterface newFilter = new RegExpRowFilter();
+ newFilter.readFields(in);
+
+ // Ensure the serialization preserved the filter by running all test.
+ regexRowTests(newFilter);
+ newFilter.reset();
+ regexRowColumnTests(newFilter);
+ newFilter.reset();
+ filterNotNullTests(newFilter);
+ }
+
+ private void regexRowTests(RowFilterInterface filter) throws Exception {
+ for (char c = FIRST_CHAR; c <= LAST_CHAR; c++) {
+ byte [] t = createRow(c);
+      assertFalse("Failed with character " + c, filter.filterRowKey(t));
+ }
+ String yahooSite = "com.yahoo.www";
+ assertTrue("Failed with character " +
+ yahooSite, filter.filterRowKey(Bytes.toBytes(yahooSite)));
+ }
+
+ private void regexRowColumnTests(RowFilterInterface filter)
+ throws UnsupportedEncodingException {
+
+ for (char c = FIRST_CHAR; c <= LAST_CHAR; c++) {
+ byte [] t = createRow(c);
+ for (Map.Entry<byte [], Cell> e: this.colvalues.entrySet()) {
+ assertFalse("Failed on " + c,
+ filter.filterColumn(t, e.getKey(), e.getValue().getValue()));
+ }
+ }
+ // Try a row and column I know will pass.
+ char c = 'c';
+ byte [] r = createRow(c);
+ byte [] col = Bytes.toBytes(Character.toString(c));
+ assertFalse("Failed with character " + c,
+ filter.filterColumn(r, col, GOOD_BYTES));
+
+ // Do same but with bad bytes.
+ assertTrue("Failed with character " + c,
+ filter.filterColumn(r, col, "badbytes".getBytes(HConstants.UTF8_ENCODING)));
+
+ // Do with good bytes but bad column name. Should not filter out.
+ assertFalse("Failed with character " + c,
+ filter.filterColumn(r, Bytes.toBytes("badcolumn"), GOOD_BYTES));
+
+ // Good column, good bytes but bad row.
+ assertTrue("Failed with character " + c,
+ filter.filterColumn(Bytes.toBytes("bad row"),
+ Bytes.toBytes("badcolumn"), GOOD_BYTES));
+ }
+
+ private void filterNotNullTests(RowFilterInterface filter) throws Exception {
+ // Modify the filter to expect certain columns to be null:
+ // Expecting a row WITH columnKeys: a-d, WITHOUT columnKey: e
+ ((RegExpRowFilter)filter).setColumnFilter(new byte [] {LAST_CHAR}, null);
+
+ char secondToLast = (char)(LAST_CHAR - 1);
+ char thirdToLast = (char)(LAST_CHAR - 2);
+
+ // Modify the row to be missing an expected columnKey (d)
+ colvalues.remove(new byte [] {(byte)secondToLast});
+
+ // Try a row that is missing an expected columnKey.
+ // Testing row with columnKeys: a-c
+ assertTrue("Failed with last columnKey " + thirdToLast, filter.
+ filterRow(colvalues));
+
+ // Try a row that has all expected columnKeys, and NO null-expected
+ // columnKeys.
+ // Testing row with columnKeys: a-d
+ colvalues.put(new byte [] {(byte)secondToLast},
+ new Cell(GOOD_BYTES, HConstants.LATEST_TIMESTAMP));
+ assertFalse("Failed with last columnKey " + secondToLast, filter.
+ filterRow(colvalues));
+
+ // Try a row that has all expected columnKeys AND a null-expected columnKey.
+ // Testing row with columnKeys: a-e
+ colvalues.put(new byte [] {LAST_CHAR},
+ new Cell(GOOD_BYTES, HConstants.LATEST_TIMESTAMP));
+ assertTrue("Failed with last columnKey " + LAST_CHAR, filter.
+ filterRow(colvalues));
+
+ // Try a row that has all expected columnKeys and a null-expected columnKey
+ // that maps to a null value.
+ // Testing row with columnKeys: a-e, e maps to null
+// colvalues.put(new byte [] {LAST_CHAR},
+// new Cell(HLogEdit.DELETED_BYTES, HConstants.LATEST_TIMESTAMP));
+// assertFalse("Failed with last columnKey " + LAST_CHAR + " mapping to null.",
+// filter.filterRow(colvalues));
+ }
+
+ private byte [] createRow(final char c) {
+ return Bytes.toBytes(HOST_PREFIX + Character.toString(c));
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/filter/DisabledTestRowFilterAfterWrite.java b/src/test/org/apache/hadoop/hbase/filter/DisabledTestRowFilterAfterWrite.java
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/filter/DisabledTestRowFilterAfterWrite.java
diff --git a/src/test/org/apache/hadoop/hbase/filter/DisabledTestRowFilterOnMultipleFamilies.java b/src/test/org/apache/hadoop/hbase/filter/DisabledTestRowFilterOnMultipleFamilies.java
new file mode 100644
index 0000000..b16e3bb
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/filter/DisabledTestRowFilterOnMultipleFamilies.java
@@ -0,0 +1,128 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.filter;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+import junit.framework.Assert;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseClusterTestCase;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Scanner;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Test for regexp filters (HBASE-527)
+ */
+public class DisabledTestRowFilterOnMultipleFamilies extends HBaseClusterTestCase {
+ private static final Log LOG = LogFactory.getLog(DisabledTestRowFilterOnMultipleFamilies.class.getName());
+
+ static final String TABLE_NAME = "TestTable";
+ static final String COLUMN1 = "A:col1";
+ static final byte [] TEXT_COLUMN1 = Bytes.toBytes(COLUMN1);
+ static final String COLUMN2 = "B:col2";
+ static final byte [] TEXT_COLUMN2 = Bytes.toBytes(COLUMN2);
+
+ private static final byte [][] columns = {TEXT_COLUMN1, TEXT_COLUMN2};
+
+ private static final int NUM_ROWS = 10;
+ private static final byte[] VALUE = "HELLO".getBytes();
+
+ /** @throws IOException */
+ public void testMultipleFamilies() throws IOException {
+ HTableDescriptor desc = new HTableDescriptor(TABLE_NAME);
+ desc.addFamily(new HColumnDescriptor("A:"));
+ desc.addFamily(new HColumnDescriptor("B:"));
+
+ // Create a table.
+ HBaseAdmin admin = new HBaseAdmin(this.conf);
+ admin.createTable(desc);
+
+ // insert some data into the test table
+ HTable table = new HTable(conf, TABLE_NAME);
+
+ for (int i = 0; i < NUM_ROWS; i++) {
+ BatchUpdate b = new BatchUpdate("row_" + String.format("%1$05d", i));
+ b.put(TEXT_COLUMN1, VALUE);
+ b.put(TEXT_COLUMN2, String.format("%1$05d", i).getBytes());
+ table.commit(b);
+ }
+
+    LOG.info("Print table contents using scanner for " + TABLE_NAME);
+    scanTable(TABLE_NAME, true);
+    LOG.info("Print table contents using scanner+filter for " + TABLE_NAME);
+    scanTableWithRowFilter(TABLE_NAME, true);
+ }
+
+ private void scanTable(final String tableName, final boolean printValues) throws IOException {
+ HTable table = new HTable(conf, tableName);
+ Scanner scanner = table.getScanner(columns, HConstants.EMPTY_START_ROW);
+ int numFound = doScan(scanner, printValues);
+ Assert.assertEquals(NUM_ROWS, numFound);
+ }
+
+ private void scanTableWithRowFilter(final String tableName, final boolean printValues) throws IOException {
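+    // Scan with a RegExpRowFilter that has no row expression, only a column-value
+    // requirement (COLUMN1 == VALUE); every row was written with that value, so
+    // all NUM_ROWS rows should still come back.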
+ HTable table = new HTable(conf, tableName);
+ Map<byte [], Cell> columnMap = new HashMap<byte [], Cell>();
+ columnMap.put(TEXT_COLUMN1,
+ new Cell(VALUE, HConstants.LATEST_TIMESTAMP));
+ RegExpRowFilter filter = new RegExpRowFilter(null, columnMap);
+ Scanner scanner = table.getScanner(columns, HConstants.EMPTY_START_ROW, filter);
+ int numFound = doScan(scanner, printValues);
+ Assert.assertEquals(NUM_ROWS, numFound);
+ }
+
+ private int doScan(final Scanner scanner, final boolean printValues) throws IOException {
+ {
+ int count = 0;
+
+ try {
+ for (RowResult result : scanner) {
+ if (printValues) {
+ LOG.info("row: " + Bytes.toString(result.getRow()));
+
+ for (Map.Entry<byte [], Cell> e : result.entrySet()) {
+            LOG.info("  column: " + Bytes.toString(e.getKey()) + " value: "
+ + new String(e.getValue().getValue(), HConstants.UTF8_ENCODING));
+ }
+ }
+ Assert.assertEquals(2, result.size());
+ count++;
+ }
+
+ } finally {
+ scanner.close();
+ }
+ return count;
+ }
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/filter/DisabledTestRowFilterSet.java b/src/test/org/apache/hadoop/hbase/filter/DisabledTestRowFilterSet.java
new file mode 100644
index 0000000..4ef67c5
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/filter/DisabledTestRowFilterSet.java
@@ -0,0 +1,188 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.UnsupportedEncodingException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.TreeMap;
+
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.util.Bytes;
+
+
+import junit.framework.TestCase;
+
+/**
+ * Tests filter sets
+ */
+public class DisabledTestRowFilterSet extends TestCase {
+
+ RowFilterInterface filterMPALL;
+ RowFilterInterface filterMPONE;
+ static final int MAX_PAGES = 5;
+ static final char FIRST_CHAR = 'a';
+ static final char LAST_CHAR = 'e';
+ TreeMap<byte [], Cell> colvalues;
+ static byte[] GOOD_BYTES = null;
+ static byte[] BAD_BYTES = null;
+
+ static {
+ try {
+ GOOD_BYTES = "abc".getBytes(HConstants.UTF8_ENCODING);
+ BAD_BYTES = "def".getBytes(HConstants.UTF8_ENCODING);
+ } catch (UnsupportedEncodingException e) {
+ fail();
+ }
+ }
+
+ @Override
+ protected void setUp() throws Exception {
+ super.setUp();
+
+ colvalues = new TreeMap<byte [], Cell>(Bytes.BYTES_COMPARATOR);
+ for (char c = FIRST_CHAR; c < LAST_CHAR; c++) {
+ colvalues.put(new byte [] {(byte)c},
+ new Cell(GOOD_BYTES, HConstants.LATEST_TIMESTAMP));
+ }
+
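+    // Compose four member filters (a page limit, a regex filter with expected
+    // column values, and two while-match wrappers: stop row "yyy" and rows
+    // matching ".*match.*"), then combine them under MUST_PASS_ALL and
+    // MUST_PASS_ONE.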
+ Set<RowFilterInterface> filters = new HashSet<RowFilterInterface>();
+ filters.add(new PageRowFilter(MAX_PAGES));
+ filters.add(new RegExpRowFilter(".*regex.*", colvalues));
+ filters.add(new WhileMatchRowFilter(new StopRowFilter(Bytes.toBytes("yyy"))));
+ filters.add(new WhileMatchRowFilter(new RegExpRowFilter(".*match.*")));
+ filterMPALL = new RowFilterSet(RowFilterSet.Operator.MUST_PASS_ALL,
+ filters);
+ filterMPONE = new RowFilterSet(RowFilterSet.Operator.MUST_PASS_ONE,
+ filters);
+ }
+
+ /**
+ * Test "must pass one"
+ * @throws Exception
+ */
+ public void testMPONE() throws Exception {
+ MPONETests(filterMPONE);
+ }
+
+ /**
+ * Test "must pass all"
+ * @throws Exception
+ */
+ public void testMPALL() throws Exception {
+ MPALLTests(filterMPALL);
+ }
+
+ /**
+ * Test serialization
+ * @throws Exception
+ */
+ public void testSerialization() throws Exception {
+ // Decompose filterMPALL to bytes.
+ ByteArrayOutputStream stream = new ByteArrayOutputStream();
+ DataOutputStream out = new DataOutputStream(stream);
+ filterMPALL.write(out);
+ out.close();
+ byte[] buffer = stream.toByteArray();
+
+ // Recompose filterMPALL.
+ DataInputStream in = new DataInputStream(new ByteArrayInputStream(buffer));
+ RowFilterInterface newFilter = new RowFilterSet();
+ newFilter.readFields(in);
+
+ // Ensure the serialization preserved the filter by running a full test.
+ MPALLTests(newFilter);
+ }
+
+ private void MPONETests(RowFilterInterface filter) throws Exception {
+ // A row that shouldn't cause any filters to return true.
+ RFSAssertion(filter, "regex_match", false);
+
+ // A row that should cause the WhileMatchRowFilter to filter all remaining.
+ RFSAssertion(filter, "regex_only", false);
+
+ // Make sure the overall filterAllRemaining is unchanged (correct for
+ // MUST_PASS_ONE).
+ assertFalse(filter.filterAllRemaining());
+
+ // A row that should cause the RegExpRowFilter to fail and the
+ // StopRowFilter to filter all remaining.
+ RFSAssertion(filter, "yyy_match", false);
+
+ // Accept several more rows such that PageRowFilter will exceed its limit.
+ for (int i=0; i<=MAX_PAGES-3; i++)
+ filter.rowProcessed(false, Bytes.toBytes("unimportant_key"));
+
+ // A row that should cause the RegExpRowFilter to filter this row, making
+ // all the filters return true and thus the RowFilterSet as well.
+ RFSAssertion(filter, "bad_column", true);
+
+ // Make sure the overall filterAllRemaining is unchanged (correct for
+ // MUST_PASS_ONE).
+ assertFalse(filter.filterAllRemaining());
+ }
+
+ private void MPALLTests(RowFilterInterface filter) throws Exception {
+ // A row that shouldn't cause any filters to return true.
+ RFSAssertion(filter, "regex_match", false);
+
+ // A row that should cause WhileMatchRowFilter to filter all remaining.
+ RFSAssertion(filter, "regex_only", true);
+
+ // Make sure the overall filterAllRemaining is changed (correct for
+ // MUST_PASS_ALL).
+ RFSAssertReset(filter);
+
+ // A row that should cause the RegExpRowFilter to fail and the
+ // StopRowFilter to filter all remaining.
+ RFSAssertion(filter, "yyy_match", true);
+
+ // Make sure the overall filterAllRemaining is changed (correct for
+ // MUST_PASS_ALL).
+ RFSAssertReset(filter);
+
+ // A row that should cause the RegExpRowFilter to fail.
+ boolean filtered = filter.filterColumn(Bytes.toBytes("regex_match"),
+ new byte [] { FIRST_CHAR }, BAD_BYTES);
+ assertTrue("Filtering on 'regex_match' and bad column data.", filtered);
+    filter.rowProcessed(filtered, Bytes.toBytes("regex_match"));
+ }
+
+ private void RFSAssertion(RowFilterInterface filter, String toTest,
+ boolean assertTrue) throws Exception {
+ byte [] testText = Bytes.toBytes(toTest);
+ boolean filtered = filter.filterRowKey(testText);
+ assertTrue("Filtering on '" + toTest + "'",
+ assertTrue? filtered : !filtered);
+ filter.rowProcessed(filtered, testText);
+ }
+
+ private void RFSAssertReset(RowFilterInterface filter) throws Exception{
+ assertTrue(filter.filterAllRemaining());
+ // Reset for continued testing
+ filter.reset();
+ assertFalse(filter.filterAllRemaining());
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/filter/TestColumnValueFilter.java b/src/test/org/apache/hadoop/hbase/filter/TestColumnValueFilter.java
new file mode 100755
index 0000000..15ee448
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/filter/TestColumnValueFilter.java
@@ -0,0 +1,153 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.util.List;
+
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import junit.framework.TestCase;
+
+/**
+ * Tests the column value filter
+ */
+public class TestColumnValueFilter extends TestCase {
+
+ private static final byte[] ROW = Bytes.toBytes("test");
+ private static final byte[] COLUMN = Bytes.toBytes("test:foo");
+ private static final byte[] VAL_1 = Bytes.toBytes("a");
+ private static final byte[] VAL_2 = Bytes.toBytes("ab");
+ private static final byte[] VAL_3 = Bytes.toBytes("abc");
+ private static final byte[] VAL_4 = Bytes.toBytes("abcd");
+ private static final byte[] FULLSTRING_1 =
+ Bytes.toBytes("The quick brown fox jumps over the lazy dog.");
+ private static final byte[] FULLSTRING_2 =
+ Bytes.toBytes("The slow grey fox trips over the lazy dog.");
+ private static final String QUICK_SUBSTR = "quick";
+ private static final String QUICK_REGEX = ".+quick.+";
+
+ private RowFilterInterface basicFilterNew() {
+ return new ColumnValueFilter(COLUMN,
+ ColumnValueFilter.CompareOp.GREATER_OR_EQUAL, VAL_2);
+ }
+
+ private RowFilterInterface substrFilterNew() {
+ return new ColumnValueFilter(COLUMN, ColumnValueFilter.CompareOp.EQUAL,
+ new SubstringComparator(QUICK_SUBSTR));
+ }
+
+ private RowFilterInterface regexFilterNew() {
+ return new ColumnValueFilter(COLUMN, ColumnValueFilter.CompareOp.EQUAL,
+ new RegexStringComparator(QUICK_REGEX));
+ }
+
+ private void basicFilterTests(RowFilterInterface filter)
+ throws Exception {
+ assertTrue("basicFilter1", filter.filterColumn(ROW, 0, ROW.length,
+ COLUMN, 0, COLUMN.length, VAL_1, 0, VAL_1.length));
+ assertFalse("basicFilter2", filter.filterColumn(ROW, 0, ROW.length,
+ COLUMN, 0, COLUMN.length, VAL_2, 0, VAL_2.length));
+ assertFalse("basicFilter3", filter.filterColumn(ROW, 0, ROW.length,
+ COLUMN, 0, COLUMN.length, VAL_3, 0, VAL_3.length));
+ assertFalse("basicFilter4", filter.filterColumn(ROW, 0, ROW.length,
+ COLUMN, 0, COLUMN.length, VAL_4, 0, VAL_4.length));
+ assertFalse("basicFilterAllRemaining", filter.filterAllRemaining());
+ assertFalse("basicFilterNotNull", filter.filterRow((List<KeyValue>)null));
+ }
+
+ private void substrFilterTests(RowFilterInterface filter)
+ throws Exception {
+ assertFalse("substrTrue", filter.filterColumn(ROW, 0, ROW.length,
+ COLUMN, 0, COLUMN.length, FULLSTRING_1, 0, FULLSTRING_1.length));
+ assertTrue("substrFalse", filter.filterColumn(ROW, 0, ROW.length,
+ COLUMN, 0, COLUMN.length, FULLSTRING_2, 0, FULLSTRING_2.length));
+ assertFalse("substrFilterAllRemaining", filter.filterAllRemaining());
+ assertFalse("substrFilterNotNull", filter.filterRow((List<KeyValue>)null));
+ }
+
+ private void regexFilterTests(RowFilterInterface filter)
+ throws Exception {
+ assertFalse("regexTrue", filter.filterColumn(ROW, 0, ROW.length,
+ COLUMN, 0, COLUMN.length, FULLSTRING_1, 0, FULLSTRING_1.length));
+ assertTrue("regexFalse", filter.filterColumn(ROW, 0, ROW.length,
+ COLUMN, 0, COLUMN.length, FULLSTRING_2, 0, FULLSTRING_2.length));
+ assertFalse("regexFilterAllRemaining", filter.filterAllRemaining());
+ assertFalse("regexFilterNotNull", filter.filterRow((List<KeyValue>)null));
+ }
+
+ private RowFilterInterface serializationTest(RowFilterInterface filter)
+ throws Exception {
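+    // Round-trip the filter through its Writable serialization and hand back the
+    // reconstructed copy so callers can re-run the behavioral tests against it.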
+ // Decompose filter to bytes.
+ ByteArrayOutputStream stream = new ByteArrayOutputStream();
+ DataOutputStream out = new DataOutputStream(stream);
+ filter.write(out);
+ out.close();
+ byte[] buffer = stream.toByteArray();
+
+ // Recompose filter.
+ DataInputStream in =
+ new DataInputStream(new ByteArrayInputStream(buffer));
+ RowFilterInterface newFilter = new ColumnValueFilter();
+ newFilter.readFields(in);
+
+ return newFilter;
+ }
+
+ RowFilterInterface basicFilter;
+ RowFilterInterface substrFilter;
+ RowFilterInterface regexFilter;
+
+ @Override
+ protected void setUp() throws Exception {
+ super.setUp();
+ basicFilter = basicFilterNew();
+ substrFilter = substrFilterNew();
+ regexFilter = regexFilterNew();
+ }
+
+ /**
+   * Tests the basic, substring and regex column value filters
+ * @throws Exception
+ */
+ public void testStop() throws Exception {
+ basicFilterTests(basicFilter);
+ substrFilterTests(substrFilter);
+ regexFilterTests(regexFilter);
+ }
+
+ /**
+ * Tests serialization
+ * @throws Exception
+ */
+ public void testSerialization() throws Exception {
+ RowFilterInterface newFilter = serializationTest(basicFilter);
+ basicFilterTests(newFilter);
+ newFilter = serializationTest(substrFilter);
+ substrFilterTests(newFilter);
+ newFilter = serializationTest(regexFilter);
+ regexFilterTests(newFilter);
+ }
+
+}
diff --git a/src/test/org/apache/hadoop/hbase/filter/TestStopRowFilter.java b/src/test/org/apache/hadoop/hbase/filter/TestStopRowFilter.java
new file mode 100644
index 0000000..97cc317
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/filter/TestStopRowFilter.java
@@ -0,0 +1,98 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.util.List;
+
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import junit.framework.TestCase;
+
+/**
+ * Tests the stop row filter
+ */
+public class TestStopRowFilter extends TestCase {
+ private final byte [] STOP_ROW = Bytes.toBytes("stop_row");
+ private final byte [] GOOD_ROW = Bytes.toBytes("good_row");
+ private final byte [] PAST_STOP_ROW = Bytes.toBytes("zzzzzz");
+
+ RowFilterInterface mainFilter;
+
+ @Override
+ protected void setUp() throws Exception {
+ super.setUp();
+ mainFilter = new StopRowFilter(STOP_ROW);
+ }
+
+ /**
+ * Tests identification of the stop row
+ * @throws Exception
+ */
+ public void testStopRowIdentification() throws Exception {
+ stopRowTests(mainFilter);
+ }
+
+ /**
+ * Tests serialization
+ * @throws Exception
+ */
+ public void testSerialization() throws Exception {
+ // Decompose mainFilter to bytes.
+ ByteArrayOutputStream stream = new ByteArrayOutputStream();
+ DataOutputStream out = new DataOutputStream(stream);
+ mainFilter.write(out);
+ out.close();
+ byte[] buffer = stream.toByteArray();
+
+ // Recompose mainFilter.
+ DataInputStream in = new DataInputStream(new ByteArrayInputStream(buffer));
+ RowFilterInterface newFilter = new StopRowFilter();
+ newFilter.readFields(in);
+
+ // Ensure the serialization preserved the filter by running a full test.
+ stopRowTests(newFilter);
+ }
+
+ private void stopRowTests(RowFilterInterface filter) throws Exception {
+ assertFalse("Filtering on " + Bytes.toString(GOOD_ROW),
+ filter.filterRowKey(GOOD_ROW, 0, GOOD_ROW.length));
+ assertTrue("Filtering on " + Bytes.toString(STOP_ROW),
+ filter.filterRowKey(STOP_ROW, 0, STOP_ROW.length));
+ assertTrue("Filtering on " + Bytes.toString(PAST_STOP_ROW),
+ filter.filterRowKey(PAST_STOP_ROW, 0, PAST_STOP_ROW.length));
+ assertFalse("Filtering on " + Bytes.toString(GOOD_ROW),
+ filter.filterColumn(GOOD_ROW, 0, GOOD_ROW.length, null, 0, 0,
+ null, 0, 0));
+ assertTrue("Filtering on " + Bytes.toString(STOP_ROW),
+ filter.filterColumn(STOP_ROW, 0, STOP_ROW.length, null, 0, 0, null, 0, 0));
+ assertTrue("Filtering on " + Bytes.toString(PAST_STOP_ROW),
+ filter.filterColumn(PAST_STOP_ROW, 0, PAST_STOP_ROW.length, null, 0, 0,
+ null, 0, 0));
+ assertFalse("FilterAllRemaining", filter.filterAllRemaining());
+ assertFalse("FilterNotNull", filter.filterRow((List<KeyValue>)null));
+
+ assertFalse("Filter a null", filter.filterRowKey(null, 0, 0));
+ }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/filter/TestWhileMatchRowFilter.java b/src/test/org/apache/hadoop/hbase/filter/TestWhileMatchRowFilter.java
new file mode 100644
index 0000000..146e474
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/filter/TestWhileMatchRowFilter.java
@@ -0,0 +1,157 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.util.List;
+
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import junit.framework.TestCase;
+
+
+/**
+ * Tests for the while-match filter
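+ * <p>
+ * As exercised below, WhileMatchRowFilter wraps another RowFilterInterface
+ * and, once the wrapped filter rejects a row, starts reporting
+ * filterAllRemaining() as true until reset() is called.</p>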
+ */
+public class TestWhileMatchRowFilter extends TestCase {
+
+ WhileMatchRowFilter wmStopRowFilter;
+ WhileMatchRowFilter wmRegExpRowFilter;
+
+ @Override
+ protected void setUp() throws Exception {
+ super.setUp();
+ wmStopRowFilter = new WhileMatchRowFilter(new StopRowFilter(
+ Bytes.toBytes("s")));
+ wmRegExpRowFilter = new WhileMatchRowFilter(new RegExpRowFilter(
+ ".*regex.*"));
+ }
+
+ /**
+ * Tests while match stop row
+ * @throws Exception
+ */
+ public void testWhileMatchStopRow() throws Exception {
+ whileMatchStopRowTests(wmStopRowFilter);
+ }
+
+ /**
+ * Tests while match regex
+ * @throws Exception
+ */
+ public void testWhileMatchRegExp() throws Exception {
+ whileMatchRegExpTests(wmRegExpRowFilter);
+ }
+
+ /**
+ * Tests serialization
+ * @throws Exception
+ */
+ public void testSerialization() throws Exception {
+ // Decompose wmRegExpRowFilter to bytes.
+ ByteArrayOutputStream stream = new ByteArrayOutputStream();
+ DataOutputStream out = new DataOutputStream(stream);
+ wmRegExpRowFilter.write(out);
+ out.close();
+ byte[] buffer = stream.toByteArray();
+
+ // Recompose wmRegExpRowFilter.
+ DataInputStream in = new DataInputStream(new ByteArrayInputStream(buffer));
+ WhileMatchRowFilter newFilter = new WhileMatchRowFilter();
+ newFilter.readFields(in);
+
+ // Ensure the serialization preserved the filter by running a full test.
+ whileMatchRegExpTests(newFilter);
+ }
+
+ private void whileMatchStopRowTests(WhileMatchRowFilter filter) throws
+ Exception {
+ RowFilterInterface innerFilter = filter.getInternalFilter();
+ String toTest;
+
+ // Test cases that should pass the row
+ toTest = "apples";
+ assertFalse("filter: '" + toTest + "'", filter.filterRowKey(Bytes.toBytes(toTest)));
+ byte [] toTestBytes = Bytes.toBytes(toTest);
+ assertFalse("innerFilter: '" + toTest + "'",
+ innerFilter.filterRowKey(toTestBytes, 0, toTestBytes.length));
+
+ // Test cases that should fail the row
+ toTest = "tuna";
+ toTestBytes = Bytes.toBytes(toTest);
+ assertTrue("filter: '" + toTest + "'", filter.filterRowKey(toTestBytes));
+ assertTrue("innerFilter: '" + toTest + "'",
+ innerFilter.filterRowKey(toTestBytes, 0, toTestBytes.length));
+
+ // The key difference: the wrapper has flipped filterAllRemaining to true,
+ // while the wrapped inner filter has not.
+ assertTrue("filter: filterAllRemaining", filter.filterAllRemaining());
+ assertFalse("innerFilter: filterAllRemaining pre-reset",
+ innerFilter.filterAllRemaining());
+
+ // Test resetting
+ filter.reset();
+ assertFalse("filter: filterAllRemaining post-reset",
+ filter.filterAllRemaining());
+
+ // Exercise filterRow for functionality only (switch behavior not checked here)
+ assertFalse("filter: filterNotNull", filter.filterRow((List<KeyValue>)null));
+ }
+
+ private void whileMatchRegExpTests(WhileMatchRowFilter filter) throws
+ Exception {
+ RowFilterInterface innerFilter = filter.getInternalFilter();
+ String toTest;
+
+ // Test cases that should pass the row
+ toTest = "regex_match";
+ byte [] toTestBytes = Bytes.toBytes(toTest);
+ assertFalse("filter: '" + toTest + "'", filter.filterRowKey(Bytes.toBytes(toTest)));
+ assertFalse("innerFilter: '" + toTest + "'",
+ innerFilter.filterRowKey(toTestBytes, 0, toTestBytes.length));
+
+ // Test cases that should fail the row
+ toTest = "not_a_match";
+ toTestBytes = Bytes.toBytes(toTest);
+ assertTrue("filter: '" + toTest + "'", filter.filterRowKey(Bytes.toBytes(toTest)));
+ assertTrue("innerFilter: '" + toTest + "'",
+ innerFilter.filterRowKey(toTestBytes, 0, toTestBytes.length));
+
+ // The key difference: the wrapper has flipped filterAllRemaining to true,
+ // while the wrapped inner filter has not.
+ assertTrue("filter: filterAllRemaining", filter.filterAllRemaining());
+ assertFalse("innerFilter: filterAllRemaining pre-reset",
+ innerFilter.filterAllRemaining());
+
+ // Test resetting
+ filter.reset();
+ assertFalse("filter: filterAllRemaining post-reset",
+ filter.filterAllRemaining());
+
+ // Exercise filterColumn for functionality only (switch behavior not checked here)
+ toTest = "asdf_regex_hjkl";
+ toTestBytes = Bytes.toBytes(toTest);
+ assertFalse("filter: '" + toTest + "'",
+ filter.filterColumn(toTestBytes, 0, toTestBytes.length,
+ null, 0, 0, null, 0, 0));
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/io/TestHbaseObjectWritable.java b/src/test/org/apache/hadoop/hbase/io/TestHbaseObjectWritable.java
new file mode 100644
index 0000000..5a547a6
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/io/TestHbaseObjectWritable.java
@@ -0,0 +1,98 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.File;
+import java.io.IOException;
+
+import junit.framework.TestCase;
+
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.filter.RowFilterInterface;
+import org.apache.hadoop.hbase.filter.StopRowFilter;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.WritableComparator;
+
+public class TestHbaseObjectWritable extends TestCase {
+
+ @Override
+ protected void setUp() throws Exception {
+ super.setUp();
+ }
+
+ @Override
+ protected void tearDown() throws Exception {
+ super.tearDown();
+ }
+
+ @SuppressWarnings("boxing")
+ public void testReadObjectDataInputConfiguration() throws IOException {
+ HBaseConfiguration conf = new HBaseConfiguration();
+ // Do primitive type
+ final int COUNT = 101;
+ assertTrue(doType(conf, COUNT, int.class).equals(COUNT));
+ // Do array
+ final byte [] testing = "testing".getBytes();
+ byte [] result = (byte [])doType(conf, testing, testing.getClass());
+ assertTrue(WritableComparator.compareBytes(testing, 0, testing.length,
+ result, 0, result.length) == 0);
+ // Do unsupported type.
+ boolean exception = false;
+ try {
+ doType(conf, new File("a"), File.class);
+ } catch (UnsupportedOperationException uoe) {
+ exception = true;
+ }
+ assertTrue(exception);
+ // Try odd types
+ final byte A = 'A';
+ byte [] bytes = new byte[1];
+ bytes[0] = A;
+ Object obj = doType(conf, bytes, byte [].class);
+ assertTrue(((byte [])obj)[0] == A);
+ // Do 'known' Writable type.
+ obj = doType(conf, new Text(""), Text.class);
+ assertTrue(obj instanceof Text);
+ // Try type that should get transferred old fashion way.
+ obj = doType(conf, new StopRowFilter(HConstants.EMPTY_BYTE_ARRAY),
+ RowFilterInterface.class);
+ assertTrue(obj instanceof StopRowFilter);
+ }
+
+ private Object doType(final HBaseConfiguration conf, final Object value,
+ final Class<?> clazz)
+ throws IOException {
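+ // Round-trip helper: serialize 'value' with HbaseObjectWritable.writeObject
+ // and read it back with readObject, returning the reconstituted object.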
+ ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
+ DataOutputStream out = new DataOutputStream(byteStream);
+ HbaseObjectWritable.writeObject(out, value, clazz, conf);
+ out.close();
+ ByteArrayInputStream bais =
+ new ByteArrayInputStream(byteStream.toByteArray());
+ DataInputStream dis = new DataInputStream(bais);
+ Object product = HbaseObjectWritable.readObject(dis, conf);
+ dis.close();
+ return product;
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/io/hfile/KVGenerator.java b/src/test/org/apache/hadoop/hbase/io/hfile/KVGenerator.java
new file mode 100644
index 0000000..ca8b80a
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/io/hfile/KVGenerator.java
@@ -0,0 +1,111 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.util.Random;
+
+import org.apache.hadoop.io.BytesWritable;
+import org.apache.hadoop.io.WritableComparator;
+
+/**
+ * Generate random <key, value> pairs.
+ * <p>
+ * Copied from
+ * <a href="https://issues.apache.org/jira/browse/HADOOP-3315">hadoop-3315 tfile</a>.
+ * Remove after tfile is committed and use the tfile version of this class
+ * instead.</p>
+ */
+class KVGenerator {
+ private final Random random;
+ private final byte[][] dict;
+ private final boolean sorted;
+ private final RandomDistribution.DiscreteRNG keyLenRNG, valLenRNG;
+ private BytesWritable lastKey;
+ private static final int MIN_KEY_LEN = 4;
+ private final byte prefix[] = new byte[MIN_KEY_LEN];
+
+ public KVGenerator(Random random, boolean sorted,
+ RandomDistribution.DiscreteRNG keyLenRNG,
+ RandomDistribution.DiscreteRNG valLenRNG,
+ RandomDistribution.DiscreteRNG wordLenRNG, int dictSize) {
+ this.random = random;
+ dict = new byte[dictSize][];
+ this.sorted = sorted;
+ this.keyLenRNG = keyLenRNG;
+ this.valLenRNG = valLenRNG;
+ for (int i = 0; i < dictSize; ++i) {
+ int wordLen = wordLenRNG.nextInt();
+ dict[i] = new byte[wordLen];
+ random.nextBytes(dict[i]);
+ }
+ lastKey = new BytesWritable();
+ fillKey(lastKey);
+ }
+
+ private void fillKey(BytesWritable o) {
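+ // Build a key of at least MIN_KEY_LEN bytes: the tail is filled from random
+ // dictionary words, while the first MIN_KEY_LEN bytes hold a prefix that is
+ // bumped whenever sorted output is requested and the new suffix would
+ // otherwise sort before the previous key, keeping keys non-decreasing.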
+ int len = keyLenRNG.nextInt();
+ if (len < MIN_KEY_LEN) len = MIN_KEY_LEN;
+ o.setSize(len);
+ int n = MIN_KEY_LEN;
+ while (n < len) {
+ byte[] word = dict[random.nextInt(dict.length)];
+ int l = Math.min(word.length, len - n);
+ System.arraycopy(word, 0, o.get(), n, l);
+ n += l;
+ }
+ if (sorted
+ && WritableComparator.compareBytes(lastKey.get(), MIN_KEY_LEN,
+ lastKey.getSize() - MIN_KEY_LEN,
+ o.get(), MIN_KEY_LEN, o.getSize() - MIN_KEY_LEN) > 0) {
+ incrementPrefix();
+ }
+
+ System.arraycopy(prefix, 0, o.get(), 0, MIN_KEY_LEN);
+ lastKey.set(o);
+ }
+
+ private void fillValue(BytesWritable o) {
+ int len = valLenRNG.nextInt();
+ o.setSize(len);
+ int n = 0;
+ while (n < len) {
+ byte[] word = dict[random.nextInt(dict.length)];
+ int l = Math.min(word.length, len - n);
+ System.arraycopy(word, 0, o.get(), n, l);
+ n += l;
+ }
+ }
+
+ private void incrementPrefix() {
+ for (int i = MIN_KEY_LEN - 1; i >= 0; --i) {
+ ++prefix[i];
+ if (prefix[i] != 0) return;
+ }
+
+ throw new RuntimeException("Prefix overflowed");
+ }
+
+ public void next(BytesWritable key, BytesWritable value, boolean dupKey) {
+ if (dupKey) {
+ key.set(lastKey);
+ }
+ else {
+ fillKey(key);
+ }
+ fillValue(value);
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/io/hfile/KeySampler.java b/src/test/org/apache/hadoop/hbase/io/hfile/KeySampler.java
new file mode 100644
index 0000000..e6cf763
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/io/hfile/KeySampler.java
@@ -0,0 +1,62 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.util.Random;
+
+import org.apache.hadoop.io.BytesWritable;
+import org.apache.hadoop.hbase.io.hfile.RandomDistribution.DiscreteRNG;
+
+/**
+ * Generates random keys whose four-byte prefix lies between the prefixes of a
+ * given first and last key.
+ * <p>
+ * Copied from
+ * <a href="https://issues.apache.org/jira/browse/HADOOP-3315">hadoop-3315 tfile</a>.
+ * Remove after tfile is committed and use the tfile version of this class
+ * instead.</p>
+ */
+class KeySampler {
+ Random random;
+ int min, max;
+ DiscreteRNG keyLenRNG;
+ private static final int MIN_KEY_LEN = 4;
+
+ public KeySampler(Random random, byte [] first, byte [] last,
+ DiscreteRNG keyLenRNG) {
+ this.random = random;
+ min = keyPrefixToInt(first);
+ max = keyPrefixToInt(last);
+ this.keyLenRNG = keyLenRNG;
+ }
+
+ private int keyPrefixToInt(byte [] key) {
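+ // Interpret the first four bytes of the key as a big-endian int; the
+ // sampled prefixes are then drawn uniformly from [min, max) in next().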
+ byte[] b = key;
+ int o = 0;
+ return (b[o] & 0xff) << 24 | (b[o + 1] & 0xff) << 16
+ | (b[o + 2] & 0xff) << 8 | (b[o + 3] & 0xff);
+ }
+
+ public void next(BytesWritable key) {
+ key.setSize(Math.max(MIN_KEY_LEN, keyLenRNG.nextInt()));
+ random.nextBytes(key.get());
+ int n = random.nextInt(max - min) + min;
+ byte[] b = key.get();
+ b[0] = (byte) (n >> 24);
+ b[1] = (byte) (n >> 16);
+ b[2] = (byte) (n >> 8);
+ b[3] = (byte) n;
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/io/hfile/NanoTimer.java b/src/test/org/apache/hadoop/hbase/io/hfile/NanoTimer.java
new file mode 100644
index 0000000..1312da0
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/io/hfile/NanoTimer.java
@@ -0,0 +1,198 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+/**
+ * A nano-second timer.
+ * <p>
+ * Copied from
+ * <a href="https://issues.apache.org/jira/browse/HADOOP-3315">hadoop-3315 tfile</a>.
+ * Remove after tfile is committed and use the tfile version of this class
+ * instead.</p>
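+ * <p>
+ * Typical usage (illustrative sketch): create with {@code new NanoTimer(false)},
+ * bracket the timed section with {@code start()} and {@code stop()} (possibly
+ * several times to accumulate), then read the elapsed nanoseconds via
+ * {@code read()} or format them with {@code toString()}.</p>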
+ */
+public class NanoTimer {
+ private long last = -1;
+ private boolean started = false;
+ private long cumulate = 0;
+
+ /**
+ * Constructor
+ *
+ * @param start
+ * Start the timer upon construction.
+ */
+ public NanoTimer(boolean start) {
+ if (start) this.start();
+ }
+
+ /**
+ * Start the timer.
+ *
+ * Note: No effect if timer is already started.
+ */
+ public void start() {
+ if (!this.started) {
+ this.last = System.nanoTime();
+ this.started = true;
+ }
+ }
+
+ /**
+ * Stop the timer.
+ *
+ * Note: No effect if timer is already stopped.
+ */
+ public void stop() {
+ if (this.started) {
+ this.started = false;
+ this.cumulate += System.nanoTime() - this.last;
+ }
+ }
+
+ /**
+ * Read the timer.
+ *
+ * @return the elapsed time in nanoseconds, or -1 if the timer has never
+ * been started.
+ */
+ public long read() {
+ if (!readable()) return -1;
+
+ return this.cumulate;
+ }
+
+ /**
+ * Reset the timer.
+ */
+ public void reset() {
+ this.last = -1;
+ this.started = false;
+ this.cumulate = 0;
+ }
+
+ /**
+ * Check whether the timer is started.
+ *
+ * @return true if timer is started.
+ */
+ public boolean isStarted() {
+ return this.started;
+ }
+
+ /**
+ * Format the elapsed time to a human understandable string.
+ *
+ * Note: If timer is never started, "ERR" will be returned.
+ */
+ public String toString() {
+ if (!readable()) {
+ return "ERR";
+ }
+
+ return NanoTimer.nanoTimeToString(this.cumulate);
+ }
+
+ /**
+ * A utility method to format a time duration in nanoseconds into a
+ * human-readable string.
+ *
+ * @param t
+ * Time duration in nano seconds.
+ * @return String representation.
+ */
+ public static String nanoTimeToString(long t) {
+ if (t < 0) return "ERR";
+
+ if (t == 0) return "0";
+
+ if (t < 1000) {
+ return t + "ns";
+ }
+
+ double us = (double) t / 1000;
+ if (us < 1000) {
+ return String.format("%.2fus", us);
+ }
+
+ double ms = us / 1000;
+ if (ms < 1000) {
+ return String.format("%.2fms", ms);
+ }
+
+ double ss = ms / 1000;
+ if (ss < 1000) {
+ return String.format("%.2fs", ss);
+ }
+
+ long mm = (long) ss / 60;
+ ss -= mm * 60;
+ long hh = mm / 60;
+ mm -= hh * 60;
+ long dd = hh / 24;
+ hh -= dd * 24;
+
+ if (dd > 0) {
+ return String.format("%dd%dh", dd, hh);
+ }
+
+ if (hh > 0) {
+ return String.format("%dh%dm", hh, mm);
+ }
+
+ if (mm > 0) {
+ return String.format("%dm%.1fs", mm, ss);
+ }
+
+ return String.format("%.2fs", ss);
+
+ /**
+ * StringBuilder sb = new StringBuilder(); String sep = "";
+ *
+ * if (dd > 0) { String unit = (dd > 1) ? "days" : "day";
+ * sb.append(String.format("%s%d%s", sep, dd, unit)); sep = " "; }
+ *
+ * if (hh > 0) { String unit = (hh > 1) ? "hrs" : "hr";
+ * sb.append(String.format("%s%d%s", sep, hh, unit)); sep = " "; }
+ *
+ * if (mm > 0) { String unit = (mm > 1) ? "mins" : "min";
+ * sb.append(String.format("%s%d%s", sep, mm, unit)); sep = " "; }
+ *
+ * if (ss > 0) { String unit = (ss > 1) ? "secs" : "sec";
+ * sb.append(String.format("%s%.3f%s", sep, ss, unit)); sep = " "; }
+ *
+ * return sb.toString();
+ */
+ }
+
+ private boolean readable() {
+ return this.last != -1;
+ }
+
+ /**
+ * Simple tester.
+ *
+ * @param args
+ */
+ public static void main(String[] args) {
+ long i = 7;
+
+ for (int x = 0; x < 20; ++x, i *= 7) {
+ System.out.println(NanoTimer.nanoTimeToString(i));
+ }
+ }
+}
+
diff --git a/src/test/org/apache/hadoop/hbase/io/hfile/RandomDistribution.java b/src/test/org/apache/hadoop/hbase/io/hfile/RandomDistribution.java
new file mode 100644
index 0000000..3219664
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/io/hfile/RandomDistribution.java
@@ -0,0 +1,271 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Random;
+
+/**
+ * A class that generates random numbers that follow some distribution.
+ * <p>
+ * Copied from
+ * <a href="https://issues.apache.org/jira/browse/HADOOP-3315">hadoop-3315 tfile</a>.
+ * Remove after tfile is committed and use the tfile version of this class
+ * instead.</p>
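+ * <p>
+ * Illustrative example (not from the original source): {@code new
+ * RandomDistribution.Flat(rng, 5, 20).nextInt()} yields a uniformly
+ * distributed word length in [5, 20), which is how TestHFileSeek sizes the
+ * dictionary words for its KVGenerator.</p>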
+ */
+public class RandomDistribution {
+ /**
+ * Interface for discrete (integer) random distributions.
+ */
+ public static interface DiscreteRNG {
+ /**
+ * Get the next random number
+ *
+ * @return the next random number.
+ */
+ public int nextInt();
+ }
+
+ /**
+ * P(i)=1/(max-min)
+ */
+ public static final class Flat implements DiscreteRNG {
+ private final Random random;
+ private final int min;
+ private final int max;
+
+ /**
+ * Generate random integers from min (inclusive) to max (exclusive)
+ * following even distribution.
+ *
+ * @param random
+ * The basic random number generator.
+ * @param min
+ * Minimum integer
+ * @param max
+ * maximum integer (exclusive).
+ *
+ */
+ public Flat(Random random, int min, int max) {
+ if (min >= max) {
+ throw new IllegalArgumentException("Invalid range");
+ }
+ this.random = random;
+ this.min = min;
+ this.max = max;
+ }
+
+ /**
+ * @see DiscreteRNG#nextInt()
+ */
+ @Override
+ public int nextInt() {
+ return random.nextInt(max - min) + min;
+ }
+ }
+
+ /**
+ * Zipf distribution. The ratio of the probabilities of integer i and j is
+ * defined as follows:
+ *
+ * P(i)/P(j)=((j-min+1)/(i-min+1))^sigma.
+ */
+ public static final class Zipf implements DiscreteRNG {
+ private static final double DEFAULT_EPSILON = 0.001;
+ private final Random random;
+ private final ArrayList<Integer> k;
+ private final ArrayList<Double> v;
+
+ /**
+ * Constructor
+ *
+ * @param r
+ * The random number generator.
+ * @param min
+ * minimum integer (inclusive)
+ * @param max
+ * maximum integer (exclusive)
+ * @param sigma
+ * parameter sigma. (sigma > 1.0)
+ */
+ public Zipf(Random r, int min, int max, double sigma) {
+ this(r, min, max, sigma, DEFAULT_EPSILON);
+ }
+
+ /**
+ * Constructor.
+ *
+ * @param r
+ * The random number generator.
+ * @param min
+ * minimum integer (inclusive)
+ * @param max
+ * maximum integer (exclusive)
+ * @param sigma
+ * parameter sigma. (sigma > 1.0)
+ * @param epsilon
+ * Allowable error percentage (0 < epsilon < 1.0).
+ */
+ public Zipf(Random r, int min, int max, double sigma, double epsilon) {
+ if ((max <= min) || (sigma <= 1) || (epsilon <= 0)
+ || (epsilon >= 0.5)) {
+ throw new IllegalArgumentException("Invalid arguments");
+ }
+ random = r;
+ k = new ArrayList<Integer>();
+ v = new ArrayList<Double>();
+
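+ // Build a compressed cumulative table: k holds representative integers and
+ // v the running probability mass P(x) ~ (x - min + 1)^(-sigma); a new entry
+ // is recorded only once i exceeds last / (1 - epsilon), which bounds the
+ // table size while keeping the approximation error near epsilon.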
+ double sum = 0;
+ int last = -1;
+ for (int i = min; i < max; ++i) {
+ sum += Math.exp(-sigma * Math.log(i - min + 1));
+ if ((last == -1) || i * (1 - epsilon) > last) {
+ k.add(i);
+ v.add(sum);
+ last = i;
+ }
+ }
+
+ if (last != max - 1) {
+ k.add(max - 1);
+ v.add(sum);
+ }
+
+ v.set(v.size() - 1, 1.0);
+
+ for (int i = v.size() - 2; i >= 0; --i) {
+ v.set(i, v.get(i) / sum);
+ }
+ }
+
+ /**
+ * @see DiscreteRNG#nextInt()
+ */
+ @Override
+ public int nextInt() {
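+ // Inverse-CDF sampling: draw d in [0, 1), binary-search the cumulative
+ // table, then pick a value within the matched bucket.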
+ double d = random.nextDouble();
+ int idx = Collections.binarySearch(v, d);
+
+ if (idx > 0) {
+ ++idx;
+ }
+ else {
+ idx = -(idx + 1);
+ }
+
+ if (idx >= v.size()) {
+ idx = v.size() - 1;
+ }
+
+ if (idx == 0) {
+ return k.get(0);
+ }
+
+ int ceiling = k.get(idx);
+ int lower = k.get(idx - 1);
+
+ return ceiling - random.nextInt(ceiling - lower);
+ }
+ }
+
+ /**
+ * Binomial distribution.
+ *
+ * P(k)=select(n, k)*p^k*(1-p)^(n-k) (k = 0, 1, ..., n)
+ *
+ * P(k)=select(max-min-1, k-min)*p^(k-min)*(1-p)^(k-min)*(1-p)^(max-k-1)
+ */
+ public static final class Binomial implements DiscreteRNG {
+ private final Random random;
+ private final int min;
+ private final int n;
+ private final double[] v;
+
+ private static double select(int n, int k) {
+ double ret = 1.0;
+ for (int i = k + 1; i <= n; ++i) {
+ ret *= (double) i / (i - k);
+ }
+ return ret;
+ }
+
+ private static double power(double p, int k) {
+ return Math.exp(k * Math.log(p));
+ }
+
+ /**
+ * Generate random integers from min (inclusive) to max (exclusive)
+ * following Binomial distribution.
+ *
+ * @param random
+ * The basic random number generator.
+ * @param min
+ * Minimum integer
+ * @param max
+ * maximum integer (exclusive).
+ * @param p
+ * parameter.
+ *
+ */
+ public Binomial(Random random, int min, int max, double p) {
+ if (min >= max) {
+ throw new IllegalArgumentException("Invalid range");
+ }
+ this.random = random;
+ this.min = min;
+ this.n = max - min - 1;
+ if (n > 0) {
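+ // Precompute and normalize the cumulative distribution of Binomial(n, p);
+ // nextInt() then samples it by inverse-CDF lookup.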
+ v = new double[n + 1];
+ double sum = 0.0;
+ for (int i = 0; i <= n; ++i) {
+ sum += select(n, i) * power(p, i) * power(1 - p, n - i);
+ v[i] = sum;
+ }
+ for (int i = 0; i <= n; ++i) {
+ v[i] /= sum;
+ }
+ }
+ else {
+ v = null;
+ }
+ }
+
+ /**
+ * @see DiscreteRNG#nextInt()
+ */
+ @Override
+ public int nextInt() {
+ if (v == null) {
+ return min;
+ }
+ double d = random.nextDouble();
+ int idx = Arrays.binarySearch(v, d);
+ if (idx > 0) {
+ ++idx;
+ } else {
+ idx = -(idx + 1);
+ }
+
+ if (idx >= v.length) {
+ idx = v.length - 1;
+ }
+ return idx + min;
+ }
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/io/hfile/RandomSeek.java b/src/test/org/apache/hadoop/hbase/io/hfile/RandomSeek.java
new file mode 100644
index 0000000..2845c07
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/io/hfile/RandomSeek.java
@@ -0,0 +1,124 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.io.BufferedReader;
+import java.io.FileReader;
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Random;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.LocalFileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RawLocalFileSystem;
+import org.apache.hadoop.hbase.io.hfile.HFile.Reader;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Random seek test.
+ */
+public class RandomSeek {
+ private static List<String> slurp(String fname) throws IOException {
+ BufferedReader istream = new BufferedReader(new FileReader(fname));
+ String str;
+ List<String> l = new ArrayList<String>();
+ while ( (str=istream.readLine()) != null) {
+ String [] parts = str.split(",");
+ l.add(parts[0] + ":" + parts[1] + ":" + parts[2]);
+ }
+ return l;
+ }
+ private static String randKey(List<String> keys) {
+ Random r = new Random();
+ //return keys.get(r.nextInt(keys.size()));
+ return "2" + Integer.toString(7+r.nextInt(2)) + Integer.toString(r.nextInt(100));
+ //return new String(r.nextInt(100));
+ }
+
+ public static void main(String [] argv) throws IOException {
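+ // Rough local benchmark: open a hard-coded HFile on the local filesystem,
+ // then repeatedly seek to semi-random keys, scan up to 1000 rows after each
+ // seek, and print block-cache and seek-throughput stats every 1000 seeks.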
+ Configuration conf = new Configuration();
+ conf.setInt("io.file.buffer.size", 64*1024);
+ RawLocalFileSystem rlfs = new RawLocalFileSystem();
+ rlfs.setConf(conf);
+ LocalFileSystem lfs = new LocalFileSystem(rlfs);
+
+ Path path = new Path("/Users/ryan/rfile.big.txt");
+ long start = System.currentTimeMillis();
+ SimpleBlockCache cache = new SimpleBlockCache();
+ //LruBlockCache cache = new LruBlockCache();
+ Reader reader = new HFile.Reader(lfs, path, cache);
+ reader.loadFileInfo();
+ System.out.println(reader.trailer);
+ long end = System.currentTimeMillis();
+
+ System.out.println("Index read time: " + (end - start));
+
+ List<String> keys = slurp("/Users/ryan/xaa.50k");
+
+ HFileScanner scanner = reader.getScanner();
+ int count;
+ long totalBytes = 0;
+ int notFound = 0;
+
+ start = System.nanoTime();
+ for(count = 0; count < 500000; ++count) {
+ String key = randKey(keys);
+ byte [] bkey = Bytes.toBytes(key);
+ int res = scanner.seekTo(bkey);
+ if (res == 0) {
+ ByteBuffer k = scanner.getKey();
+ ByteBuffer v = scanner.getValue();
+ totalBytes += k.limit();
+ totalBytes += v.limit();
+ } else {
+ ++ notFound;
+ }
+ if (res == -1) {
+ scanner.seekTo();
+ }
+ // Scan for another 1000 rows.
+ for (int i = 0; i < 1000; ++i) {
+ if (!scanner.next())
+ break;
+ ByteBuffer k = scanner.getKey();
+ ByteBuffer v = scanner.getValue();
+ totalBytes += k.limit();
+ totalBytes += v.limit();
+ }
+
+ if ( count % 1000 == 0 ) {
+ end = System.nanoTime();
+
+ System.out.println("Cache block count: " + cache.size() + " dumped: "+ cache.dumps);
+ //System.out.println("Cache size: " + cache.heapSize());
+ double msTime = ((end - start) / 1000000.0);
+ System.out.println("Seeked: "+ count + " in " + msTime + " (ms) "
+ + (1000.0 / msTime ) + " seeks/ms "
+ + (msTime / 1000.0) + " ms/seek");
+
+ start = System.nanoTime();
+ }
+ }
+ System.out.println("Total bytes: " + totalBytes + " not found: " + notFound);
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/io/hfile/TestHFile.java b/src/test/org/apache/hadoop/hbase/io/hfile/TestHFile.java
new file mode 100644
index 0000000..ed589f8
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/io/hfile/TestHFile.java
@@ -0,0 +1,247 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.Arrays;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.hadoop.hbase.io.hfile.HFile.Reader;
+import org.apache.hadoop.hbase.io.hfile.HFile.Writer;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.RawComparator;
+
+/**
+ * Test HFile features.
+ * <p>
+ * Copied from
+ * <a href="https://issues.apache.org/jira/browse/HADOOP-3315">hadoop-3315 tfile</a>.
+ * Remove after tfile is committed and use the tfile version of this class
+ * instead.</p>
+ */
+public class TestHFile extends HBaseTestCase {
+ static final Log LOG = LogFactory.getLog(TestHFile.class);
+
+ private static String ROOT_DIR =
+ System.getProperty("test.build.data", "/tmp/TestHFile");
+ private final int minBlockSize = 512;
+ private static String localFormatter = "%010d";
+
+ // Write some records into the hfile.
+ private int writeSomeRecords(Writer writer, int start, int n)
+ throws IOException {
+ String value = "value";
+ for (int i = start; i < (start + n); i++) {
+ String key = String.format(localFormatter, Integer.valueOf(i));
+ writer.append(Bytes.toBytes(key), Bytes.toBytes(value + key));
+ }
+ return (start + n);
+ }
+
+ private void readAllRecords(HFileScanner scanner) throws IOException {
+ readAndCheckbytes(scanner, 0, 100);
+ }
+
+ // read the records and check
+ private int readAndCheckbytes(HFileScanner scanner, int start, int n)
+ throws IOException {
+ String value = "value";
+ int i = start;
+ for (; i < (start + n); i++) {
+ ByteBuffer key = scanner.getKey();
+ ByteBuffer val = scanner.getValue();
+ String keyStr = String.format(localFormatter, Integer.valueOf(i));
+ String valStr = value + keyStr;
+ byte [] keyBytes = Bytes.toBytes(key);
+ assertTrue("bytes for keys do not match " + keyStr + " " +
+ Bytes.toString(Bytes.toBytes(key)),
+ Arrays.equals(Bytes.toBytes(keyStr), keyBytes));
+ byte [] valBytes = Bytes.toBytes(val);
+ assertTrue("bytes for vals do not match " + valStr + " " +
+ Bytes.toString(valBytes),
+ Arrays.equals(Bytes.toBytes(valStr), valBytes));
+ if (!scanner.next()) {
+ break;
+ }
+ }
+ assertEquals(i, start + n - 1);
+ return (start + n);
+ }
+
+ private byte[] getSomeKey(int rowId) {
+ return String.format(localFormatter, Integer.valueOf(rowId)).getBytes();
+ }
+
+ private void writeRecords(Writer writer) throws IOException {
+ writeSomeRecords(writer, 0, 100);
+ writer.close();
+ }
+
+ private FSDataOutputStream createFSOutput(Path name) throws IOException {
+ if (fs.exists(name)) fs.delete(name, true);
+ FSDataOutputStream fout = fs.create(name);
+ return fout;
+ }
+
+ /**
+ * Write 100 records with the given compression codec, then read them back
+ * and verify seeking works.
+ */
+ void basicWithSomeCodec(String codec) throws IOException {
+ Path ncTFile = new Path(ROOT_DIR, "basic.hfile");
+ FSDataOutputStream fout = createFSOutput(ncTFile);
+ Writer writer = new Writer(fout, minBlockSize,
+ Compression.getCompressionAlgorithmByName(codec), null, false);
+ LOG.info(writer);
+ writeRecords(writer);
+ fout.close();
+ FSDataInputStream fin = fs.open(ncTFile);
+ Reader reader = new Reader(fs.open(ncTFile),
+ fs.getFileStatus(ncTFile).getLen(), null);
+ // Load up the index.
+ reader.loadFileInfo();
+ LOG.info(reader);
+ HFileScanner scanner = reader.getScanner();
+ // Align scanner at start of the file.
+ scanner.seekTo();
+ readAllRecords(scanner);
+ scanner.seekTo(getSomeKey(50));
+ assertTrue("location lookup failed", scanner.seekTo(getSomeKey(50)) == 0);
+ // read the key and see if it matches
+ ByteBuffer readKey = scanner.getKey();
+ assertTrue("seeked key does not match", Arrays.equals(getSomeKey(50),
+ Bytes.toBytes(readKey)));
+
+ scanner.seekTo(new byte[0]);
+ ByteBuffer val1 = scanner.getValue();
+ scanner.seekTo(new byte[0]);
+ ByteBuffer val2 = scanner.getValue();
+ assertTrue(Arrays.equals(Bytes.toBytes(val1), Bytes.toBytes(val2)));
+
+ reader.close();
+ fin.close();
+ fs.delete(ncTFile, true);
+ }
+
+ public void testTFileFeatures() throws IOException {
+ basicWithSomeCodec("none");
+ basicWithSomeCodec("gz");
+ }
+
+ private void writeNumMetablocks(Writer writer, int n) {
+ for (int i = 0; i < n; i++) {
+ writer.appendMetaBlock("HFileMeta" + i, ("something to test" + i).getBytes());
+ }
+ }
+
+ private void someTestingWithMetaBlock(Writer writer) {
+ writeNumMetablocks(writer, 10);
+ }
+
+ private void readNumMetablocks(Reader reader, int n) throws IOException {
+ for (int i = 0; i < n; i++) {
+ ByteBuffer b = reader.getMetaBlock("HFileMeta" + i);
+ byte [] found = Bytes.toBytes(b);
+ assertTrue("failed to match metadata", Arrays.equals(
+ ("something to test" + i).getBytes(), found));
+ }
+ }
+
+ private void someReadingWithMetaBlock(Reader reader) throws IOException {
+ readNumMetablocks(reader, 10);
+ }
+
+ private void metablocks(final String compress) throws Exception {
+ Path mFile = new Path(ROOT_DIR, "meta.hfile");
+ FSDataOutputStream fout = createFSOutput(mFile);
+ Writer writer = new Writer(fout, minBlockSize,
+ Compression.getCompressionAlgorithmByName(compress), null, false);
+ someTestingWithMetaBlock(writer);
+ writer.close();
+ fout.close();
+ FSDataInputStream fin = fs.open(mFile);
+ Reader reader = new Reader(fs.open(mFile), this.fs.getFileStatus(mFile)
+ .getLen(), null);
+ reader.loadFileInfo();
+ // No data -- this should return false.
+ assertFalse(reader.getScanner().seekTo());
+ someReadingWithMetaBlock(reader);
+ fs.delete(mFile, true);
+ reader.close();
+ fin.close();
+ }
+
+ // Test meta blocks for hfiles.
+ public void testMetaBlocks() throws Exception {
+ metablocks("none");
+ metablocks("gz");
+ }
+
+ public void testNullMetaBlocks() throws Exception {
+ Path mFile = new Path(ROOT_DIR, "nometa.hfile");
+ FSDataOutputStream fout = createFSOutput(mFile);
+ Writer writer = new Writer(fout, minBlockSize,
+ Compression.Algorithm.NONE, null, false);
+ writer.append("foo".getBytes(), "value".getBytes());
+ writer.close();
+ fout.close();
+ Reader reader = new Reader(fs, mFile, null);
+ reader.loadFileInfo();
+ assertNull(reader.getMetaBlock("non-existent"));
+ }
+
+ /**
+ * Make sure the ordinals of our compression algorithms don't change on us.
+ */
+ public void testCompressionOrdinance() {
+ //assertTrue(Compression.Algorithm.LZO.ordinal() == 0);
+ assertTrue(Compression.Algorithm.GZ.ordinal() == 1);
+ assertTrue(Compression.Algorithm.NONE.ordinal() == 2);
+ }
+
+
+ public void testComparator() throws IOException {
+ Path mFile = new Path(ROOT_DIR, "meta.tfile");
+ FSDataOutputStream fout = createFSOutput(mFile);
+ Writer writer = new Writer(fout, minBlockSize, null,
+ new RawComparator<byte []>() {
+ @Override
+ public int compare(byte[] b1, int s1, int l1, byte[] b2, int s2,
+ int l2) {
+ return -Bytes.compareTo(b1, s1, l1, b2, s2, l2);
+
+ }
+ @Override
+ public int compare(byte[] o1, byte[] o2) {
+ return compare(o1, 0, o1.length, o2, 0, o2.length);
+ }
+ }, false);
+ writer.append("3".getBytes(), "0".getBytes());
+ writer.append("2".getBytes(), "0".getBytes());
+ writer.append("1".getBytes(), "0".getBytes());
+ writer.close();
+ }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/io/hfile/TestHFilePerformance.java b/src/test/org/apache/hadoop/hbase/io/hfile/TestHFilePerformance.java
new file mode 100644
index 0000000..2ae8824
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/io/hfile/TestHFilePerformance.java
@@ -0,0 +1,384 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.text.DateFormat;
+import java.text.SimpleDateFormat;
+import java.util.Random;
+
+import junit.framework.TestCase;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.BytesWritable;
+import org.apache.hadoop.io.SequenceFile;
+import org.apache.hadoop.io.compress.CompressionCodec;
+import org.apache.hadoop.io.compress.GzipCodec;
+
+/**
+ * Set of long-running tests to measure performance of HFile.
+ * <p>
+ * Copied from
+ * <a href="https://issues.apache.org/jira/browse/HADOOP-3315">hadoop-3315 tfile</a>.
+ * Remove after tfile is committed and use the tfile version of this class
+ * instead.</p>
+ */
+public class TestHFilePerformance extends TestCase {
+ private static String ROOT_DIR =
+ System.getProperty("test.build.data", "/tmp/TestHFilePerformance");
+ private FileSystem fs;
+ private Configuration conf;
+ private long startTimeEpoch;
+ private long finishTimeEpoch;
+ private DateFormat formatter;
+
+ @Override
+ public void setUp() throws IOException {
+ conf = new Configuration();
+ fs = FileSystem.get(conf);
+ formatter = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
+ }
+
+ public void startTime() {
+ startTimeEpoch = System.currentTimeMillis();
+ System.out.println(formatTime() + " Started timing.");
+ }
+
+ public void stopTime() {
+ finishTimeEpoch = System.currentTimeMillis();
+ System.out.println(formatTime() + " Stopped timing.");
+ }
+
+ public long getIntervalMillis() {
+ return finishTimeEpoch - startTimeEpoch;
+ }
+
+ public void printlnWithTimestamp(String message) {
+ System.out.println(formatTime() + " " + message);
+ }
+
+ /*
+ * Format epoch millis as a "yyyy-MM-dd HH:mm:ss" timestamp.
+ */
+ public String formatTime(long milis){
+ return formatter.format(milis);
+ }
+
+ public String formatTime(){
+ return formatTime(System.currentTimeMillis());
+ }
+
+ private FSDataOutputStream createFSOutput(Path name) throws IOException {
+ if (fs.exists(name))
+ fs.delete(name, true);
+ FSDataOutputStream fout = fs.create(name);
+ return fout;
+ }
+
+ //TODO have multiple ways of generating key/value e.g. dictionary words
+ //TODO to have sample compressible data; for now only 1 out of 3 values is
+ // freshly randomized (the rest repeat) and keys are always random.
+
+ private static class KeyValueGenerator {
+ Random keyRandomizer;
+ Random valueRandomizer;
+ long randomValueRatio = 3; // 1 out of randomValueRatio generated values will be random.
+ long valueSequence = 0 ;
+
+
+ KeyValueGenerator() {
+ keyRandomizer = new Random(0L); //TODO with seed zero
+ valueRandomizer = new Random(1L); //TODO with seed one
+ }
+
+ // Key is always random now.
+ void getKey(byte[] key) {
+ keyRandomizer.nextBytes(key);
+ }
+
+ void getValue(byte[] value) {
+ if (valueSequence % randomValueRatio == 0)
+ valueRandomizer.nextBytes(value);
+ valueSequence++;
+ }
+ }
+
+ /**
+ *
+ * @param fileType "HFile" or "SequenceFile"
+ * @param keyLength length in bytes of each generated key
+ * @param valueLength length in bytes of each generated value
+ * @param codecName "none", "lzo", "gz"
+ * @param rows number of rows to be written.
+ * @param writeMethod used for HFile only.
+ * @param minBlockSize used for HFile only.
+ * @throws IOException
+ */
+ //TODO writeMethod: implement multiple ways of writing e.g. A) known length (no chunk) B) using a buffer and streaming (for many chunks).
+ public void timeWrite(String fileType, int keyLength, int valueLength,
+ String codecName, long rows, String writeMethod, int minBlockSize)
+ throws IOException {
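+ // Write 'rows' random key/value pairs through either the HFile or the
+ // SequenceFile writer and report raw and on-disk write throughput.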
+ System.out.println("File Type: " + fileType);
+ System.out.println("Writing " + fileType + " with codecName: " + codecName);
+ long totalBytesWritten = 0;
+
+
+ //Using separate randomizer for key/value with seeds matching Sequence File.
+ byte[] key = new byte[keyLength];
+ byte[] value = new byte[valueLength];
+ KeyValueGenerator generator = new KeyValueGenerator();
+
+ startTime();
+
+ Path path = new Path(ROOT_DIR, fileType + ".Performance");
+ System.out.println(ROOT_DIR + path.getName());
+ FSDataOutputStream fout = createFSOutput(path);
+
+ if ("HFile".equals(fileType)){
+ System.out.println("HFile write method: ");
+ HFile.Writer writer =
+ new HFile.Writer(fout, minBlockSize, codecName, null);
+
+ // Writing value in one shot.
+ for (long l=0 ; l<rows ; l++ ) {
+ generator.getKey(key);
+ generator.getValue(value);
+ writer.append(key, value);
+ totalBytesWritten += key.length;
+ totalBytesWritten += value.length;
+ }
+ writer.close();
+ } else if ("SequenceFile".equals(fileType)){
+ CompressionCodec codec = null;
+ if ("gz".equals(codecName))
+ codec = new GzipCodec();
+ else if (!"none".equals(codecName))
+ throw new IOException("Codec not supported.");
+
+ SequenceFile.Writer writer;
+
+ //TODO
+ //JobConf conf = new JobConf();
+
+ if (!"none".equals(codecName))
+ writer = SequenceFile.createWriter(conf, fout, BytesWritable.class,
+ BytesWritable.class, SequenceFile.CompressionType.BLOCK, codec);
+ else
+ writer = SequenceFile.createWriter(conf, fout, BytesWritable.class,
+ BytesWritable.class, SequenceFile.CompressionType.NONE, null);
+
+ BytesWritable keyBsw;
+ BytesWritable valBsw;
+ for (long l=0 ; l<rows ; l++ ) {
+
+ generator.getKey(key);
+ keyBsw = new BytesWritable(key);
+ totalBytesWritten += keyBsw.getSize();
+
+ generator.getValue(value);
+ valBsw = new BytesWritable(value);
+ writer.append(keyBsw, valBsw);
+ totalBytesWritten += valBsw.getSize();
+ }
+
+ writer.close();
+ } else
+ throw new IOException("File Type is not supported");
+
+ fout.close();
+ stopTime();
+
+ printlnWithTimestamp("Data written: ");
+ printlnWithTimestamp(" rate = " +
+ totalBytesWritten / getIntervalMillis() * 1000 / 1024 / 1024 + "MB/s");
+ printlnWithTimestamp(" total = " + totalBytesWritten + "B");
+
+ printlnWithTimestamp("File written: ");
+ printlnWithTimestamp(" rate = " +
+ fs.getFileStatus(path).getLen() / getIntervalMillis() * 1000 / 1024 / 1024 + "MB/s");
+ printlnWithTimestamp(" total = " + fs.getFileStatus(path).getLen() + "B");
+ }
+
+ public void timeReading(String fileType, int keyLength, int valueLength,
+ long rows, int method) throws IOException {
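+ // Read 'rows' key/value pairs back with the matching reader and report read
+ // throughput; for HFile, 'method' currently selects a single scan path.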
+ System.out.println("Reading file of type: " + fileType);
+ Path path = new Path(ROOT_DIR, fileType + ".Performance");
+ System.out.println("Input file size: " + fs.getFileStatus(path).getLen());
+ long totalBytesRead = 0;
+
+
+ ByteBuffer val;
+
+ ByteBuffer key;
+
+ startTime();
+ FSDataInputStream fin = fs.open(path);
+
+ if ("HFile".equals(fileType)){
+ HFile.Reader reader = new HFile.Reader(fs.open(path),
+ fs.getFileStatus(path).getLen(), null);
+ reader.loadFileInfo();
+ System.out.println(reader);
+ switch (method) {
+
+ case 0:
+ case 1:
+ default:
+ {
+ HFileScanner scanner = reader.getScanner();
+ scanner.seekTo();
+ for (long l=0 ; l<rows ; l++ ) {
+ key = scanner.getKey();
+ val = scanner.getValue();
+ totalBytesRead += key.limit() + val.limit();
+ scanner.next();
+ }
+ }
+ break;
+ }
+ } else if("SequenceFile".equals(fileType)){
+
+ SequenceFile.Reader reader;
+ reader = new SequenceFile.Reader(fs, path, new Configuration());
+
+ if (reader.getCompressionCodec() != null) {
+ printlnWithTimestamp("Compression codec class: " + reader.getCompressionCodec().getClass());
+ } else
+ printlnWithTimestamp("Compression codec class: " + "none");
+
+ BytesWritable keyBsw = new BytesWritable();
+ BytesWritable valBsw = new BytesWritable();
+
+ for (long l=0 ; l<rows ; l++ ) {
+ reader.next(keyBsw, valBsw);
+ totalBytesRead += keyBsw.getSize() + valBsw.getSize();
+ }
+ reader.close();
+
+ //TODO make a tests for other types of SequenceFile reading scenarios
+
+ } else {
+ throw new IOException("File Type not supported.");
+ }
+
+
+ //printlnWithTimestamp("Closing reader");
+ fin.close();
+ stopTime();
+ //printlnWithTimestamp("Finished close");
+
+ printlnWithTimestamp("Finished in " + getIntervalMillis() + "ms");
+ printlnWithTimestamp("Data read: ");
+ printlnWithTimestamp(" rate = " +
+ totalBytesRead / getIntervalMillis() * 1000 / 1024 / 1024 + "MB/s");
+ printlnWithTimestamp(" total = " + totalBytesRead + "B");
+
+ printlnWithTimestamp("File read: ");
+ printlnWithTimestamp(" rate = " +
+ fs.getFileStatus(path).getLen() / getIntervalMillis() * 1000 / 1024 / 1024 + "MB/s");
+ printlnWithTimestamp(" total = " + fs.getFileStatus(path).getLen() + "B");
+
+ //TODO uncomment this for final committing so test files is removed.
+ //fs.delete(path, true);
+ }
+
+ public void testRunComparisons() throws IOException {
+
+ int keyLength = 100; // 100B
+ int valueLength = 5*1024; // 5KB
+ int minBlockSize = 10*1024*1024; // 10MB
+ int rows = 10000;
+
+ System.out.println("****************************** Sequence File *****************************");
+
+ timeWrite("SequenceFile", keyLength, valueLength, "none", rows, null, minBlockSize);
+ System.out.println("\n+++++++\n");
+ timeReading("SequenceFile", keyLength, valueLength, rows, -1);
+
+ System.out.println("");
+ System.out.println("----------------------");
+ System.out.println("");
+
+ /* DISABLED LZO
+ timeWrite("SequenceFile", keyLength, valueLength, "lzo", rows, null, minBlockSize);
+ System.out.println("\n+++++++\n");
+ timeReading("SequenceFile", keyLength, valueLength, rows, -1);
+
+ System.out.println("");
+ System.out.println("----------------------");
+ System.out.println("");
+
+ (SequenceFile can only gzip via the native hadoop libraries, so the gz run
+ below is wrapped in try/catch and skipped when they are unavailable.) */
+ try {
+ timeWrite("SequenceFile", keyLength, valueLength, "gz", rows, null,
+ minBlockSize);
+ System.out.println("\n+++++++\n");
+ timeReading("SequenceFile", keyLength, valueLength, rows, -1);
+ } catch (IllegalArgumentException e) {
+ System.out.println("Skipping sequencefile gz: " + e.getMessage());
+ }
+
+
+ System.out.println("\n\n\n");
+ System.out.println("****************************** HFile *****************************");
+
+ timeWrite("HFile", keyLength, valueLength, "none", rows, null, minBlockSize);
+ System.out.println("\n+++++++\n");
+ timeReading("HFile", keyLength, valueLength, rows, 0 );
+
+ System.out.println("");
+ System.out.println("----------------------");
+ System.out.println("");
+/* DISABLED LZO
+ timeWrite("HFile", keyLength, valueLength, "lzo", rows, null, minBlockSize);
+ System.out.println("\n+++++++\n");
+ timeReading("HFile", keyLength, valueLength, rows, 0 );
+ System.out.println("\n+++++++\n");
+ timeReading("HFile", keyLength, valueLength, rows, 1 );
+ System.out.println("\n+++++++\n");
+ timeReading("HFile", keyLength, valueLength, rows, 2 );
+
+ System.out.println("");
+ System.out.println("----------------------");
+ System.out.println("");
+*/
+ timeWrite("HFile", keyLength, valueLength, "gz", rows, null, minBlockSize);
+ System.out.println("\n+++++++\n");
+ timeReading("HFile", keyLength, valueLength, rows, 0 );
+
+ System.out.println("\n\n\n\nNotes: ");
+ System.out.println(" * Timing includes open/closing of files.");
+ System.out.println(" * Timing includes reading both Key and Value");
+ System.out.println(" * Data is generated as random bytes. Other methods e.g. using " +
+ "dictionary with care for distributation of words is under development.");
+ System.out.println(" * Timing of write currently, includes random value/key generations. " +
+ "Which is the same for Sequence File and HFile. Another possibility is to generate " +
+ "test data beforehand");
+ System.out.println(" * We need to mitigate cache effect on benchmark. We can apply several " +
+ "ideas, for next step we do a large dummy read between benchmark read to dismantle " +
+ "caching of data. Renaming of file may be helpful. We can have a loop that reads with" +
+ " the same method several times and flood cache every time and average it to get a" +
+ " better number.");
+ }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/io/hfile/TestHFileSeek.java b/src/test/org/apache/hadoop/hbase/io/hfile/TestHFileSeek.java
new file mode 100644
index 0000000..9f44a96
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/io/hfile/TestHFileSeek.java
@@ -0,0 +1,500 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.Random;
+import java.util.StringTokenizer;
+
+import junit.framework.TestCase;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.GnuParser;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.OptionBuilder;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.ParseException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.io.hfile.HFile.Reader;
+import org.apache.hadoop.hbase.io.hfile.HFile.Writer;
+import org.apache.hadoop.io.BytesWritable;
+
+/**
+ * Test seek performance.
+ * <p>
+ * Copied from
+ * <a href="https://issues.apache.org/jira/browse/HADOOP-3315">hadoop-3315 tfile</a>.
+ * Remove after tfile is committed and use the tfile version of this class
+ * instead.</p>
+ */
+public class TestHFileSeek extends TestCase {
+ private MyOptions options;
+ private Configuration conf;
+ private Path path;
+ private FileSystem fs;
+ private NanoTimer timer;
+ private Random rng;
+ private RandomDistribution.DiscreteRNG keyLenGen;
+ private KVGenerator kvGen;
+
+ @Override
+ public void setUp() throws IOException {
+ if (options == null) {
+ options = new MyOptions(new String[0]);
+ }
+
+ conf = new Configuration();
+ conf.setInt("tfile.fs.input.buffer.size", options.fsInputBufferSize);
+ conf.setInt("tfile.fs.output.buffer.size", options.fsOutputBufferSize);
+ path = new Path(new Path(options.rootDir), options.file);
+ fs = path.getFileSystem(conf);
+ timer = new NanoTimer(false);
+ rng = new Random(options.seed);
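+ // Key lengths follow a Zipf(1.2) distribution while value and dictionary
+ // word lengths are uniform; KVGenerator is asked for sorted keys so the
+ // output respects HFile key ordering.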
+ keyLenGen =
+ new RandomDistribution.Zipf(new Random(rng.nextLong()),
+ options.minKeyLen, options.maxKeyLen, 1.2);
+ RandomDistribution.DiscreteRNG valLenGen =
+ new RandomDistribution.Flat(new Random(rng.nextLong()),
+ options.minValLength, options.maxValLength);
+ RandomDistribution.DiscreteRNG wordLenGen =
+ new RandomDistribution.Flat(new Random(rng.nextLong()),
+ options.minWordLen, options.maxWordLen);
+ kvGen =
+ new KVGenerator(rng, true, keyLenGen, valLenGen, wordLenGen,
+ options.dictSize);
+ }
+
+ @Override
+ public void tearDown() {
+ try {
+ fs.close();
+ }
+ catch (Exception e) {
+ // Nothing
+ }
+ }
+
+ private static FSDataOutputStream createFSOutput(Path name, FileSystem fs)
+ throws IOException {
+ if (fs.exists(name)) {
+ fs.delete(name, true);
+ }
+ FSDataOutputStream fout = fs.create(name);
+ return fout;
+ }
+
+ private void createTFile() throws IOException {
+ long totalBytes = 0;
+ FSDataOutputStream fout = createFSOutput(path, fs);
+ try {
+ Writer writer =
+ new Writer(fout, options.minBlockSize, options.compress, null);
+ try {
+ BytesWritable key = new BytesWritable();
+ BytesWritable val = new BytesWritable();
+ timer.start();
+ for (long i = 0; true; ++i) {
+ if (i % 1000 == 0) { // test the size for every 1000 rows.
+ if (fs.getFileStatus(path).getLen() >= options.fileSize) {
+ break;
+ }
+ }
+ kvGen.next(key, val, false);
+ byte [] k = new byte [key.getSize()];
+ System.arraycopy(key.get(), 0, k, 0, key.getSize());
+ byte [] v = new byte [val.getSize()];
+ System.arraycopy(val.get(), 0, v, 0, val.getSize());
+ writer.append(k, v);
+ totalBytes += key.getSize();
+ totalBytes += val.getSize();
+ }
+ timer.stop();
+ }
+ finally {
+ writer.close();
+ }
+ }
+ finally {
+ fout.close();
+ }
+ double duration = (double)timer.read()/1000; // in us.
+ long fsize = fs.getFileStatus(path).getLen();
+
+ System.out.printf(
+ "time: %s...uncompressed: %.2fMB...raw thrpt: %.2fMB/s\n",
+ timer.toString(), (double) totalBytes / 1024 / 1024, totalBytes
+ / duration);
+ System.out.printf("time: %s...file size: %.2fMB...disk thrpt: %.2fMB/s\n",
+ timer.toString(), (double) fsize / 1024 / 1024, fsize / duration);
+ }
+
+ public void seekTFile() throws IOException {
+ int miss = 0;
+ long totalBytes = 0;
+ FSDataInputStream fsdis = fs.open(path);
+ Reader reader =
+ new Reader(fsdis, fs.getFileStatus(path).getLen(), null);
+ reader.loadFileInfo();
+ System.out.println(reader);
+ KeySampler kSampler =
+ new KeySampler(rng, reader.getFirstKey(), reader.getLastKey(),
+ keyLenGen);
+ HFileScanner scanner = reader.getScanner();
+ BytesWritable key = new BytesWritable();
+ timer.reset();
+ timer.start();
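+    // Each iteration samples a random key between the file's first and last
+    // keys and seeks to it; successful seeks accumulate the key/value bytes
+    // read back, while a negative return from seekTo() counts as a miss.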
+ for (int i = 0; i < options.seekCount; ++i) {
+ kSampler.next(key);
+ byte [] k = new byte [key.getSize()];
+ System.arraycopy(key.get(), 0, k, 0, key.getSize());
+ if (scanner.seekTo(k) >= 0) {
+ ByteBuffer bbkey = scanner.getKey();
+ ByteBuffer bbval = scanner.getValue();
+ totalBytes += bbkey.limit();
+ totalBytes += bbval.limit();
+ }
+ else {
+ ++miss;
+ }
+ }
+ timer.stop();
+ System.out.printf(
+ "time: %s...avg seek: %s...%d hit...%d miss...avg I/O size: %.2fKB\n",
+ timer.toString(), NanoTimer.nanoTimeToString(timer.read()
+ / options.seekCount), options.seekCount - miss, miss,
+ (double) totalBytes / 1024 / (options.seekCount - miss));
+
+ }
+
+ public void testSeeks() throws IOException {
+ if (options.doCreate()) {
+ createTFile();
+ }
+
+ if (options.doRead()) {
+ seekTFile();
+ }
+
+ if (options.doCreate()) {
+ fs.delete(path, true);
+ }
+ }
+
+ private static class IntegerRange {
+ private final int from, to;
+
+ public IntegerRange(int from, int to) {
+ this.from = from;
+ this.to = to;
+ }
+
+ public static IntegerRange parse(String s) throws ParseException {
+ StringTokenizer st = new StringTokenizer(s, " \t,");
+ if (st.countTokens() != 2) {
+ throw new ParseException("Bad integer specification: " + s);
+ }
+ int from = Integer.parseInt(st.nextToken());
+ int to = Integer.parseInt(st.nextToken());
+ return new IntegerRange(from, to);
+ }
+
+ public int from() {
+ return from;
+ }
+
+ public int to() {
+ return to;
+ }
+ }
+
+ private static class MyOptions {
+ // hard coded constants
+ int dictSize = 1000;
+ int minWordLen = 5;
+ int maxWordLen = 20;
+
+ String rootDir =
+ System.getProperty("test.build.data", "/tmp/TestTFileSeek");
+ String file = "TestTFileSeek";
+ // String compress = "lzo"; DISABLED
+ String compress = "none";
+ int minKeyLen = 10;
+ int maxKeyLen = 50;
+ int minValLength = 1024;
+ int maxValLength = 2 * 1024;
+ int minBlockSize = 1 * 1024 * 1024;
+ int fsOutputBufferSize = 1;
+ int fsInputBufferSize = 0;
+ // Default writing 10MB.
+ long fileSize = 10 * 1024 * 1024;
+ long seekCount = 1000;
+ long seed;
+
+ static final int OP_CREATE = 1;
+ static final int OP_READ = 2;
+ int op = OP_CREATE | OP_READ;
+
+ boolean proceed = false;
+
+ public MyOptions(String[] args) {
+ seed = System.nanoTime();
+
+ try {
+ Options opts = buildOptions();
+ CommandLineParser parser = new GnuParser();
+ CommandLine line = parser.parse(opts, args, true);
+ processOptions(line, opts);
+ validateOptions();
+ }
+ catch (ParseException e) {
+ System.out.println(e.getMessage());
+ System.out.println("Try \"--help\" option for details.");
+ setStopProceed();
+ }
+ }
+
+ public boolean proceed() {
+ return proceed;
+ }
+
+ private Options buildOptions() {
+ Option compress =
+ OptionBuilder.withLongOpt("compress").withArgName("[none|lzo|gz]")
+ .hasArg().withDescription("compression scheme").create('c');
+
+ Option fileSize =
+ OptionBuilder.withLongOpt("file-size").withArgName("size-in-MB")
+ .hasArg().withDescription("target size of the file (in MB).")
+ .create('s');
+
+ Option fsInputBufferSz =
+ OptionBuilder.withLongOpt("fs-input-buffer").withArgName("size")
+ .hasArg().withDescription(
+ "size of the file system input buffer (in bytes).").create(
+ 'i');
+
+ Option fsOutputBufferSize =
+ OptionBuilder.withLongOpt("fs-output-buffer").withArgName("size")
+ .hasArg().withDescription(
+ "size of the file system output buffer (in bytes).").create(
+ 'o');
+
+ Option keyLen =
+ OptionBuilder
+ .withLongOpt("key-length")
+ .withArgName("min,max")
+ .hasArg()
+ .withDescription(
+ "the length range of the key (in bytes)")
+ .create('k');
+
+ Option valueLen =
+ OptionBuilder
+ .withLongOpt("value-length")
+ .withArgName("min,max")
+ .hasArg()
+ .withDescription(
+ "the length range of the value (in bytes)")
+ .create('v');
+
+ Option blockSz =
+ OptionBuilder.withLongOpt("block").withArgName("size-in-KB").hasArg()
+ .withDescription("minimum block size (in KB)").create('b');
+
+ Option seed =
+ OptionBuilder.withLongOpt("seed").withArgName("long-int").hasArg()
+ .withDescription("specify the seed").create('S');
+
+ Option operation =
+ OptionBuilder.withLongOpt("operation").withArgName("r|w|rw").hasArg()
+ .withDescription(
+ "action: seek-only, create-only, seek-after-create").create(
+ 'x');
+
+ Option rootDir =
+ OptionBuilder.withLongOpt("root-dir").withArgName("path").hasArg()
+ .withDescription(
+ "specify root directory where files will be created.")
+ .create('r');
+
+ Option file =
+ OptionBuilder.withLongOpt("file").withArgName("name").hasArg()
+ .withDescription("specify the file name to be created or read.")
+ .create('f');
+
+ Option seekCount =
+ OptionBuilder
+ .withLongOpt("seek")
+ .withArgName("count")
+ .hasArg()
+ .withDescription(
+            "specify how many seek operations we perform (requires -x r or -x rw).")
+ .create('n');
+
+ Option help =
+ OptionBuilder.withLongOpt("help").hasArg(false).withDescription(
+ "show this screen").create("h");
+
+ return new Options().addOption(compress).addOption(fileSize).addOption(
+ fsInputBufferSz).addOption(fsOutputBufferSize).addOption(keyLen)
+ .addOption(blockSz).addOption(rootDir).addOption(valueLen).addOption(
+ operation).addOption(seekCount).addOption(file).addOption(help);
+
+ }
+
+ private void processOptions(CommandLine line, Options opts)
+ throws ParseException {
+      // --help (-h) must be processed first.
+ if (line.hasOption('h')) {
+ HelpFormatter formatter = new HelpFormatter();
+        System.out.println("HFile seek benchmark.");
+ System.out.println();
+ formatter.printHelp(100,
+            "java ... TestHFileSeek [options]",
+ "\nSupported options:", opts, "");
+ return;
+ }
+
+ if (line.hasOption('c')) {
+ compress = line.getOptionValue('c');
+ }
+
+ if (line.hasOption('d')) {
+ dictSize = Integer.parseInt(line.getOptionValue('d'));
+ }
+
+ if (line.hasOption('s')) {
+ fileSize = Long.parseLong(line.getOptionValue('s')) * 1024 * 1024;
+ }
+
+ if (line.hasOption('i')) {
+ fsInputBufferSize = Integer.parseInt(line.getOptionValue('i'));
+ }
+
+ if (line.hasOption('o')) {
+ fsOutputBufferSize = Integer.parseInt(line.getOptionValue('o'));
+ }
+
+ if (line.hasOption('n')) {
+ seekCount = Integer.parseInt(line.getOptionValue('n'));
+ }
+
+ if (line.hasOption('k')) {
+ IntegerRange ir = IntegerRange.parse(line.getOptionValue('k'));
+ minKeyLen = ir.from();
+ maxKeyLen = ir.to();
+ }
+
+ if (line.hasOption('v')) {
+ IntegerRange ir = IntegerRange.parse(line.getOptionValue('v'));
+ minValLength = ir.from();
+ maxValLength = ir.to();
+ }
+
+ if (line.hasOption('b')) {
+ minBlockSize = Integer.parseInt(line.getOptionValue('b')) * 1024;
+ }
+
+ if (line.hasOption('r')) {
+ rootDir = line.getOptionValue('r');
+ }
+
+ if (line.hasOption('f')) {
+ file = line.getOptionValue('f');
+ }
+
+ if (line.hasOption('S')) {
+ seed = Long.parseLong(line.getOptionValue('S'));
+ }
+
+ if (line.hasOption('x')) {
+ String strOp = line.getOptionValue('x');
+ if (strOp.equals("r")) {
+ op = OP_READ;
+ }
+ else if (strOp.equals("w")) {
+ op = OP_CREATE;
+ }
+ else if (strOp.equals("rw")) {
+ op = OP_CREATE | OP_READ;
+ }
+ else {
+ throw new ParseException("Unknown action specifier: " + strOp);
+ }
+ }
+
+ proceed = true;
+ }
+
+ private void validateOptions() throws ParseException {
+ if (!compress.equals("none") && !compress.equals("lzo")
+ && !compress.equals("gz")) {
+ throw new ParseException("Unknown compression scheme: " + compress);
+ }
+
+ if (minKeyLen >= maxKeyLen) {
+ throw new ParseException(
+ "Max key length must be greater than min key length.");
+ }
+
+ if (minValLength >= maxValLength) {
+ throw new ParseException(
+ "Max value length must be greater than min value length.");
+ }
+
+ if (minWordLen >= maxWordLen) {
+ throw new ParseException(
+ "Max word length must be greater than min word length.");
+ }
+ return;
+ }
+
+ private void setStopProceed() {
+ proceed = false;
+ }
+
+ public boolean doCreate() {
+ return (op & OP_CREATE) != 0;
+ }
+
+ public boolean doRead() {
+ return (op & OP_READ) != 0;
+ }
+ }
+
+ public static void main(String[] argv) throws IOException {
+ TestHFileSeek testCase = new TestHFileSeek();
+ MyOptions options = new MyOptions(argv);
+
+    if (!options.proceed()) {
+ return;
+ }
+
+ testCase.options = options;
+ testCase.setUp();
+ testCase.testSeeks();
+ testCase.tearDown();
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/io/hfile/TestSeekTo.java b/src/test/org/apache/hadoop/hbase/io/hfile/TestSeekTo.java
new file mode 100644
index 0000000..5fa80a1
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/io/hfile/TestSeekTo.java
@@ -0,0 +1,123 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.io.IOException;
+
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Test {@link HFileScanner#seekTo(byte[])} and its variants.
+ */
+public class TestSeekTo extends HBaseTestCase {
+
+ Path makeNewFile() throws IOException {
+ Path ncTFile = new Path(this.testDir, "basic.hfile");
+ FSDataOutputStream fout = this.fs.create(ncTFile);
+ HFile.Writer writer = new HFile.Writer(fout, 40, "none", null);
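+    // The tiny 40-byte block size forces a block boundary after the first
+    // three keys (see the "block transition" marker below), so the file ends
+    // up with a two-entry block index.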
+ // 4 bytes * 3 * 2 for each key/value +
+ // 3 for keys, 15 for values = 42 (woot)
+ writer.append(Bytes.toBytes("c"), Bytes.toBytes("value"));
+ writer.append(Bytes.toBytes("e"), Bytes.toBytes("value"));
+ writer.append(Bytes.toBytes("g"), Bytes.toBytes("value"));
+ // block transition
+ writer.append(Bytes.toBytes("i"), Bytes.toBytes("value"));
+ writer.append(Bytes.toBytes("k"), Bytes.toBytes("value"));
+ writer.close();
+ fout.close();
+ return ncTFile;
+ }
+ public void testSeekBefore() throws Exception {
+ Path p = makeNewFile();
+ HFile.Reader reader = new HFile.Reader(fs, p, null);
+ reader.loadFileInfo();
+ HFileScanner scanner = reader.getScanner();
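+    // seekBefore() should land the scanner on the largest key strictly less
+    // than the probe, returning false when no such key exists (i.e. when the
+    // probe sorts at or before the first key, "c").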
+ assertEquals(false, scanner.seekBefore(Bytes.toBytes("a")));
+
+ assertEquals(false, scanner.seekBefore(Bytes.toBytes("c")));
+
+ assertEquals(true, scanner.seekBefore(Bytes.toBytes("d")));
+ assertEquals("c", scanner.getKeyString());
+
+ assertEquals(true, scanner.seekBefore(Bytes.toBytes("e")));
+ assertEquals("c", scanner.getKeyString());
+
+ assertEquals(true, scanner.seekBefore(Bytes.toBytes("f")));
+ assertEquals("e", scanner.getKeyString());
+
+ assertEquals(true, scanner.seekBefore(Bytes.toBytes("g")));
+ assertEquals("e", scanner.getKeyString());
+
+ assertEquals(true, scanner.seekBefore(Bytes.toBytes("h")));
+ assertEquals("g", scanner.getKeyString());
+ assertEquals(true, scanner.seekBefore(Bytes.toBytes("i")));
+ assertEquals("g", scanner.getKeyString());
+ assertEquals(true, scanner.seekBefore(Bytes.toBytes("j")));
+ assertEquals("i", scanner.getKeyString());
+ assertEquals(true, scanner.seekBefore(Bytes.toBytes("k")));
+ assertEquals("i", scanner.getKeyString());
+ assertEquals(true, scanner.seekBefore(Bytes.toBytes("l")));
+ assertEquals("k", scanner.getKeyString());
+ }
+
+ public void testSeekTo() throws Exception {
+ Path p = makeNewFile();
+ HFile.Reader reader = new HFile.Reader(fs, p, null);
+ reader.loadFileInfo();
+ assertEquals(2, reader.blockIndex.count);
+ HFileScanner scanner = reader.getScanner();
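+    // Return codes exercised below: -1 when the key sorts before the first
+    // key in the file, 1 when there is no exact match and the scanner is left
+    // on the key immediately preceding the probe.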
+ // lies before the start of the file.
+ assertEquals(-1, scanner.seekTo(Bytes.toBytes("a")));
+
+ assertEquals(1, scanner.seekTo(Bytes.toBytes("d")));
+ assertEquals("c", scanner.getKeyString());
+
+ // Across a block boundary now.
+ assertEquals(1, scanner.seekTo(Bytes.toBytes("h")));
+ assertEquals("g", scanner.getKeyString());
+
+ assertEquals(1, scanner.seekTo(Bytes.toBytes("l")));
+ assertEquals("k", scanner.getKeyString());
+ }
+
+ public void testBlockContainingKey() throws Exception {
+ Path p = makeNewFile();
+ HFile.Reader reader = new HFile.Reader(fs, p, null);
+ reader.loadFileInfo();
+ System.out.println(reader.blockIndex.toString());
+ // falls before the start of the file.
+ assertEquals(-1, reader.blockIndex.blockContainingKey(Bytes.toBytes("a"), 0, 1));
+ assertEquals(0, reader.blockIndex.blockContainingKey(Bytes.toBytes("c"), 0, 1));
+ assertEquals(0, reader.blockIndex.blockContainingKey(Bytes.toBytes("d"), 0, 1));
+ assertEquals(0, reader.blockIndex.blockContainingKey(Bytes.toBytes("e"), 0, 1));
+ assertEquals(0, reader.blockIndex.blockContainingKey(Bytes.toBytes("g"), 0, 1));
+ assertEquals(0, reader.blockIndex.blockContainingKey(Bytes.toBytes("h"), 0, 1));
+ assertEquals(1, reader.blockIndex.blockContainingKey(Bytes.toBytes("i"), 0, 1));
+ assertEquals(1, reader.blockIndex.blockContainingKey(Bytes.toBytes("j"), 0, 1));
+ assertEquals(1, reader.blockIndex.blockContainingKey(Bytes.toBytes("k"), 0, 1));
+ assertEquals(1, reader.blockIndex.blockContainingKey(Bytes.toBytes("l"), 0, 1));
+  }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/mapred/TestTableIndex.java b/src/test/org/apache/hadoop/hbase/mapred/TestTableIndex.java
new file mode 100644
index 0000000..b484284
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/mapred/TestTableIndex.java
@@ -0,0 +1,261 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.Map;
+import java.util.Random;
+
+import junit.framework.TestSuite;
+import junit.textui.TestRunner;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FileUtil;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Scanner;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.MultiRegionTable;
+import org.apache.hadoop.mapred.FileOutputFormat;
+import org.apache.hadoop.mapred.JobClient;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.MiniMRCluster;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.lucene.index.Term;
+import org.apache.lucene.search.IndexSearcher;
+import org.apache.lucene.search.MultiSearcher;
+import org.apache.lucene.search.Searchable;
+import org.apache.lucene.search.Searcher;
+import org.apache.lucene.search.TermQuery;
+
+/**
+ * Test Map/Reduce job to build an index over an HBase table.
+ */
+public class TestTableIndex extends MultiRegionTable {
+ private static final Log LOG = LogFactory.getLog(TestTableIndex.class);
+
+ static final String TABLE_NAME = "moretest";
+ static final String INPUT_COLUMN = "contents:";
+ static final byte [] TEXT_INPUT_COLUMN = Bytes.toBytes(INPUT_COLUMN);
+ static final String OUTPUT_COLUMN = "text:";
+ static final byte [] TEXT_OUTPUT_COLUMN = Bytes.toBytes(OUTPUT_COLUMN);
+ static final String ROWKEY_NAME = "key";
+ static final String INDEX_DIR = "testindex";
+ private static final byte [][] columns = new byte [][] {
+ TEXT_INPUT_COLUMN,
+ TEXT_OUTPUT_COLUMN
+ };
+
+ private JobConf jobConf = null;
+
+ /** default constructor */
+ public TestTableIndex() {
+ super(INPUT_COLUMN);
+ desc = new HTableDescriptor(TABLE_NAME);
+ desc.addFamily(new HColumnDescriptor(INPUT_COLUMN));
+ desc.addFamily(new HColumnDescriptor(OUTPUT_COLUMN));
+ }
+
+ @Override
+ public void tearDown() throws Exception {
+ if (jobConf != null) {
+ FileUtil.fullyDelete(new File(jobConf.get("hadoop.tmp.dir")));
+ }
+ }
+
+ /**
+ * Test HBase map/reduce
+ *
+ * @throws IOException
+ */
+ public void testTableIndex() throws IOException {
+ boolean printResults = false;
+ if (printResults) {
+ LOG.info("Print table contents before map/reduce");
+ }
+ scanTable(printResults);
+
+ MiniMRCluster mrCluster = new MiniMRCluster(2, fs.getUri().toString(), 1);
+
+ // set configuration parameter for index build
+ conf.set("hbase.index.conf", createIndexConfContent());
+
+ try {
+ jobConf = new JobConf(conf, TestTableIndex.class);
+ jobConf.setJobName("index column contents");
+ jobConf.setNumMapTasks(2);
+ // number of indexes to partition into
+ jobConf.setNumReduceTasks(1);
+
+ // use identity map (a waste, but just as an example)
+ IdentityTableMap.initJob(TABLE_NAME, INPUT_COLUMN,
+ IdentityTableMap.class, jobConf);
+
+ // use IndexTableReduce to build a Lucene index
+ jobConf.setReducerClass(IndexTableReduce.class);
+ FileOutputFormat.setOutputPath(jobConf, new Path(INDEX_DIR));
+ jobConf.setOutputFormat(IndexOutputFormat.class);
+
+ JobClient.runJob(jobConf);
+
+ } finally {
+ mrCluster.shutdown();
+ }
+
+ if (printResults) {
+ LOG.info("Print table contents after map/reduce");
+ }
+ scanTable(printResults);
+
+ // verify index results
+ verify();
+ }
+
+ private String createIndexConfContent() {
+ StringBuffer buffer = new StringBuffer();
+ buffer.append("<configuration><column><property>" +
+ "<name>hbase.column.name</name><value>" + INPUT_COLUMN +
+ "</value></property>");
+ buffer.append("<property><name>hbase.column.store</name> " +
+ "<value>true</value></property>");
+ buffer.append("<property><name>hbase.column.index</name>" +
+ "<value>true</value></property>");
+ buffer.append("<property><name>hbase.column.tokenize</name>" +
+ "<value>false</value></property>");
+ buffer.append("<property><name>hbase.column.boost</name>" +
+ "<value>3</value></property>");
+ buffer.append("<property><name>hbase.column.omit.norms</name>" +
+ "<value>false</value></property></column>");
+ buffer.append("<property><name>hbase.index.rowkey.name</name><value>" +
+ ROWKEY_NAME + "</value></property>");
+ buffer.append("<property><name>hbase.index.max.buffered.docs</name>" +
+ "<value>500</value></property>");
+ buffer.append("<property><name>hbase.index.max.field.length</name>" +
+ "<value>10000</value></property>");
+ buffer.append("<property><name>hbase.index.merge.factor</name>" +
+ "<value>10</value></property>");
+ buffer.append("<property><name>hbase.index.use.compound.file</name>" +
+ "<value>true</value></property>");
+ buffer.append("<property><name>hbase.index.optimize</name>" +
+ "<value>true</value></property></configuration>");
+
+ IndexConfiguration c = new IndexConfiguration();
+ c.addFromXML(buffer.toString());
+ return c.toString();
+ }
+
+ private void scanTable(boolean printResults)
+ throws IOException {
+ HTable table = new HTable(conf, TABLE_NAME);
+ Scanner scanner = table.getScanner(columns,
+ HConstants.EMPTY_START_ROW);
+ try {
+ for (RowResult r : scanner) {
+ if (printResults) {
+ LOG.info("row: " + r.getRow());
+ }
+ for (Map.Entry<byte [], Cell> e : r.entrySet()) {
+ if (printResults) {
+ LOG.info(" column: " + e.getKey() + " value: "
+ + new String(e.getValue().getValue(), HConstants.UTF8_ENCODING));
+ }
+ }
+ }
+ } finally {
+ scanner.close();
+ }
+ }
+
+ private void verify() throws IOException {
+ // Force a cache flush for every online region to ensure that when the
+ // scanner takes its snapshot, all the updates have made it into the cache.
+ for (HRegion r : cluster.getRegionThreads().get(0).getRegionServer().
+ getOnlineRegions()) {
+ HRegionIncommon region = new HRegionIncommon(r);
+ region.flushcache();
+ }
+
+ Path localDir = new Path(getUnitTestdir(getName()), "index_" +
+ Integer.toString(new Random().nextInt()));
+ this.fs.copyToLocalFile(new Path(INDEX_DIR), localDir);
+ FileSystem localfs = FileSystem.getLocal(conf);
+ FileStatus [] indexDirs = localfs.listStatus(localDir);
+ Searcher searcher = null;
+ Scanner scanner = null;
+ try {
+ if (indexDirs.length == 1) {
+ searcher = new IndexSearcher((new File(indexDirs[0].getPath().
+ toUri())).getAbsolutePath());
+ } else if (indexDirs.length > 1) {
+ Searchable[] searchers = new Searchable[indexDirs.length];
+ for (int i = 0; i < indexDirs.length; i++) {
+ searchers[i] = new IndexSearcher((new File(indexDirs[i].getPath().
+ toUri()).getAbsolutePath()));
+ }
+ searcher = new MultiSearcher(searchers);
+ } else {
+ throw new IOException("no index directory found");
+ }
+
+ HTable table = new HTable(conf, TABLE_NAME);
+ scanner = table.getScanner(columns, HConstants.EMPTY_START_ROW);
+
+ IndexConfiguration indexConf = new IndexConfiguration();
+ String content = conf.get("hbase.index.conf");
+ if (content != null) {
+ indexConf.addFromXML(content);
+ }
+ String rowkeyName = indexConf.getRowkeyName();
+
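+      // Every row in the table should have been indexed exactly once under
+      // the rowkey field, and the index should hold no extra documents.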
+ int count = 0;
+ for (RowResult r : scanner) {
+ String value = Bytes.toString(r.getRow());
+ Term term = new Term(rowkeyName, value);
+ int hitCount = searcher.search(new TermQuery(term)).length();
+ assertEquals("check row " + value, 1, hitCount);
+ count++;
+ }
+ LOG.debug("Searcher.maxDoc: " + searcher.maxDoc());
+ LOG.debug("IndexReader.numDocs: " + ((IndexSearcher)searcher).getIndexReader().numDocs());
+ int maxDoc = ((IndexSearcher)searcher).getIndexReader().numDocs();
+ assertEquals("check number of rows", maxDoc, count);
+ } finally {
+ if (null != searcher)
+ searcher.close();
+ if (null != scanner)
+ scanner.close();
+ }
+ }
+ /**
+ * @param args unused
+ */
+ public static void main(String[] args) {
+ TestRunner.run(new TestSuite(TestTableIndex.class));
+ }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/mapred/TestTableMapReduce.java b/src/test/org/apache/hadoop/hbase/mapred/TestTableMapReduce.java
new file mode 100644
index 0000000..220c1c9
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/mapred/TestTableMapReduce.java
@@ -0,0 +1,244 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.Map;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FileUtil;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.MultiRegionTable;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Scanner;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapred.JobClient;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.MapReduceBase;
+import org.apache.hadoop.mapred.MiniMRCluster;
+import org.apache.hadoop.mapred.OutputCollector;
+import org.apache.hadoop.mapred.Reporter;
+
+/**
+ * Test Map/Reduce job over HBase tables. The map/reduce process we're testing
+ * on our tables is simple - take every row in the table, reverse the value of
+ * a particular cell, and write it back to the table.
+ */
+public class TestTableMapReduce extends MultiRegionTable {
+ private static final Log LOG =
+ LogFactory.getLog(TestTableMapReduce.class.getName());
+
+ static final String MULTI_REGION_TABLE_NAME = "mrtest";
+ static final String INPUT_COLUMN = "contents:";
+ static final String OUTPUT_COLUMN = "text:";
+
+ private static final byte [][] columns = new byte [][] {
+ Bytes.toBytes(INPUT_COLUMN),
+ Bytes.toBytes(OUTPUT_COLUMN)
+ };
+
+ /** constructor */
+ public TestTableMapReduce() {
+ super(INPUT_COLUMN);
+ desc = new HTableDescriptor(MULTI_REGION_TABLE_NAME);
+ desc.addFamily(new HColumnDescriptor(INPUT_COLUMN));
+ desc.addFamily(new HColumnDescriptor(OUTPUT_COLUMN));
+ }
+
+ /**
+   * Pass the given key and the processed record to reduce.
+ */
+ public static class ProcessContentsMapper
+ extends MapReduceBase
+ implements TableMap<ImmutableBytesWritable, BatchUpdate> {
+ /**
+     * Pass the key and the reversed value to reduce.
+ * @param key
+ * @param value
+ * @param output
+ * @param reporter
+ * @throws IOException
+ */
+ public void map(ImmutableBytesWritable key, RowResult value,
+ OutputCollector<ImmutableBytesWritable, BatchUpdate> output,
+ Reporter reporter)
+ throws IOException {
+ if (value.size() != 1) {
+ throw new IOException("There should only be one input column");
+ }
+ byte [][] keys = value.keySet().toArray(new byte [value.size()][]);
+ if(!Bytes.equals(keys[0], Bytes.toBytes(INPUT_COLUMN))) {
+ throw new IOException("Wrong input column. Expected: " + INPUT_COLUMN
+ + " but got: " + keys[0]);
+ }
+
+ // Get the original value and reverse it
+
+ String originalValue =
+ new String(value.get(keys[0]).getValue(), HConstants.UTF8_ENCODING);
+ StringBuilder newValue = new StringBuilder();
+ for(int i = originalValue.length() - 1; i >= 0; i--) {
+ newValue.append(originalValue.charAt(i));
+ }
+
+ // Now set the value to be collected
+
+ BatchUpdate outval = new BatchUpdate(key.get());
+ outval.put(OUTPUT_COLUMN, Bytes.toBytes(newValue.toString()));
+ output.collect(key, outval);
+ }
+ }
+
+ /**
+ * Test a map/reduce against a multi-region table
+ * @throws IOException
+ */
+ public void testMultiRegionTable() throws IOException {
+ runTestOnTable(new HTable(conf, MULTI_REGION_TABLE_NAME));
+ }
+
+ private void runTestOnTable(HTable table) throws IOException {
+ MiniMRCluster mrCluster = new MiniMRCluster(2, fs.getUri().toString(), 1);
+
+ JobConf jobConf = null;
+ try {
+ LOG.info("Before map/reduce startup");
+ jobConf = new JobConf(conf, TestTableMapReduce.class);
+ jobConf.setJobName("process column contents");
+ jobConf.setNumReduceTasks(1);
+ TableMapReduceUtil.initTableMapJob(Bytes.toString(table.getTableName()),
+ INPUT_COLUMN, ProcessContentsMapper.class,
+ ImmutableBytesWritable.class, BatchUpdate.class, jobConf);
+ TableMapReduceUtil.initTableReduceJob(Bytes.toString(table.getTableName()),
+ IdentityTableReduce.class, jobConf);
+
+ LOG.info("Started " + Bytes.toString(table.getTableName()));
+ JobClient.runJob(jobConf);
+ LOG.info("After map/reduce completion");
+
+ // verify map-reduce results
+ verify(Bytes.toString(table.getTableName()));
+ } finally {
+ mrCluster.shutdown();
+ if (jobConf != null) {
+ FileUtil.fullyDelete(new File(jobConf.get("hadoop.tmp.dir")));
+ }
+ }
+ }
+
+ private void verify(String tableName) throws IOException {
+ HTable table = new HTable(conf, tableName);
+ boolean verified = false;
+ long pause = conf.getLong("hbase.client.pause", 5 * 1000);
+ int numRetries = conf.getInt("hbase.client.retries.number", 5);
+ for (int i = 0; i < numRetries; i++) {
+ try {
+ LOG.info("Verification attempt #" + i);
+ verifyAttempt(table);
+ verified = true;
+ break;
+ } catch (NullPointerException e) {
+        // If here, a cell was empty. Presume it's because updates came in
+ // after the scanner had been opened. Wait a while and retry.
+ LOG.debug("Verification attempt failed: " + e.getMessage());
+ }
+ try {
+ Thread.sleep(pause);
+ } catch (InterruptedException e) {
+ // continue
+ }
+ }
+ assertTrue(verified);
+ }
+
+ /**
+   * Looks at every value of the map/reduce output and verifies that the
+   * values have been reversed.
+ * @param table Table to scan.
+ * @throws IOException
+ * @throws NullPointerException if we failed to find a cell value
+ */
+ private void verifyAttempt(final HTable table) throws IOException, NullPointerException {
+ Scanner scanner =
+ table.getScanner(columns, HConstants.EMPTY_START_ROW);
+ try {
+ for (RowResult r : scanner) {
+ if (LOG.isDebugEnabled()) {
+ if (r.size() > 2 ) {
+ throw new IOException("Too many results, expected 2 got " +
+ r.size());
+ }
+ }
+ byte[] firstValue = null;
+ byte[] secondValue = null;
+ int count = 0;
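+        // The two cells are expected back in column order: the original
+        // 'contents:' value first, then the map/reduce output in 'text:'.
+        // Re-reversing the second value should reproduce the first.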
+ for(Map.Entry<byte [], Cell> e: r.entrySet()) {
+ if (count == 0) {
+ firstValue = e.getValue().getValue();
+ }
+ if (count == 1) {
+ secondValue = e.getValue().getValue();
+ }
+ count++;
+ if (count == 2) {
+ break;
+ }
+ }
+
+ String first = "";
+ if (firstValue == null) {
+ throw new NullPointerException(Bytes.toString(r.getRow()) +
+ ": first value is null");
+ }
+ first = new String(firstValue, HConstants.UTF8_ENCODING);
+
+ String second = "";
+ if (secondValue == null) {
+ throw new NullPointerException(Bytes.toString(r.getRow()) +
+ ": second value is null");
+ }
+ byte[] secondReversed = new byte[secondValue.length];
+ for (int i = 0, j = secondValue.length - 1; j >= 0; j--, i++) {
+ secondReversed[i] = secondValue[j];
+ }
+ second = new String(secondReversed, HConstants.UTF8_ENCODING);
+
+ if (first.compareTo(second) != 0) {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("second key is not the reverse of first. row=" +
+ r.getRow() + ", first value=" + first + ", second value=" +
+ second);
+ }
+ fail();
+ }
+ }
+ } finally {
+ scanner.close();
+ }
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/master/OOMEHMaster.java b/src/test/org/apache/hadoop/hbase/master/OOMEHMaster.java
new file mode 100644
index 0000000..b67592a
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/master/OOMEHMaster.java
@@ -0,0 +1,61 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.HMsg;
+import org.apache.hadoop.hbase.HRegionInfo;
+
+/**
+ * An HMaster that runs out of memory.
+ * Every time a region server reports in, it adds to the retained heap of memory.
+ * Needs to be started manually, as in
+ * <code>${HBASE_HOME}/bin/hbase org.apache.hadoop.hbase.master.OOMEHMaster start</code>.
+ */
+public class OOMEHMaster extends HMaster {
+ private List<byte []> retainer = new ArrayList<byte[]>();
+
+ public OOMEHMaster(HBaseConfiguration conf) throws IOException {
+ super(conf);
+ }
+
+ @Override
+ public HMsg[] regionServerReport(HServerInfo serverInfo, HMsg[] msgs,
+ HRegionInfo[] mostLoadedRegions)
+ throws IOException {
+ // Retain 1M.
+ this.retainer.add(new byte [1024 * 1024]);
+ return super.regionServerReport(serverInfo, msgs, mostLoadedRegions);
+ }
+
+ /**
+ * @param args
+ */
+ public static void main(String[] args) {
+ doMain(args, OOMEHMaster.class);
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/regionserver/DisabledTestRegionServerExit.java b/src/test/org/apache/hadoop/hbase/regionserver/DisabledTestRegionServerExit.java
new file mode 100644
index 0000000..99140cc
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/regionserver/DisabledTestRegionServerExit.java
@@ -0,0 +1,207 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseClusterTestCase;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.LocalHBaseCluster;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Scanner;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.RowResult;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Tests region server failover when a region server exits cleanly and
+ * when it aborts.
+ */
+public class DisabledTestRegionServerExit extends HBaseClusterTestCase {
+ final Log LOG = LogFactory.getLog(this.getClass().getName());
+ HTable table;
+
+ /** constructor */
+ public DisabledTestRegionServerExit() {
+ super(2);
+ conf.setInt("ipc.client.connect.max.retries", 5); // reduce ipc retries
+ conf.setInt("ipc.client.timeout", 10000); // and ipc timeout
+ conf.setInt("hbase.client.pause", 10000); // increase client timeout
+ conf.setInt("hbase.client.retries.number", 10); // increase HBase retries
+ }
+
+ /**
+ * Test abort of region server.
+ * @throws IOException
+ */
+ public void testAbort() throws IOException {
+ // When the META table can be opened, the region servers are running
+ new HTable(conf, HConstants.META_TABLE_NAME);
+ // Create table and add a row.
+ final String tableName = getName();
+ byte [] row = createTableAndAddRow(tableName);
+ // Start up a new region server to take over serving of root and meta
+ // after we shut down the current meta/root host.
+ this.cluster.startRegionServer();
+ // Now abort the meta region server and wait for it to go down and come back
+ stopOrAbortMetaRegionServer(true);
+ // Verify that everything is back up.
+ LOG.info("Starting up the verification thread for " + getName());
+ Thread t = startVerificationThread(tableName, row);
+ t.start();
+ threadDumpingJoin(t);
+ }
+
+ /**
+   * Test clean exit of region server.
+   * Test is flaky up on Hudson. Needs work.
+ * @throws IOException
+ */
+ public void testCleanExit() throws IOException {
+ // When the META table can be opened, the region servers are running
+ new HTable(this.conf, HConstants.META_TABLE_NAME);
+ // Create table and add a row.
+ final String tableName = getName();
+ byte [] row = createTableAndAddRow(tableName);
+ // Start up a new region server to take over serving of root and meta
+ // after we shut down the current meta/root host.
+ this.cluster.startRegionServer();
+    // Now shut down the meta region server cleanly and wait for the meta region to be reassigned
+ stopOrAbortMetaRegionServer(false);
+ // Verify that everything is back up.
+ LOG.info("Starting up the verification thread for " + getName());
+ Thread t = startVerificationThread(tableName, row);
+ t.start();
+ threadDumpingJoin(t);
+ }
+
+ private byte [] createTableAndAddRow(final String tableName)
+ throws IOException {
+ HTableDescriptor desc = new HTableDescriptor(tableName);
+ desc.addFamily(new HColumnDescriptor(HConstants.COLUMN_FAMILY));
+ HBaseAdmin admin = new HBaseAdmin(conf);
+ admin.createTable(desc);
+ // put some values in the table
+ this.table = new HTable(conf, tableName);
+ byte [] row = Bytes.toBytes("row1");
+ BatchUpdate b = new BatchUpdate(row);
+ b.put(HConstants.COLUMN_FAMILY, Bytes.toBytes(tableName));
+ table.commit(b);
+ return row;
+ }
+
+ /*
+ * Stop the region server serving the meta region and wait for the meta region
+ * to get reassigned. This is always the most problematic case.
+ *
+ * @param abort set to true if region server should be aborted, if false it
+ * is just shut down.
+ */
+ private void stopOrAbortMetaRegionServer(boolean abort) {
+ List<LocalHBaseCluster.RegionServerThread> regionThreads =
+ cluster.getRegionThreads();
+
+ int server = -1;
+ for (int i = 0; i < regionThreads.size() && server == -1; i++) {
+ HRegionServer s = regionThreads.get(i).getRegionServer();
+ Collection<HRegion> regions = s.getOnlineRegions();
+ for (HRegion r : regions) {
+ if (Bytes.equals(r.getTableDesc().getName(),
+ HConstants.META_TABLE_NAME)) {
+ server = i;
+ }
+ }
+ }
+ if (server == -1) {
+ LOG.fatal("could not find region server serving meta region");
+ fail();
+ }
+ if (abort) {
+ this.cluster.abortRegionServer(server);
+
+ } else {
+ this.cluster.stopRegionServer(server);
+ }
+ LOG.info(this.cluster.waitOnRegionServer(server) + " has been " +
+ (abort ? "aborted" : "shut down"));
+ }
+
+ /*
+ * Run verification in a thread so I can concurrently run a thread-dumper
+ * while we're waiting (because in this test sometimes the meta scanner
+   * looks to be stuck).
+ * @param tableName Name of table to find.
+ * @param row Row we expect to find.
+   * @return Verification thread. Caller needs to call start on it.
+ */
+ private Thread startVerificationThread(final String tableName,
+ final byte [] row) {
+ Runnable runnable = new Runnable() {
+ public void run() {
+ try {
+ // Now try to open a scanner on the meta table. Should stall until
+ // meta server comes back up.
+ HTable t = new HTable(conf, HConstants.META_TABLE_NAME);
+ Scanner s =
+ t.getScanner(HConstants.COLUMN_FAMILY_ARRAY,
+ HConstants.EMPTY_START_ROW);
+ s.close();
+
+ } catch (IOException e) {
+ LOG.fatal("could not re-open meta table because", e);
+ fail();
+ }
+ Scanner scanner = null;
+ try {
+ // Verify that the client can find the data after the region has moved
+ // to a different server
+ scanner =
+ table.getScanner(HConstants.COLUMN_FAMILY_ARRAY,
+ HConstants.EMPTY_START_ROW);
+ LOG.info("Obtained scanner " + scanner);
+ for (RowResult r : scanner) {
+ assertTrue(Bytes.equals(r.getRow(), row));
+ assertEquals(1, r.size());
+ byte[] bytes = r.get(HConstants.COLUMN_FAMILY).getValue();
+ assertNotNull(bytes);
+ assertTrue(tableName.equals(Bytes.toString(bytes)));
+ }
+ LOG.info("Success!");
+ } catch (Exception e) {
+ e.printStackTrace();
+ fail();
+ } finally {
+ if (scanner != null) {
+ LOG.info("Closing scanner " + scanner);
+ scanner.close();
+ }
+ }
+ }
+ };
+ return new Thread(runnable);
+ }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/regionserver/OOMERegionServer.java b/src/test/org/apache/hadoop/hbase/regionserver/OOMERegionServer.java
new file mode 100644
index 0000000..d130baf
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/regionserver/OOMERegionServer.java
@@ -0,0 +1,61 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HServerAddress;
+
+/**
+ * A region server that will OOME.
+ * Every time {@link #batchUpdate(byte[], BatchUpdate)} is called, we keep
+ * around a reference to the batch. Use this class to test OOME extremes.
+ * Needs to be started manually, as in
+ * <code>${HBASE_HOME}/bin/hbase org.apache.hadoop.hbase.regionserver.OOMERegionServer start</code>.
+ */
+public class OOMERegionServer extends HRegionServer {
+ private List<BatchUpdate> retainer = new ArrayList<BatchUpdate>();
+
+ public OOMERegionServer(HBaseConfiguration conf) throws IOException {
+ super(conf);
+ }
+
+ public OOMERegionServer(HServerAddress address, HBaseConfiguration conf)
+ throws IOException {
+ super(address, conf);
+ }
+
+ public void batchUpdate(byte [] regionName, BatchUpdate b)
+ throws IOException {
+ super.batchUpdate(regionName, b, -1L);
+ for (int i = 0; i < 30; i++) {
+ // Add the batch update 30 times to bring on the OOME faster.
+ this.retainer.add(b);
+ }
+ }
+
+ public static void main(String[] args) {
+ HRegionServer.doMain(args, OOMERegionServer.class);
+ }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/regionserver/TestAtomicIncrement.java b/src/test/org/apache/hadoop/hbase/regionserver/TestAtomicIncrement.java
new file mode 100644
index 0000000..651daf1
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/regionserver/TestAtomicIncrement.java
@@ -0,0 +1,121 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseClusterTestCase;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.util.Bytes;
+
+public class TestAtomicIncrement extends HBaseClusterTestCase {
+ static final Log LOG = LogFactory.getLog(TestAtomicIncrement.class);
+
+ private static final byte [] CONTENTS = Bytes.toBytes("contents:");
+
+ public void testIncrement() throws IOException {
+ try {
+ HTable table = null;
+
+ // Setup
+
+ HTableDescriptor desc = new HTableDescriptor(getName());
+ desc.addFamily(
+ new HColumnDescriptor(CONTENTS, // Column name
+ 1, // Max versions
+ HColumnDescriptor.DEFAULT_COMPRESSION, // no compression
+ HColumnDescriptor.DEFAULT_IN_MEMORY, // not in memory
+ HColumnDescriptor.DEFAULT_BLOCKCACHE,
+ HColumnDescriptor.DEFAULT_LENGTH,
+ HColumnDescriptor.DEFAULT_TTL,
+ false
+ )
+ );
+
+ // Create the table
+
+ HBaseAdmin admin = new HBaseAdmin(conf);
+ admin.createTable(desc);
+
+ try {
+        // Give the cluster's background threads (cache flusher, log roller)
+        // a chance to run before we start issuing increments.
+ Thread.sleep(conf.getLong(HConstants.THREAD_WAKE_FREQUENCY, 10 * 1000) * 10);
+
+ } catch (InterruptedException e) {
+ // ignore
+ }
+ // Open table
+
+ table = new HTable(conf, desc.getName());
+
+ byte [] row = Bytes.toBytes("foo");
+ byte [] column = "contents:1".getBytes(HConstants.UTF8_ENCODING);
+ // increment by 1:
+ assertEquals(1L, table.incrementColumnValue(row, column, 1));
+
+ // set a weird value, then increment:
+ row = Bytes.toBytes("foo2");
+ byte [] value = {0,0,2};
+ BatchUpdate bu = new BatchUpdate(row);
+ bu.put(column, value);
+ table.commit(bu);
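+      // The stored 3-byte value {0,0,2} is expected to be read back as the
+      // number 2, so the increments below operate on that starting counter.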
+
+ assertEquals(3L, table.incrementColumnValue(row, column, 1));
+
+ assertEquals(-2L, table.incrementColumnValue(row, column, -5));
+ assertEquals(-502L, table.incrementColumnValue(row, column, -500));
+ assertEquals(1500L, table.incrementColumnValue(row, column, 2002));
+ assertEquals(1501L, table.incrementColumnValue(row, column, 1));
+
+ row = Bytes.toBytes("foo3");
+ byte[] value2 = {1,2,3,4,5,6,7,8,9};
+ bu = new BatchUpdate(row);
+ bu.put(column, value2);
+ table.commit(bu);
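+      // A stored value wider than 8 bytes presumably cannot be treated as a
+      // counter, so the increment below is expected to fail with an IOException.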
+
+ try {
+ table.incrementColumnValue(row, column, 1);
+ fail();
+ } catch (IOException e) {
+ System.out.println("Expected exception: " + e);
+ // expected exception.
+ }
+
+
+ } catch (Exception e) {
+ e.printStackTrace();
+ if (e instanceof IOException) {
+ IOException i = (IOException) e;
+ throw i;
+ }
+ fail();
+ }
+
+ }
+
+}
diff --git a/src/test/org/apache/hadoop/hbase/regionserver/TestBloomFilters.java b/src/test/org/apache/hadoop/hbase/regionserver/TestBloomFilters.java
new file mode 100644
index 0000000..91073e2
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/regionserver/TestBloomFilters.java
@@ -0,0 +1,247 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.util.Map;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HBaseClusterTestCase;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/** Tests per-column bloom filters */
+public class TestBloomFilters extends HBaseClusterTestCase {
+ static final Log LOG = LogFactory.getLog(TestBloomFilters.class);
+
+ private static final byte [] CONTENTS = Bytes.toBytes("contents:");
+
+ private static final byte [][] rows = {
+ Bytes.toBytes("wmjwjzyv"),
+ Bytes.toBytes("baietibz"),
+ Bytes.toBytes("guhsgxnv"),
+ Bytes.toBytes("mhnqycto"),
+ Bytes.toBytes("xcyqafgz"),
+ Bytes.toBytes("zidoamgb"),
+ Bytes.toBytes("tftfirzd"),
+ Bytes.toBytes("okapqlrg"),
+ Bytes.toBytes("yccwzwsq"),
+ Bytes.toBytes("qmonufqu"),
+ Bytes.toBytes("wlsctews"),
+ Bytes.toBytes("mksdhqri"),
+ Bytes.toBytes("wxxllokj"),
+ Bytes.toBytes("eviuqpls"),
+ Bytes.toBytes("bavotqmj"),
+ Bytes.toBytes("yibqzhdl"),
+ Bytes.toBytes("csfqmsyr"),
+ Bytes.toBytes("guxliyuh"),
+ Bytes.toBytes("pzicietj"),
+ Bytes.toBytes("qdwgrqwo"),
+ Bytes.toBytes("ujfzecmi"),
+ Bytes.toBytes("dzeqfvfi"),
+ Bytes.toBytes("phoegsij"),
+ Bytes.toBytes("bvudfcou"),
+ Bytes.toBytes("dowzmciz"),
+ Bytes.toBytes("etvhkizp"),
+ Bytes.toBytes("rzurqycg"),
+ Bytes.toBytes("krqfxuge"),
+ Bytes.toBytes("gflcohtd"),
+ Bytes.toBytes("fcrcxtps"),
+ Bytes.toBytes("qrtovxdq"),
+ Bytes.toBytes("aypxwrwi"),
+ Bytes.toBytes("dckpyznr"),
+ Bytes.toBytes("mdaawnpz"),
+ Bytes.toBytes("pakdfvca"),
+ Bytes.toBytes("xjglfbez"),
+ Bytes.toBytes("xdsecofi"),
+ Bytes.toBytes("sjlrfcab"),
+ Bytes.toBytes("ebcjawxv"),
+ Bytes.toBytes("hkafkjmy"),
+ Bytes.toBytes("oimmwaxo"),
+ Bytes.toBytes("qcuzrazo"),
+ Bytes.toBytes("nqydfkwk"),
+ Bytes.toBytes("frybvmlb"),
+ Bytes.toBytes("amxmaqws"),
+ Bytes.toBytes("gtkovkgx"),
+ Bytes.toBytes("vgwxrwss"),
+ Bytes.toBytes("xrhzmcep"),
+ Bytes.toBytes("tafwziil"),
+ Bytes.toBytes("erjmncnv"),
+ Bytes.toBytes("heyzqzrn"),
+ Bytes.toBytes("sowvyhtu"),
+ Bytes.toBytes("heeixgzy"),
+ Bytes.toBytes("ktcahcob"),
+ Bytes.toBytes("ljhbybgg"),
+ Bytes.toBytes("jiqfcksl"),
+ Bytes.toBytes("anjdkjhm"),
+ Bytes.toBytes("uzcgcuxp"),
+ Bytes.toBytes("vzdhjqla"),
+ Bytes.toBytes("svhgwwzq"),
+ Bytes.toBytes("zhswvhbp"),
+ Bytes.toBytes("ueceybwy"),
+ Bytes.toBytes("czkqykcw"),
+ Bytes.toBytes("ctisayir"),
+ Bytes.toBytes("hppbgciu"),
+ Bytes.toBytes("nhzgljfk"),
+ Bytes.toBytes("vaziqllf"),
+ Bytes.toBytes("narvrrij"),
+ Bytes.toBytes("kcevbbqi"),
+ Bytes.toBytes("qymuaqnp"),
+ Bytes.toBytes("pwqpfhsr"),
+ Bytes.toBytes("peyeicuk"),
+ Bytes.toBytes("kudlwihi"),
+ Bytes.toBytes("pkmqejlm"),
+ Bytes.toBytes("ylwzjftl"),
+ Bytes.toBytes("rhqrlqar"),
+ Bytes.toBytes("xmftvzsp"),
+ Bytes.toBytes("iaemtihk"),
+ Bytes.toBytes("ymsbrqcu"),
+ Bytes.toBytes("yfnlcxto"),
+ Bytes.toBytes("nluqopqh"),
+ Bytes.toBytes("wmrzhtox"),
+ Bytes.toBytes("qnffhqbl"),
+ Bytes.toBytes("zypqpnbw"),
+ Bytes.toBytes("oiokhatd"),
+ Bytes.toBytes("mdraddiu"),
+ Bytes.toBytes("zqoatltt"),
+ Bytes.toBytes("ewhulbtm"),
+ Bytes.toBytes("nmswpsdf"),
+ Bytes.toBytes("xsjeteqe"),
+ Bytes.toBytes("ufubcbma"),
+ Bytes.toBytes("phyxvrds"),
+ Bytes.toBytes("vhnfldap"),
+ Bytes.toBytes("zrrlycmg"),
+ Bytes.toBytes("becotcjx"),
+ Bytes.toBytes("wvbubokn"),
+ Bytes.toBytes("avkgiopr"),
+ Bytes.toBytes("mbqqxmrv"),
+ Bytes.toBytes("ibplgvuu"),
+ Bytes.toBytes("dghvpkgc")
+ };
+
+ private static final byte [][] testKeys = {
+ Bytes.toBytes("abcdefgh"),
+ Bytes.toBytes("ijklmnop"),
+ Bytes.toBytes("qrstuvwx"),
+ Bytes.toBytes("yzabcdef")
+ };
+
+ /**
+ * Test that uses automatic bloom filter
+ * @throws IOException
+ */
+ @SuppressWarnings("null")
+ public void testComputedParameters() throws IOException {
+ try {
+ HTable table = null;
+
+ // Setup
+
+ HTableDescriptor desc = new HTableDescriptor(getName());
+ desc.addFamily(
+ new HColumnDescriptor(CONTENTS, // Column name
+ 1, // Max versions
+ HColumnDescriptor.DEFAULT_COMPRESSION, // no compression
+ HColumnDescriptor.DEFAULT_IN_MEMORY, // not in memory
+ HColumnDescriptor.DEFAULT_BLOCKCACHE,
+ HColumnDescriptor.DEFAULT_LENGTH,
+ HColumnDescriptor.DEFAULT_TTL,
+ true
+ )
+ );
+
+ // Create the table
+
+ HBaseAdmin admin = new HBaseAdmin(conf);
+ admin.createTable(desc);
+
+ // Open table
+
+ table = new HTable(conf, desc.getName());
+
+ // Store some values
+
+ for(int i = 0; i < 100; i++) {
+ byte [] row = rows[i];
+ String value = Bytes.toString(row);
+ BatchUpdate b = new BatchUpdate(row);
+ b.put(CONTENTS, value.getBytes(HConstants.UTF8_ENCODING));
+ table.commit(b);
+ }
+
+ // Get HRegionInfo for our table
+ Map<HRegionInfo, HServerAddress> regions = table.getRegionsInfo();
+ assertEquals(1, regions.size());
+ HRegionInfo info = null;
+ for (HRegionInfo hri: regions.keySet()) {
+ info = hri;
+ break;
+ }
+
+ // Request a cache flush
+ HRegionServer hrs = cluster.getRegionServer(0);
+
+ hrs.getFlushRequester().request(hrs.getOnlineRegion(info.getRegionName()));
+
+ try {
+ // Give cache flusher and log roller a chance to run
+ // Otherwise we'll never hit the bloom filter, just the memcache
+ Thread.sleep(conf.getLong(HConstants.THREAD_WAKE_FREQUENCY, 10 * 1000) * 10);
+
+ } catch (InterruptedException e) {
+ // ignore
+ }
+
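+      // None of the probe keys below were ever written, so lookups against the
+      // flushed store (and its bloom filter) should come back empty, while every
+      // row that was written must still be readable.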
+ for(int i = 0; i < testKeys.length; i++) {
+ Cell value = table.get(testKeys[i], CONTENTS);
+ if(value != null && value.getValue().length != 0) {
+          LOG.error("non-existent key: " + Bytes.toString(testKeys[i]) + " returned value: " +
+ Bytes.toString(value.getValue()));
+ fail();
+ }
+ }
+
+ for (int i = 0; i < rows.length; i++) {
+ Cell value = table.get(rows[i], CONTENTS);
+ if (value == null || value.getValue().length == 0) {
+ LOG.error("No value returned for row " + Bytes.toString(rows[i]));
+ fail();
+ }
+ }
+ } catch (Exception e) {
+ e.printStackTrace();
+ if (e instanceof IOException) {
+ IOException i = (IOException) e;
+ throw i;
+ }
+ fail();
+ }
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/regionserver/TestCompaction.java b/src/test/org/apache/hadoop/hbase/regionserver/TestCompaction.java
new file mode 100644
index 0000000..8e2acdb
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/regionserver/TestCompaction.java
@@ -0,0 +1,193 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import org.apache.hadoop.dfs.MiniDFSCluster;
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.hfile.HFileScanner;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Test compactions
+ */
+public class TestCompaction extends HBaseTestCase {
+ static final Log LOG = LogFactory.getLog(TestCompaction.class.getName());
+ private HRegion r = null;
+ private static final byte [] COLUMN_FAMILY = COLFAMILY_NAME1;
+ private final byte [] STARTROW = Bytes.toBytes(START_KEY);
+ private static final byte [] COLUMN_FAMILY_TEXT = COLUMN_FAMILY;
+ private static final int COMPACTION_THRESHOLD = MAXVERSIONS;
+
+ private MiniDFSCluster cluster;
+
+ /** constructor */
+ public TestCompaction() {
+ super();
+
+ // Set cache flush size to 1MB
+ conf.setInt("hbase.hregion.memcache.flush.size", 1024*1024);
+ conf.setInt("hbase.hregion.memcache.block.multiplier", 10);
+ this.cluster = null;
+ }
+
+ @Override
+ public void setUp() throws Exception {
+ this.cluster = new MiniDFSCluster(conf, 2, true, (String[])null);
+    // Make the hbase rootdir match the minidfs we just spun up
+ this.conf.set(HConstants.HBASE_DIR,
+ this.cluster.getFileSystem().getHomeDirectory().toString());
+ super.setUp();
+ HTableDescriptor htd = createTableDescriptor(getName());
+ this.r = createNewHRegion(htd, null, null);
+ }
+
+ @Override
+ public void tearDown() throws Exception {
+ HLog hlog = r.getLog();
+ this.r.close();
+ hlog.closeAndDelete();
+ if (this.cluster != null) {
+ shutdownDfs(cluster);
+ }
+ super.tearDown();
+ }
+
+ /**
+   * Run compaction and memcache flushing.
+   * Assert that deletes get cleaned up.
+ * @throws Exception
+ */
+ public void testCompaction() throws Exception {
+ createStoreFile(r);
+ for (int i = 0; i < COMPACTION_THRESHOLD; i++) {
+ createStoreFile(r);
+ }
+ // Add more content. Now there are about 5 versions of each column.
+    // By default only 3 (MAXVERSIONS) versions are allowed per column.
+ // Assert == 3 when we ask for versions.
+ addContent(new HRegionIncommon(r), Bytes.toString(COLUMN_FAMILY));
+ // FIX!!
+ Cell[] cellValues =
+ Cell.createSingleCellArray(r.get(STARTROW, COLUMN_FAMILY_TEXT, -1, 100 /*Too many*/));
+ // Assert that I can get 3 versions since it is the max I should get
+ assertEquals(cellValues.length, 3);
+ r.flushcache();
+ r.compactStores();
+ // Always 3 versions if that is what max versions is.
+ byte [] secondRowBytes = START_KEY.getBytes(HConstants.UTF8_ENCODING);
+    // Increment the least significant character so we get to the next row.
+ secondRowBytes[START_KEY_BYTES.length - 1]++;
+ // FIX
+ cellValues = Cell.createSingleCellArray(r.get(secondRowBytes, COLUMN_FAMILY_TEXT, -1, 100/*Too many*/));
+ LOG.info("Count of " + Bytes.toString(secondRowBytes) + ": " +
+ cellValues.length);
+ assertTrue(cellValues.length == 3);
+
+ // Now add deletes to memcache and then flush it. That will put us over
+ // the compaction threshold of 3 store files. Compacting these store files
+ // should result in a compacted store file that has no references to the
+ // deleted row.
+ r.deleteAll(secondRowBytes, COLUMN_FAMILY_TEXT, System.currentTimeMillis(),
+ null);
+ // Assert deleted.
+ assertNull(r.get(secondRowBytes, COLUMN_FAMILY_TEXT, -1, 100 /*Too many*/));
+ r.flushcache();
+ assertNull(r.get(secondRowBytes, COLUMN_FAMILY_TEXT, -1, 100 /*Too many*/));
+ // Add a bit of data and flush. Start adding at 'bbb'.
+ createSmallerStoreFile(this.r);
+ r.flushcache();
+ // Assert that the second row is still deleted.
+ cellValues = Cell.createSingleCellArray(r.get(secondRowBytes,
+ COLUMN_FAMILY_TEXT, -1, 100 /*Too many*/));
+ assertNull(r.get(secondRowBytes, COLUMN_FAMILY_TEXT, -1, 100 /*Too many*/));
+ // Force major compaction.
+ r.compactStores(true);
+ assertEquals(r.getStore(COLUMN_FAMILY_TEXT).getStorefiles().size(), 1);
+ assertNull(r.get(secondRowBytes, COLUMN_FAMILY_TEXT, -1, 100 /*Too many*/));
+ // Make sure the store files do have some 'aaa' keys in them -- exactly 3.
+ // Also, that compacted store files do not have any secondRowBytes because
+ // they were deleted.
+ int count = 0;
+ boolean containsStartRow = false;
+ for (StoreFile f: this.r.stores.get(COLUMN_FAMILY_TEXT).getStorefiles().
+ values()) {
+ HFileScanner scanner = f.getReader().getScanner();
+ scanner.seekTo();
+ do {
+ byte [] row = scanner.getKeyValue().getRow();
+ if (Bytes.equals(row, STARTROW)) {
+ containsStartRow = true;
+ count++;
+ } else {
+ // After major compaction, should be none of these rows in compacted
+ // file.
+ assertFalse(Bytes.equals(row, secondRowBytes));
+ }
+ } while(scanner.next());
+ }
+ assertTrue(containsStartRow);
+ assertTrue(count == 3);
+ // Do a simple TTL test.
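+    // Set a one-second TTL on every store, let it expire, then major compact;
+    // all cells should be aged out so count() should find nothing.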
+ final int ttlInSeconds = 1;
+ for (Store store: this.r.stores.values()) {
+ store.ttl = ttlInSeconds * 1000;
+ }
+ Thread.sleep(ttlInSeconds * 1000);
+ r.compactStores(true);
+ count = count();
+ assertTrue(count == 0);
+ }
+
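+  /** Counts all entries across the store files of the test column family. */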
+ private int count() throws IOException {
+ int count = 0;
+ for (StoreFile f: this.r.stores.
+ get(COLUMN_FAMILY_TEXT).getStorefiles().values()) {
+ HFileScanner scanner = f.getReader().getScanner();
+ if (!scanner.seekTo()) {
+ continue;
+ }
+ do {
+ count++;
+ } while(scanner.next());
+ }
+ return count;
+ }
+
+ private void createStoreFile(final HRegion region) throws IOException {
+ HRegionIncommon loader = new HRegionIncommon(region);
+ addContent(loader, Bytes.toString(COLUMN_FAMILY));
+ loader.flushcache();
+ }
+
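+  /** Adds rows starting at 'bbb' so the resulting store file covers a smaller key range. */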
+ private void createSmallerStoreFile(final HRegion region) throws IOException {
+ HRegionIncommon loader = new HRegionIncommon(region);
+ addContent(loader, Bytes.toString(COLUMN_FAMILY),
+ ("bbb").getBytes(), null);
+ loader.flushcache();
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/regionserver/TestDeleteAll.java b/src/test/org/apache/hadoop/hbase/regionserver/TestDeleteAll.java
new file mode 100644
index 0000000..d7e41d3
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/regionserver/TestDeleteAll.java
@@ -0,0 +1,234 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import org.apache.hadoop.dfs.MiniDFSCluster;
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Test the functionality of deleteAll.
+ */
+public class TestDeleteAll extends HBaseTestCase {
+ static final Log LOG = LogFactory.getLog(TestDeleteAll.class);
+
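+  // Matches columns in the family whose qualifier is empty, 'b', or 'c';
+  // colA is the only test column the regex-based deletes below leave untouched.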
+ private final String COLUMN_REGEX = "[a-zA-Z0-9]*:[b|c]?";
+
+ private MiniDFSCluster miniHdfs;
+
+ @Override
+ protected void setUp() throws Exception {
+ super.setUp();
+ try {
+ this.miniHdfs = new MiniDFSCluster(this.conf, 1, true, null);
+ // Set the hbase.rootdir to be the home directory in mini dfs.
+ this.conf.set(HConstants.HBASE_DIR,
+ this.miniHdfs.getFileSystem().getHomeDirectory().toString());
+ } catch (Exception e) {
+ LOG.fatal("error starting MiniDFSCluster", e);
+ throw e;
+ }
+ }
+
+ /**
+ * Tests for HADOOP-1550.
+ * @throws Exception
+ */
+ public void testDeleteAll() throws Exception {
+ HRegion region = null;
+ HRegionIncommon region_incommon = null;
+ try {
+ HTableDescriptor htd = createTableDescriptor(getName());
+ region = createNewHRegion(htd, null, null);
+ region_incommon = new HRegionIncommon(region);
+
+ // test memcache
+ makeSureItWorks(region, region_incommon, false);
+ // test hstore
+ makeSureItWorks(region, region_incommon, true);
+
+ // regex test memcache
+ makeSureRegexWorks(region, region_incommon, false);
+ // regex test hstore
+ makeSureRegexWorks(region, region_incommon, true);
+ } finally {
+ if (region != null) {
+ try {
+ region.close();
+ } catch (Exception e) {
+ e.printStackTrace();
+ }
+ region.getLog().closeAndDelete();
+ }
+ }
+ }
+
+ private void makeSureItWorks(HRegion region, HRegionIncommon region_incommon,
+ boolean flush)
+ throws Exception{
+    // insert a few versions' worth of data for a row
+ byte [] row = Bytes.toBytes("test_row");
+ long now = System.currentTimeMillis();
+ long past = now - 100;
+ long future = now + 100;
+ Thread.sleep(100);
+ LOG.info("now=" + now + ", past=" + past + ", future=" + future);
+
+ byte [] colA = Bytes.toBytes(Bytes.toString(COLUMNS[0]) + "a");
+ byte [] colB = Bytes.toBytes(Bytes.toString(COLUMNS[0]) + "b");
+ byte [] colC = Bytes.toBytes(Bytes.toString(COLUMNS[0]) + "c");
+ byte [] colD = Bytes.toBytes(Bytes.toString(COLUMNS[0]));
+
+ BatchUpdate batchUpdate = new BatchUpdate(row, now);
+ batchUpdate.put(colA, cellData(0, flush).getBytes());
+ batchUpdate.put(colB, cellData(0, flush).getBytes());
+ batchUpdate.put(colC, cellData(0, flush).getBytes());
+ batchUpdate.put(colD, cellData(0, flush).getBytes());
+ region_incommon.commit(batchUpdate);
+
+ batchUpdate = new BatchUpdate(row, past);
+ batchUpdate.put(colA, cellData(1, flush).getBytes());
+ batchUpdate.put(colB, cellData(1, flush).getBytes());
+ batchUpdate.put(colC, cellData(1, flush).getBytes());
+ batchUpdate.put(colD, cellData(1, flush).getBytes());
+ region_incommon.commit(batchUpdate);
+
+ batchUpdate = new BatchUpdate(row, future);
+ batchUpdate.put(colA, cellData(2, flush).getBytes());
+ batchUpdate.put(colB, cellData(2, flush).getBytes());
+ batchUpdate.put(colC, cellData(2, flush).getBytes());
+ batchUpdate.put(colD, cellData(2, flush).getBytes());
+ region_incommon.commit(batchUpdate);
+
+ if (flush) {region_incommon.flushcache();}
+
+ // call delete all at a timestamp, make sure only the most recent stuff is left behind
+ region.deleteAll(row, now, null);
+ if (flush) {region_incommon.flushcache();}
+ assertCellEquals(region, row, colA, future, cellData(2, flush));
+ assertCellEquals(region, row, colA, past, null);
+ assertCellEquals(region, row, colA, now, null);
+ assertCellEquals(region, row, colD, future, cellData(2, flush));
+ assertCellEquals(region, row, colD, past, null);
+ assertCellEquals(region, row, colD, now, null);
+
+ // call delete all w/o a timestamp, make sure nothing is left.
+ region.deleteAll(row, HConstants.LATEST_TIMESTAMP, null);
+ if (flush) {region_incommon.flushcache();}
+ assertCellEquals(region, row, colA, now, null);
+ assertCellEquals(region, row, colA, past, null);
+ assertCellEquals(region, row, colA, future, null);
+ assertCellEquals(region, row, colD, now, null);
+ assertCellEquals(region, row, colD, past, null);
+ assertCellEquals(region, row, colD, future, null);
+
+ }
+
+ private void makeSureRegexWorks(HRegion region, HRegionIncommon region_incommon,
+ boolean flush)
+ throws Exception{
+    // insert a few versions' worth of data for a row
+ byte [] row = Bytes.toBytes("test_row");
+ long t0 = System.currentTimeMillis();
+ long t1 = t0 - 15000;
+ long t2 = t1 - 15000;
+
+ byte [] colA = Bytes.toBytes(Bytes.toString(COLUMNS[0]) + "a");
+ byte [] colB = Bytes.toBytes(Bytes.toString(COLUMNS[0]) + "b");
+ byte [] colC = Bytes.toBytes(Bytes.toString(COLUMNS[0]) + "c");
+ byte [] colD = Bytes.toBytes(Bytes.toString(COLUMNS[0]));
+
+ BatchUpdate batchUpdate = new BatchUpdate(row, t0);
+ batchUpdate.put(colA, cellData(0, flush).getBytes());
+ batchUpdate.put(colB, cellData(0, flush).getBytes());
+ batchUpdate.put(colC, cellData(0, flush).getBytes());
+ batchUpdate.put(colD, cellData(0, flush).getBytes());
+ region_incommon.commit(batchUpdate);
+
+ batchUpdate = new BatchUpdate(row, t1);
+ batchUpdate.put(colA, cellData(1, flush).getBytes());
+ batchUpdate.put(colB, cellData(1, flush).getBytes());
+ batchUpdate.put(colC, cellData(1, flush).getBytes());
+ batchUpdate.put(colD, cellData(1, flush).getBytes());
+ region_incommon.commit(batchUpdate);
+
+ batchUpdate = new BatchUpdate(row, t2);
+ batchUpdate.put(colA, cellData(2, flush).getBytes());
+ batchUpdate.put(colB, cellData(2, flush).getBytes());
+ batchUpdate.put(colC, cellData(2, flush).getBytes());
+ batchUpdate.put(colD, cellData(2, flush).getBytes());
+ region_incommon.commit(batchUpdate);
+
+ if (flush) {region_incommon.flushcache();}
+
+    // call delete on the matching columns at a timestamp,
+ // make sure only the most recent stuff is left behind
+ region.deleteAllByRegex(row, COLUMN_REGEX, t1, null);
+ if (flush) {region_incommon.flushcache();}
+ assertCellEquals(region, row, colA, t0, cellData(0, flush));
+ assertCellEquals(region, row, colA, t1, cellData(1, flush));
+ assertCellEquals(region, row, colA, t2, cellData(2, flush));
+ assertCellEquals(region, row, colB, t0, cellData(0, flush));
+ assertCellEquals(region, row, colB, t1, null);
+ assertCellEquals(region, row, colB, t2, null);
+ assertCellEquals(region, row, colC, t0, cellData(0, flush));
+ assertCellEquals(region, row, colC, t1, null);
+ assertCellEquals(region, row, colC, t2, null);
+ assertCellEquals(region, row, colD, t0, cellData(0, flush));
+ assertCellEquals(region, row, colD, t1, null);
+ assertCellEquals(region, row, colD, t2, null);
+
+ // call delete all w/o a timestamp, make sure nothing is left.
+ region.deleteAllByRegex(row, COLUMN_REGEX,
+ HConstants.LATEST_TIMESTAMP, null);
+ if (flush) {region_incommon.flushcache();}
+ assertCellEquals(region, row, colA, t0, cellData(0, flush));
+ assertCellEquals(region, row, colA, t1, cellData(1, flush));
+ assertCellEquals(region, row, colA, t2, cellData(2, flush));
+ assertCellEquals(region, row, colB, t0, null);
+ assertCellEquals(region, row, colB, t1, null);
+ assertCellEquals(region, row, colB, t2, null);
+ assertCellEquals(region, row, colC, t0, null);
+ assertCellEquals(region, row, colC, t1, null);
+ assertCellEquals(region, row, colC, t2, null);
+ assertCellEquals(region, row, colD, t0, null);
+ assertCellEquals(region, row, colD, t1, null);
+ assertCellEquals(region, row, colD, t2, null);
+
+ }
+
+ private String cellData(int tsNum, boolean flush){
+ return "t" + tsNum + " data" + (flush ? " - with flush" : "");
+ }
+
+ @Override
+ protected void tearDown() throws Exception {
+ if (this.miniHdfs != null) {
+ shutdownDfs(this.miniHdfs);
+ }
+ super.tearDown();
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/regionserver/TestDeleteFamily.java b/src/test/org/apache/hadoop/hbase/regionserver/TestDeleteFamily.java
new file mode 100644
index 0000000..81f6076
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/regionserver/TestDeleteFamily.java
@@ -0,0 +1,225 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import org.apache.hadoop.dfs.MiniDFSCluster;
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Test the functionality of deleteFamily.
+ */
+public class TestDeleteFamily extends HBaseTestCase {
+ static final Log LOG = LogFactory.getLog(TestDeleteFamily.class);
+ private MiniDFSCluster miniHdfs;
+
+  // For the family regex deletion test.
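+  // Intended to match only the first test column family, so columns in the
+  // second family survive the regex-based family deletes.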
+ protected static final String COLFAMILY_REGEX = "col[a-zA-Z]*1";
+
+ @Override
+ protected void setUp() throws Exception {
+ super.setUp();
+ this.miniHdfs = new MiniDFSCluster(this.conf, 1, true, null);
+ // Set the hbase.rootdir to be the home directory in mini dfs.
+ this.conf.set(HConstants.HBASE_DIR,
+ this.miniHdfs.getFileSystem().getHomeDirectory().toString());
+ }
+
+ /**
+ * Tests for HADOOP-2384.
+ * @throws Exception
+ */
+ public void testDeleteFamily() throws Exception {
+ HRegion region = null;
+ HRegionIncommon region_incommon = null;
+ try {
+ HTableDescriptor htd = createTableDescriptor(getName());
+ region = createNewHRegion(htd, null, null);
+ region_incommon = new HRegionIncommon(region);
+
+ // test memcache
+ makeSureItWorks(region, region_incommon, false);
+ // test hstore
+ makeSureItWorks(region, region_incommon, true);
+ // family regex test memcache
+ makeSureRegexWorks(region, region_incommon, false);
+ // family regex test hstore
+ makeSureRegexWorks(region, region_incommon, true);
+
+ } finally {
+ if (region != null) {
+ try {
+ region.close();
+ } catch (Exception e) {
+ e.printStackTrace();
+ }
+ region.getLog().closeAndDelete();
+ }
+ }
+ }
+
+ private void makeSureItWorks(HRegion region, HRegionIncommon region_incommon,
+ boolean flush)
+ throws Exception{
+    // insert a few versions' worth of data for a row
+ byte [] row = Bytes.toBytes("test_row");
+ long t0 = System.currentTimeMillis();
+ long t1 = t0 - 15000;
+ long t2 = t1 - 15000;
+
+ byte [] colA = Bytes.toBytes(Bytes.toString(COLUMNS[0]) + "a");
+ byte [] colB = Bytes.toBytes(Bytes.toString(COLUMNS[0]) + "b");
+ byte [] colC = Bytes.toBytes(Bytes.toString(COLUMNS[1]) + "c");
+
+ BatchUpdate batchUpdate = null;
+ batchUpdate = new BatchUpdate(row, t0);
+ batchUpdate.put(colA, cellData(0, flush).getBytes());
+ batchUpdate.put(colB, cellData(0, flush).getBytes());
+ batchUpdate.put(colC, cellData(0, flush).getBytes());
+ region_incommon.commit(batchUpdate);
+
+ batchUpdate = new BatchUpdate(row, t1);
+ batchUpdate.put(colA, cellData(1, flush).getBytes());
+ batchUpdate.put(colB, cellData(1, flush).getBytes());
+ batchUpdate.put(colC, cellData(1, flush).getBytes());
+ region_incommon.commit(batchUpdate);
+
+ batchUpdate = new BatchUpdate(row, t2);
+ batchUpdate.put(colA, cellData(2, flush).getBytes());
+ batchUpdate.put(colB, cellData(2, flush).getBytes());
+ batchUpdate.put(colC, cellData(2, flush).getBytes());
+ region_incommon.commit(batchUpdate);
+
+ if (flush) {region_incommon.flushcache();}
+
+ // call delete family at a timestamp, make sure only the most recent stuff
+    // for column C is left behind
+ region.deleteFamily(row, COLUMNS[0], t1, null);
+ if (flush) {region_incommon.flushcache();}
+ // most recent for A,B,C should be fine
+ // A,B at older timestamps should be gone
+ // C should be fine for older timestamps
+ assertCellEquals(region, row, colA, t0, cellData(0, flush));
+ assertCellEquals(region, row, colA, t1, null);
+ assertCellEquals(region, row, colA, t2, null);
+ assertCellEquals(region, row, colB, t0, cellData(0, flush));
+ assertCellEquals(region, row, colB, t1, null);
+ assertCellEquals(region, row, colB, t2, null);
+ assertCellEquals(region, row, colC, t0, cellData(0, flush));
+ assertCellEquals(region, row, colC, t1, cellData(1, flush));
+ assertCellEquals(region, row, colC, t2, cellData(2, flush));
+
+ // call delete family w/o a timestamp, make sure nothing is left except for
+ // column C.
+ region.deleteFamily(row, COLUMNS[0], HConstants.LATEST_TIMESTAMP, null);
+ if (flush) {region_incommon.flushcache();}
+ // A,B for latest timestamp should be gone
+ // C should still be fine
+ assertCellEquals(region, row, colA, t0, null);
+ assertCellEquals(region, row, colB, t0, null);
+ assertCellEquals(region, row, colC, t0, cellData(0, flush));
+ assertCellEquals(region, row, colC, t1, cellData(1, flush));
+ assertCellEquals(region, row, colC, t2, cellData(2, flush));
+
+ }
+
+ private void makeSureRegexWorks(HRegion region, HRegionIncommon region_incommon,
+ boolean flush)
+ throws Exception{
+    // insert a few versions' worth of data for a row
+ byte [] row = Bytes.toBytes("test_row");
+ long t0 = System.currentTimeMillis();
+ long t1 = t0 - 15000;
+ long t2 = t1 - 15000;
+
+ byte [] colA = Bytes.toBytes(Bytes.toString(COLUMNS[0]) + "a");
+ byte [] colB = Bytes.toBytes(Bytes.toString(COLUMNS[0]) + "b");
+ byte [] colC = Bytes.toBytes(Bytes.toString(COLUMNS[1]) + "c");
+
+ BatchUpdate batchUpdate = null;
+ batchUpdate = new BatchUpdate(row, t0);
+ batchUpdate.put(colA, cellData(0, flush).getBytes());
+ batchUpdate.put(colB, cellData(0, flush).getBytes());
+ batchUpdate.put(colC, cellData(0, flush).getBytes());
+ region_incommon.commit(batchUpdate);
+
+ batchUpdate = new BatchUpdate(row, t1);
+ batchUpdate.put(colA, cellData(1, flush).getBytes());
+ batchUpdate.put(colB, cellData(1, flush).getBytes());
+ batchUpdate.put(colC, cellData(1, flush).getBytes());
+ region_incommon.commit(batchUpdate);
+
+ batchUpdate = new BatchUpdate(row, t2);
+ batchUpdate.put(colA, cellData(2, flush).getBytes());
+ batchUpdate.put(colB, cellData(2, flush).getBytes());
+ batchUpdate.put(colC, cellData(2, flush).getBytes());
+ region_incommon.commit(batchUpdate);
+
+ if (flush) {region_incommon.flushcache();}
+
+ // call delete family at a timestamp, make sure only the most recent stuff
+    // for column C is left behind
+ region.deleteFamilyByRegex(row, COLFAMILY_REGEX, t1, null);
+ if (flush) {region_incommon.flushcache();}
+ // most recent for A,B,C should be fine
+ // A,B at older timestamps should be gone
+ // C should be fine for older timestamps
+ assertCellEquals(region, row, colA, t0, cellData(0, flush));
+ assertCellEquals(region, row, colA, t1, null);
+ assertCellEquals(region, row, colA, t2, null);
+ assertCellEquals(region, row, colB, t0, cellData(0, flush));
+ assertCellEquals(region, row, colB, t1, null);
+ assertCellEquals(region, row, colB, t2, null);
+ assertCellEquals(region, row, colC, t0, cellData(0, flush));
+ assertCellEquals(region, row, colC, t1, cellData(1, flush));
+ assertCellEquals(region, row, colC, t2, cellData(2, flush));
+
+ // call delete family w/o a timestamp, make sure nothing is left except for
+ // column C.
+ region.deleteFamilyByRegex(row, COLFAMILY_REGEX, HConstants.LATEST_TIMESTAMP, null);
+ if (flush) {region_incommon.flushcache();}
+ // A,B for latest timestamp should be gone
+ // C should still be fine
+ assertCellEquals(region, row, colA, t0, null);
+ assertCellEquals(region, row, colB, t0, null);
+ assertCellEquals(region, row, colC, t0, cellData(0, flush));
+ assertCellEquals(region, row, colC, t1, cellData(1, flush));
+ assertCellEquals(region, row, colC, t2, cellData(2, flush));
+
+ }
+
+ private String cellData(int tsNum, boolean flush){
+ return "t" + tsNum + " data" + (flush ? " - with flush" : "");
+ }
+
+ @Override
+ protected void tearDown() throws Exception {
+ if (this.miniHdfs != null) {
+ this.miniHdfs.shutdown();
+ }
+ super.tearDown();
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/regionserver/TestGet.java b/src/test/org/apache/hadoop/hbase/regionserver/TestGet.java
new file mode 100644
index 0000000..b95514b
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/regionserver/TestGet.java
@@ -0,0 +1,166 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.util.Map;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.dfs.MiniDFSCluster;
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Writables;
+
+/** Test case for get */
+public class TestGet extends HBaseTestCase {
+ private static final Log LOG = LogFactory.getLog(TestGet.class.getName());
+
+ private static final byte [] CONTENTS = Bytes.toBytes("contents:");
+ private static final byte [] ROW_KEY =
+ HRegionInfo.ROOT_REGIONINFO.getRegionName();
+ private static final String SERVER_ADDRESS = "foo.bar.com:1234";
+
+
+
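+  /**
+   * Asserts that a get of the single-member family returns a value, that a
+   * get of the multi-member family returns nothing, and that getFull reports
+   * the expected server address under COL_SERVER.
+   */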
+ private void verifyGet(final HRegionIncommon r, final String expectedServer)
+ throws IOException {
+ // This should return a value because there is only one family member
+ Cell value = r.get(ROW_KEY, CONTENTS);
+ assertNotNull(value);
+
+ // This should not return a value because there are multiple family members
+ value = r.get(ROW_KEY, HConstants.COLUMN_FAMILY);
+ assertNull(value);
+
+ // Find out what getFull returns
+ Map<byte [], Cell> values = r.getFull(ROW_KEY);
+
+ // assertEquals(4, values.keySet().size());
+ for (Map.Entry<byte[], Cell> entry : values.entrySet()) {
+ byte[] column = entry.getKey();
+ Cell cell = entry.getValue();
+ if (Bytes.equals(column, HConstants.COL_SERVER)) {
+ String server = Writables.cellToString(cell);
+ assertEquals(expectedServer, server);
+ LOG.info(server);
+ }
+ }
+ }
+
+ /**
+   * Verify that get returns the same results from the memcache as from disk.
+ * @throws IOException
+ */
+ public void testGet() throws IOException {
+ MiniDFSCluster cluster = null;
+ HRegion region = null;
+
+ try {
+
+ // Initialization
+
+ cluster = new MiniDFSCluster(conf, 2, true, (String[])null);
+ // Set the hbase.rootdir to be the home directory in mini dfs.
+ this.conf.set(HConstants.HBASE_DIR,
+ cluster.getFileSystem().getHomeDirectory().toString());
+
+ HTableDescriptor desc = new HTableDescriptor("test");
+ desc.addFamily(new HColumnDescriptor(CONTENTS));
+ desc.addFamily(new HColumnDescriptor(HConstants.COLUMN_FAMILY));
+
+ region = createNewHRegion(desc, null, null);
+ HRegionIncommon r = new HRegionIncommon(region);
+
+ // Write information to the table
+
+ BatchUpdate batchUpdate = null;
+ batchUpdate = new BatchUpdate(ROW_KEY, System.currentTimeMillis());
+ batchUpdate.put(CONTENTS, CONTENTS);
+ batchUpdate.put(HConstants.COL_REGIONINFO,
+ Writables.getBytes(HRegionInfo.ROOT_REGIONINFO));
+ r.commit(batchUpdate);
+
+ batchUpdate = new BatchUpdate(ROW_KEY, System.currentTimeMillis());
+ batchUpdate.put(HConstants.COL_SERVER,
+ Bytes.toBytes(new HServerAddress(SERVER_ADDRESS).toString()));
+ batchUpdate.put(HConstants.COL_STARTCODE, Bytes.toBytes(12345));
+ batchUpdate.put(Bytes.toString(HConstants.COLUMN_FAMILY) +
+ "region", Bytes.toBytes("region"));
+ r.commit(batchUpdate);
+
+ // Verify that get works the same from memcache as when reading from disk
+ // NOTE dumpRegion won't work here because it only reads from disk.
+
+ verifyGet(r, SERVER_ADDRESS);
+
+ // Close and re-open region, forcing updates to disk
+
+ region.close();
+ region = openClosedRegion(region);
+ r = new HRegionIncommon(region);
+
+ // Read it back
+
+ verifyGet(r, SERVER_ADDRESS);
+
+ // Update one family member and add a new one
+
+ batchUpdate = new BatchUpdate(ROW_KEY, System.currentTimeMillis());
+ batchUpdate.put(Bytes.toString(HConstants.COLUMN_FAMILY) + "region",
+ "region2".getBytes(HConstants.UTF8_ENCODING));
+ String otherServerName = "bar.foo.com:4321";
+ batchUpdate.put(HConstants.COL_SERVER,
+ Bytes.toBytes(new HServerAddress(otherServerName).toString()));
+ batchUpdate.put(Bytes.toString(HConstants.COLUMN_FAMILY) + "junk",
+ "junk".getBytes(HConstants.UTF8_ENCODING));
+ r.commit(batchUpdate);
+
+ verifyGet(r, otherServerName);
+
+ // Close region and re-open it
+
+ region.close();
+ region = openClosedRegion(region);
+ r = new HRegionIncommon(region);
+
+ // Read it back
+
+ verifyGet(r, otherServerName);
+
+ } finally {
+ if (region != null) {
+ // Close region once and for all
+ region.close();
+ region.getLog().closeAndDelete();
+ }
+ if (cluster != null) {
+ shutdownDfs(cluster);
+ }
+ }
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/regionserver/TestGet2.java b/src/test/org/apache/hadoop/hbase/regionserver/TestGet2.java
new file mode 100644
index 0000000..547358f
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/regionserver/TestGet2.java
@@ -0,0 +1,721 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.TreeSet;
+
+import org.apache.hadoop.dfs.MiniDFSCluster;
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.filter.StopRowFilter;
+import org.apache.hadoop.hbase.filter.WhileMatchRowFilter;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * {@link TestGet} is a medley of tests of get, all done up as a single test.
+ * This class breaks them out into individual test methods.
+ */
+public class TestGet2 extends HBaseTestCase implements HConstants {
+ private MiniDFSCluster miniHdfs;
+
+ private static final String T00 = "000";
+ private static final String T10 = "010";
+ private static final String T11 = "011";
+ private static final String T12 = "012";
+ private static final String T20 = "020";
+ private static final String T30 = "030";
+ private static final String T31 = "031";
+
+ @Override
+ protected void setUp() throws Exception {
+ super.setUp();
+ this.miniHdfs = new MiniDFSCluster(this.conf, 1, true, null);
+ // Set the hbase.rootdir to be the home directory in mini dfs.
+ this.conf.set(HConstants.HBASE_DIR,
+ this.miniHdfs.getFileSystem().getHomeDirectory().toString());
+ }
+
+
+ public void testGetFullMultiMapfile() throws IOException {
+ HRegion region = null;
+ BatchUpdate batchUpdate = null;
+ Map<byte [], Cell> results = null;
+
+ try {
+ HTableDescriptor htd = createTableDescriptor(getName());
+ region = createNewHRegion(htd, null, null);
+
+ // Test ordering issue
+ //
+ byte [] row = Bytes.toBytes("row1");
+
+ // write some data
+ batchUpdate = new BatchUpdate(row);
+ batchUpdate.put(COLUMNS[0], "olderValue".getBytes());
+ region.batchUpdate(batchUpdate, null);
+
+ // flush
+ region.flushcache();
+
+ // assert that getFull gives us the older value
+ results = region.getFull(row, (NavigableSet<byte []>)null,
+ LATEST_TIMESTAMP, 1, null);
+ assertEquals("olderValue",
+ new String(results.get(COLUMNS[0]).getValue()));
+
+ // write a new value for the cell
+ batchUpdate = new BatchUpdate(row);
+ batchUpdate.put(COLUMNS[0], "newerValue".getBytes());
+ region.batchUpdate(batchUpdate, null);
+
+ // flush
+ region.flushcache();
+
+ // assert that getFull gives us the later value
+ results = region.getFull(row, (NavigableSet<byte []>)null,
+ LATEST_TIMESTAMP, 1, null);
+ assertEquals("newerValue", new String(results.get(COLUMNS[0]).getValue()));
+
+ //
+ // Test the delete masking issue
+ //
+ byte [] row2 = Bytes.toBytes("row2");
+ byte [] cell1 = Bytes.toBytes(Bytes.toString(COLUMNS[0]) + "a");
+ byte [] cell2 = Bytes.toBytes(Bytes.toString(COLUMNS[0]) + "b");
+ byte [] cell3 = Bytes.toBytes(Bytes.toString(COLUMNS[0]) + "c");
+
+ long now = System.currentTimeMillis();
+
+ // write some data at two columns
+ batchUpdate = new BatchUpdate(row2, now);
+ batchUpdate.put(cell1, "column0 value".getBytes());
+ batchUpdate.put(cell2, "column1 value".getBytes());
+ region.batchUpdate(batchUpdate, null);
+
+ // flush
+ region.flushcache();
+
+      // assert I get both columns
+ results = region.getFull(row2,
+ (NavigableSet<byte []>)null, LATEST_TIMESTAMP, 1, null);
+ assertEquals("Should have two columns in the results map", 2, results.size());
+ assertEquals("column0 value", new String(results.get(cell1).getValue()));
+ assertEquals("column1 value", new String(results.get(cell2).getValue()));
+
+ // write a delete for the first column
+ batchUpdate = new BatchUpdate(row2, now);
+ batchUpdate.delete(cell1);
+ batchUpdate.put(cell2, "column1 new value".getBytes());
+ region.batchUpdate(batchUpdate, null);
+
+ // flush
+ region.flushcache();
+
+      // assert I get the second column only
+ results = region.getFull(row2, (NavigableSet<byte []>)null,
+ LATEST_TIMESTAMP, 1, null);
+ System.out.println(Bytes.toString(results.keySet().iterator().next()));
+ assertEquals("Should have one column in the results map", 1, results.size());
+ assertNull("column0 value", results.get(cell1));
+ assertEquals("column1 new value", new String(results.get(cell2).getValue()));
+
+ //
+ // Include a delete and value from the memcache in the mix
+ //
+ batchUpdate = new BatchUpdate(row2, now);
+ batchUpdate.delete(cell2);
+ batchUpdate.put(cell3, "column3 value!".getBytes());
+ region.batchUpdate(batchUpdate, null);
+
+      // assert I get the third column only
+ results = region.getFull(row2, (NavigableSet<byte []>)null, LATEST_TIMESTAMP, 1, null);
+ assertEquals("Should have one column in the results map", 1, results.size());
+ assertNull("column0 value", results.get(cell1));
+ assertNull("column1 value", results.get(cell2));
+ assertEquals("column3 value!", new String(results.get(cell3).getValue()));
+
+ } finally {
+ if (region != null) {
+ try {
+ region.close();
+ } catch (Exception e) {
+ e.printStackTrace();
+ }
+ region.getLog().closeAndDelete();
+ }
+ }
+ }
+
+ /**
+ * Test for HBASE-808 and HBASE-809.
+ * @throws Exception
+ */
+ public void testMaxVersionsAndDeleting() throws Exception {
+ HRegion region = null;
+ try {
+ HTableDescriptor htd = createTableDescriptor(getName());
+ region = createNewHRegion(htd, null, null);
+
+ byte [] column = COLUMNS[0];
+ for (int i = 0; i < 100; i++) {
+ addToRow(region, T00, column, i, T00.getBytes());
+ }
+ checkVersions(region, T00, column);
+ // Flush and retry.
+ region.flushcache();
+ checkVersions(region, T00, column);
+
+ // Now delete all then retry
+ region.deleteAll(Bytes.toBytes(T00), System.currentTimeMillis(), null);
+ Cell [] cells = Cell.createSingleCellArray(region.get(Bytes.toBytes(T00), column, -1,
+ HColumnDescriptor.DEFAULT_VERSIONS));
+ assertTrue(cells == null);
+ region.flushcache();
+ cells = Cell.createSingleCellArray(region.get(Bytes.toBytes(T00), column, -1,
+ HColumnDescriptor.DEFAULT_VERSIONS));
+ assertTrue(cells == null);
+
+ // Now add back the rows
+ for (int i = 0; i < 100; i++) {
+ addToRow(region, T00, column, i, T00.getBytes());
+ }
+ // Run same verifications.
+ checkVersions(region, T00, column);
+ // Flush and retry.
+ region.flushcache();
+ checkVersions(region, T00, column);
+ } finally {
+ if (region != null) {
+ try {
+ region.close();
+ } catch (Exception e) {
+ e.printStackTrace();
+ }
+ region.getLog().closeAndDelete();
+ }
+ }
+ }
+
+ /** For HBASE-694
+ * @throws IOException
+ */
+ public void testGetClosestRowBefore2() throws IOException {
+
+ HRegion region = null;
+ BatchUpdate batchUpdate = null;
+
+ try {
+ HTableDescriptor htd = createTableDescriptor(getName());
+ region = createNewHRegion(htd, null, null);
+
+ // set up some test data
+ String t10 = "010";
+ String t20 = "020";
+ String t30 = "030";
+ String t40 = "040";
+
+ batchUpdate = new BatchUpdate(t10);
+ batchUpdate.put(COLUMNS[0], "t10 bytes".getBytes());
+ region.batchUpdate(batchUpdate, null);
+
+ batchUpdate = new BatchUpdate(t30);
+ batchUpdate.put(COLUMNS[0], "t30 bytes".getBytes());
+ region.batchUpdate(batchUpdate, null);
+
+ batchUpdate = new BatchUpdate(t40);
+ batchUpdate.put(COLUMNS[0], "t40 bytes".getBytes());
+ region.batchUpdate(batchUpdate, null);
+
+ // try finding "035"
+ String t35 = "035";
+ Map<byte [], Cell> results =
+ region.getClosestRowBefore(Bytes.toBytes(t35), COLUMNS[0]);
+ assertEquals(new String(results.get(COLUMNS[0]).getValue()), "t30 bytes");
+
+ region.flushcache();
+
+ // try finding "035"
+ results = region.getClosestRowBefore(Bytes.toBytes(t35), COLUMNS[0]);
+ assertEquals(new String(results.get(COLUMNS[0]).getValue()), "t30 bytes");
+
+ batchUpdate = new BatchUpdate(t20);
+ batchUpdate.put(COLUMNS[0], "t20 bytes".getBytes());
+ region.batchUpdate(batchUpdate, null);
+
+ // try finding "035"
+ results = region.getClosestRowBefore(Bytes.toBytes(t35), COLUMNS[0]);
+ assertEquals(new String(results.get(COLUMNS[0]).getValue()), "t30 bytes");
+
+ region.flushcache();
+
+ // try finding "035"
+ results = region.getClosestRowBefore(Bytes.toBytes(t35), COLUMNS[0]);
+ assertEquals(new String(results.get(COLUMNS[0]).getValue()), "t30 bytes");
+ } finally {
+ if (region != null) {
+ try {
+ region.close();
+ } catch (Exception e) {
+ e.printStackTrace();
+ }
+ region.getLog().closeAndDelete();
+ }
+ }
+ }
+
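+  /** Commits a single cell for the given row and column at the given timestamp. */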
+ private void addToRow(final HRegion r, final String row, final byte [] column,
+ final long ts, final byte [] bytes)
+ throws IOException {
+ BatchUpdate batchUpdate = new BatchUpdate(row, ts);
+ batchUpdate.put(column, bytes);
+ r.batchUpdate(batchUpdate, null);
+ }
+
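+  /**
+   * Asserts that requesting more versions than the maximum still returns only
+   * HColumnDescriptor.DEFAULT_VERSIONS cells, while requesting one returns one.
+   */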
+ private void checkVersions(final HRegion region, final String row,
+ final byte [] column)
+ throws IOException {
+ byte [] r = Bytes.toBytes(row);
+ Cell [] cells = Cell.createSingleCellArray(region.get(r, column, -1, 100));
+ assertTrue(cells.length == HColumnDescriptor.DEFAULT_VERSIONS);
+ cells = Cell.createSingleCellArray(region.get(r, column, -1, 1));
+ assertTrue(cells.length == 1);
+ cells = Cell.createSingleCellArray(region.get(r, column, -1, 10000));
+ assertTrue(cells.length == HColumnDescriptor.DEFAULT_VERSIONS);
+ }
+
+ /**
+   * Test a file of multiple deletes, with deletes as the final key.
+ * @throws IOException
+ * @see <a href="https://issues.apache.org/jira/browse/HBASE-751">HBASE-751</a>
+ */
+ public void testGetClosestRowBefore3() throws IOException {
+ HRegion region = null;
+ BatchUpdate batchUpdate = null;
+ try {
+ HTableDescriptor htd = createTableDescriptor(getName());
+ region = createNewHRegion(htd, null, null);
+
+ batchUpdate = new BatchUpdate(T00);
+ batchUpdate.put(COLUMNS[0], T00.getBytes());
+ region.batchUpdate(batchUpdate, null);
+
+ batchUpdate = new BatchUpdate(T10);
+ batchUpdate.put(COLUMNS[0], T10.getBytes());
+ region.batchUpdate(batchUpdate, null);
+
+ batchUpdate = new BatchUpdate(T20);
+ batchUpdate.put(COLUMNS[0], T20.getBytes());
+ region.batchUpdate(batchUpdate, null);
+
+ Map<byte [], Cell> results =
+ region.getClosestRowBefore(Bytes.toBytes(T20), COLUMNS[0]);
+ assertEquals(T20, new String(results.get(COLUMNS[0]).getValue()));
+
+ batchUpdate = new BatchUpdate(T20);
+ batchUpdate.delete(COLUMNS[0]);
+ region.batchUpdate(batchUpdate, null);
+
+ results = region.getClosestRowBefore(Bytes.toBytes(T20), COLUMNS[0]);
+ assertEquals(T10, new String(results.get(COLUMNS[0]).getValue()));
+
+ batchUpdate = new BatchUpdate(T30);
+ batchUpdate.put(COLUMNS[0], T30.getBytes());
+ region.batchUpdate(batchUpdate, null);
+
+ results = region.getClosestRowBefore(Bytes.toBytes(T30), COLUMNS[0]);
+ assertEquals(T30, new String(results.get(COLUMNS[0]).getValue()));
+
+ batchUpdate = new BatchUpdate(T30);
+ batchUpdate.delete(COLUMNS[0]);
+ region.batchUpdate(batchUpdate, null);
+
+ results = region.getClosestRowBefore(Bytes.toBytes(T30), COLUMNS[0]);
+ assertEquals(T10, new String(results.get(COLUMNS[0]).getValue()));
+ results = region.getClosestRowBefore(Bytes.toBytes(T31), COLUMNS[0]);
+ assertEquals(T10, new String(results.get(COLUMNS[0]).getValue()));
+
+ region.flushcache();
+
+ // try finding "010" after flush
+ results = region.getClosestRowBefore(Bytes.toBytes(T30), COLUMNS[0]);
+ assertEquals(T10, new String(results.get(COLUMNS[0]).getValue()));
+ results = region.getClosestRowBefore(Bytes.toBytes(T31), COLUMNS[0]);
+ assertEquals(T10, new String(results.get(COLUMNS[0]).getValue()));
+
+ // Put into a different column family. Should make it so I still get t10
+ batchUpdate = new BatchUpdate(T20);
+ batchUpdate.put(COLUMNS[1], T20.getBytes());
+ region.batchUpdate(batchUpdate, null);
+
+ results = region.getClosestRowBefore(Bytes.toBytes(T30), COLUMNS[0]);
+ assertEquals(T10, new String(results.get(COLUMNS[0]).getValue()));
+ results = region.getClosestRowBefore(Bytes.toBytes(T31), COLUMNS[0]);
+ assertEquals(T10, new String(results.get(COLUMNS[0]).getValue()));
+ region.flushcache();
+ results = region.getClosestRowBefore(Bytes.toBytes(T30), COLUMNS[0]);
+ assertEquals(T10, new String(results.get(COLUMNS[0]).getValue()));
+ results = region.getClosestRowBefore(Bytes.toBytes(T31), COLUMNS[0]);
+ assertEquals(T10, new String(results.get(COLUMNS[0]).getValue()));
+
+      // Now try a combo of memcache and mapfiles.  Delete the t20 COLUMNS[1]
+ // in memory; make sure we get back t10 again.
+ batchUpdate = new BatchUpdate(T20);
+ batchUpdate.delete(COLUMNS[1]);
+ region.batchUpdate(batchUpdate, null);
+ results = region.getClosestRowBefore(Bytes.toBytes(T30), COLUMNS[0]);
+ assertEquals(T10, new String(results.get(COLUMNS[0]).getValue()));
+
+ // Ask for a value off the end of the file. Should return t10.
+ results = region.getClosestRowBefore(Bytes.toBytes(T31), COLUMNS[0]);
+ assertEquals(T10, new String(results.get(COLUMNS[0]).getValue()));
+ region.flushcache();
+ results = region.getClosestRowBefore(Bytes.toBytes(T31), COLUMNS[0]);
+ assertEquals(T10, new String(results.get(COLUMNS[0]).getValue()));
+
+      // OK. Let the candidate come out of mapfiles but have the delete of
+ // the candidate be in memory.
+ batchUpdate = new BatchUpdate(T11);
+ batchUpdate.put(COLUMNS[0], T11.getBytes());
+ region.batchUpdate(batchUpdate, null);
+ batchUpdate = new BatchUpdate(T10);
+ batchUpdate.delete(COLUMNS[0]);
+ region.batchUpdate(batchUpdate, null);
+ results = region.getClosestRowBefore(Bytes.toBytes(T12), COLUMNS[0]);
+ assertEquals(T11, new String(results.get(COLUMNS[0]).getValue()));
+ } finally {
+ if (region != null) {
+ try {
+ region.close();
+ } catch (Exception e) {
+ e.printStackTrace();
+ }
+ region.getLog().closeAndDelete();
+ }
+ }
+ }
+
+ /**
+ * Tests for HADOOP-2161.
+ * @throws Exception
+ */
+ public void testGetFull() throws Exception {
+ HRegion region = null;
+ InternalScanner scanner = null;
+ try {
+ HTableDescriptor htd = createTableDescriptor(getName());
+ region = createNewHRegion(htd, null, null);
+ for (int i = 0; i < COLUMNS.length; i++) {
+ addContent(region, COLUMNS[i]);
+ }
+ // Find two rows to use doing getFull.
+ final byte [] arbitraryStartRow = Bytes.toBytes("b");
+ byte [] actualStartRow = null;
+ final byte [] arbitraryStopRow = Bytes.toBytes("c");
+ byte [] actualStopRow = null;
+ byte [][] columns = {COLFAMILY_NAME1};
+ scanner = region.getScanner(columns,
+ arbitraryStartRow, HConstants.LATEST_TIMESTAMP,
+ new WhileMatchRowFilter(new StopRowFilter(arbitraryStopRow)));
+ List<KeyValue> value = new ArrayList<KeyValue>();
+ while (scanner.next(value)) {
+ if (actualStartRow == null) {
+ actualStartRow = value.get(0).getRow();
+ } else {
+ actualStopRow = value.get(0).getRow();
+ }
+ }
+ // Assert I got all out.
+ assertColumnsPresent(region, actualStartRow);
+ assertColumnsPresent(region, actualStopRow);
+ // Force a flush so store files come into play.
+ region.flushcache();
+ // Assert I got all out.
+ assertColumnsPresent(region, actualStartRow);
+ assertColumnsPresent(region, actualStopRow);
+ } finally {
+ if (scanner != null) {
+ scanner.close();
+ }
+ if (region != null) {
+ try {
+ region.close();
+ } catch (Exception e) {
+ e.printStackTrace();
+ }
+ region.getLog().closeAndDelete();
+ }
+ }
+ }
+
+ /**
+ * @throws IOException
+ */
+ public void testGetAtTimestamp() throws IOException{
+ HRegion region = null;
+ HRegionIncommon region_incommon = null;
+ try {
+ HTableDescriptor htd = createTableDescriptor(getName());
+ region = createNewHRegion(htd, null, null);
+ region_incommon = new HRegionIncommon(region);
+
+ long right_now = System.currentTimeMillis();
+ long one_second_ago = right_now - 1000;
+
+ String t = "test_row";
+ BatchUpdate batchUpdate = new BatchUpdate(t, one_second_ago);
+ batchUpdate.put(COLUMNS[0], "old text".getBytes());
+ region_incommon.commit(batchUpdate);
+
+ batchUpdate = new BatchUpdate(t, right_now);
+ batchUpdate.put(COLUMNS[0], "new text".getBytes());
+ region_incommon.commit(batchUpdate);
+
+ assertCellEquals(region, Bytes.toBytes(t), COLUMNS[0],
+ right_now, "new text");
+ assertCellEquals(region, Bytes.toBytes(t), COLUMNS[0],
+ one_second_ago, "old text");
+
+ // Force a flush so store files come into play.
+ region_incommon.flushcache();
+
+ assertCellEquals(region, Bytes.toBytes(t), COLUMNS[0], right_now, "new text");
+ assertCellEquals(region, Bytes.toBytes(t), COLUMNS[0], one_second_ago, "old text");
+
+ } finally {
+ if (region != null) {
+ try {
+ region.close();
+ } catch (Exception e) {
+ e.printStackTrace();
+ }
+ region.getLog().closeAndDelete();
+ }
+ }
+ }
+
+ /**
+ * For HADOOP-2443
+ * @throws IOException
+ */
+ public void testGetClosestRowBefore() throws IOException{
+
+ HRegion region = null;
+ BatchUpdate batchUpdate = null;
+
+ try {
+ HTableDescriptor htd = createTableDescriptor(getName());
+ region = createNewHRegion(htd, null, null);
+
+ // set up some test data
+ String t10 = "010";
+ String t20 = "020";
+ String t30 = "030";
+ String t35 = "035";
+ String t40 = "040";
+
+ batchUpdate = new BatchUpdate(t10);
+ batchUpdate.put(COLUMNS[0], "t10 bytes".getBytes());
+ region.batchUpdate(batchUpdate, null);
+
+ batchUpdate = new BatchUpdate(t20);
+ batchUpdate.put(COLUMNS[0], "t20 bytes".getBytes());
+ region.batchUpdate(batchUpdate, null);
+
+ batchUpdate = new BatchUpdate(t30);
+ batchUpdate.put(COLUMNS[0], "t30 bytes".getBytes());
+ region.batchUpdate(batchUpdate, null);
+
+ batchUpdate = new BatchUpdate(t35);
+ batchUpdate.put(COLUMNS[0], "t35 bytes".getBytes());
+ region.batchUpdate(batchUpdate, null);
+
+ batchUpdate = new BatchUpdate(t35);
+ batchUpdate.delete(COLUMNS[0]);
+ region.batchUpdate(batchUpdate, null);
+
+ batchUpdate = new BatchUpdate(t40);
+ batchUpdate.put(COLUMNS[0], "t40 bytes".getBytes());
+ region.batchUpdate(batchUpdate, null);
+
+ // try finding "015"
+ String t15 = "015";
+ Map<byte [], Cell> results =
+ region.getClosestRowBefore(Bytes.toBytes(t15), COLUMNS[0]);
+ assertEquals(new String(results.get(COLUMNS[0]).getValue()), "t10 bytes");
+
+ // try "020", we should get that row exactly
+ results = region.getClosestRowBefore(Bytes.toBytes(t20), COLUMNS[0]);
+ assertEquals(new String(results.get(COLUMNS[0]).getValue()), "t20 bytes");
+
+ // try "038", should skip deleted "035" and get "030"
+ String t38 = "038";
+ results = region.getClosestRowBefore(Bytes.toBytes(t38), COLUMNS[0]);
+ assertEquals(new String(results.get(COLUMNS[0]).getValue()), "t30 bytes");
+
+ // try "050", should get stuff from "040"
+ String t50 = "050";
+ results = region.getClosestRowBefore(Bytes.toBytes(t50), COLUMNS[0]);
+ assertEquals(new String(results.get(COLUMNS[0]).getValue()), "t40 bytes");
+
+ // force a flush
+ region.flushcache();
+
+ // try finding "015"
+ results = region.getClosestRowBefore(Bytes.toBytes(t15), COLUMNS[0]);
+ assertEquals(new String(results.get(COLUMNS[0]).getValue()), "t10 bytes");
+
+ // try "020", we should get that row exactly
+ results = region.getClosestRowBefore(Bytes.toBytes(t20), COLUMNS[0]);
+ assertEquals(new String(results.get(COLUMNS[0]).getValue()), "t20 bytes");
+
+ // try "038", should skip deleted "035" and get "030"
+ results = region.getClosestRowBefore(Bytes.toBytes(t38), COLUMNS[0]);
+ assertEquals(new String(results.get(COLUMNS[0]).getValue()), "t30 bytes");
+
+ // try "050", should get stuff from "040"
+ results = region.getClosestRowBefore(Bytes.toBytes(t50), COLUMNS[0]);
+ assertEquals(new String(results.get(COLUMNS[0]).getValue()), "t40 bytes");
+ } finally {
+ if (region != null) {
+ try {
+ region.close();
+ } catch (Exception e) {
+ e.printStackTrace();
+ }
+ region.getLog().closeAndDelete();
+ }
+ }
+ }
+
+ /**
+ * For HBASE-40
+ * @throws IOException
+ */
+ public void testGetFullWithSpecifiedColumns() throws IOException {
+ HRegion region = null;
+ HRegionIncommon region_incommon = null;
+ try {
+ HTableDescriptor htd = createTableDescriptor(getName());
+ region = createNewHRegion(htd, null, null);
+ region_incommon = new HRegionIncommon(region);
+
+ // write a row with a bunch of columns
+ byte [] row = Bytes.toBytes("some_row");
+ BatchUpdate bu = new BatchUpdate(row);
+ bu.put(COLUMNS[0], "column 0".getBytes());
+ bu.put(COLUMNS[1], "column 1".getBytes());
+ bu.put(COLUMNS[2], "column 2".getBytes());
+ region.batchUpdate(bu, null);
+
+ assertSpecifiedColumns(region, row);
+ // try it again with a cache flush to involve the store, not just the
+ // memcache.
+ region_incommon.flushcache();
+ assertSpecifiedColumns(region, row);
+
+ } finally {
+ if (region != null) {
+ try {
+ region.close();
+ } catch (Exception e) {
+ e.printStackTrace();
+ }
+ region.getLog().closeAndDelete();
+ }
+ }
+ }
+
+ private void assertSpecifiedColumns(final HRegion region, final byte [] row)
+ throws IOException {
+ TreeSet<byte []> all = new TreeSet<byte []>(Bytes.BYTES_COMPARATOR);
+ TreeSet<byte []> one = new TreeSet<byte []>(Bytes.BYTES_COMPARATOR);
+ TreeSet<byte []> none = new TreeSet<byte []>(Bytes.BYTES_COMPARATOR);
+
+ all.add(COLUMNS[0]);
+ all.add(COLUMNS[1]);
+ all.add(COLUMNS[2]);
+ one.add(COLUMNS[0]);
+
+ // make sure we get all of them with standard getFull
+ Map<byte [], Cell> result = region.getFull(row, null,
+ HConstants.LATEST_TIMESTAMP, 1, null);
+ assertEquals(new String(result.get(COLUMNS[0]).getValue()), "column 0");
+ assertEquals(new String(result.get(COLUMNS[1]).getValue()), "column 1");
+ assertEquals(new String(result.get(COLUMNS[2]).getValue()), "column 2");
+
+ // try to get just one
+ result = region.getFull(row, one, HConstants.LATEST_TIMESTAMP, 1, null);
+ assertEquals(new String(result.get(COLUMNS[0]).getValue()), "column 0");
+ assertNull(result.get(COLUMNS[1]));
+ assertNull(result.get(COLUMNS[2]));
+
+ // try to get all of them (specified)
+ result = region.getFull(row, all, HConstants.LATEST_TIMESTAMP, 1, null);
+ assertEquals(new String(result.get(COLUMNS[0]).getValue()), "column 0");
+ assertEquals(new String(result.get(COLUMNS[1]).getValue()), "column 1");
+ assertEquals(new String(result.get(COLUMNS[2]).getValue()), "column 2");
+
+ // try to get none with empty column set
+ result = region.getFull(row, none, HConstants.LATEST_TIMESTAMP, 1, null);
+ assertNull(result.get(COLUMNS[0]));
+ assertNull(result.get(COLUMNS[1]));
+ assertNull(result.get(COLUMNS[2]));
+ }
+
+ private void assertColumnsPresent(final HRegion r, final byte [] row)
+ throws IOException {
+ Map<byte [], Cell> result =
+ r.getFull(row, null, HConstants.LATEST_TIMESTAMP, 1, null);
+ int columnCount = 0;
+ for (Map.Entry<byte [], Cell> e: result.entrySet()) {
+ columnCount++;
+ byte [] column = e.getKey();
+ boolean legitColumn = false;
+ for (int i = 0; i < COLUMNS.length; i++) {
+        // Assert the value is the same as the row.  This is the nature of the data added.
+ assertTrue(Bytes.equals(row, e.getValue().getValue()));
+ if (Bytes.equals(COLUMNS[i], column)) {
+ legitColumn = true;
+ break;
+ }
+ }
+ assertTrue("is legit column name", legitColumn);
+ }
+ assertEquals("count of columns", columnCount, COLUMNS.length);
+ }
+
+ @Override
+ protected void tearDown() throws Exception {
+ if (this.miniHdfs != null) {
+ this.miniHdfs.shutdown();
+ }
+ super.tearDown();
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/regionserver/TestHLog.java b/src/test/org/apache/hadoop/hbase/regionserver/TestHLog.java
new file mode 100644
index 0000000..e2abe57
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/regionserver/TestHLog.java
@@ -0,0 +1,151 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.hadoop.dfs.MiniDFSCluster;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.SequenceFile;
+import org.apache.hadoop.io.SequenceFile.Reader;
+
+/** JUnit test case for HLog */
+public class TestHLog extends HBaseTestCase implements HConstants {
+ private Path dir;
+ private MiniDFSCluster cluster;
+
+ @Override
+ public void setUp() throws Exception {
+ cluster = new MiniDFSCluster(conf, 2, true, (String[])null);
+ // Set the hbase.rootdir to be the home directory in mini dfs.
+ this.conf.set(HConstants.HBASE_DIR,
+ this.cluster.getFileSystem().getHomeDirectory().toString());
+ super.setUp();
+ this.dir = new Path("/hbase", getName());
+ if (fs.exists(dir)) {
+ fs.delete(dir, true);
+ }
+ }
+
+ @Override
+ public void tearDown() throws Exception {
+ if (this.fs.exists(this.dir)) {
+ this.fs.delete(this.dir, true);
+ }
+ shutdownDfs(cluster);
+ super.tearDown();
+ }
+
+ /**
+ * Just write multiple logs then split. Before fix for HADOOP-2283, this
+ * would fail.
+ * @throws IOException
+ */
+ public void testSplit() throws IOException {
+ final byte [] tableName = Bytes.toBytes(getName());
+ final byte [] rowName = tableName;
+ HLog log = new HLog(this.fs, this.dir, this.conf, null);
+ // Add edits for three regions.
+ try {
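+      // Each outer pass writes edits to regions "0", "1" and "2" and then rolls
+      // the writer, so splitLog below has to deal with several log files.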
+ for (int ii = 0; ii < 3; ii++) {
+ for (int i = 0; i < 3; i++) {
+ for (int j = 0; j < 3; j++) {
+ List<KeyValue> edit = new ArrayList<KeyValue>();
+ byte [] column = Bytes.toBytes("column:" + Integer.toString(j));
+ edit.add(new KeyValue(rowName, column, System.currentTimeMillis(),
+ column));
+ log.append(Bytes.toBytes(Integer.toString(i)), tableName, edit, false);
+ }
+ }
+ log.rollWriter();
+ }
+ HLog.splitLog(this.testDir, this.dir, this.fs, this.conf);
+ log = null;
+ } finally {
+ if (log != null) {
+ log.closeAndDelete();
+ }
+ }
+ }
+
+ /**
+ * @throws IOException
+ */
+ public void testAppend() throws IOException {
+ final int COL_COUNT = 10;
+ final byte [] regionName = Bytes.toBytes("regionname");
+ final byte [] tableName = Bytes.toBytes("tablename");
+ final byte [] row = Bytes.toBytes("row");
+ Reader reader = null;
+ HLog log = new HLog(fs, dir, this.conf, null);
+ try {
+      // Write columns named 0, 1, 2, etc. with single-byte values
+      // '0', '1', '2', ...
+ long timestamp = System.currentTimeMillis();
+ List<KeyValue> cols = new ArrayList<KeyValue>();
+ for (int i = 0; i < COL_COUNT; i++) {
+ cols.add(new KeyValue(row, Bytes.toBytes("column:" + Integer.toString(i)),
+ timestamp, new byte[] { (byte)(i + '0') }));
+ }
+ log.append(regionName, tableName, cols, false);
+ long logSeqId = log.startCacheFlush();
+ log.completeCacheFlush(regionName, tableName, logSeqId);
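+      // completeCacheFlush appends a meta edit (HLog.METAROW/METACOLUMN) that
+      // the reader loop below verifies.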
+ log.close();
+ Path filename = log.computeFilename(log.getFilenum());
+ log = null;
+ // Now open a reader on the log and assert append worked.
+ reader = new SequenceFile.Reader(fs, filename, conf);
+ HLogKey key = new HLogKey();
+ KeyValue val = new KeyValue();
+ for (int i = 0; i < COL_COUNT; i++) {
+ reader.next(key, val);
+ assertTrue(Bytes.equals(regionName, key.getRegionName()));
+ assertTrue(Bytes.equals(tableName, key.getTablename()));
+ assertTrue(Bytes.equals(row, val.getRow()));
+ assertEquals((byte)(i + '0'), val.getValue()[0]);
+ System.out.println(key + " " + val);
+ }
+ while (reader.next(key, val)) {
+ // Assert only one more row... the meta flushed row.
+ assertTrue(Bytes.equals(regionName, key.getRegionName()));
+ assertTrue(Bytes.equals(tableName, key.getTablename()));
+ assertTrue(Bytes.equals(HLog.METAROW, val.getRow()));
+ assertTrue(Bytes.equals(HLog.METACOLUMN, val.getColumn()));
+ assertEquals(0, Bytes.compareTo(HLog.COMPLETE_CACHE_FLUSH,
+ val.getValue()));
+ System.out.println(key + " " + val);
+ }
+ } finally {
+ if (log != null) {
+ log.closeAndDelete();
+ }
+ if (reader != null) {
+ reader.close();
+ }
+ }
+ }
+
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/regionserver/TestHMemcache.java b/src/test/org/apache/hadoop/hbase/regionserver/TestHMemcache.java
new file mode 100644
index 0000000..71a4206
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/regionserver/TestHMemcache.java
@@ -0,0 +1,458 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.rmi.UnexpectedException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.TreeSet;
+
+import junit.framework.TestCase;
+
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.regionserver.HRegion.Counter;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/** memcache test case */
+public class TestHMemcache extends TestCase {
+ private Memcache hmemcache;
+
+ private static final int ROW_COUNT = 10;
+
+ private static final int COLUMNS_COUNT = 10;
+
+ private static final String COLUMN_FAMILY = "column";
+
+ private static final int FIRST_ROW = 1;
+ private static final int NUM_VALS = 1000;
+ private static final byte [] CONTENTS_BASIC = Bytes.toBytes("contents:basic");
+ private static final String CONTENTSTR = "contentstr";
+ private static final String ANCHORNUM = "anchor:anchornum-";
+ private static final String ANCHORSTR = "anchorstr";
+
+ @Override
+ public void setUp() throws Exception {
+ super.setUp();
+ this.hmemcache = new Memcache();
+ }
+
+ public void testGetWithDeletes() throws IOException {
+ Memcache mc = new Memcache(HConstants.FOREVER, KeyValue.ROOT_COMPARATOR);
+ final int start = 0;
+ final int end = 5;
+ long now = System.currentTimeMillis();
+ for (int k = start; k <= end; k++) {
+ byte [] row = Bytes.toBytes(k);
+ KeyValue key = new KeyValue(row, CONTENTS_BASIC, now,
+ (CONTENTSTR + k).getBytes(HConstants.UTF8_ENCODING));
+ mc.add(key);
+ System.out.println(key);
+ key = new KeyValue(row, Bytes.toBytes(ANCHORNUM + k), now,
+ (ANCHORSTR + k).getBytes(HConstants.UTF8_ENCODING));
+ mc.add(key);
+ System.out.println(key);
+ }
+ KeyValue key = new KeyValue(Bytes.toBytes(start), CONTENTS_BASIC, now);
+ List<KeyValue> keys = mc.get(key, 1);
+ assertTrue(keys.size() == 1);
+ KeyValue delete = key.cloneDelete();
+ mc.add(delete);
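+    // The delete should mask the earlier put, so a get of the same key now
+    // returns nothing.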
+ keys = mc.get(delete, 1);
+ assertTrue(keys.size() == 0);
+ }
+
+ public void testBinary() throws IOException {
+ Memcache mc = new Memcache(HConstants.FOREVER, KeyValue.ROOT_COMPARATOR);
+ final int start = 43;
+ final int end = 46;
+ for (int k = start; k <= end; k++) {
+ byte [] kk = Bytes.toBytes(k);
+ byte [] row =
+ Bytes.toBytes(".META.,table," + Bytes.toString(kk) + ",1," + k);
+ KeyValue key = new KeyValue(row, CONTENTS_BASIC,
+ System.currentTimeMillis(),
+ (CONTENTSTR + k).getBytes(HConstants.UTF8_ENCODING));
+ mc.add(key);
+ System.out.println(key);
+// key = new KeyValue(row, Bytes.toBytes(ANCHORNUM + k),
+// System.currentTimeMillis(),
+// (ANCHORSTR + k).getBytes(HConstants.UTF8_ENCODING));
+// mc.add(key);
+// System.out.println(key);
+ }
+ int index = start;
+ for (KeyValue kv: mc.memcache) {
+ System.out.println(kv);
+ byte [] b = kv.getRow();
+ // Hardcoded offsets into String
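+      // Offset 13 skips the ".META.,table," prefix; the next 4 bytes are the
+      // binary int key embedded in the row.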
+ String str = Bytes.toString(b, 13, 4);
+ byte [] bb = Bytes.toBytes(index);
+ String bbStr = Bytes.toString(bb);
+ assertEquals(str, bbStr);
+ index++;
+ }
+ }
+
+ /**
+ * @throws IOException
+ */
+ public void testMemcache() throws IOException {
+ for (int k = FIRST_ROW; k <= NUM_VALS; k++) {
+ byte [] row = Bytes.toBytes("row_" + k);
+ KeyValue key = new KeyValue(row, CONTENTS_BASIC,
+ System.currentTimeMillis(),
+ (CONTENTSTR + k).getBytes(HConstants.UTF8_ENCODING));
+ hmemcache.add(key);
+ key = new KeyValue(row, Bytes.toBytes(ANCHORNUM + k),
+ System.currentTimeMillis(),
+ (ANCHORSTR + k).getBytes(HConstants.UTF8_ENCODING));
+ hmemcache.add(key);
+ }
+ // this.hmemcache.dump();
+
+ // Read them back
+
+ for (int k = FIRST_ROW; k <= NUM_VALS; k++) {
+ List<KeyValue> results;
+ byte [] row = Bytes.toBytes("row_" + k);
+ KeyValue key = new KeyValue(row, CONTENTS_BASIC, Long.MAX_VALUE);
+ results = hmemcache.get(key, 1);
+ assertNotNull("no data for " + key.toString(), results);
+ assertEquals(1, results.size());
+ KeyValue kv = results.get(0);
+ String bodystr = Bytes.toString(kv.getBuffer(), kv.getValueOffset(),
+ kv.getValueLength());
+ String teststr = CONTENTSTR + k;
+ assertTrue("Incorrect value for key: (" + key.toString() +
+ "), expected: '" + teststr + "' got: '" +
+ bodystr + "'", teststr.compareTo(bodystr) == 0);
+
+ key = new KeyValue(row, Bytes.toBytes(ANCHORNUM + k), Long.MAX_VALUE);
+ results = hmemcache.get(key, 1);
+ assertNotNull("no data for " + key.toString(), results);
+ assertEquals(1, results.size());
+ kv = results.get(0);
+ bodystr = Bytes.toString(kv.getBuffer(), kv.getValueOffset(),
+ kv.getValueLength());
+ teststr = ANCHORSTR + k;
+ assertTrue("Incorrect value for key: (" + key.toString() +
+ "), expected: '" + teststr + "' got: '" + bodystr + "'",
+ teststr.compareTo(bodystr) == 0);
+ }
+ }
+
+ private byte [] getRowName(final int index) {
+ return Bytes.toBytes("row" + Integer.toString(index));
+ }
+
+ private byte [] getColumnName(final int rowIndex, final int colIndex) {
+ return Bytes.toBytes(COLUMN_FAMILY + ":" + Integer.toString(rowIndex) + ";" +
+ Integer.toString(colIndex));
+ }
+
+ /**
+   * Adds {@link #ROW_COUNT} rows, each with {@link #COLUMNS_COUNT} columns.
+   * @param hmc Instance to add rows to.
+ */
+ private void addRows(final Memcache hmc) {
+ for (int i = 0; i < ROW_COUNT; i++) {
+ long timestamp = System.currentTimeMillis();
+ for (int ii = 0; ii < COLUMNS_COUNT; ii++) {
+ byte [] k = getColumnName(i, ii);
+ hmc.add(new KeyValue(getRowName(i), k, timestamp, k));
+ }
+ }
+ }
+
+ private void runSnapshot(final Memcache hmc) throws UnexpectedException {
+ // Save off old state.
+ int oldHistorySize = hmc.getSnapshot().size();
+ hmc.snapshot();
+ Set<KeyValue> ss = hmc.getSnapshot();
+ // Make some assertions about what just happened.
+ assertTrue("History size has not increased", oldHistorySize < ss.size());
+ hmc.clearSnapshot(ss);
+ }
+
+ /**
+ * Test memcache snapshots
+ * @throws IOException
+ */
+ public void testSnapshotting() throws IOException {
+ final int snapshotCount = 5;
+ // Add some rows, run a snapshot. Do it a few times.
+ for (int i = 0; i < snapshotCount; i++) {
+ addRows(this.hmemcache);
+ runSnapshot(this.hmemcache);
+ Set<KeyValue> ss = this.hmemcache.getSnapshot();
+ assertEquals("History not being cleared", 0, ss.size());
+ }
+ }
+
+ private void isExpectedRowWithoutTimestamps(final int rowIndex,
+ List<KeyValue> kvs) {
+ int i = 0;
+ for (KeyValue kv: kvs) {
+ String expectedColname = Bytes.toString(getColumnName(rowIndex, i++));
+ String colnameStr = kv.getColumnString();
+      assertEquals("Column name", expectedColname, colnameStr);
+      // Value is the column name as bytes. Results are usually at least
+      // 100 bytes, the default size for BytesWritable, so convert the value
+      // bytes to a String for comparison.
+ String colvalueStr = Bytes.toString(kv.getBuffer(), kv.getValueOffset(),
+ kv.getValueLength());
+ assertEquals("Content", colnameStr, colvalueStr);
+ }
+ }
+
+ /** Test getFull from memcache
+ * @throws InterruptedException
+ */
+ public void testGetFull() throws InterruptedException {
+ addRows(this.hmemcache);
+ Thread.sleep(1);
+ addRows(this.hmemcache);
+ Thread.sleep(1);
+ addRows(this.hmemcache);
+ Thread.sleep(1);
+ addRows(this.hmemcache);
+ long now = System.currentTimeMillis();
+ Map<KeyValue, Counter> versionCounter =
+ new TreeMap<KeyValue, Counter>(this.hmemcache.comparatorIgnoreTimestamp);
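+    // Rows were added four times above, so up to four versions exist per cell;
+    // this first pass asks getFull for a single version only.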
+ for (int i = 0; i < ROW_COUNT; i++) {
+ KeyValue kv = new KeyValue(getRowName(i), now);
+ List<KeyValue> all = new ArrayList<KeyValue>();
+ NavigableSet<KeyValue> deletes =
+ new TreeSet<KeyValue>(KeyValue.COMPARATOR);
+ this.hmemcache.getFull(kv, null, null, 1, versionCounter, deletes, all,
+ System.currentTimeMillis());
+ isExpectedRowWithoutTimestamps(i, all);
+ }
+ // Test getting two versions.
+ versionCounter =
+ new TreeMap<KeyValue, Counter>(this.hmemcache.comparatorIgnoreTimestamp);
+ for (int i = 0; i < ROW_COUNT; i++) {
+ KeyValue kv = new KeyValue(getRowName(i), now);
+ List<KeyValue> all = new ArrayList<KeyValue>();
+ NavigableSet<KeyValue> deletes =
+ new TreeSet<KeyValue>(KeyValue.COMPARATOR);
+ this.hmemcache.getFull(kv, null, null, 2, versionCounter, deletes, all,
+ System.currentTimeMillis());
+ byte [] previousRow = null;
+ int count = 0;
+ for (KeyValue k: all) {
+ if (previousRow != null) {
+ assertTrue(this.hmemcache.comparator.compareRows(k, previousRow) == 0);
+ }
+ previousRow = k.getRow();
+ count++;
+ }
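+      // Expect two versions of each of the COLUMNS_COUNT columns; this equals
+      // ROW_COUNT * 2 only because both constants happen to be 10.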
+ assertEquals(ROW_COUNT * 2, count);
+ }
+ }
+
+ /** Test getNextRow from memcache
+ * @throws InterruptedException
+ */
+ public void testGetNextRow() throws InterruptedException {
+ addRows(this.hmemcache);
+ // Add more versions to make it a little more interesting.
+ Thread.sleep(1);
+ addRows(this.hmemcache);
+ KeyValue closestToEmpty = this.hmemcache.getNextRow(KeyValue.LOWESTKEY);
+ assertTrue(KeyValue.COMPARATOR.compareRows(closestToEmpty,
+ new KeyValue(getRowName(0), System.currentTimeMillis())) == 0);
+ for (int i = 0; i < ROW_COUNT; i++) {
+ KeyValue nr = this.hmemcache.getNextRow(new KeyValue(getRowName(i),
+ System.currentTimeMillis()));
+ if (i + 1 == ROW_COUNT) {
+        assertNull(nr);
+ } else {
+ assertTrue(KeyValue.COMPARATOR.compareRows(nr,
+ new KeyValue(getRowName(i + 1), System.currentTimeMillis())) == 0);
+ }
+ }
+ }
+
+ /** Test getClosest from memcache
+ * @throws InterruptedException
+ */
+ public void testGetClosest() throws InterruptedException {
+ addRows(this.hmemcache);
+ // Add more versions to make it a little more interesting.
+ Thread.sleep(1);
+ addRows(this.hmemcache);
+ KeyValue kv = this.hmemcache.getNextRow(KeyValue.LOWESTKEY);
+ assertTrue(KeyValue.COMPARATOR.compareRows(new KeyValue(getRowName(0),
+ System.currentTimeMillis()), kv) == 0);
+ for (int i = 0; i < ROW_COUNT; i++) {
+ KeyValue nr = this.hmemcache.getNextRow(new KeyValue(getRowName(i),
+ System.currentTimeMillis()));
+ if (i + 1 == ROW_COUNT) {
+        assertNull(nr);
+ } else {
+ assertTrue(KeyValue.COMPARATOR.compareRows(nr,
+ new KeyValue(getRowName(i + 1), System.currentTimeMillis())) == 0);
+ }
+ }
+ }
+
+ /**
+ * Test memcache scanner
+ * @throws IOException
+ * @throws InterruptedException
+ */
+ public void testScanner() throws IOException, InterruptedException {
+ addRows(this.hmemcache);
+ Thread.sleep(1);
+ addRows(this.hmemcache);
+ Thread.sleep(1);
+ addRows(this.hmemcache);
+ long timestamp = System.currentTimeMillis();
+ NavigableSet<byte []> columns = new TreeSet<byte []>(Bytes.BYTES_COMPARATOR);
+ for (int i = 0; i < ROW_COUNT; i++) {
+ for (int ii = 0; ii < COLUMNS_COUNT; ii++) {
+ columns.add(getColumnName(i, ii));
+ }
+ }
+ InternalScanner scanner =
+ this.hmemcache.getScanner(timestamp, columns, HConstants.EMPTY_START_ROW);
+ List<KeyValue> results = new ArrayList<KeyValue>();
+ for (int i = 0; scanner.next(results); i++) {
+      assertTrue("Row name", KeyValue.COMPARATOR.compareRows(results.get(0),
+        getRowName(i)) == 0);
+ assertEquals("Count of columns", COLUMNS_COUNT, results.size());
+ isExpectedRowWithoutTimestamps(i, results);
+ // Clear out set. Otherwise row results accumulate.
+ results.clear();
+ }
+ }
+
+ /** For HBASE-528 */
+ public void testGetRowKeyAtOrBefore() {
+ // set up some test data
+ byte [] t10 = Bytes.toBytes("010");
+ byte [] t20 = Bytes.toBytes("020");
+ byte [] t30 = Bytes.toBytes("030");
+ byte [] t35 = Bytes.toBytes("035");
+ byte [] t40 = Bytes.toBytes("040");
+
+ hmemcache.add(getKV(t10, "t10 bytes".getBytes()));
+ hmemcache.add(getKV(t20, "t20 bytes".getBytes()));
+ hmemcache.add(getKV(t30, "t30 bytes".getBytes()));
+ hmemcache.add(getKV(t35, "t35 bytes".getBytes()));
+ // write a delete in there to see if things still work ok
+ hmemcache.add(getDeleteKV(t35));
+ hmemcache.add(getKV(t40, "t40 bytes".getBytes()));
+
+ NavigableSet<KeyValue> results = null;
+
+ // try finding "015"
+ results =
+ new TreeSet<KeyValue>(this.hmemcache.comparator.getComparatorIgnoringType());
+ KeyValue t15 = new KeyValue(Bytes.toBytes("015"),
+ System.currentTimeMillis());
+ hmemcache.getRowKeyAtOrBefore(t15, results);
+ KeyValue kv = results.last();
+ assertTrue(KeyValue.COMPARATOR.compareRows(kv, t10) == 0);
+
+ // try "020", we should get that row exactly
+ results =
+ new TreeSet<KeyValue>(this.hmemcache.comparator.getComparatorIgnoringType());
+ hmemcache.getRowKeyAtOrBefore(new KeyValue(t20, System.currentTimeMillis()),
+ results);
+ assertTrue(KeyValue.COMPARATOR.compareRows(results.last(), t20) == 0);
+
+ // try "030", we should get that row exactly
+ results =
+ new TreeSet<KeyValue>(this.hmemcache.comparator.getComparatorIgnoringType());
+ hmemcache.getRowKeyAtOrBefore(new KeyValue(t30, System.currentTimeMillis()),
+ results);
+ assertTrue(KeyValue.COMPARATOR.compareRows(results.last(), t30) == 0);
+
+ // try "038", should skip the deleted "035" and give "030"
+ results =
+ new TreeSet<KeyValue>(this.hmemcache.comparator.getComparatorIgnoringType());
+ byte [] t38 = Bytes.toBytes("038");
+ hmemcache.getRowKeyAtOrBefore(new KeyValue(t38, System.currentTimeMillis()),
+ results);
+ assertTrue(KeyValue.COMPARATOR.compareRows(results.last(), t30) == 0);
+
+ // try "050", should get stuff from "040"
+ results =
+ new TreeSet<KeyValue>(this.hmemcache.comparator.getComparatorIgnoringType());
+ byte [] t50 = Bytes.toBytes("050");
+ hmemcache.getRowKeyAtOrBefore(new KeyValue(t50, System.currentTimeMillis()),
+ results);
+ assertTrue(KeyValue.COMPARATOR.compareRows(results.last(), t40) == 0);
+ }
+
+ private KeyValue getDeleteKV(byte [] row) {
+ return new KeyValue(row, Bytes.toBytes("test_col:"),
+ HConstants.LATEST_TIMESTAMP, KeyValue.Type.Delete, null);
+ }
+
+ private KeyValue getKV(byte [] row, byte [] value) {
+ return new KeyValue(row, Bytes.toBytes("test_col:"),
+ HConstants.LATEST_TIMESTAMP, value);
+ }
+
+ /**
+ * Test memcache scanner scanning cached rows, HBASE-686
+ * @throws IOException
+ */
+ public void testScanner_686() throws IOException {
+ addRows(this.hmemcache);
+ long timestamp = System.currentTimeMillis();
+ NavigableSet<byte []> cols = new TreeSet<byte []>(Bytes.BYTES_COMPARATOR);
+ for (int i = 0; i < ROW_COUNT; i++) {
+ for (int ii = 0; ii < COLUMNS_COUNT; ii++) {
+ cols.add(getColumnName(i, ii));
+ }
+ }
+ //starting from each row, validate results should contain the starting row
+ for (int startRowId = 0; startRowId < ROW_COUNT; startRowId++) {
+ InternalScanner scanner = this.hmemcache.getScanner(timestamp,
+ cols, getRowName(startRowId));
+ List<KeyValue> results = new ArrayList<KeyValue>();
+ for (int i = 0; scanner.next(results); i++) {
+ int rowId = startRowId + i;
+ assertTrue("Row name",
+ KeyValue.COMPARATOR.compareRows(results.get(0),
+ getRowName(rowId)) == 0);
+ assertEquals("Count of columns", COLUMNS_COUNT, results.size());
+ List<KeyValue> row = new ArrayList<KeyValue>();
+ for (KeyValue kv : results) {
+ row.add(kv);
+ }
+ isExpectedRowWithoutTimestamps(rowId, row);
+ // Clear out set. Otherwise row results accumulate.
+ results.clear();
+ }
+ }
+ }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/regionserver/TestHRegion.java b/src/test/org/apache/hadoop/hbase/regionserver/TestHRegion.java
new file mode 100644
index 0000000..63260e8
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/regionserver/TestHRegion.java
@@ -0,0 +1,655 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.dfs.MiniDFSCluster;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Basic stand-alone testing of HRegion.
+ *
+ * A lot of the meta information for an HRegion now lives inside other
+ * HRegions or in the HBaseMaster, so only basic testing is possible.
+ */
+public class TestHRegion extends HBaseTestCase {
+ static final Log LOG = LogFactory.getLog(TestHRegion.class);
+
+ private static final int FIRST_ROW = 1;
+ private static final int NUM_VALS = 1000;
+ private static final String CONTENTS_BASIC_STR = "contents:basic";
+ private static final byte [] CONTENTS_BASIC = Bytes.toBytes(CONTENTS_BASIC_STR);
+ private static final String CONTENTSTR = "contentstr";
+ private static final String ANCHORNUM = "anchor:anchornum-";
+ private static final String ANCHORSTR = "anchorstr";
+ private static final byte [] CONTENTS_FIRSTCOL = Bytes.toBytes("contents:firstcol");
+ private static final byte [] ANCHOR_SECONDCOL = Bytes.toBytes("anchor:secondcol");
+
+ private MiniDFSCluster cluster = null;
+ private HTableDescriptor desc = null;
+ HRegion r = null;
+ HRegionIncommon region = null;
+
+ private static int numInserted = 0;
+
+ /**
+ * @see org.apache.hadoop.hbase.HBaseTestCase#setUp()
+ */
+ @Override
+ protected void setUp() throws Exception {
+ this.conf.set("hbase.hstore.compactionThreshold", "2");
+
+ conf.setLong("hbase.hregion.max.filesize", 65536);
+
+ cluster = new MiniDFSCluster(conf, 2, true, (String[])null);
+ fs = cluster.getFileSystem();
+
+ // Set the hbase.rootdir to be the home directory in mini dfs.
+ this.conf.set(HConstants.HBASE_DIR,
+ this.cluster.getFileSystem().getHomeDirectory().toString());
+
+ super.setUp();
+ }
+
+ /**
+   * Since all the "tests" depend on the results of the previous test, they are
+   * not JUnit tests that can stand alone. Consequently we have a single JUnit
+   * test that runs the "sub-tests" as private methods.
+ * @throws IOException
+ */
+ public void testHRegion() throws IOException {
+ try {
+ init();
+ locks();
+ badPuts();
+ basic();
+ scan();
+ splitAndMerge();
+ read();
+ } finally {
+ shutdownDfs(cluster);
+ }
+ }
+
+ // Create directories, start mini cluster, etc.
+
+ private void init() throws IOException {
+ desc = new HTableDescriptor("test");
+ desc.addFamily(new HColumnDescriptor("contents:"));
+ desc.addFamily(new HColumnDescriptor("anchor:"));
+ r = createNewHRegion(desc, null, null);
+ region = new HRegionIncommon(r);
+ LOG.info("setup completed.");
+ }
+
+ // Test basic functionality. Writes to contents:basic and anchor:anchornum-*
+
+ private void basic() throws IOException {
+ long startTime = System.currentTimeMillis();
+
+ // Write out a bunch of values
+ for (int k = FIRST_ROW; k <= NUM_VALS; k++) {
+ BatchUpdate batchUpdate =
+ new BatchUpdate(Bytes.toBytes("row_" + k), System.currentTimeMillis());
+ batchUpdate.put(CONTENTS_BASIC,
+ (CONTENTSTR + k).getBytes(HConstants.UTF8_ENCODING));
+ batchUpdate.put(Bytes.toBytes(ANCHORNUM + k),
+ (ANCHORSTR + k).getBytes(HConstants.UTF8_ENCODING));
+ region.commit(batchUpdate);
+ }
+ LOG.info("Write " + NUM_VALS + " rows. Elapsed time: "
+ + ((System.currentTimeMillis() - startTime) / 1000.0));
+
+ // Flush cache
+
+ startTime = System.currentTimeMillis();
+
+ region.flushcache();
+
+ LOG.info("Cache flush elapsed time: "
+ + ((System.currentTimeMillis() - startTime) / 1000.0));
+
+ // Read them back in
+
+ startTime = System.currentTimeMillis();
+
+ byte [] collabel = null;
+ for (int k = FIRST_ROW; k <= NUM_VALS; k++) {
+ String rowlabelStr = "row_" + k;
+ byte [] rowlabel = Bytes.toBytes(rowlabelStr);
+ if (k % 100 == 0) LOG.info(Bytes.toString(rowlabel));
+ Cell c = region.get(rowlabel, CONTENTS_BASIC);
+ assertNotNull("K is " + k, c);
+ byte [] bodydata = c.getValue();
+ assertNotNull(bodydata);
+ String bodystr = new String(bodydata, HConstants.UTF8_ENCODING).trim();
+ String teststr = CONTENTSTR + k;
+ assertEquals("Incorrect value for key: (" + rowlabelStr + "," + CONTENTS_BASIC_STR
+ + "), expected: '" + teststr + "' got: '" + bodystr + "'",
+ bodystr, teststr);
+ String collabelStr = ANCHORNUM + k;
+ collabel = Bytes.toBytes(collabelStr);
+ bodydata = region.get(rowlabel, collabel).getValue();
+ bodystr = new String(bodydata, HConstants.UTF8_ENCODING).trim();
+ teststr = ANCHORSTR + k;
+ assertEquals("Incorrect value for key: (" + rowlabelStr + "," + collabelStr
+ + "), expected: '" + teststr + "' got: '" + bodystr + "'",
+ bodystr, teststr);
+ }
+
+ LOG.info("Read " + NUM_VALS + " rows. Elapsed time: "
+ + ((System.currentTimeMillis() - startTime) / 1000.0));
+
+ LOG.info("basic completed.");
+ }
+
+  private void badPuts() {
+    // Try a column whose family is not registered in the table.
+    boolean exceptionThrown = false;
+    try {
+      BatchUpdate batchUpdate = new BatchUpdate(Bytes.toBytes("Some old key"));
+      String unregisteredColName = "FamilyGroup:FamilyLabel";
+      batchUpdate.put(Bytes.toBytes(unregisteredColName),
+        unregisteredColName.getBytes(HConstants.UTF8_ENCODING));
+      region.commit(batchUpdate);
+    } catch (IOException e) {
+      exceptionThrown = true;
+    }
+    assertTrue("Bad family", exceptionThrown);
+    LOG.info("badPuts completed.");
+  }
+
+ /**
+ * Test getting and releasing locks.
+ */
+ private void locks() {
+ final int threadCount = 10;
+ final int lockCount = 10;
+
+ List<Thread>threads = new ArrayList<Thread>(threadCount);
+ for (int i = 0; i < threadCount; i++) {
+ threads.add(new Thread(Integer.toString(i)) {
+ @Override
+ public void run() {
+ Integer [] lockids = new Integer[lockCount];
+ // Get locks.
+ for (int i = 0; i < lockCount; i++) {
+ try {
+ byte [] rowid = Bytes.toBytes(Integer.toString(i));
+ lockids[i] = r.obtainRowLock(rowid);
+ assertEquals(rowid, r.getRowFromLock(lockids[i]));
+ LOG.debug(getName() + " locked " + Bytes.toString(rowid));
+ } catch (IOException e) {
+ e.printStackTrace();
+ }
+ }
+ LOG.debug(getName() + " set " +
+ Integer.toString(lockCount) + " locks");
+
+ // Abort outstanding locks.
+ for (int i = lockCount - 1; i >= 0; i--) {
+ r.releaseRowLock(lockids[i]);
+ LOG.debug(getName() + " unlocked " + i);
+ }
+ LOG.debug(getName() + " released " +
+ Integer.toString(lockCount) + " locks");
+ }
+ });
+ }
+
+ // Startup all our threads.
+ for (Thread t : threads) {
+ t.start();
+ }
+
+ // Now wait around till all are done.
+ for (Thread t: threads) {
+ while (t.isAlive()) {
+ try {
+ Thread.sleep(1);
+ } catch (InterruptedException e) {
+ // Go around again.
+ }
+ }
+ }
+ LOG.info("locks completed.");
+ }
+
+ // Test scanners. Writes contents:firstcol and anchor:secondcol
+
+ private void scan() throws IOException {
+ byte [] cols [] = {
+ CONTENTS_FIRSTCOL,
+ ANCHOR_SECONDCOL
+ };
+
+ // Test the Scanner!!!
+ String[] vals1 = new String[1000];
+ for(int k = 0; k < vals1.length; k++) {
+ vals1[k] = Integer.toString(k);
+ }
+
+ // 1. Insert a bunch of values
+ long startTime = System.currentTimeMillis();
+ for(int k = 0; k < vals1.length / 2; k++) {
+ String kLabel = String.format("%1$03d", k);
+
+ BatchUpdate batchUpdate =
+ new BatchUpdate(Bytes.toBytes("row_vals1_" + kLabel),
+ System.currentTimeMillis());
+ batchUpdate.put(cols[0], vals1[k].getBytes(HConstants.UTF8_ENCODING));
+ batchUpdate.put(cols[1], vals1[k].getBytes(HConstants.UTF8_ENCODING));
+ region.commit(batchUpdate);
+ numInserted += 2;
+ }
+ LOG.info("Write " + (vals1.length / 2) + " elapsed time: "
+ + ((System.currentTimeMillis() - startTime) / 1000.0));
+
+ // 2. Scan from cache
+ startTime = System.currentTimeMillis();
+ ScannerIncommon s = this.region.getScanner(cols, HConstants.EMPTY_START_ROW,
+ System.currentTimeMillis());
+ int numFetched = 0;
+ try {
+ List<KeyValue> curVals = new ArrayList<KeyValue>();
+ int k = 0;
+ while(s.next(curVals)) {
+ for (KeyValue kv: curVals) {
+ byte [] val = kv.getValue();
+ int curval =
+ Integer.parseInt(new String(val, HConstants.UTF8_ENCODING).trim());
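+          // Each KeyValue matches exactly one of the two columns, so the
+          // negated check below fires once per cell (against the column it
+          // does not match) and counts every cell exactly once.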
+ for(int j = 0; j < cols.length; j++) {
+ if (!kv.matchingColumn(cols[j])) {
+ assertEquals("Error at: " + kv + " " + Bytes.toString(cols[j]),
+ k, curval);
+ numFetched++;
+ break;
+ }
+ }
+ }
+ curVals.clear();
+ k++;
+ }
+ } finally {
+ s.close();
+ }
+ assertEquals(numInserted, numFetched);
+
+ LOG.info("Scanned " + (vals1.length / 2)
+ + " rows from cache. Elapsed time: "
+ + ((System.currentTimeMillis() - startTime) / 1000.0));
+
+ // 3. Flush to disk
+ startTime = System.currentTimeMillis();
+ region.flushcache();
+ LOG.info("Cache flush elapsed time: "
+ + ((System.currentTimeMillis() - startTime) / 1000.0));
+
+ // 4. Scan from disk
+ startTime = System.currentTimeMillis();
+ s = this.region.getScanner(cols, HConstants.EMPTY_START_ROW,
+ System.currentTimeMillis());
+ numFetched = 0;
+ try {
+ List<KeyValue> curVals = new ArrayList<KeyValue>();
+ int k = 0;
+ while(s.next(curVals)) {
+ for(Iterator<KeyValue> it = curVals.iterator(); it.hasNext(); ) {
+ KeyValue kv = it.next();
+ byte [] col = kv.getColumn();
+ byte [] val = kv.getValue();
+ int curval =
+ Integer.parseInt(new String(val, HConstants.UTF8_ENCODING).trim());
+ for(int j = 0; j < cols.length; j++) {
+ if (Bytes.compareTo(col, cols[j]) == 0) {
+ assertEquals("Error at:" + kv.getRow() + "/"
+ + kv.getTimestamp()
+ + ", Value for " + col + " should be: " + k
+ + ", but was fetched as: " + curval, k, curval);
+ numFetched++;
+ }
+ }
+ }
+ curVals.clear();
+ k++;
+ }
+ } finally {
+ s.close();
+ }
+ assertEquals("Inserted " + numInserted + " values, but fetched " + numFetched, numInserted, numFetched);
+
+ LOG.info("Scanned " + (vals1.length / 2)
+ + " rows from disk. Elapsed time: "
+ + ((System.currentTimeMillis() - startTime) / 1000.0));
+
+ // 5. Insert more values
+ startTime = System.currentTimeMillis();
+ for(int k = vals1.length/2; k < vals1.length; k++) {
+ String kLabel = String.format("%1$03d", k);
+ BatchUpdate batchUpdate =
+ new BatchUpdate(Bytes.toBytes("row_vals1_" + kLabel),
+ System.currentTimeMillis());
+ batchUpdate.put(cols[0], vals1[k].getBytes(HConstants.UTF8_ENCODING));
+ batchUpdate.put(cols[1], vals1[k].getBytes(HConstants.UTF8_ENCODING));
+ region.commit(batchUpdate);
+ numInserted += 2;
+ }
+
+ LOG.info("Write " + (vals1.length / 2) + " rows (second half). Elapsed time: "
+ + ((System.currentTimeMillis() - startTime) / 1000.0));
+
+ // 6. Scan from cache and disk
+ startTime = System.currentTimeMillis();
+ s = this.region.getScanner(cols, HConstants.EMPTY_START_ROW,
+ System.currentTimeMillis());
+ numFetched = 0;
+ try {
+ List<KeyValue> curVals = new ArrayList<KeyValue>();
+ int k = 0;
+ while(s.next(curVals)) {
+ for(Iterator<KeyValue> it = curVals.iterator(); it.hasNext(); ) {
+ KeyValue kv = it.next();
+ byte [] col = kv.getColumn();
+ byte [] val = kv.getValue();
+ int curval =
+ Integer.parseInt(new String(val, HConstants.UTF8_ENCODING).trim());
+ for(int j = 0; j < cols.length; j++) {
+ if(Bytes.compareTo(col, cols[j]) == 0) {
+ assertEquals("Error at:" + kv.getRow() + "/"
+ + kv.getTimestamp()
+ + ", Value for " + col + " should be: " + k
+ + ", but was fetched as: " + curval, k, curval);
+ numFetched++;
+ }
+ }
+ }
+ curVals.clear();
+ k++;
+ }
+ } finally {
+ s.close();
+ }
+ assertEquals("Inserted " + numInserted + " values, but fetched " +
+ numFetched, numInserted, numFetched);
+
+ LOG.info("Scanned " + vals1.length
+ + " rows from cache and disk. Elapsed time: "
+ + ((System.currentTimeMillis() - startTime) / 1000.0));
+
+ // 7. Flush to disk
+ startTime = System.currentTimeMillis();
+ region.flushcache();
+ LOG.info("Cache flush elapsed time: "
+ + ((System.currentTimeMillis() - startTime) / 1000.0));
+
+ // 8. Scan from disk
+ startTime = System.currentTimeMillis();
+ s = this.region.getScanner(cols, HConstants.EMPTY_START_ROW,
+ System.currentTimeMillis());
+ numFetched = 0;
+ try {
+ List<KeyValue> curVals = new ArrayList<KeyValue>();
+ int k = 0;
+ while(s.next(curVals)) {
+ for(Iterator<KeyValue> it = curVals.iterator(); it.hasNext(); ) {
+ KeyValue kv = it.next();
+ byte [] col = kv.getColumn();
+ byte [] val = kv.getValue();
+ int curval =
+ Integer.parseInt(new String(val, HConstants.UTF8_ENCODING).trim());
+ for (int j = 0; j < cols.length; j++) {
+ if (Bytes.compareTo(col, cols[j]) == 0) {
+              assertEquals("Value for " + Bytes.toString(col) + " should be: " + k
+                + ", but was fetched as: " + curval, k, curval);
+ numFetched++;
+ }
+ }
+ }
+ curVals.clear();
+ k++;
+ }
+ } finally {
+ s.close();
+ }
+ assertEquals("Inserted " + numInserted + " values, but fetched " + numFetched,
+ numInserted, numFetched);
+ LOG.info("Scanned " + vals1.length
+ + " rows from disk. Elapsed time: "
+ + ((System.currentTimeMillis() - startTime) / 1000.0));
+
+ // 9. Scan with a starting point
+ startTime = System.currentTimeMillis();
+ s = this.region.getScanner(cols, Bytes.toBytes("row_vals1_500"),
+ System.currentTimeMillis());
+ numFetched = 0;
+ try {
+ List<KeyValue> curVals = new ArrayList<KeyValue>();
+ int k = 500;
+ while(s.next(curVals)) {
+ for(Iterator<KeyValue> it = curVals.iterator(); it.hasNext(); ) {
+ KeyValue kv = it.next();
+ byte [] col = kv.getColumn();
+ byte [] val = kv.getValue();
+ int curval =
+ Integer.parseInt(new String(val, HConstants.UTF8_ENCODING).trim());
+ for (int j = 0; j < cols.length; j++) {
+ if (Bytes.compareTo(col, cols[j]) == 0) {
+              assertEquals("Value for " + Bytes.toString(col) + " should be: " + k
+                + ", but was fetched as: " + curval, k, curval);
+ numFetched++;
+ }
+ }
+ }
+ curVals.clear();
+ k++;
+ }
+ } finally {
+ s.close();
+ }
+ assertEquals("Should have fetched " + (numInserted / 2) +
+ " values, but fetched " + numFetched, (numInserted / 2), numFetched);
+
+ LOG.info("Scanned " + (numFetched / 2)
+ + " rows from disk with specified start point. Elapsed time: "
+ + ((System.currentTimeMillis() - startTime) / 1000.0));
+
+ LOG.info("scan completed.");
+ }
+
+  // NOTE: This test depends on the data written by the preceding sub-tests
+  // (basic and scan)
+ private void splitAndMerge() throws IOException {
+ Path oldRegionPath = r.getRegionDir();
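+    // compactStores returns a suggested split row once the region is big
+    // enough (or null if no split is warranted).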
+ byte [] splitRow = r.compactStores();
+ assertNotNull(splitRow);
+ long startTime = System.currentTimeMillis();
+ HRegion subregions [] = r.splitRegion(splitRow);
+ if (subregions != null) {
+ LOG.info("Split region elapsed time: "
+ + ((System.currentTimeMillis() - startTime) / 1000.0));
+      assertEquals("Number of subregions", 2, subregions.length);
+ for (int i = 0; i < subregions.length; i++) {
+ subregions[i] = openClosedRegion(subregions[i]);
+ subregions[i].compactStores();
+ }
+
+ // Now merge it back together
+ Path oldRegion1 = subregions[0].getRegionDir();
+ Path oldRegion2 = subregions[1].getRegionDir();
+ startTime = System.currentTimeMillis();
+ r = HRegion.mergeAdjacent(subregions[0], subregions[1]);
+ region = new HRegionIncommon(r);
+ LOG.info("Merge regions elapsed time: " +
+ ((System.currentTimeMillis() - startTime) / 1000.0));
+ fs.delete(oldRegion1, true);
+ fs.delete(oldRegion2, true);
+ fs.delete(oldRegionPath, true);
+ }
+ LOG.info("splitAndMerge completed.");
+ }
+
+ // This test verifies that everything is still there after splitting and merging
+
+ private void read() throws IOException {
+ // First verify the data written by testBasic()
+ byte [][] cols = {
+ Bytes.toBytes(ANCHORNUM + "[0-9]+"),
+ CONTENTS_BASIC
+ };
+ long startTime = System.currentTimeMillis();
+ InternalScanner s =
+ r.getScanner(cols, HConstants.EMPTY_START_ROW,
+ System.currentTimeMillis(), null);
+ try {
+ int contentsFetched = 0;
+ int anchorFetched = 0;
+ List<KeyValue> curVals = new ArrayList<KeyValue>();
+ int k = 0;
+ while(s.next(curVals)) {
+ for(Iterator<KeyValue> it = curVals.iterator(); it.hasNext(); ) {
+ KeyValue kv = it.next();
+ byte [] col = kv.getColumn();
+ byte [] val = kv.getValue();
+ String curval = Bytes.toString(val);
+ if (Bytes.compareTo(col, CONTENTS_BASIC) == 0) {
+ assertTrue("Error at:" + kv
+ + ", Value for " + col + " should start with: " + CONTENTSTR
+ + ", but was fetched as: " + curval,
+ curval.startsWith(CONTENTSTR));
+ contentsFetched++;
+
+ } else if (Bytes.toString(col).startsWith(ANCHORNUM)) {
+ assertTrue("Error at:" + kv
+ + ", Value for " + Bytes.toString(col) +
+ " should start with: " + ANCHORSTR
+ + ", but was fetched as: " + curval,
+ curval.startsWith(ANCHORSTR));
+ anchorFetched++;
+
+ } else {
+ LOG.info("UNEXPECTED COLUMN " + Bytes.toString(col));
+ }
+ }
+ curVals.clear();
+ k++;
+ }
+ assertEquals("Expected " + NUM_VALS + " " + Bytes.toString(CONTENTS_BASIC) +
+ " values, but fetched " + contentsFetched, NUM_VALS, contentsFetched);
+ assertEquals("Expected " + NUM_VALS + " " + ANCHORNUM +
+ " values, but fetched " + anchorFetched, NUM_VALS, anchorFetched);
+
+ LOG.info("Scanned " + NUM_VALS
+ + " rows from disk. Elapsed time: "
+ + ((System.currentTimeMillis() - startTime) / 1000.0));
+
+ } finally {
+ s.close();
+ }
+
+ // Verify testScan data
+
+ cols = new byte [][] {CONTENTS_FIRSTCOL, ANCHOR_SECONDCOL};
+
+ startTime = System.currentTimeMillis();
+
+ s = r.getScanner(cols, HConstants.EMPTY_START_ROW,
+ System.currentTimeMillis(), null);
+ try {
+ int numFetched = 0;
+ List<KeyValue> curVals = new ArrayList<KeyValue>();
+ int k = 0;
+ while(s.next(curVals)) {
+ for(Iterator<KeyValue> it = curVals.iterator(); it.hasNext(); ) {
+ KeyValue kv = it.next();
+ byte [] col = kv.getColumn();
+ byte [] val = kv.getValue();
+ int curval =
+ Integer.parseInt(new String(val, HConstants.UTF8_ENCODING).trim());
+
+ for (int j = 0; j < cols.length; j++) {
+ if (Bytes.compareTo(col, cols[j]) == 0) {
+            assertEquals("Value for " + Bytes.toString(col) + " should be: " + k
+              + ", but was fetched as: " + curval, k, curval);
+ numFetched++;
+ }
+ }
+ }
+ curVals.clear();
+ k++;
+ }
+ assertEquals("Inserted " + numInserted + " values, but fetched " +
+ numFetched, numInserted, numFetched);
+
+ LOG.info("Scanned " + (numFetched / 2)
+ + " rows from disk. Elapsed time: "
+ + ((System.currentTimeMillis() - startTime) / 1000.0));
+
+ } finally {
+ s.close();
+ }
+
+ // Test a scanner which only specifies the column family name
+
+ cols = new byte [][] {
+ Bytes.toBytes("anchor:")
+ };
+
+ startTime = System.currentTimeMillis();
+
+ s = r.getScanner(cols, HConstants.EMPTY_START_ROW, System.currentTimeMillis(), null);
+
+ try {
+ int fetched = 0;
+ List<KeyValue> curVals = new ArrayList<KeyValue>();
+ while(s.next(curVals)) {
+ for(Iterator<KeyValue> it = curVals.iterator(); it.hasNext(); ) {
+ it.next();
+ fetched++;
+ }
+ curVals.clear();
+ }
+ assertEquals("Inserted " + (NUM_VALS + numInserted/2) +
+ " values, but fetched " + fetched, (NUM_VALS + numInserted/2), fetched);
+ LOG.info("Scanned " + fetched
+ + " rows from disk. Elapsed time: "
+ + ((System.currentTimeMillis() - startTime) / 1000.0));
+
+ } finally {
+ s.close();
+ }
+ LOG.info("read completed.");
+ }
+
+}
diff --git a/src/test/org/apache/hadoop/hbase/regionserver/TestHRegionInfo.java b/src/test/org/apache/hadoop/hbase/regionserver/TestHRegionInfo.java
new file mode 100644
index 0000000..fcb22fb
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/regionserver/TestHRegionInfo.java
@@ -0,0 +1,37 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.util.Bytes;
+
+public class TestHRegionInfo extends HBaseTestCase {
+ public void testCreateHRegionInfoName() throws Exception {
+ String tableName = "tablename";
+ final byte [] tn = Bytes.toBytes(tableName);
+ String startKey = "startkey";
+ final byte [] sk = Bytes.toBytes(startKey);
+ String id = "id";
+ byte [] name = HRegionInfo.createRegionName(tn, sk, id);
+ String nameStr = Bytes.toString(name);
+    assertEquals(tableName + "," + startKey + "," + id, nameStr);
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/regionserver/TestLogRolling.java b/src/test/org/apache/hadoop/hbase/regionserver/TestLogRolling.java
new file mode 100644
index 0000000..5ef110d
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/regionserver/TestLogRolling.java
@@ -0,0 +1,164 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.HBaseClusterTestCase;
+
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Test log deletion as logs are rolled.
+ */
+public class TestLogRolling extends HBaseClusterTestCase {
+ private static final Log LOG = LogFactory.getLog(TestLogRolling.class);
+ private HRegionServer server;
+ private HLog log;
+ private String tableName;
+ private byte[] value;
+
+ /**
+ * constructor
+ * @throws Exception
+ */
+ public TestLogRolling() throws Exception {
+ // start one regionserver and a minidfs.
+ super();
+ try {
+ this.server = null;
+ this.log = null;
+ this.tableName = null;
+ this.value = null;
+
+ String className = this.getClass().getName();
+ StringBuilder v = new StringBuilder(className);
+ while (v.length() < 1000) {
+ v.append(className);
+ }
+ value = Bytes.toBytes(v.toString());
+
+ } catch (Exception e) {
+ LOG.fatal("error in constructor", e);
+ throw e;
+ }
+ }
+
+ // Need to override this setup so we can edit the config before it gets sent
+ // to the cluster startup.
+ @Override
+ protected void preHBaseClusterSetup() {
+ // Force a region split after every 768KB
+ conf.setLong("hbase.hregion.max.filesize", 768L * 1024L);
+
+ // We roll the log after every 32 writes
+ conf.setInt("hbase.regionserver.maxlogentries", 32);
+
+ // For less frequently updated regions flush after every 2 flushes
+ conf.setInt("hbase.hregion.memcache.optionalflushcount", 2);
+
+ // We flush the cache after every 8192 bytes
+ conf.setInt("hbase.hregion.memcache.flush.size", 8192);
+
+ // Make lease timeout longer, lease checks less frequent
+ conf.setInt("hbase.master.lease.period", 10 * 1000);
+
+ // Increase the amount of time between client retries
+ conf.setLong("hbase.client.pause", 15 * 1000);
+
+ // Reduce thread wake frequency so that other threads can get
+ // a chance to run.
+ conf.setInt(HConstants.THREAD_WAKE_FREQUENCY, 2 * 1000);
+ }
+
+ private void startAndWriteData() throws Exception {
+ // When the META table can be opened, the region servers are running
+ new HTable(conf, HConstants.META_TABLE_NAME);
+
+ this.server = cluster.getRegionThreads().get(0).getRegionServer();
+ this.log = server.getLog();
+
+ // Create the test table and open it
+ HTableDescriptor desc = new HTableDescriptor(tableName);
+ desc.addFamily(new HColumnDescriptor(HConstants.COLUMN_FAMILY));
+ HBaseAdmin admin = new HBaseAdmin(conf);
+ admin.createTable(desc);
+ HTable table = new HTable(conf, tableName);
+
+ for (int i = 1; i <= 256; i++) { // 256 writes should cause 8 log rolls
+ BatchUpdate b =
+ new BatchUpdate("row" + String.format("%1$04d", i));
+ b.put(HConstants.COLUMN_FAMILY, value);
+ table.commit(b);
+
+ if (i % 32 == 0) {
+ // After every 32 writes sleep to let the log roller run
+
+ try {
+ Thread.sleep(2000);
+ } catch (InterruptedException e) {
+ // continue
+ }
+ }
+ }
+ }
+
+ /**
+ * Tests that logs are deleted
+ *
+ * @throws Exception
+ */
+ public void testLogRolling() throws Exception {
+ tableName = getName();
+ try {
+ startAndWriteData();
+ LOG.info("after writing there are " + log.getNumLogFiles() + " log files");
+
+ // flush all regions
+
+ List<HRegion> regions =
+ new ArrayList<HRegion>(server.getOnlineRegions());
+ for (HRegion r: regions) {
+ r.flushcache();
+ }
+
+ // Now roll the log
+ log.rollWriter();
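+      // With every region flushed, the older logs hold no unflushed edits, so
+      // the roll should remove them, leaving at most two log files.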
+
+ int count = log.getNumLogFiles();
+ LOG.info("after flushing all regions and rolling logs there are " +
+ log.getNumLogFiles() + " log files");
+ assertTrue(("actual count: " + count), count <= 2);
+ } catch (Exception e) {
+ LOG.fatal("unexpected exception", e);
+ throw e;
+ }
+ }
+
+}
diff --git a/src/test/org/apache/hadoop/hbase/regionserver/TestLruHashMap.java b/src/test/org/apache/hadoop/hbase/regionserver/TestLruHashMap.java
new file mode 100644
index 0000000..4abb742
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/regionserver/TestLruHashMap.java
@@ -0,0 +1,354 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Random;
+import java.util.Set;
+
+import junit.framework.TestCase;
+
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.HStoreKey;
+
+public class TestLruHashMap extends TestCase {
+ private static LruHashMap<HStoreKey, HStoreKey> lru = null;
+
+ private static HStoreKey[] keys = null;
+ private static HStoreKey key = null;
+ private static HStoreKey tmpKey = null;
+
+ private static HStoreKey[] vals = null;
+ private static HStoreKey val = null;
+ private static HStoreKey tmpVal = null;
+
+ private static HStoreKey[] tmpData = null;
+
+ //Have to set type
+ private static Set<LruHashMap.Entry<HStoreKey, HStoreKey>> hashSet = null;
+ private static List<LruHashMap.Entry<HStoreKey, HStoreKey>> entryList = null;
+ private static LruHashMap.Entry<HStoreKey, HStoreKey> entry = null;
+
+ private static Random rand = null;
+ private static int ENTRY_ARRAY_LEN = 2000;
+ private static int LOOPS = 10;
+
+ protected void setUp()
+ throws Exception{
+ super.setUp();
+ long maxMemUsage = 10000000L;
+ //Using the default values for everything, except memUsage
+ lru = new LruHashMap<HStoreKey, HStoreKey>(maxMemUsage);
+
+
+ rand = new Random();
+
+ keys = new HStoreKey[ENTRY_ARRAY_LEN];
+ vals = new HStoreKey[ENTRY_ARRAY_LEN];
+ tmpData = new HStoreKey[ENTRY_ARRAY_LEN];
+ }
+
+ protected void tearDown()
+ throws Exception{
+ super.tearDown();
+ }
+
+
+ /**
+ * This test adds data to the Lru and checks that the head and tail pointers
+ * are updated correctly
+ */
+ public void testAdd_Pointers(){
+ for(int i=0; i<LOOPS; i++){
+ sequential(keys);
+ tmpKey = keys[0];
+
+ for(HStoreKey key: keys){
+ lru.put(key, key);
+ assertTrue("headPtr key not correct",
+ lru.getHeadPtr().getKey().equals(tmpKey));
+
+ assertTrue("tailPtr key not correct",
+ lru.getTailPtr().getKey().equals(key));
+ }
+ lru.clear();
+ }
+ System.out.println("testAdd_Pointers: OK");
+ }
+
+ /**
+ * This test adds data to the Lru and checks that the memFree variable never
+ * goes below 0
+ */
+ public void testAdd_MemUsage_random(){
+ for(int i=0; i<LOOPS; i++){
+ random(keys);
+
+ for(HStoreKey key : keys){
+ lru.put(key, key);
+
+ assertTrue("Memory usage exceeded!", lru.getMemFree() > 0);
+ }
+
+ lru.clear();
+ }
+ System.out.println("testAdd_MemUsage: OK");
+ }
+
+ /**
+ * This test adds data to the Lru and checks that the memFree variable never
+ * goes below 0
+ */
+ public void testAdd_MemUsage_sequential(){
+ for(int i=0; i<LOOPS; i++){
+ sequential(keys);
+
+ for(HStoreKey key : keys){
+ lru.put(key, key);
+
+ assertTrue("Memory usage exceeded!", lru.getMemFree() > 0);
+ }
+
+ lru.clear();
+ }
+ System.out.println("testAdd_MemUsage: OK");
+ }
+
+ /**
+ * This test adds data to the Lru and checks that the order in the lru is the
+ * same as the insert order
+ */
+ public void testAdd_Order()
+ throws Exception{
+ for(int i=0; i<LOOPS; i++){
+ //Adding to Lru
+ put();
+
+ //Getting order from lru
+ entryList = lru.entryLruList();
+
+ //Comparing orders
+ assertTrue("Different lengths" , keys.length == entryList.size());
+ int j = 0;
+      for(Map.Entry<HStoreKey, HStoreKey> entry : entryList){
+ //Comparing keys
+ assertTrue("Different order", keys[j++].equals(entry.getKey()));
+ }
+
+ //Clearing the Lru
+ lru.clear();
+ }
+ System.out.println("testAdd_Order: OK");
+ }
+
+
+ /**
+ * This test adds data to the Lru, clears it and checks that the memoryUsage
+ * looks ok afterwards
+ */
+ public void testAdd_Clear()
+ throws Exception{
+ long initMemUsage = 0L;
+ long putMemUsage = 0L;
+ long clearMemUsage = 0L;
+ for(int i=0; i<LOOPS; i++){
+ initMemUsage = lru.getMemFree();
+
+ //Adding to Lru
+ put();
+ putMemUsage = lru.getMemFree();
+
+ //Clearing the Lru
+ lru.clear();
+ clearMemUsage = lru.getMemFree();
+ assertTrue("memUsage went down", clearMemUsage <= initMemUsage);
+ }
+
+ System.out.println("testAdd_Clear: OK");
+ }
+
+
+ /**
+ * This test adds data to the Lru and checks that all the data that is in
+ * the hashSet is also in the EntryList
+ */
+ public void testAdd_Containment(){
+ for(int i=0; i<LOOPS; i++){
+ //Adding to Lru
+ put();
+
+ //Getting HashSet
+ hashSet = lru.entryTableSet();
+
+ //Getting EntryList
+ entryList = lru.entryLruList();
+
+ //Comparing
+ assertTrue("Wrong size", hashSet.size() == entryList.size());
+ for(int j=0; j<entryList.size(); j++){
+ assertTrue("Set doesn't contain value from list",
+ hashSet.contains(entryList.get(j)));
+ }
+
+ //Clearing the Lru
+ lru.clear();
+ }
+ System.out.println("testAdd_Containment: OK");
+ }
+
+
+ /**
+ * This test gets an entry from the map and checks that the position of it has
+ * been updated afterwards.
+ */
+ public void testGet(){
+ int getter = 0;
+
+ for(int i=0; i<LOOPS; i++){
+ //Adding to Lru
+ put();
+
+ //Getting a random entry from the map
+ getter = rand.nextInt(ENTRY_ARRAY_LEN);
+ key = keys[getter];
+ val = lru.get(key);
+
+ //Checking if the entries position has changed
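+      //entryLruList is ordered least- to most-recently used, so the entry
+      //just read should now be last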
+ entryList = lru.entryLruList();
+ tmpKey = entryList.get(entryList.size()-1).getKey();
+ assertTrue("Get did not put entry first", tmpKey.equals(key));
+
+ if(getter != ENTRY_ARRAY_LEN -1){
+ tmpKey = entryList.get(getter).getKey();
+        assertFalse("Get left the entry in its old position", tmpKey.equals(key));
+ }
+
+ lru.clear();
+ }
+ System.out.println("testGet: OK");
+ }
+
+ /**
+ * Updates an entry in the map and checks that the position of it has been
+ * updated afterwards.
+ */
+ public void testUpdate(){
+ for(int i=0; i<LOOPS; i++){
+ //Adding to Lru
+ put();
+
+ //Getting a random entry from the map
+ key = keys[rand.nextInt(ENTRY_ARRAY_LEN)];
+ val = random(val);
+
+ tmpVal = lru.put(key, val);
+
+      //Checking if the value has been updated and that the position is first
+ entryList = lru.entryLruList();
+ tmpKey = entryList.get(entryList.size()-1).getKey();
+ assertTrue("put(update) did not put entry first", tmpKey.equals(key));
+ if(!val.equals(tmpVal)){
+ assertTrue("Value was not updated",
+ entryList.get(entryList.size()-1).getValue().equals(val));
+ assertFalse("Value was not updated",
+ entryList.get(entryList.size()-1).getValue().equals(tmpVal));
+ }
+
+ lru.clear();
+ }
+
+ System.out.println("testUpdate: OK");
+ }
+
+ /**
+ * Removes an entry in the map and checks that it is no longer in the
+ * entryList nor the HashSet afterwards
+ */
+ public void testRemove(){
+ for(int i=0; i<LOOPS; i++){
+ //Adding to Lru
+ put();
+ entryList = lru.entryLruList();
+
+ //Getting a random entry from the map
+ key = keys[rand.nextInt(ENTRY_ARRAY_LEN)];
+ val = lru.remove(key);
+
+      //Checking the key is no longer in the list
+      entryList = lru.entryLruList();
+      for(int j=0; j<entryList.size(); j++){
+        assertFalse("Entry found in list after remove",
+            entryList.get(j).getKey().equals(key));
+      }
+ lru.clear();
+ }
+ System.out.println("testRemove: OK");
+ }
+
+
+ //Helpers
+ private static void put(){
+ //Setting up keys and values
+ random(keys);
+ vals = keys;
+
+ //Inserting into Lru
+ for(int i=0; i<keys.length; i++){
+ lru.put(keys[i], vals[i]);
+ }
+ }
+
+ // Generating data
+ private static HStoreKey random(HStoreKey data){
+ return new HStoreKey(Bytes.toBytes(rand.nextInt(ENTRY_ARRAY_LEN)));
+ }
+ private static void random(HStoreKey[] keys){
+ final int LENGTH = keys.length;
+ Set<Integer> set = new HashSet<Integer>();
+ for(int i=0; i<LENGTH; i++){
+ Integer pos = 0;
+ while(set.contains(pos = new Integer(rand.nextInt(LENGTH)))){}
+ set.add(pos);
+ keys[i] = new HStoreKey(Bytes.toBytes(pos));
+ }
+ }
+
+ private static void sequential(HStoreKey[] keys){
+ for(int i=0; i<keys.length; i++){
+ keys[i] = new HStoreKey(Bytes.toBytes(i));
+ }
+ }
+
+
+ //testAdd
+ private HStoreKey[] mapEntriesToArray(List<LruHashMap.Entry<HStoreKey,
+ HStoreKey>> entryList){
+ List<HStoreKey> res = new ArrayList<HStoreKey>();
+ for(Map.Entry<HStoreKey, HStoreKey> entry : entryList){
+ res.add(entry.getKey());
+ }
+ return res.toArray(new HStoreKey[0]);
+ }
+
+}
+
diff --git a/src/test/org/apache/hadoop/hbase/regionserver/TestScanner.java b/src/test/org/apache/hadoop/hbase/regionserver/TestScanner.java
new file mode 100644
index 0000000..56225ea
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/regionserver/TestScanner.java
@@ -0,0 +1,393 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.ByteArrayOutputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.dfs.MiniDFSCluster;
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.filter.StopRowFilter;
+import org.apache.hadoop.hbase.filter.WhileMatchRowFilter;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.hfile.Compression;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Writables;
+
+/**
+ * Test of a long-lived scanner validating as we go.
+ */
+public class TestScanner extends HBaseTestCase {
+ private final Log LOG = LogFactory.getLog(this.getClass());
+
+ private static final byte [] FIRST_ROW =
+ HConstants.EMPTY_START_ROW;
+ private static final byte [][] COLS = {
+ HConstants.COLUMN_FAMILY
+ };
+ private static final byte [][] EXPLICIT_COLS = {
+ HConstants.COL_REGIONINFO,
+ HConstants.COL_SERVER,
+ HConstants.COL_STARTCODE
+ };
+
+ static final HTableDescriptor TESTTABLEDESC =
+ new HTableDescriptor("testscanner");
+ static {
+ TESTTABLEDESC.addFamily(new HColumnDescriptor(HConstants.COLUMN_FAMILY,
+ 10, // Ten is an arbitrary number. Keep versions to help debugging.
+ Compression.Algorithm.NONE.getName(), false, true, 8 * 1024,
+ Integer.MAX_VALUE, HConstants.FOREVER, false));
+ }
+ /** HRegionInfo for root region */
+ public static final HRegionInfo REGION_INFO =
+ new HRegionInfo(TESTTABLEDESC, HConstants.EMPTY_BYTE_ARRAY,
+ HConstants.EMPTY_BYTE_ARRAY);
+
+ private static final byte [] ROW_KEY = REGION_INFO.getRegionName();
+
+ private static final long START_CODE = Long.MAX_VALUE;
+
+ private MiniDFSCluster cluster = null;
+ private HRegion r;
+ private HRegionIncommon region;
+
+ @Override
+ public void setUp() throws Exception {
+ cluster = new MiniDFSCluster(conf, 2, true, (String[])null);
+ // Set the hbase.rootdir to be the home directory in mini dfs.
+ this.conf.set(HConstants.HBASE_DIR,
+ this.cluster.getFileSystem().getHomeDirectory().toString());
+ super.setUp();
+
+ }
+
+ /**
+ * Test that the basic stop row filter works.
+ * @throws Exception
+ */
+ public void testStopRow() throws Exception {
+ byte [] startrow = Bytes.toBytes("bbb");
+ byte [] stoprow = Bytes.toBytes("ccc");
+ try {
+ this.r = createNewHRegion(REGION_INFO.getTableDesc(), null, null);
+ addContent(this.r, HConstants.COLUMN_FAMILY);
+ List<KeyValue> results = new ArrayList<KeyValue>();
+ // Do simple test of getting one row only first.
+ InternalScanner s = r.getScanner(HConstants.COLUMN_FAMILY_ARRAY,
+ Bytes.toBytes("abc"), HConstants.LATEST_TIMESTAMP,
+ new WhileMatchRowFilter(new StopRowFilter(Bytes.toBytes("abd"))));
+ int count = 0;
+ while (s.next(results)) {
+ count++;
+ }
+ s.close();
+ assertEquals(1, count);
+ // Now do something a bit more involved.
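+ // A WhileMatchRowFilter wrapping a StopRowFilter should return rows from
+ // startrow up to, but not including, stoprow.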
+ s = r.getScanner(HConstants.COLUMN_FAMILY_ARRAY,
+ startrow, HConstants.LATEST_TIMESTAMP,
+ new WhileMatchRowFilter(new StopRowFilter(stoprow)));
+ count = 0;
+ KeyValue kv = null;
+ results = new ArrayList<KeyValue>();
+ for (boolean first = true; s.next(results);) {
+ kv = results.get(0);
+ if (first) {
+ assertTrue(Bytes.BYTES_COMPARATOR.compare(startrow, kv.getRow()) == 0);
+ first = false;
+ }
+ count++;
+ }
+ assertTrue(Bytes.BYTES_COMPARATOR.compare(stoprow, kv.getRow()) > 0);
+ // We got something back.
+ assertTrue(count > 10);
+ s.close();
+ } finally {
+ this.r.close();
+ this.r.getLog().closeAndDelete();
+ shutdownDfs(this.cluster);
+ }
+ }
+
+ /** The test: writes region info to the table and validates it can be read back via scan and get, across flushes and region reopens.
+ * @throws IOException
+ */
+ public void testScanner() throws IOException {
+ try {
+ r = createNewHRegion(TESTTABLEDESC, null, null);
+ region = new HRegionIncommon(r);
+
+ // Write information to the meta table
+
+ BatchUpdate batchUpdate =
+ new BatchUpdate(ROW_KEY, System.currentTimeMillis());
+
+ ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
+ DataOutputStream s = new DataOutputStream(byteStream);
+ REGION_INFO.write(s);
+ batchUpdate.put(HConstants.COL_REGIONINFO, byteStream.toByteArray());
+ region.commit(batchUpdate);
+
+ // What we just committed is in the memcache. Verify that we can get
+ // it back both with scanning and get
+
+ scan(false, null);
+ getRegionInfo();
+
+ // Close and re-open
+
+ r.close();
+ r = openClosedRegion(r);
+ region = new HRegionIncommon(r);
+
+ // Verify we can get the data back now that it is on disk.
+
+ scan(false, null);
+ getRegionInfo();
+
+ // Store some new information
+
+ HServerAddress address = new HServerAddress("foo.bar.com:1234");
+
+ batchUpdate = new BatchUpdate(ROW_KEY, System.currentTimeMillis());
+
+ batchUpdate.put(HConstants.COL_SERVER, Bytes.toBytes(address.toString()));
+
+ batchUpdate.put(HConstants.COL_STARTCODE, Bytes.toBytes(START_CODE));
+
+ region.commit(batchUpdate);
+
+ // Validate that we can still get the HRegionInfo, even though it is in
+ // an older row on disk and there is a newer row in the memcache
+
+ scan(true, address.toString());
+ getRegionInfo();
+
+ // flush cache
+
+ region.flushcache();
+
+ // Validate again
+
+ scan(true, address.toString());
+ getRegionInfo();
+
+ // Close and reopen
+
+ r.close();
+ r = openClosedRegion(r);
+ region = new HRegionIncommon(r);
+
+ // Validate again
+
+ scan(true, address.toString());
+ getRegionInfo();
+
+ // Now update the information again
+
+ address = new HServerAddress("bar.foo.com:4321");
+
+ batchUpdate = new BatchUpdate(ROW_KEY, System.currentTimeMillis());
+
+ batchUpdate.put(HConstants.COL_SERVER,
+ Bytes.toBytes(address.toString()));
+
+ region.commit(batchUpdate);
+
+ // Validate again
+
+ scan(true, address.toString());
+ getRegionInfo();
+
+ // flush cache
+
+ region.flushcache();
+
+ // Validate again
+
+ scan(true, address.toString());
+ getRegionInfo();
+
+ // Close and reopen
+
+ r.close();
+ r = openClosedRegion(r);
+ region = new HRegionIncommon(r);
+
+ // Validate again
+
+ scan(true, address.toString());
+ getRegionInfo();
+
+ // clean up
+
+ r.close();
+ r.getLog().closeAndDelete();
+
+ } finally {
+ shutdownDfs(cluster);
+ }
+ }
+
+ /** Compare the HRegionInfo we read from HBase to what we stored */
+ private void validateRegionInfo(byte [] regionBytes) throws IOException {
+ HRegionInfo info =
+ (HRegionInfo) Writables.getWritable(regionBytes, new HRegionInfo());
+
+ assertEquals(REGION_INFO.getRegionId(), info.getRegionId());
+ assertEquals(0, info.getStartKey().length);
+ assertEquals(0, info.getEndKey().length);
+ assertEquals(0, Bytes.compareTo(info.getRegionName(), REGION_INFO.getRegionName()));
+ assertEquals(0, info.getTableDesc().compareTo(REGION_INFO.getTableDesc()));
+ }
+
+ /** Use a scanner to get the region info and then validate the results */
+ private void scan(boolean validateStartcode, String serverName)
+ throws IOException {
+ InternalScanner scanner = null;
+ List<KeyValue> results = new ArrayList<KeyValue>();
+ byte [][][] scanColumns = {
+ COLS,
+ EXPLICIT_COLS
+ };
+
+ for(int i = 0; i < scanColumns.length; i++) {
+ try {
+ scanner = r.getScanner(scanColumns[i], FIRST_ROW,
+ System.currentTimeMillis(), null);
+ while (scanner.next(results)) {
+ assertTrue(hasColumn(results, HConstants.COL_REGIONINFO));
+ byte [] val = getColumn(results, HConstants.COL_REGIONINFO).getValue();
+ validateRegionInfo(val);
+ if(validateStartcode) {
+ assertTrue(hasColumn(results, HConstants.COL_STARTCODE));
+ val = getColumn(results, HConstants.COL_STARTCODE).getValue();
+ assertNotNull(val);
+ assertFalse(val.length == 0);
+ long startCode = Bytes.toLong(val);
+ assertEquals(START_CODE, startCode);
+ }
+
+ if(serverName != null) {
+ assertTrue(hasColumn(results, HConstants.COL_SERVER));
+ val = getColumn(results, HConstants.COL_SERVER).getValue();
+ assertNotNull(val);
+ assertFalse(val.length == 0);
+ String server = Bytes.toString(val);
+ assertEquals(0, server.compareTo(serverName));
+ }
+ results.clear();
+ }
+
+ } finally {
+ InternalScanner s = scanner;
+ scanner = null;
+ if(s != null) {
+ s.close();
+ }
+ }
+ }
+ }
+
+ private boolean hasColumn(final List<KeyValue> kvs, final byte [] column) {
+ for (KeyValue kv: kvs) {
+ if (kv.matchingColumn(column)) {
+ return true;
+ }
+ }
+ return false;
+ }
+
+ private KeyValue getColumn(final List<KeyValue> kvs, final byte [] column) {
+ for (KeyValue kv: kvs) {
+ if (kv.matchingColumn(column)) {
+ return kv;
+ }
+ }
+ return null;
+ }
+
+ /** Use get to retrieve the HRegionInfo and validate it */
+ private void getRegionInfo() throws IOException {
+ byte [] bytes = region.get(ROW_KEY, HConstants.COL_REGIONINFO).getValue();
+ validateRegionInfo(bytes);
+ }
+
+ /**
+ * HBase-910.
+ * @throws Exception
+ */
+ public void testScanAndConcurrentFlush() throws Exception {
+ this.r = createNewHRegion(REGION_INFO.getTableDesc(), null, null);
+ HRegionIncommon hri = new HRegionIncommon(r);
+ try {
+ LOG.info("Added: " +
+ addContent(hri, Bytes.toString(HConstants.COL_REGIONINFO)));
+ int count = count(hri, -1);
+ assertEquals(count, count(hri, 100));
+ assertEquals(count, count(hri, 0));
+ assertEquals(count, count(hri, count - 1));
+ } catch (Exception e) {
+ LOG.error("Failed", e);
+ throw e;
+ } finally {
+ this.r.close();
+ this.r.getLog().closeAndDelete();
+ shutdownDfs(cluster);
+ }
+ }
+
+ /*
+ * @param hri Region
+ * @param flushIndex At what row we start the flush.
+ * @return Count of rows found.
+ * @throws IOException
+ */
+ private int count(final HRegionIncommon hri, final int flushIndex)
+ throws IOException {
+ LOG.info("Taking out counting scan");
+ ScannerIncommon s = hri.getScanner(EXPLICIT_COLS,
+ HConstants.EMPTY_START_ROW, HConstants.LATEST_TIMESTAMP);
+ List<KeyValue> values = new ArrayList<KeyValue>();
+ int count = 0;
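+ // Flush the region part way through the scan (at flushIndex) to verify the
+ // scanner keeps returning rows across a cache flush.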
+ while (s.next(values)) {
+ count++;
+ if (flushIndex == count) {
+ LOG.info("Starting flush at flush index " + flushIndex);
+ hri.flushcache();
+ LOG.info("Finishing flush");
+ }
+ }
+ s.close();
+ LOG.info("Found " + count + " items");
+ return count;
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/regionserver/TestSplit.java b/src/test/org/apache/hadoop/hbase/regionserver/TestSplit.java
new file mode 100644
index 0000000..99c2468
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/regionserver/TestSplit.java
@@ -0,0 +1,264 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.util.TreeMap;
+import java.util.List;
+import java.util.ArrayList;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseClusterTestCase;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.UnknownScannerException;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * {@link TestHRegion} does a split but this TestCase adds testing of fast
+ * split and manufactures odd-ball split scenarios.
+ */
+public class TestSplit extends HBaseClusterTestCase {
+ static final Log LOG = LogFactory.getLog(TestSplit.class.getName());
+
+ /** constructor */
+ public TestSplit() {
+ super();
+
+ // Always compact if there is more than one store file.
+ conf.setInt("hbase.hstore.compactionThreshold", 2);
+
+ // Make lease timeout longer, lease checks less frequent
+ conf.setInt("hbase.master.lease.period", 10 * 1000);
+ conf.setInt("hbase.master.lease.thread.wakefrequency", 5 * 1000);
+
+ conf.setInt("hbase.regionserver.lease.period", 10 * 1000);
+
+ // Increase the amount of time between client retries
+ conf.setLong("hbase.client.pause", 15 * 1000);
+
+ // This size should make it so we always split using the addContent
+ // below. After adding all data, the first region is 1.3M
+ conf.setLong("hbase.hregion.max.filesize", 1024 * 128);
+ }
+
+ /**
+ * Splits twice and verifies getting from each of the split regions.
+ * @throws Exception
+ */
+ public void testBasicSplit() throws Exception {
+ HRegion region = null;
+ try {
+ HTableDescriptor htd = createTableDescriptor(getName());
+ htd.addFamily(new HColumnDescriptor(COLFAMILY_NAME3));
+ region = createNewHRegion(htd, null, null);
+ basicSplit(region);
+ } finally {
+ if (region != null) {
+ region.close();
+ region.getLog().closeAndDelete();
+ }
+ }
+ }
+
+ /**
+ * Test for HBASE-810
+ * @throws Exception
+ */
+ public void testScanSplitOnRegion() throws Exception {
+ HRegion region = null;
+ try {
+ HTableDescriptor htd = createTableDescriptor(getName());
+ htd.addFamily(new HColumnDescriptor(COLFAMILY_NAME3));
+ region = createNewHRegion(htd, null, null);
+ addContent(region, COLFAMILY_NAME3);
+ region.flushcache();
+ final byte [] midkey = region.compactStores();
+ assertNotNull(midkey);
+ byte [][] cols = {COLFAMILY_NAME3};
+ final InternalScanner s = region.getScanner(cols,
+ HConstants.EMPTY_START_ROW, System.currentTimeMillis(), null);
+ final HRegion regionForThread = region;
+
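+ // Split the region in a background thread while the scanner above stays open.
+ // The updates below give the split time to run; afterwards the open scanner
+ // should still answer next() and close() without an UnknownScannerException.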
+ Thread splitThread = new Thread() {
+ @Override
+ public void run() {
+ try {
+ split(regionForThread, midkey);
+ } catch (IOException e) {
+ fail("Unexpected exception " + e);
+ }
+ }
+ };
+ splitThread.start();
+ HRegionServer server = cluster.getRegionThreads().get(0).getRegionServer();
+ long id = server.addScanner(s);
+ for(int i = 0; i < 6; i++) {
+ try {
+ BatchUpdate update =
+ new BatchUpdate(region.getRegionInfo().getStartKey());
+ update.put(COLFAMILY_NAME3, Bytes.toBytes("val"));
+ region.batchUpdate(update);
+ Thread.sleep(1000);
+ }
+ catch (InterruptedException e) {
+ fail("Unexpected exception " + e);
+ }
+ }
+ server.next(id);
+ server.close(id);
+ } catch(UnknownScannerException ex) {
+ ex.printStackTrace();
+ fail("Got the " + ex);
+ }
+ }
+
+ private void basicSplit(final HRegion region) throws Exception {
+ LOG.info("" + addContent(region, COLFAMILY_NAME3));
+ region.flushcache();
+ byte [] splitRow = region.compactStores();
+ assertNotNull(splitRow);
+ LOG.info("SplitRow: " + Bytes.toString(splitRow));
+ HRegion [] regions = split(region, splitRow);
+ try {
+ // Need to open the regions.
+ // TODO: Add an 'open' to HRegion... don't do open by constructing
+ // instance.
+ for (int i = 0; i < regions.length; i++) {
+ regions[i] = openClosedRegion(regions[i]);
+ }
+ // Assert can get rows out of new regions. Should be able to get first
+ // row from first region and the midkey from second region.
+ assertGet(regions[0], COLFAMILY_NAME3, Bytes.toBytes(START_KEY));
+ assertGet(regions[1], COLFAMILY_NAME3, splitRow);
+ // Test I can get scanner and that it starts at right place.
+ assertScan(regions[0], COLFAMILY_NAME3,
+ Bytes.toBytes(START_KEY));
+ assertScan(regions[1], COLFAMILY_NAME3, splitRow);
+ // Now prove can't split regions that have references.
+ for (int i = 0; i < regions.length; i++) {
+ // Add so much data to this region that we create a store file larger
+ // than one of our unsplittable references.
+ for (int j = 0; j < 2; j++) {
+ addContent(regions[i], COLFAMILY_NAME3);
+ }
+ addContent(regions[i], COLFAMILY_NAME2);
+ addContent(regions[i], COLFAMILY_NAME1);
+ regions[i].flushcache();
+ }
+
+ byte [][] midkeys = new byte [regions.length][];
+ // To make the regions splittable, force a compaction.
+ for (int i = 0; i < regions.length; i++) {
+ midkeys[i] = regions[i].compactStores();
+ }
+
+ TreeMap<String, HRegion> sortedMap = new TreeMap<String, HRegion>();
+ // Split these two daughter regions so then I'll have 4 regions. Will
+ // split because added data above.
+ for (int i = 0; i < regions.length; i++) {
+ HRegion[] rs = null;
+ if (midkeys[i] != null) {
+ rs = split(regions[i], midkeys[i]);
+ for (int j = 0; j < rs.length; j++) {
+ sortedMap.put(Bytes.toString(rs[j].getRegionName()),
+ openClosedRegion(rs[j]));
+ }
+ }
+ }
+ LOG.info("Made 4 regions");
+ // The splits should have been even. Test I can get some arbitrary row out
+ // of each.
+ int interval = (LAST_CHAR - FIRST_CHAR) / 3;
+ byte[] b = Bytes.toBytes(START_KEY);
+ for (HRegion r : sortedMap.values()) {
+ assertGet(r, COLFAMILY_NAME3, b);
+ b[0] += interval;
+ }
+ } finally {
+ for (int i = 0; i < regions.length; i++) {
+ try {
+ regions[i].close();
+ } catch (IOException e) {
+ // Ignore.
+ }
+ }
+ }
+ }
+
+ private void assertGet(final HRegion r, final byte [] family, final byte [] k)
+ throws IOException {
+ // Now I have k, get values out and assert they are as expected.
+ Cell[] results = Cell.createSingleCellArray(r.get(k, family, -1, Integer.MAX_VALUE));
+ for (int j = 0; j < results.length; j++) {
+ byte [] tmp = results[j].getValue();
+ // Row should be equal to value every time.
+ assertTrue(Bytes.equals(k, tmp));
+ }
+ }
+
+ /*
+ * Assert first value in the passed region is <code>firstValue</code>.
+ * @param r
+ * @param column
+ * @param firstValue
+ * @throws IOException
+ */
+ private void assertScan(final HRegion r, final byte [] column,
+ final byte [] firstValue)
+ throws IOException {
+ byte [][] cols = {column};
+ InternalScanner s = r.getScanner(cols,
+ HConstants.EMPTY_START_ROW, System.currentTimeMillis(), null);
+ try {
+ List<KeyValue> curVals = new ArrayList<KeyValue>();
+ boolean first = true;
+ OUTER_LOOP: while(s.next(curVals)) {
+ for (KeyValue kv: curVals) {
+ byte [] val = kv.getValue();
+ byte [] curval = val;
+ if (first) {
+ first = false;
+ assertTrue(Bytes.compareTo(curval, firstValue) == 0);
+ } else {
+ // Not asserting anything. Might as well break.
+ break OUTER_LOOP;
+ }
+ }
+ }
+ } finally {
+ s.close();
+ }
+ }
+
+ protected HRegion [] split(final HRegion r, final byte [] splitRow)
+ throws IOException {
+ // Assert can get mid key from passed region.
+ assertGet(r, COLFAMILY_NAME3, splitRow);
+ HRegion [] regions = r.splitRegion(splitRow);
+ assertEquals(regions.length, 2);
+ return regions;
+ }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/regionserver/TestStoreFile.java b/src/test/org/apache/hadoop/hbase/regionserver/TestStoreFile.java
new file mode 100644
index 0000000..941d22f4
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/regionserver/TestStoreFile.java
@@ -0,0 +1,299 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.dfs.MiniDFSCluster;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HStoreKey;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.io.Reference.Range;
+import org.apache.hadoop.hbase.io.hfile.HFile;
+import org.apache.hadoop.hbase.io.hfile.HFileScanner;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Test HStoreFile
+ */
+public class TestStoreFile extends HBaseTestCase {
+ static final Log LOG = LogFactory.getLog(TestStoreFile.class);
+ private MiniDFSCluster cluster;
+
+ @Override
+ public void setUp() throws Exception {
+ try {
+ this.cluster = new MiniDFSCluster(this.conf, 2, true, (String[])null);
+ // Set the hbase.rootdir to be the home directory in mini dfs.
+ this.conf.set(HConstants.HBASE_DIR,
+ this.cluster.getFileSystem().getHomeDirectory().toString());
+ } catch (IOException e) {
+ shutdownDfs(cluster);
+ }
+ super.setUp();
+ }
+
+ @Override
+ public void tearDown() throws Exception {
+ super.tearDown();
+ shutdownDfs(cluster);
+ // ReflectionUtils.printThreadInfo(new PrintWriter(System.out),
+ // "Temporary end-of-test thread dump debugging HADOOP-2040: " + getName());
+ }
+
+ /**
+ * Write a file and then assert that we can read from top and bottom halves
+ * using two HalfMapFiles.
+ * @throws Exception
+ */
+ public void testBasicHalfMapFile() throws Exception {
+ // Make up a directory hierarchy that has a regiondir and familyname.
+ HFile.Writer writer = StoreFile.getWriter(this.fs,
+ new Path(new Path(this.testDir, "regionname"), "familyname"),
+ 2 * 1024, null, null, false);
+ writeStoreFile(writer);
+ checkHalfHFile(new StoreFile(this.fs, writer.getPath()));
+ }
+
+ /*
+ * Writes KeyValue data to the passed writer and
+ * then closes it.
+ * @param writer
+ * @throws IOException
+ */
+ private void writeStoreFile(final HFile.Writer writer)
+ throws IOException {
+ long now = System.currentTimeMillis();
+ byte [] column =
+ Bytes.toBytes(getName() + KeyValue.COLUMN_FAMILY_DELIMITER + getName());
+ try {
+ for (char d = FIRST_CHAR; d <= LAST_CHAR; d++) {
+ for (char e = FIRST_CHAR; e <= LAST_CHAR; e++) {
+ byte[] b = new byte[] { (byte) d, (byte) e };
+ writer.append(new KeyValue(b, column, now, b));
+ }
+ }
+ } finally {
+ writer.close();
+ }
+ }
+
+ /**
+ * Test that our mechanism of writing store files in one region to reference
+ * store files in other regions works.
+ * @throws IOException
+ */
+ public void testReference()
+ throws IOException {
+ Path storedir = new Path(new Path(this.testDir, "regionname"), "familyname");
+ Path dir = new Path(storedir, "1234567890");
+ // Make a store file and write data to it.
+ HFile.Writer writer = StoreFile.getWriter(this.fs, dir, 8 * 1024, null,
+ null, false);
+ writeStoreFile(writer);
+ StoreFile hsf = new StoreFile(this.fs, writer.getPath());
+ HFile.Reader reader = hsf.getReader();
+ // Split on a row, not in middle of row. Midkey returned by reader
+ // may be in middle of row. Create new one with empty column and
+ // timestamp.
+ HStoreKey hsk = HStoreKey.create(reader.midkey());
+ byte [] midkey = hsk.getRow();
+ hsk = HStoreKey.create(reader.getLastKey());
+ byte [] finalKey = hsk.getRow();
+ // Make a reference
+ Path refPath = StoreFile.split(fs, dir, hsf, reader.midkey(), Range.top);
+ StoreFile refHsf = new StoreFile(this.fs, refPath);
+ // Now confirm that I can read from the reference and that it only gets
+ // keys from top half of the file.
+ HFileScanner s = refHsf.getReader().getScanner();
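+ // On the first pass seekTo() positions the scanner at the first key; on
+ // subsequent passes next() advances it.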
+ for(boolean first = true; (!s.isSeeked() && s.seekTo()) || s.next();) {
+ ByteBuffer bb = s.getKey();
+ hsk = HStoreKey.create(bb.array(), bb.arrayOffset(), bb.limit());
+ if (first) {
+ assertTrue(Bytes.equals(hsk.getRow(), midkey));
+ first = false;
+ }
+ }
+ assertTrue(Bytes.equals(hsk.getRow(), finalKey));
+ }
+
+ private void checkHalfHFile(final StoreFile f)
+ throws IOException {
+ byte [] midkey = f.getReader().midkey();
+ // Create top split.
+ Path topDir = Store.getStoreHomedir(this.testDir, 1,
+ Bytes.toBytes(f.getPath().getParent().getName()));
+ if (this.fs.exists(topDir)) {
+ this.fs.delete(topDir, true);
+ }
+ Path topPath = StoreFile.split(this.fs, topDir, f, midkey, Range.top);
+ // Create bottom split.
+ Path bottomDir = Store.getStoreHomedir(this.testDir, 2,
+ Bytes.toBytes(f.getPath().getParent().getName()));
+ if (this.fs.exists(bottomDir)) {
+ this.fs.delete(bottomDir, true);
+ }
+ Path bottomPath = StoreFile.split(this.fs, bottomDir,
+ f, midkey, Range.bottom);
+ // Make readers on top and bottom.
+ HFile.Reader top = new StoreFile(this.fs, topPath).getReader();
+ HFile.Reader bottom = new StoreFile(this.fs, bottomPath).getReader();
+ ByteBuffer previous = null;
+ LOG.info("Midkey: " + Bytes.toString(midkey));
+ byte [] midkeyBytes = new HStoreKey(midkey).getBytes();
+ ByteBuffer bbMidkeyBytes = ByteBuffer.wrap(midkeyBytes);
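+ // Every key read from the top half must sort >= the midkey, and every key
+ // read from the bottom half must sort < the midkey.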
+ try {
+ // Now make two HalfMapFiles and assert they can read the full backing
+ // file, one from the top and the other from the bottom.
+ // Read the top half first, then the bottom half.
+ boolean first = true;
+ ByteBuffer key = null;
+ HFileScanner topScanner = top.getScanner();
+ while ((!topScanner.isSeeked() && topScanner.seekTo()) ||
+ (topScanner.isSeeked() && topScanner.next())) {
+ key = topScanner.getKey();
+
+ assertTrue(topScanner.getReader().getComparator().compare(key.array(),
+ key.arrayOffset(), key.limit(), midkeyBytes, 0, midkeyBytes.length) >= 0);
+ if (first) {
+ first = false;
+ LOG.info("First in top: " + Bytes.toString(Bytes.toBytes(key)));
+ }
+ }
+ LOG.info("Last in top: " + Bytes.toString(Bytes.toBytes(key)));
+
+ first = true;
+ HFileScanner bottomScanner = bottom.getScanner();
+ while ((!bottomScanner.isSeeked() && bottomScanner.seekTo()) ||
+ bottomScanner.next()) {
+ previous = bottomScanner.getKey();
+ key = bottomScanner.getKey();
+ if (first) {
+ first = false;
+ LOG.info("First in bottom: " +
+ Bytes.toString(Bytes.toBytes(previous)));
+ }
+ assertTrue(key.compareTo(bbMidkeyBytes) < 0);
+ }
+ if (previous != null) {
+ LOG.info("Last in bottom: " + Bytes.toString(Bytes.toBytes(previous)));
+ }
+ // Remove references.
+ this.fs.delete(topPath, false);
+ this.fs.delete(bottomPath, false);
+
+ // Next test using a midkey that does not exist in the file.
+ // First, use a key that is less than the first key. Ensure splits behave
+ // properly.
+ byte [] badmidkey = Bytes.toBytes(" .");
+ topPath = StoreFile.split(this.fs, topDir, f, badmidkey, Range.top);
+ bottomPath = StoreFile.split(this.fs, bottomDir, f, badmidkey,
+ Range.bottom);
+ top = new StoreFile(this.fs, topPath).getReader();
+ bottom = new StoreFile(this.fs, bottomPath).getReader();
+ bottomScanner = bottom.getScanner();
+ int count = 0;
+ while ((!bottomScanner.isSeeked() && bottomScanner.seekTo()) ||
+ bottomScanner.next()) {
+ count++;
+ }
+ // When badmidkey is less than the first key, the bottom half should return no values.
+ assertTrue(count == 0);
+ // Now read from the top.
+ first = true;
+ topScanner = top.getScanner();
+ while ((!topScanner.isSeeked() && topScanner.seekTo()) ||
+ topScanner.next()) {
+ key = topScanner.getKey();
+ assertTrue(topScanner.getReader().getComparator().compare(key.array(),
+ key.arrayOffset(), key.limit(), badmidkey, 0, badmidkey.length) >= 0);
+ if (first) {
+ first = false;
+ HStoreKey keyhsk = HStoreKey.create(key);
+ LOG.info("First top when key < bottom: " + keyhsk);
+ String tmp = Bytes.toString(keyhsk.getRow());
+ for (int i = 0; i < tmp.length(); i++) {
+ assertTrue(tmp.charAt(i) == 'a');
+ }
+ }
+ }
+ HStoreKey keyhsk = HStoreKey.create(key);
+ LOG.info("Last top when key < bottom: " + keyhsk);
+ String tmp = Bytes.toString(keyhsk.getRow());
+ for (int i = 0; i < tmp.length(); i++) {
+ assertTrue(tmp.charAt(i) == 'z');
+ }
+ // Remove references.
+ this.fs.delete(topPath, false);
+ this.fs.delete(bottomPath, false);
+
+ // Test when badmidkey is greater than the last key in the file ('|||' > 'zz').
+ badmidkey = Bytes.toBytes("|||");
+ topPath = StoreFile.split(this.fs, topDir, f, badmidkey, Range.top);
+ bottomPath = StoreFile.split(this.fs, bottomDir, f, badmidkey,
+ Range.bottom);
+ top = new StoreFile(this.fs, topPath).getReader();
+ bottom = new StoreFile(this.fs, bottomPath).getReader();
+ first = true;
+ bottomScanner = bottom.getScanner();
+ while ((!bottomScanner.isSeeked() && bottomScanner.seekTo()) ||
+ bottomScanner.next()) {
+ key = bottomScanner.getKey();
+ if (first) {
+ first = false;
+ keyhsk = HStoreKey.create(key);
+ LOG.info("First bottom when key > top: " + keyhsk);
+ tmp = Bytes.toString(keyhsk.getRow());
+ for (int i = 0; i < tmp.length(); i++) {
+ assertTrue(tmp.charAt(i) == 'a');
+ }
+ }
+ }
+ keyhsk = HStoreKey.create(key);
+ LOG.info("Last bottom when key > top: " + keyhsk);
+ tmp = Bytes.toString(keyhsk.getRow());
+ for (int i = 0; i < tmp.length(); i++) {
+ assertTrue(tmp.charAt(i) == 'z');
+ }
+ count = 0;
+ topScanner = top.getScanner();
+ while ((!topScanner.isSeeked() && topScanner.seekTo()) ||
+ (topScanner.isSeeked() && topScanner.next())) {
+ count++;
+ }
+ // When badmidkey is greater than the last key, the top half should return no values.
+ assertTrue(count == 0);
+ } finally {
+ if (top != null) {
+ top.close();
+ }
+ if (bottom != null) {
+ bottom.close();
+ }
+ fs.delete(f.getPath(), true);
+ }
+ }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/regionserver/TestTimestamp.java b/src/test/org/apache/hadoop/hbase/regionserver/TestTimestamp.java
new file mode 100644
index 0000000..04da57c
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/regionserver/TestTimestamp.java
@@ -0,0 +1,85 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseClusterTestCase;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.TimestampTestBase;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Tests that user-specified timestamps work for puts, gets and scans. Also
+ * tests the same in the presence of deletes. The test cores are written so
+ * they can be run against either an HRegion or an HTable, i.e. both locally
+ * and remotely.
+ */
+public class TestTimestamp extends HBaseClusterTestCase {
+ private static final Log LOG =
+ LogFactory.getLog(TestTimestamp.class.getName());
+
+ private static final String COLUMN_NAME = "contents:";
+ private static final byte [] COLUMN = Bytes.toBytes(COLUMN_NAME);
+ private static final int VERSIONS = 3;
+
+ /**
+ * Test that delete works according to description in <a
+ * href="https://issues.apache.org/jira/browse/HADOOP-1784">hadoop-1784</a>.
+ * @throws IOException
+ */
+ public void testDelete() throws IOException {
+ final HRegion r = createRegion();
+ try {
+ final HRegionIncommon region = new HRegionIncommon(r);
+ TimestampTestBase.doTestDelete(region, region);
+ } finally {
+ r.close();
+ r.getLog().closeAndDelete();
+ }
+ LOG.info("testDelete() finished");
+ }
+
+ /**
+ * Test scanning against different timestamps.
+ * @throws IOException
+ */
+ public void testTimestampScanning() throws IOException {
+ final HRegion r = createRegion();
+ try {
+ final HRegionIncommon region = new HRegionIncommon(r);
+ TimestampTestBase.doTestTimestampScanning(region, region);
+ } finally {
+ r.close();
+ r.getLog().closeAndDelete();
+ }
+ LOG.info("testTimestampScanning() finished");
+ }
+
+ private HRegion createRegion() throws IOException {
+ HTableDescriptor htd = createTableDescriptor(getName());
+ htd.addFamily(new HColumnDescriptor(COLUMN, VERSIONS,
+ HColumnDescriptor.DEFAULT_COMPRESSION, false, false,
+ Integer.MAX_VALUE, HConstants.FOREVER, false));
+ return createNewHRegion(htd, null, null);
+ }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/regionserver/transactional/DisabledTestHLogRecovery.java b/src/test/org/apache/hadoop/hbase/regionserver/transactional/DisabledTestHLogRecovery.java
new file mode 100644
index 0000000..50d5b27
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/regionserver/transactional/DisabledTestHLogRecovery.java
@@ -0,0 +1,279 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.transactional;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseClusterTestCase;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.LocalHBaseCluster;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Scanner;
+import org.apache.hadoop.hbase.client.transactional.CommitUnsuccessfulException;
+import org.apache.hadoop.hbase.client.transactional.TransactionManager;
+import org.apache.hadoop.hbase.client.transactional.TransactionState;
+import org.apache.hadoop.hbase.client.transactional.TransactionalTable;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.ipc.TransactionalRegionInterface;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.util.Bytes;
+
+public class DisabledTestHLogRecovery extends HBaseClusterTestCase {
+ protected static final Log LOG = LogFactory.getLog(DisabledTestHLogRecovery.class);
+
+ private static final String TABLE_NAME = "table1";
+
+ private static final byte[] FAMILY = Bytes.toBytes("family:");
+ static final byte[] COL_A = Bytes.toBytes("family:a");
+
+ private static final byte[] ROW1 = Bytes.toBytes("row1");
+ private static final byte[] ROW2 = Bytes.toBytes("row2");
+ private static final byte[] ROW3 = Bytes.toBytes("row3");
+ private static final int TOTAL_VALUE = 10;
+
+ private HBaseAdmin admin;
+ private TransactionManager transactionManager;
+ private TransactionalTable table;
+
+ /** constructor */
+ public DisabledTestHLogRecovery() {
+ super(2, false);
+
+ conf.set(HConstants.REGION_SERVER_CLASS, TransactionalRegionInterface.class
+ .getName());
+ conf.set(HConstants.REGION_SERVER_IMPL, TransactionalRegionServer.class
+ .getName());
+
+ // Set flush params so we don't get any
+ // FIXME (defaults are probably fine)
+
+ // Copied from TestRegionServerExit
+ conf.setInt("ipc.client.connect.max.retries", 5); // reduce ipc retries
+ conf.setInt("ipc.client.timeout", 10000); // and ipc timeout
+ conf.setInt("hbase.client.pause", 10000); // increase client timeout
+ conf.setInt("hbase.client.retries.number", 10); // increase HBase retries
+ }
+
+ @Override
+ protected void setUp() throws Exception {
+ FileSystem.getLocal(conf).delete(new Path(conf.get(HConstants.HBASE_DIR)), true);
+ super.setUp();
+
+ HTableDescriptor desc = new HTableDescriptor(TABLE_NAME);
+ desc.addFamily(new HColumnDescriptor(FAMILY));
+ admin = new HBaseAdmin(conf);
+ admin.createTable(desc);
+ table = new TransactionalTable(conf, desc.getName());
+
+ transactionManager = new TransactionManager(conf);
+ writeInitalRows();
+ }
+
+ private void writeInitalRows() throws IOException {
+ BatchUpdate update = new BatchUpdate(ROW1);
+ update.put(COL_A, Bytes.toBytes(TOTAL_VALUE));
+ table.commit(update);
+ update = new BatchUpdate(ROW2);
+ update.put(COL_A, Bytes.toBytes(0));
+ table.commit(update);
+ update = new BatchUpdate(ROW3);
+ update.put(COL_A, Bytes.toBytes(0));
+ table.commit(update);
+ }
+
+ public void testWithoutFlush() throws IOException,
+ CommitUnsuccessfulException {
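+ // Commit one transaction and then abort the region server; verify() below
+ // checks that the committed data is still visible afterwards (presumably via
+ // HLog recovery once the region comes back, given what this test exercises).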
+ writeInitalRows();
+ TransactionState state1 = makeTransaction(false);
+ transactionManager.tryCommit(state1);
+ stopOrAbortRegionServer(true);
+
+ Thread t = startVerificationThread(1);
+ t.start();
+ threadDumpingJoin(t);
+ }
+
+ public void testWithFlushBeforeCommit() throws IOException,
+ CommitUnsuccessfulException {
+ writeInitalRows();
+ TransactionState state1 = makeTransaction(false);
+ flushRegionServer();
+ transactionManager.tryCommit(state1);
+ stopOrAbortRegionServer(true);
+
+ Thread t = startVerificationThread(1);
+ t.start();
+ threadDumpingJoin(t);
+ }
+
+ // FIXME, TODO
+ // public void testWithFlushBetweenTransactionWrites() {
+ // fail();
+ // }
+
+ private void flushRegionServer() {
+ List<LocalHBaseCluster.RegionServerThread> regionThreads = cluster
+ .getRegionThreads();
+
+ HRegion region = null;
+ int server = -1;
+ for (int i = 0; i < regionThreads.size() && server == -1; i++) {
+ HRegionServer s = regionThreads.get(i).getRegionServer();
+ Collection<HRegion> regions = s.getOnlineRegions();
+ for (HRegion r : regions) {
+ if (Bytes.equals(r.getTableDesc().getName(), Bytes.toBytes(TABLE_NAME))) {
+ server = i;
+ region = r;
+ }
+ }
+ }
+ if (server == -1) {
+ LOG.fatal("could not find region server serving table region");
+ fail();
+ }
+ ((TransactionalRegionServer) regionThreads.get(server).getRegionServer())
+ .getFlushRequester().request(region);
+ }
+
+ /**
+ * Stop the region server serving TABLE_NAME.
+ *
+ * @param abort set to true if region server should be aborted, if false it is
+ * just shut down.
+ */
+ private void stopOrAbortRegionServer(final boolean abort) {
+ List<LocalHBaseCluster.RegionServerThread> regionThreads = cluster
+ .getRegionThreads();
+
+ int server = -1;
+ for (int i = 0; i < regionThreads.size(); i++) {
+ HRegionServer s = regionThreads.get(i).getRegionServer();
+ Collection<HRegion> regions = s.getOnlineRegions();
+ LOG.info("server: " + regionThreads.get(i).getName());
+ for (HRegion r : regions) {
+ LOG.info("region: " + r.getRegionInfo().getRegionNameAsString());
+ if (Bytes.equals(r.getTableDesc().getName(), Bytes.toBytes(TABLE_NAME))) {
+ server = i;
+ }
+ }
+ }
+ if (server == -1) {
+ LOG.fatal("could not find region server serving table region");
+ fail();
+ }
+ if (abort) {
+ this.cluster.abortRegionServer(server);
+
+ } else {
+ this.cluster.stopRegionServer(server, false);
+ }
+ LOG.info(this.cluster.waitOnRegionServer(server) + " has been "
+ + (abort ? "aborted" : "shut down"));
+ }
+
+ protected void verify(final int numRuns) throws IOException {
+ // Reads
+ int row1 = Bytes.toInt(table.get(ROW1, COL_A).getValue());
+ int row2 = Bytes.toInt(table.get(ROW2, COL_A).getValue());
+ int row3 = Bytes.toInt(table.get(ROW3, COL_A).getValue());
+
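+ // Each committed transaction moves 2 out of ROW1 and adds 1 to each of
+ // ROW2 and ROW3 (see makeTransaction).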
+ assertEquals(TOTAL_VALUE - 2 * numRuns, row1);
+ assertEquals(numRuns, row2);
+ assertEquals(numRuns, row3);
+ }
+
+ // Move 2 out of ROW1 and 1 into ROW2 and 1 into ROW3
+ private TransactionState makeTransaction(final boolean flushMidWay)
+ throws IOException {
+ TransactionState transactionState = transactionManager.beginTransaction();
+
+ // Reads
+ int row1 = Bytes.toInt(table.get(transactionState, ROW1, COL_A).getValue());
+ int row2 = Bytes.toInt(table.get(transactionState, ROW2, COL_A).getValue());
+ int row3 = Bytes.toInt(table.get(transactionState, ROW3, COL_A).getValue());
+
+ row1 -= 2;
+ row2 += 1;
+ row3 += 1;
+
+ if (flushMidWay) {
+ flushRegionServer();
+ }
+
+ // Writes
+ BatchUpdate write = new BatchUpdate(ROW1);
+ write.put(COL_A, Bytes.toBytes(row1));
+ table.commit(transactionState, write);
+
+ write = new BatchUpdate(ROW2);
+ write.put(COL_A, Bytes.toBytes(row2));
+ table.commit(transactionState, write);
+
+ write = new BatchUpdate(ROW3);
+ write.put(COL_A, Bytes.toBytes(row3));
+ table.commit(transactionState, write);
+
+ return transactionState;
+ }
+
+ /*
+ * Run verification in a thread so I can concurrently run a thread-dumper
+ * while we're waiting (because in this test sometimes the meta scanner looks
+ * to be stuck). @param numRuns Number of times the transaction is expected
+ * to have run. @return Verification thread. Caller needs to call start on
+ * it.
+ */
+ private Thread startVerificationThread(final int numRuns) {
+ Runnable runnable = new Runnable() {
+ public void run() {
+ try {
+ // Now try to open a scanner on the meta table. Should stall until
+ // meta server comes back up.
+ HTable t = new HTable(conf, TABLE_NAME);
+ Scanner s = t.getScanner(new byte[][] { COL_A },
+ HConstants.EMPTY_START_ROW);
+ s.close();
+
+ } catch (IOException e) {
+ LOG.fatal("could not re-open meta table because", e);
+ fail();
+ }
+ try {
+ verify(numRuns);
+ LOG.info("Success!");
+ } catch (Exception e) {
+ e.printStackTrace();
+ fail();
+ }
+ }
+ };
+ return new Thread(runnable);
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/regionserver/transactional/TestTransactionalHLogManager.java b/src/test/org/apache/hadoop/hbase/regionserver/transactional/TestTransactionalHLogManager.java
new file mode 100644
index 0000000..533d77d
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/regionserver/transactional/TestTransactionalHLogManager.java
@@ -0,0 +1,308 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.transactional;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.hadoop.dfs.MiniDFSCluster;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.regionserver.HLog;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/** JUnit test case for HLog */
+public class TestTransactionalHLogManager extends HBaseTestCase implements
+ HConstants {
+ private Path dir;
+ private MiniDFSCluster cluster;
+
+ final byte[] tableName = Bytes.toBytes("tablename");
+ final HTableDescriptor tableDesc = new HTableDescriptor(tableName);
+ final HRegionInfo regionInfo = new HRegionInfo(tableDesc,
+ HConstants.EMPTY_START_ROW, HConstants.EMPTY_END_ROW);
+ final byte[] row1 = Bytes.toBytes("row1");
+ final byte[] val1 = Bytes.toBytes("val1");
+ final byte[] row2 = Bytes.toBytes("row2");
+ final byte[] val2 = Bytes.toBytes("val2");
+ final byte[] row3 = Bytes.toBytes("row3");
+ final byte[] val3 = Bytes.toBytes("val3");
+ final byte[] col = Bytes.toBytes("col:A");
+
+ @Override
+ public void setUp() throws Exception {
+ cluster = new MiniDFSCluster(conf, 2, true, (String[]) null);
+ // Set the hbase.rootdir to be the home directory in mini dfs.
+ this.conf.set(HConstants.HBASE_DIR, this.cluster.getFileSystem()
+ .getHomeDirectory().toString());
+ super.setUp();
+ this.dir = new Path("/hbase", getName());
+ if (fs.exists(dir)) {
+ fs.delete(dir, true);
+ }
+ }
+
+ @Override
+ public void tearDown() throws Exception {
+ if (this.fs.exists(this.dir)) {
+ this.fs.delete(this.dir, true);
+ }
+ shutdownDfs(cluster);
+ super.tearDown();
+ }
+
+ /**
+ * @throws IOException
+ */
+ public void testSingleCommit() throws IOException {
+
+ HLog log = new HLog(fs, dir, this.conf, null);
+ TransactionalHLogManager logMangaer = new TransactionalHLogManager(log, fs,
+ regionInfo, conf);
+
+ // Write three updates to distinct rows under a single transaction and
+ // then commit it.
+ long transactionId = 1;
+ logMangaer.writeStartToLog(transactionId);
+
+ BatchUpdate update1 = new BatchUpdate(row1);
+ update1.put(col, val1);
+ logMangaer.writeUpdateToLog(transactionId, update1);
+
+ BatchUpdate update2 = new BatchUpdate(row2);
+ update2.put(col, val2);
+ logMangaer.writeUpdateToLog(transactionId, update2);
+
+ BatchUpdate update3 = new BatchUpdate(row3);
+ update3.put(col, val3);
+ logMangaer.writeUpdateToLog(transactionId, update3);
+
+ logMangaer.writeCommitToLog(transactionId);
+
+ // log.completeCacheFlush(regionName, tableName, logSeqId);
+
+ log.close();
+ Path filename = log.computeFilename(log.getFilenum());
+
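+ // Recover transactions from the closed log; only transactions that have a
+ // commit record should be returned.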
+ Map<Long, List<BatchUpdate>> commits = logMangaer.getCommitsFromLog(
+ filename, -1, null);
+
+ assertEquals(1, commits.size());
+ assertTrue(commits.containsKey(transactionId));
+ assertEquals(3, commits.get(transactionId).size());
+
+ List<BatchUpdate> updates = commits.get(transactionId);
+
+ update1 = updates.get(0);
+ assertTrue(Bytes.equals(row1, update1.getRow()));
+ assertTrue(Bytes.equals(val1, update1.iterator().next().getValue()));
+
+ update2 = updates.get(1);
+ assertTrue(Bytes.equals(row2, update2.getRow()));
+ assertTrue(Bytes.equals(val2, update2.iterator().next().getValue()));
+
+ update3 = updates.get(2);
+ assertTrue(Bytes.equals(row3, update3.getRow()));
+ assertTrue(Bytes.equals(val3, update3.iterator().next().getValue()));
+
+ }
+
+ /**
+ * @throws IOException
+ */
+ public void testSingleAbort() throws IOException {
+
+ HLog log = new HLog(fs, dir, this.conf, null);
+ TransactionalHLogManager logMangaer = new TransactionalHLogManager(log, fs,
+ regionInfo, conf);
+
+ long transactionId = 1;
+ logMangaer.writeStartToLog(transactionId);
+
+ BatchUpdate update1 = new BatchUpdate(row1);
+ update1.put(col, val1);
+ logMangaer.writeUpdateToLog(transactionId, update1);
+
+ BatchUpdate update2 = new BatchUpdate(row2);
+ update2.put(col, val2);
+ logMangaer.writeUpdateToLog(transactionId, update2);
+
+ BatchUpdate update3 = new BatchUpdate(row3);
+ update3.put(col, val3);
+ logMangaer.writeUpdateToLog(transactionId, update3);
+
+ logMangaer.writeAbortToLog(transactionId);
+
+ // log.completeCacheFlush(regionName, tableName, logSeqId);
+
+ log.close();
+ Path filename = log.computeFilename(log.getFilenum());
+
+ Map<Long, List<BatchUpdate>> commits = logMangaer.getCommitsFromLog(
+ filename, -1, null);
+
+ assertEquals(0, commits.size());
+ }
+
+ /**
+ * @throws IOException
+ */
+ public void testInterlievedCommits() throws IOException {
+
+ HLog log = new HLog(fs, dir, this.conf, null);
+ TransactionalHLogManager logMangaer = new TransactionalHLogManager(log, fs,
+ regionInfo, conf);
+
+ long transaction1Id = 1;
+ long transaction2Id = 2;
+ logMangaer.writeStartToLog(transaction1Id);
+
+ BatchUpdate update1 = new BatchUpdate(row1);
+ update1.put(col, val1);
+ logMangaer.writeUpdateToLog(transaction1Id, update1);
+
+ logMangaer.writeStartToLog(transaction2Id);
+
+ BatchUpdate update2 = new BatchUpdate(row2);
+ update2.put(col, val2);
+ logMangaer.writeUpdateToLog(transaction2Id, update2);
+
+ BatchUpdate update3 = new BatchUpdate(row3);
+ update3.put(col, val3);
+ logMangaer.writeUpdateToLog(transaction1Id, update3);
+
+ logMangaer.writeCommitToLog(transaction2Id);
+ logMangaer.writeCommitToLog(transaction1Id);
+
+ // log.completeCacheFlush(regionName, tableName, logSeqId);
+
+ log.close();
+ Path filename = log.computeFilename(log.getFilenum());
+
+ Map<Long, List<BatchUpdate>> commits = logMangaer.getCommitsFromLog(
+ filename, -1, null);
+
+ assertEquals(2, commits.size());
+ assertEquals(2, commits.get(transaction1Id).size());
+ assertEquals(1, commits.get(transaction2Id).size());
+ }
+
+ /**
+ * @throws IOException
+ */
+ public void testInterlievedAbortCommit() throws IOException {
+
+ HLog log = new HLog(fs, dir, this.conf, null);
+ TransactionalHLogManager logMangaer = new TransactionalHLogManager(log, fs,
+ regionInfo, conf);
+
+ long transaction1Id = 1;
+ long transaction2Id = 2;
+ logMangaer.writeStartToLog(transaction1Id);
+
+ BatchUpdate update1 = new BatchUpdate(row1);
+ update1.put(col, val1);
+ logMangaer.writeUpdateToLog(transaction1Id, update1);
+
+ logMangaer.writeStartToLog(transaction2Id);
+
+ BatchUpdate update2 = new BatchUpdate(row2);
+ update2.put(col, val2);
+ logMangaer.writeUpdateToLog(transaction2Id, update2);
+
+ logMangaer.writeAbortToLog(transaction2Id);
+
+ BatchUpdate update3 = new BatchUpdate(row3);
+ update3.put(col, val3);
+ logMangaer.writeUpdateToLog(transaction1Id, update3);
+
+ logMangaer.writeCommitToLog(transaction1Id);
+
+ // log.completeCacheFlush(regionName, tableName, logSeqId);
+
+ log.close();
+ Path filename = log.computeFilename(log.getFilenum());
+
+ Map<Long, List<BatchUpdate>> commits = logMangaer.getCommitsFromLog(
+ filename, -1, null);
+
+ assertEquals(1, commits.size());
+ assertEquals(2, commits.get(transaction1Id).size());
+ }
+
+ /**
+ * @throws IOException
+ */
+ public void testInterlievedCommitAbort() throws IOException {
+
+ HLog log = new HLog(fs, dir, this.conf, null);
+ TransactionalHLogManager logMangaer = new TransactionalHLogManager(log, fs,
+ regionInfo, conf);
+
+ long transaction1Id = 1;
+ long transaction2Id = 2;
+ logMangaer.writeStartToLog(transaction1Id);
+
+ BatchUpdate update1 = new BatchUpdate(row1);
+ update1.put(col, val1);
+ logMangaer.writeUpdateToLog(transaction1Id, update1);
+
+ logMangaer.writeStartToLog(transaction2Id);
+
+ BatchUpdate update2 = new BatchUpdate(row2);
+ update2.put(col, val2);
+ logMangaer.writeUpdateToLog(transaction2Id, update2);
+
+ logMangaer.writeCommitToLog(transaction2Id);
+
+ BatchUpdate update3 = new BatchUpdate(row3);
+ update3.put(col, val3);
+ logMangaer.writeUpdateToLog(transaction1Id, update3);
+
+ logMangaer.writeAbortToLog(transaction1Id);
+
+ // log.completeCacheFlush(regionName, tableName, logSeqId);
+
+ log.close();
+ Path filename = log.computeFilename(log.getFilenum());
+
+ Map<Long, List<BatchUpdate>> commits = logMangaer.getCommitsFromLog(
+ filename, -1, null);
+
+ assertEquals(1, commits.size());
+ assertEquals(1, commits.get(transaction2Id).size());
+ }
+
+ // FIXME Cannot do this test without a global transacton manager
+ // public void testMissingCommit() {
+ // fail();
+ // }
+
+ // FIXME Cannot do this test without a global transacton manager
+ // public void testMissingAbort() {
+ // fail();
+ // }
+
+}
diff --git a/src/test/org/apache/hadoop/hbase/thrift/TestThriftServer.java b/src/test/org/apache/hadoop/hbase/thrift/TestThriftServer.java
new file mode 100644
index 0000000..7c661af
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/thrift/TestThriftServer.java
@@ -0,0 +1,380 @@
+/**
+ * Copyright 2008-2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.thrift;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.hadoop.hbase.HBaseClusterTestCase;
+import org.apache.hadoop.hbase.thrift.generated.BatchMutation;
+import org.apache.hadoop.hbase.thrift.generated.ColumnDescriptor;
+import org.apache.hadoop.hbase.thrift.generated.IllegalArgument;
+import org.apache.hadoop.hbase.thrift.generated.Mutation;
+import org.apache.hadoop.hbase.thrift.generated.TCell;
+import org.apache.hadoop.hbase.thrift.generated.TRowResult;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Unit testing for ThriftServer.HBaseHandler, a part of the
+ * org.apache.hadoop.hbase.thrift package.
+ */
+public class TestThriftServer extends HBaseClusterTestCase {
+
+ // Static names for tables, columns, rows, and values
+ private static byte[] tableAname = Bytes.toBytes("tableA");
+ private static byte[] tableBname = Bytes.toBytes("tableB");
+ private static byte[] columnAname = Bytes.toBytes("columnA:");
+ private static byte[] columnBname = Bytes.toBytes("columnB:");
+ private static byte[] badColumnName = Bytes.toBytes("forgotColon");
+ private static byte[] rowAname = Bytes.toBytes("rowA");
+ private static byte[] rowBname = Bytes.toBytes("rowB");
+ private static byte[] valueAname = Bytes.toBytes("valueA");
+ private static byte[] valueBname = Bytes.toBytes("valueB");
+ private static byte[] valueCname = Bytes.toBytes("valueC");
+ private static byte[] valueDname = Bytes.toBytes("valueD");
+
+ /**
+ * Runs all of the tests under a single JUnit test method. We
+ * consolidate all testing to one method because HBaseClusterTestCase
+ * is prone to OutOfMemoryExceptions when there are three or more
+ * JUnit test methods.
+ *
+ * @throws Exception
+ */
+ public void testAll() throws Exception {
+ // Run all tests
+ doTestTableCreateDrop();
+ doTestTableMutations();
+ doTestTableTimestampsAndColumns();
+ doTestTableScanners();
+ }
+
+ /**
+ * Tests for creating, enabling, disabling, and deleting tables. Also
+ * tests that creating a table with an invalid column name yields an
+ * IllegalArgument exception.
+ *
+ * @throws Exception
+ */
+ public void doTestTableCreateDrop() throws Exception {
+ ThriftServer.HBaseHandler handler = new ThriftServer.HBaseHandler();
+
+ // Create/enable/disable/delete tables, ensure methods act correctly
+ assertEquals(handler.getTableNames().size(), 0);
+ handler.createTable(tableAname, getColumnDescriptors());
+ assertEquals(handler.getTableNames().size(), 1);
+ assertEquals(handler.getColumnDescriptors(tableAname).size(), 2);
+ assertTrue(handler.isTableEnabled(tableAname));
+ handler.createTable(tableBname, new ArrayList<ColumnDescriptor>());
+ assertEquals(handler.getTableNames().size(), 2);
+ handler.disableTable(tableBname);
+ assertFalse(handler.isTableEnabled(tableBname));
+ handler.deleteTable(tableBname);
+ assertEquals(handler.getTableNames().size(), 1);
+ handler.disableTable(tableAname);
+ assertFalse(handler.isTableEnabled(tableAname));
+ handler.enableTable(tableAname);
+ assertTrue(handler.isTableEnabled(tableAname));
+ handler.disableTable(tableAname);
+ handler.deleteTable(tableAname);
+
+ // Make sure that trying to create a table with a bad column name creates
+ // an IllegalArgument exception.
+ List<ColumnDescriptor> cDescriptors = new ArrayList<ColumnDescriptor>();
+ ColumnDescriptor badDescriptor = new ColumnDescriptor();
+ badDescriptor.name = badColumnName;
+ cDescriptors.add(badDescriptor);
+ String message = null;
+ try {
+ handler.createTable(tableBname, cDescriptors);
+ } catch (IllegalArgument ia) {
+ message = ia.message;
+ }
+ assertEquals("Family names must end in a colon: " + new String(badColumnName), message);
+ }
+
+ /**
+ * Tests adding a series of Mutations and BatchMutations, including a
+ * delete mutation. Also tests data retrieval, and getting back multiple
+ * versions.
+ *
+ * @throws Exception
+ */
+ public void doTestTableMutations() throws Exception {
+ // Setup
+ ThriftServer.HBaseHandler handler = new ThriftServer.HBaseHandler();
+ handler.createTable(tableAname, getColumnDescriptors());
+
+ // Apply a few Mutations to rowA
+ handler.mutateRow(tableAname, rowAname, getMutations());
+
+ // Assert that the changes were made
+ assertTrue(Bytes.equals(valueAname, handler.get(tableAname, rowAname, columnAname).get(0).value));
+ TRowResult rowResult1 = handler.getRow(tableAname, rowAname).get(0);
+ assertTrue(Bytes.equals(rowAname, rowResult1.row));
+ assertTrue(Bytes.equals(valueBname, rowResult1.columns.get(columnBname).value));
+
+ // Apply a few BatchMutations for rowA and rowB
+ handler.mutateRows(tableAname, getBatchMutations());
+
+ // Assert that changes were made to rowA
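+ // (The batch mutation above deleted columnA from rowA, so this get is expected to return nothing.)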
+ assertFalse(handler.get(tableAname, rowAname, columnAname).size() > 0);
+ assertTrue(Bytes.equals(valueCname, handler.get(tableAname, rowAname, columnBname).get(0).value));
+ List<TCell> versions = handler.getVer(tableAname, rowAname, columnBname, MAXVERSIONS);
+ assertTrue(Bytes.equals(valueCname, versions.get(0).value));
+ assertTrue(Bytes.equals(valueBname, versions.get(1).value));
+
+ // Assert that changes were made to rowB
+ TRowResult rowResult2 = handler.getRow(tableAname, rowBname).get(0);
+ assertTrue(Bytes.equals(rowBname, rowResult2.row));
+ assertTrue(Bytes.equals(valueCname, rowResult2.columns.get(columnAname).value));
+ assertTrue(Bytes.equals(valueDname, rowResult2.columns.get(columnBname).value));
+
+ // Apply some deletes
+ handler.deleteAll(tableAname, rowAname, columnBname);
+ handler.deleteAllRow(tableAname, rowBname);
+
+ // Assert that the deletes were applied
+ int size = handler.get(tableAname, rowAname, columnBname).size();
+ assertEquals(0, size);
+ size = handler.getRow(tableAname, rowBname).size();
+ assertEquals(0, size);
+
+ // Teardown
+ handler.disableTable(tableAname);
+ handler.deleteTable(tableAname);
+ }
+
+ /**
+ * Similar to testTableMutations(), except Mutations are applied with
+ * specific timestamps and data retrieval uses these timestamps to
+ * extract specific versions of data.
+ *
+ * @throws Exception
+ */
+ public void doTestTableTimestampsAndColumns() throws Exception {
+ // Setup
+ ThriftServer.HBaseHandler handler = new ThriftServer.HBaseHandler();
+ handler.createTable(tableAname, getColumnDescriptors());
+
+ // Apply timestamped Mutations to rowA
+ long time1 = System.currentTimeMillis();
+ handler.mutateRowTs(tableAname, rowAname, getMutations(), time1);
+
+ // Sleep to ensure that 'time1' and 'time2' will be different even with a
+ // coarse-grained system timer.
+ Thread.sleep(1000);
+
+ // Apply timestamped BatchMutations for rowA and rowB
+ long time2 = System.currentTimeMillis();
+ handler.mutateRowsTs(tableAname, getBatchMutations(), time2);
+
+ // Apply an overlapping timestamped mutation to rowB
+ handler.mutateRowTs(tableAname, rowBname, getMutations(), time2);
+
+ // Assert that the timestamp-related methods retrieve the correct data
+ assertEquals(handler.getVerTs(tableAname, rowAname, columnBname, time2,
+ MAXVERSIONS).size(), 2);
+ assertEquals(handler.getVerTs(tableAname, rowAname, columnBname, time1,
+ MAXVERSIONS).size(), 1);
+
+ TRowResult rowResult1 = handler.getRowTs(tableAname, rowAname, time1).get(0);
+ TRowResult rowResult2 = handler.getRowTs(tableAname, rowAname, time2).get(0);
+ assertTrue(Bytes.equals(rowResult1.columns.get(columnAname).value, valueAname));
+ assertTrue(Bytes.equals(rowResult1.columns.get(columnBname).value, valueBname));
+ assertTrue(Bytes.equals(rowResult2.columns.get(columnBname).value, valueCname));
+
+ // Maybe I'm reading this wrong, but at line #187 above the BatchMutations
+ // add a columnAname at time2, so the assert below should be true, not false.
+ // -- St.Ack
+ assertTrue(rowResult2.columns.containsKey(columnAname));
+
+ List<byte[]> columns = new ArrayList<byte[]>();
+ columns.add(columnBname);
+
+ rowResult1 = handler.getRowWithColumns(tableAname, rowAname, columns).get(0);
+ assertTrue(Bytes.equals(rowResult1.columns.get(columnBname).value, valueCname));
+ assertFalse(rowResult1.columns.containsKey(columnAname));
+
+ rowResult1 = handler.getRowWithColumnsTs(tableAname, rowAname, columns, time1).get(0);
+ assertTrue(Bytes.equals(rowResult1.columns.get(columnBname).value, valueBname));
+ assertFalse(rowResult1.columns.containsKey(columnAname));
+
+ // Apply some timestamped deletes
+ handler.deleteAllTs(tableAname, rowAname, columnBname, time1);
+ handler.deleteAllRowTs(tableAname, rowBname, time2);
+
+ // Assert that the timestamp-related methods retrieve the correct data
+ int size = handler.getVerTs(tableAname, rowAname, columnBname, time1, MAXVERSIONS).size();
+ assertFalse(size > 0);
+ assertTrue(Bytes.equals(handler.get(tableAname, rowAname, columnBname).get(0).value, valueCname));
+ assertFalse(handler.getRow(tableAname, rowBname).size() > 0);
+
+ // Teardown
+ handler.disableTable(tableAname);
+ handler.deleteTable(tableAname);
+ }
+
+ /**
+ * Tests the four different scanner-opening methods (with and without
+ * a stoprow, with and without a timestamp).
+ *
+ * @throws Exception
+ */
+ public void doTestTableScanners() throws Exception {
+ // Setup
+ ThriftServer.HBaseHandler handler = new ThriftServer.HBaseHandler();
+ handler.createTable(tableAname, getColumnDescriptors());
+
+ // Apply timestamped Mutations to rowA
+ long time1 = System.currentTimeMillis();
+ handler.mutateRowTs(tableAname, rowAname, getMutations(), time1);
+
+ // Sleep to ensure that 'time1' and 'time2' will be different even with a
+ // coarse-grained system timer.
+ Thread.sleep(1000);
+
+ // Apply timestamped BatchMutations for rowA and rowB
+ long time2 = System.currentTimeMillis();
+ handler.mutateRowsTs(tableAname, getBatchMutations(), time2);
+
+ // Test a scanner on all rows and all columns, no timestamp
+ int scanner1 = handler.scannerOpen(tableAname, rowAname, getColumnList(true, true));
+ TRowResult rowResult1a = handler.scannerGet(scanner1).get(0);
+ assertTrue(Bytes.equals(rowResult1a.row, rowAname));
+ // This used to be '1'. I don't know why, since we are asking for two columns
+ // and the mutations above would seem to add two columns to the row.
+ // -- St.Ack 05/12/2009
+ assertEquals(rowResult1a.columns.size(), 2);
+ assertTrue(Bytes.equals(rowResult1a.columns.get(columnBname).value, valueCname));
+ TRowResult rowResult1b = handler.scannerGet(scanner1).get(0);
+ assertTrue(Bytes.equals(rowResult1b.row, rowBname));
+ assertEquals(rowResult1b.columns.size(), 2);
+ assertTrue(Bytes.equals(rowResult1b.columns.get(columnAname).value, valueCname));
+ assertTrue(Bytes.equals(rowResult1b.columns.get(columnBname).value, valueDname));
+ closeScanner(scanner1, handler);
+
+ // Test a scanner on all rows and all columns, with timestamp
+ int scanner2 = handler.scannerOpenTs(tableAname, rowAname, getColumnList(true, true), time1);
+ TRowResult rowResult2a = handler.scannerGet(scanner2).get(0);
+ assertEquals(rowResult2a.columns.size(), 2);
+ assertTrue(Bytes.equals(rowResult2a.columns.get(columnAname).value, valueAname));
+ assertTrue(Bytes.equals(rowResult2a.columns.get(columnBname).value, valueBname));
+ closeScanner(scanner2, handler);
+
+ // Test a scanner on the first row and first column only, no timestamp
+ int scanner3 = handler.scannerOpenWithStop(tableAname, rowAname, rowBname,
+ getColumnList(true, false));
+ closeScanner(scanner3, handler);
+
+ // Test a scanner on the first row and second column only, with timestamp
+ int scanner4 = handler.scannerOpenWithStopTs(tableAname, rowAname, rowBname,
+ getColumnList(false, true), time1);
+ TRowResult rowResult4a = handler.scannerGet(scanner4).get(0);
+ assertEquals(rowResult4a.columns.size(), 1);
+ assertTrue(Bytes.equals(rowResult4a.columns.get(columnBname).value, valueBname));
+
+ // Teardown
+ handler.disableTable(tableAname);
+ handler.deleteTable(tableAname);
+ }
+
+ /**
+ *
+ * @return a List of ColumnDescriptors for use in creating a table. Has one
+ * default ColumnDescriptor and one ColumnDescriptor with only two versions
+ */
+ private List<ColumnDescriptor> getColumnDescriptors() {
+ ArrayList<ColumnDescriptor> cDescriptors = new ArrayList<ColumnDescriptor>();
+
+ // A default ColumnDescriptor
+ ColumnDescriptor cDescA = new ColumnDescriptor();
+ cDescA.name = columnAname;
+ cDescriptors.add(cDescA);
+
+ // A slightly customized ColumnDescriptor (only 2 versions)
+ ColumnDescriptor cDescB = new ColumnDescriptor(columnBname, 2, "NONE",
+ false, 2147483647, "NONE", 0, 0, false, -1);
+ cDescriptors.add(cDescB);
+
+ return cDescriptors;
+ }
+
+ /**
+ *
+ * @param includeA whether or not to include columnA
+ * @param includeB whether or not to include columnB
+ * @return a List of column names for use in retrieving a scanner
+ */
+ private List<byte[]> getColumnList(boolean includeA, boolean includeB) {
+ List<byte[]> columnList = new ArrayList<byte[]>();
+ if (includeA) columnList.add(columnAname);
+ if (includeB) columnList.add(columnBname);
+ return columnList;
+ }
+
+ /**
+ *
+ * @return a List of Mutations for a row, with columnA having valueA
+ * and columnB having valueB
+ */
+ private List<Mutation> getMutations() {
+ List<Mutation> mutations = new ArrayList<Mutation>();
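+ // Mutation's first argument is the isDelete flag: false writes the given value, true deletes the column.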
+ mutations.add(new Mutation(false, columnAname, valueAname));
+ mutations.add(new Mutation(false, columnBname, valueBname));
+ return mutations;
+ }
+
+ /**
+ *
+ * @return a List of BatchMutations with the following effects:
+ * (rowA, columnA): delete
+ * (rowA, columnB): place valueC
+ * (rowB, columnA): place valueC
+ * (rowB, columnB): place valueD
+ */
+ private List<BatchMutation> getBatchMutations() {
+ List<BatchMutation> batchMutations = new ArrayList<BatchMutation>();
+ // Mutations to rowA
+ List<Mutation> rowAmutations = new ArrayList<Mutation>();
+ rowAmutations.add(new Mutation(true, columnAname, null));
+ rowAmutations.add(new Mutation(false, columnBname, valueCname));
+ batchMutations.add(new BatchMutation(rowAname, rowAmutations));
+ // Mutations to rowB
+ List<Mutation> rowBmutations = new ArrayList<Mutation>();
+ rowBmutations.add(new Mutation(false, columnAname, valueCname));
+ rowBmutations.add(new Mutation(false, columnBname, valueDname));
+ batchMutations.add(new BatchMutation(rowBname, rowBmutations));
+ return batchMutations;
+ }
+
+ /**
+ * Does a final scannerGet on the passed scanner (expected to be exhausted)
+ * and then closes the scanner.
+ *
+ * @param scannerId the scanner to close
+ * @param handler the HBaseHandler interfacing to HBase
+ * @throws Exception
+ */
+ private void closeScanner(int scannerId, ThriftServer.HBaseHandler handler) throws Exception {
+ handler.scannerGet(scannerId);
+ handler.scannerClose(scannerId);
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/util/DisabledTestMetaUtils.java b/src/test/org/apache/hadoop/hbase/util/DisabledTestMetaUtils.java
new file mode 100644
index 0000000..7bf8efb
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/util/DisabledTestMetaUtils.java
@@ -0,0 +1,63 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import org.apache.hadoop.hbase.HBaseClusterTestCase;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.MiniHBaseCluster;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.HConnectionManager;
+
+/**
+ * Test is flaky. Needs work. Fails too often on hudson.
+ */
+public class DisabledTestMetaUtils extends HBaseClusterTestCase {
+ public void testColumnEdits() throws Exception {
+ HBaseAdmin admin = new HBaseAdmin(this.conf);
+ final String oldColumn = "oldcolumn:";
+ // Add five tables
+ for (int i = 0; i < 5; i++) {
+ HTableDescriptor htd = new HTableDescriptor(getName() + i);
+ htd.addFamily(new HColumnDescriptor(oldColumn));
+ admin.createTable(htd);
+ }
+ this.cluster.shutdown();
+ this.cluster = null;
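+ // With the cluster down, use MetaUtils to edit the table schema in .META. directly on the filesystem.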
+ MetaUtils utils = new MetaUtils(this.conf);
+ // Add a new column to the third table, getName() + '2', and remove the old.
+ final byte [] editTable = Bytes.toBytes(getName() + 2);
+ final byte [] newColumn = Bytes.toBytes("newcolumn:");
+ utils.addColumn(editTable, new HColumnDescriptor(newColumn));
+ utils.deleteColumn(editTable, Bytes.toBytes(oldColumn));
+ utils.shutdown();
+ // Delete again so we go get it all fresh.
+ HConnectionManager.deleteConnectionInfo(conf, false);
+ // Restart the cluster, then assert the new column was added and the old one deleted.
+ this.cluster = new MiniHBaseCluster(this.conf, 1);
+ HTable t = new HTable(conf, editTable);
+ HTableDescriptor htd = t.getTableDescriptor();
+ HColumnDescriptor hcd = htd.getFamily(newColumn);
+ assertTrue(hcd != null);
+ assertNull(htd.getFamily(Bytes.toBytes(oldColumn)));
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/util/MigrationTest.java b/src/test/org/apache/hadoop/hbase/util/MigrationTest.java
new file mode 100644
index 0000000..34f4207
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/util/MigrationTest.java
@@ -0,0 +1,229 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.util.zip.ZipEntry;
+import java.util.zip.ZipInputStream;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.MiniHBaseCluster;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HConnectionManager;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Scanner;
+import org.apache.hadoop.hbase.io.RowResult;
+
+/**
+ * Runs migration of a filesystem laid down by an older hbase version to the current version
+ */
+public class MigrationTest extends HBaseTestCase {
+ private static final Log LOG = LogFactory.getLog(MigrationTest.class);
+
+ // This is the name of the table that is in the data file.
+ private static final String TABLENAME = "TestUpgrade";
+
+ // The table has two columns
+ private static final byte [][] TABLENAME_COLUMNS =
+ {Bytes.toBytes("column_a:"), Bytes.toBytes("column_b:")};
+
+ // Expected count of rows in migrated table.
+ private static final int EXPECTED_COUNT = 17576;
+
+ /**
+ * Test migration. Currently a no-op; to be fleshed out for future migrations.
+ * @throws IOException
+ */
+ public void testUpgrade() throws IOException {
+ }
+
+ /*
+ * Load up test data.
+ * @param dfs
+ * @param rootDir
+ * @throws IOException
+ */
+ private void loadTestData(final FileSystem dfs, final Path rootDir)
+ throws IOException {
+ FileSystem localfs = FileSystem.getLocal(conf);
+ // Get path for zip file. If running this test in eclipse, define
+ // the system property src.testdata for your test run.
+ String srcTestdata = System.getProperty("src.testdata");
+ if (srcTestdata == null) {
+ throw new NullPointerException("Define the src.testdata system property");
+ }
+ Path data = new Path(srcTestdata, "HADOOP-2478-testdata-v0.1.zip");
+ if (!localfs.exists(data)) {
+ throw new FileNotFoundException(data.toString());
+ }
+ FSDataInputStream hs = localfs.open(data);
+ ZipInputStream zip = new ZipInputStream(hs);
+ unzip(zip, dfs, rootDir);
+ zip.close();
+ hs.close();
+ }
+
+ /*
+ * Verify can read the migrated table.
+ * @throws IOException
+ */
+ private void verify() throws IOException {
+ // Delete any cached connections. Need to do this because connection was
+ // created earlier when no master was around. The fact that there was no
+ // master gets cached. Need to delete so we go get master afresh.
+ HConnectionManager.deleteConnectionInfo(conf, false);
+
+ LOG.info("Start a cluster against migrated FS");
+ // Up the number of retries. Needed while the cluster starts up. It's been
+ // set to 1 above.
+ final int retries = 5;
+ this.conf.setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER_KEY, retries);
+
+ MiniHBaseCluster cluster = new MiniHBaseCluster(this.conf, 1);
+ try {
+ HBaseAdmin hb = new HBaseAdmin(this.conf);
+ assertTrue(hb.isMasterRunning());
+ HTableDescriptor [] tables = hb.listTables();
+ boolean foundTable = false;
+ for (int i = 0; i < tables.length; i++) {
+ if (Bytes.equals(Bytes.toBytes(TABLENAME), tables[i].getName())) {
+ foundTable = true;
+ break;
+ }
+ }
+ assertTrue(foundTable);
+ LOG.info(TABLENAME + " exists. Now waiting till startcode " +
+ "changes before opening a scanner");
+ waitOnStartCodeChange(retries);
+ // Delete again so we go get it all fresh.
+ HConnectionManager.deleteConnectionInfo(conf, false);
+ HTable t = new HTable(this.conf, TABLENAME);
+ int count = 0;
+ LOG.info("OPENING SCANNER");
+ Scanner s = t.getScanner(TABLENAME_COLUMNS);
+ try {
+ for (RowResult r: s) {
+ if (r == null || r.size() == 0) {
+ break;
+ }
+ count++;
+ if (count % 1000 == 0 && count > 0) {
+ LOG.info("Iterated over " + count + " rows.");
+ }
+ }
+ assertEquals(EXPECTED_COUNT, count);
+ } finally {
+ s.close();
+ }
+ } finally {
+ HConnectionManager.deleteConnectionInfo(conf, false);
+ cluster.shutdown();
+ }
+ }
+
+ /*
+ * Wait till the startcode changes before we put up a scanner. Otherwise
+ * we tend to hang, at least on hudson, and I've seen it from time to time on
+ * my laptop. The hang is down in the RPC client doing its call; it
+ * never returns even though the socket has a read timeout of 60 seconds by
+ * default. St.Ack
+ * @param retries How many retries to run.
+ * @throws IOException
+ */
+ private void waitOnStartCodeChange(final int retries) throws IOException {
+ HTable m = new HTable(this.conf, HConstants.META_TABLE_NAME);
+ // This is the start code that is in the old data.
+ long oldStartCode = 1199736332062L;
+ // This is the .META. row for the first region of the TestUpgrade table in the old data.
+ byte [] row = Bytes.toBytes("TestUpgrade,,1199736362468");
+ long pause = conf.getLong("hbase.client.pause", 5 * 1000);
+ long startcode = -1;
+ boolean changed = false;
+ for (int i = 0; i < retries; i++) {
+ startcode = Writables.cellToLong(m.get(row, HConstants.COL_STARTCODE));
+ if (startcode != oldStartCode) {
+ changed = true;
+ break;
+ }
+ if ((i + 1) != retries) {
+ try {
+ Thread.sleep(pause);
+ } catch (InterruptedException e) {
+ // continue
+ }
+ }
+ }
+ // If after all attempts startcode has not changed, fail.
+ if (!changed) {
+ throw new IOException("Startcode didn't change after " + retries +
+ " attempts");
+ }
+ }
+
+ private void unzip(ZipInputStream zip, FileSystem dfs, Path rootDir)
+ throws IOException {
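+ // Recreate the archive's directory layout on the target filesystem, streaming file entries out in 4KB chunks.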
+ ZipEntry e = null;
+ while ((e = zip.getNextEntry()) != null) {
+ if (e.isDirectory()) {
+ dfs.mkdirs(new Path(rootDir, e.getName()));
+ } else {
+ FSDataOutputStream out = dfs.create(new Path(rootDir, e.getName()));
+ byte[] buffer = new byte[4096];
+ int len;
+ do {
+ len = zip.read(buffer);
+ if (len > 0) {
+ out.write(buffer, 0, len);
+ }
+ } while (len > 0);
+ out.close();
+ }
+ zip.closeEntry();
+ }
+ }
+
+ private void listPaths(FileSystem filesystem, Path dir, int rootdirlength)
+ throws IOException {
+ FileStatus[] stats = filesystem.listStatus(dir);
+ if (stats == null || stats.length == 0) {
+ return;
+ }
+ for (int i = 0; i < stats.length; i++) {
+ String path = stats[i].getPath().toString();
+ if (stats[i].isDir()) {
+ System.out.println("d " + path);
+ listPaths(filesystem, stats[i].getPath(), rootdirlength);
+ } else {
+ System.out.println("f " + path + " size=" + stats[i].getLen());
+ }
+ }
+ }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/util/SoftSortedMapTest.java b/src/test/org/apache/hadoop/hbase/util/SoftSortedMapTest.java
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/util/SoftSortedMapTest.java
diff --git a/src/test/org/apache/hadoop/hbase/util/SoftValueSortedMapTest.java b/src/test/org/apache/hadoop/hbase/util/SoftValueSortedMapTest.java
new file mode 100644
index 0000000..5cc1792
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/util/SoftValueSortedMapTest.java
@@ -0,0 +1,41 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import java.util.SortedMap;
+import java.util.TreeMap;
+
+public class SoftValueSortedMapTest {
+ private static void testMap(SortedMap<Integer, Integer> map) {
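+ // Fill the map, then allocate a large block to create memory pressure; a SoftValueSortedMap
+ // should shed entries under pressure while a TreeMap keeps them all (and may exhaust the heap).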
+ System.out.println("Testing " + map.getClass());
+ for(int i = 0; i < 1000000; i++) {
+ map.put(new Integer(i), new Integer(i));
+ }
+ System.out.println(map.size());
+ byte[] block = new byte[849*1024*1024]; // ~849 MB, to pressure the heap so soft references are cleared
+ System.out.println(map.size());
+ }
+
+ public static void main(String[] args) {
+ testMap(new SoftValueSortedMap<Integer, Integer>());
+ testMap(new TreeMap<Integer, Integer>());
+ }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/util/TestBase64.java b/src/test/org/apache/hadoop/hbase/util/TestBase64.java
new file mode 100644
index 0000000..20382be
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/util/TestBase64.java
@@ -0,0 +1,67 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import java.io.UnsupportedEncodingException;
+import java.util.Map;
+import java.util.TreeMap;
+
+import junit.framework.TestCase;
+
+/**
+ * Test order preservation characteristics of ordered Base64 dialect
+ */
+public class TestBase64 extends TestCase {
+ // Note: uris is sorted. We need to prove that the ordered Base64
+ // preserves that ordering
+ private String[] uris = {
+ "dns://dns.powerset.com/www.powerset.com",
+ "dns:www.powerset.com",
+ "file:///usr/bin/java",
+ "filename",
+ "ftp://one.two.three/index.html",
+ "http://one.two.three/index.html",
+ "https://one.two.three:9443/index.html",
+ "r:dns://com.powerset.dns/www.powerset.com",
+ "r:ftp://three.two.one/index.html",
+ "r:http://three.two.one/index.html",
+ "r:https://three.two.one:9443/index.html"
+ };
+
+ /**
+ * the test
+ * @throws UnsupportedEncodingException
+ */
+ public void testBase64() throws UnsupportedEncodingException {
+ TreeMap<String, String> sorted = new TreeMap<String, String>();
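+ // Keys are the ordered-Base64 encodings; if the dialect preserves byte ordering, iterating the
+ // TreeMap should yield the URIs in their original (already sorted) order.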
+
+ for (int i = 0; i < uris.length; i++) {
+ byte[] bytes = uris[i].getBytes("UTF-8");
+ sorted.put(Base64.encodeBytes(bytes, Base64.ORDERED), uris[i]);
+ }
+ System.out.println();
+
+ int i = 0;
+ for (Map.Entry<String, String> e: sorted.entrySet()) {
+ assertTrue(uris[i++].compareTo(e.getValue()) == 0);
+ }
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/util/TestBytes.java b/src/test/org/apache/hadoop/hbase/util/TestBytes.java
new file mode 100644
index 0000000..e919d5f
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/util/TestBytes.java
@@ -0,0 +1,153 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import java.io.IOException;
+import java.util.Arrays;
+
+import junit.framework.TestCase;
+
+public class TestBytes extends TestCase {
+ public void testSplit() throws Exception {
+ byte [] lowest = Bytes.toBytes("AAA");
+ byte [] middle = Bytes.toBytes("CCC");
+ byte [] highest = Bytes.toBytes("EEE");
+ byte [][] parts = Bytes.split(lowest, highest, 1);
+ for (int i = 0; i < parts.length; i++) {
+ System.out.println(Bytes.toString(parts[i]));
+ }
+ assertEquals(3, parts.length);
+ assertTrue(Bytes.equals(parts[1], middle));
+ // Now divide into three parts. Change highest so split is even.
+ highest = Bytes.toBytes("DDD");
+ parts = Bytes.split(lowest, highest, 2);
+ for (int i = 0; i < parts.length; i++) {
+ System.out.println(Bytes.toString(parts[i]));
+ }
+ assertEquals(4, parts.length);
+ // Assert that 3rd part is 'CCC'.
+ assertTrue(Bytes.equals(parts[2], middle));
+ }
+
+ public void testSplit2() throws Exception {
+ // More split tests.
+ byte [] lowest = Bytes.toBytes("http://A");
+ byte [] highest = Bytes.toBytes("http://z");
+ byte [] middle = Bytes.toBytes("http://]");
+ byte [][] parts = Bytes.split(lowest, highest, 1);
+ for (int i = 0; i < parts.length; i++) {
+ System.out.println(Bytes.toString(parts[i]));
+ }
+ assertEquals(3, parts.length);
+ assertTrue(Bytes.equals(parts[1], middle));
+ }
+
+ public void testToLong() throws Exception {
+ long [] longs = {-1L, 123L, 122232323232L};
+ for (int i = 0; i < longs.length; i++) {
+ byte [] b = Bytes.toBytes(longs[i]);
+ assertEquals(longs[i], Bytes.toLong(b));
+ }
+ }
+
+ public void testToFloat() throws Exception {
+ float [] floats = {-1f, 123.123f, Float.MAX_VALUE};
+ for (int i = 0; i < floats.length; i++) {
+ byte [] b = Bytes.toBytes(floats[i]);
+ assertEquals(floats[i], Bytes.toFloat(b));
+ }
+ }
+
+ public void testToDouble() throws Exception {
+ double [] doubles = {Double.MIN_VALUE, Double.MAX_VALUE};
+ for (int i = 0; i < doubles.length; i++) {
+ byte [] b = Bytes.toBytes(doubles[i]);
+ assertEquals(doubles[i], Bytes.toDouble(b));
+ }
+ }
+
+ public void testBinarySearch() throws Exception {
+ byte [][] arr = {
+ {1},
+ {3},
+ {5},
+ {7},
+ {9},
+ {11},
+ {13},
+ {15},
+ };
+ byte [] key1 = {3,1};
+ byte [] key2 = {4,9};
+ byte [] key2_2 = {4};
+ byte [] key3 = {5,11};
+
+ assertEquals(1, Bytes.binarySearch(arr, key1, 0, 1,
+ Bytes.BYTES_RAWCOMPARATOR));
+ assertEquals(0, Bytes.binarySearch(arr, key1, 1, 1,
+ Bytes.BYTES_RAWCOMPARATOR));
+ assertEquals(-(2+1), Arrays.binarySearch(arr, key2_2,
+ Bytes.BYTES_COMPARATOR));
+ assertEquals(-(2+1), Bytes.binarySearch(arr, key2, 0, 1,
+ Bytes.BYTES_RAWCOMPARATOR));
+ assertEquals(4, Bytes.binarySearch(arr, key2, 1, 1,
+ Bytes.BYTES_RAWCOMPARATOR));
+ assertEquals(2, Bytes.binarySearch(arr, key3, 0, 1,
+ Bytes.BYTES_RAWCOMPARATOR));
+ assertEquals(5, Bytes.binarySearch(arr, key3, 1, 1,
+ Bytes.BYTES_RAWCOMPARATOR));
+ }
+
+ public void testIncrementBytes() throws IOException {
+
+ assertTrue(checkTestIncrementBytes(10, 1));
+ assertTrue(checkTestIncrementBytes(12, 123435445));
+ assertTrue(checkTestIncrementBytes(124634654, 1));
+ assertTrue(checkTestIncrementBytes(10005460, 5005645));
+ assertTrue(checkTestIncrementBytes(1, -1));
+ assertTrue(checkTestIncrementBytes(10, -1));
+ assertTrue(checkTestIncrementBytes(10, -5));
+ assertTrue(checkTestIncrementBytes(1005435000, -5));
+ assertTrue(checkTestIncrementBytes(10, -43657655));
+ assertTrue(checkTestIncrementBytes(-1, 1));
+ assertTrue(checkTestIncrementBytes(-26, 5034520));
+ assertTrue(checkTestIncrementBytes(-10657200, 5));
+ assertTrue(checkTestIncrementBytes(-12343250, 45376475));
+ assertTrue(checkTestIncrementBytes(-10, -5));
+ assertTrue(checkTestIncrementBytes(-12343250, -5));
+ assertTrue(checkTestIncrementBytes(-12, -34565445));
+ assertTrue(checkTestIncrementBytes(-1546543452, -34565445));
+ }
+
+ private static boolean checkTestIncrementBytes(long val, long amount)
+ throws IOException {
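+ // Build the full 8-byte two's-complement form of 'val' (sign-extended with 0xFF bytes for
+ // negative values), then check that incrementing the byte form matches plain long addition.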
+ byte[] value = Bytes.toBytes(val);
+ byte [] testValue = {-1, -1, -1, -1, -1, -1, -1, -1};
+ if (value[0] > 0) {
+ testValue = new byte[Bytes.SIZEOF_LONG];
+ }
+ System.arraycopy(value, 0, testValue, testValue.length - value.length,
+ value.length);
+
+ long incrementResult = Bytes.toLong(Bytes.incrementBytes(value, amount));
+
+ return (Bytes.toLong(testValue) + amount) == incrementResult;
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/util/TestKeying.java b/src/test/org/apache/hadoop/hbase/util/TestKeying.java
new file mode 100644
index 0000000..14106aa
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/util/TestKeying.java
@@ -0,0 +1,62 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import junit.framework.TestCase;
+
+/**
+ * Tests url transformations
+ */
+public class TestKeying extends TestCase {
+
+ @Override
+ protected void setUp() throws Exception {
+ super.setUp();
+ }
+
+ @Override
+ protected void tearDown() throws Exception {
+ super.tearDown();
+ }
+
+ /**
+ * Test url transformations
+ * @throws Exception
+ */
+ public void testURI() throws Exception {
+ checkTransform("http://abc:bcd@www.example.com/index.html" +
+ "?query=something#middle");
+ checkTransform("file:///usr/bin/java");
+ checkTransform("dns:www.powerset.com");
+ checkTransform("dns://dns.powerset.com/www.powerset.com");
+ checkTransform("http://one.two.three/index.html");
+ checkTransform("https://one.two.three:9443/index.html");
+ checkTransform("ftp://one.two.three/index.html");
+
+ checkTransform("filename");
+ }
+
+ private void checkTransform(final String u) {
+ String k = Keying.createKey(u);
+ String uri = Keying.keyToUri(k);
+ System.out.println("Original url " + u + ", Transformed url " + k);
+ assertEquals(u, uri);
+ }
+}
\ No newline at end of file
diff --git a/src/test/org/apache/hadoop/hbase/util/TestMergeTool.java b/src/test/org/apache/hadoop/hbase/util/TestMergeTool.java
new file mode 100644
index 0000000..83d201a
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/util/TestMergeTool.java
@@ -0,0 +1,232 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.dfs.MiniDFSCluster;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.io.BatchUpdate;
+import org.apache.hadoop.hbase.io.Cell;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.regionserver.HLog;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.util.ToolRunner;
+
+/** Test stand alone merge tool that can merge arbitrary regions */
+public class TestMergeTool extends HBaseTestCase {
+ static final Log LOG = LogFactory.getLog(TestMergeTool.class);
+ static final byte [] COLUMN_NAME = Bytes.toBytes("contents:");
+ private final HRegionInfo[] sourceRegions = new HRegionInfo[5];
+ private final HRegion[] regions = new HRegion[5];
+ private HTableDescriptor desc;
+ private byte [][][] rows;
+ private MiniDFSCluster dfsCluster = null;
+
+ @Override
+ public void setUp() throws Exception {
+ this.conf.set("hbase.hstore.compactionThreshold", "2");
+
+ // Create table description
+ this.desc = new HTableDescriptor("TestMergeTool");
+ this.desc.addFamily(new HColumnDescriptor(COLUMN_NAME));
+
+ /*
+ * Create the HRegionInfos for the regions.
+ */
+ // Region 0 will contain the key range [row_0200,row_0300)
+ sourceRegions[0] = new HRegionInfo(this.desc, Bytes.toBytes("row_0200"),
+ Bytes.toBytes("row_0300"));
+
+ // Region 1 will contain the key range [row_0250,row_0400) and overlaps
+ // with Region 0
+ sourceRegions[1] =
+ new HRegionInfo(this.desc, Bytes.toBytes("row_0250"), Bytes.toBytes("row_0400"));
+
+ // Region 2 will contain the key range [row_0100,row_0200) and is adjacent
+ // to Region 0 or the region resulting from the merge of Regions 0 and 1
+ sourceRegions[2] =
+ new HRegionInfo(this.desc, Bytes.toBytes("row_0100"), Bytes.toBytes("row_0200"));
+
+ // Region 3 will contain the key range [row_0500,row_0600) and is not
+ // adjacent to any of Regions 0, 1, 2 or the merged result of any or all
+ // of those regions
+ sourceRegions[3] =
+ new HRegionInfo(this.desc, Bytes.toBytes("row_0500"), Bytes.toBytes("row_0600"));
+
+ // Region 4 will have empty start and end keys and overlaps all regions.
+ sourceRegions[4] =
+ new HRegionInfo(this.desc, HConstants.EMPTY_BYTE_ARRAY, HConstants.EMPTY_BYTE_ARRAY);
+
+ /*
+ * Now create some row keys
+ */
+ this.rows = new byte [5][][];
+ this.rows[0] = Bytes.toByteArrays(new String[] { "row_0210", "row_0280" });
+ this.rows[1] = Bytes.toByteArrays(new String[] { "row_0260", "row_0350", "row_035" });
+ this.rows[2] = Bytes.toByteArrays(new String[] { "row_0110", "row_0175", "row_0175", "row_0175"});
+ this.rows[3] = Bytes.toByteArrays(new String[] { "row_0525", "row_0560", "row_0560", "row_0560", "row_0560"});
+ this.rows[4] = Bytes.toByteArrays(new String[] { "row_0050", "row_1000", "row_1000", "row_1000", "row_1000", "row_1000" });
+
+ // Start up dfs
+ this.dfsCluster = new MiniDFSCluster(conf, 2, true, (String[])null);
+ this.fs = this.dfsCluster.getFileSystem();
+ conf.set("fs.default.name", fs.getUri().toString());
+ Path parentdir = fs.getHomeDirectory();
+ conf.set(HConstants.HBASE_DIR, parentdir.toString());
+ fs.mkdirs(parentdir);
+ FSUtils.setVersion(fs, parentdir);
+
+ // Note: we must call super.setUp after starting the mini cluster or
+ // we will end up with a local file system
+
+ super.setUp();
+ try {
+ // Create root and meta regions
+ createRootAndMetaRegions();
+ /*
+ * Create the regions we will merge
+ */
+ for (int i = 0; i < sourceRegions.length; i++) {
+ regions[i] =
+ HRegion.createHRegion(this.sourceRegions[i], this.testDir, this.conf);
+ /*
+ * Insert data
+ */
+ for (int j = 0; j < rows[i].length; j++) {
+ byte [] row = rows[i][j];
+ BatchUpdate b = new BatchUpdate(row);
+ b.put(COLUMN_NAME, new ImmutableBytesWritable(row).get());
+ regions[i].batchUpdate(b, null);
+ }
+ HRegion.addRegionToMETA(meta, regions[i]);
+ }
+ // Close root and meta regions
+ closeRootAndMeta();
+
+ } catch (Exception e) {
+ shutdownDfs(dfsCluster);
+ throw e;
+ }
+ }
+
+ @Override
+ public void tearDown() throws Exception {
+ super.tearDown();
+ shutdownDfs(dfsCluster);
+ }
+
+ /*
+ * @param msg Message that describes this merge
+ * @param regionName1
+ * @param regionName2
+ * @param log Log to use merging.
+ * @param upperbound Verifying, how high up in this.rows to go.
+ * @return Merged region.
+ * @throws Exception
+ */
+ private HRegion mergeAndVerify(final String msg, final String regionName1,
+ final String regionName2, final HLog log, final int upperbound)
+ throws Exception {
+ Merge merger = new Merge(this.conf);
+ LOG.info(msg);
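+ // Drive the Merge tool as it would be from the command line: <table> <region1> <region2>;
+ // an exit code of 0 means the merge succeeded.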
+ int errCode = ToolRunner.run(merger,
+ new String[] {this.desc.getNameAsString(), regionName1, regionName2}
+ );
+ assertTrue("'" + msg + "' failed", errCode == 0);
+ HRegionInfo mergedInfo = merger.getMergedHRegionInfo();
+
+ // Now verify that we can read all the rows from regions 0, 1
+ // in the new merged region.
+ HRegion merged =
+ HRegion.openHRegion(mergedInfo, this.testDir, log, this.conf);
+ verifyMerge(merged, upperbound);
+ merged.close();
+ LOG.info("Verified " + msg);
+ return merged;
+ }
+
+ private void verifyMerge(final HRegion merged, final int upperbound)
+ throws IOException {
+ for (int i = 0; i < upperbound; i++) {
+ for (int j = 0; j < rows[i].length; j++) {
+ byte [] bytes = Cell.createSingleCellArray(merged.get(rows[i][j], COLUMN_NAME, -1, -1))[0].getValue();
+ assertNotNull(Bytes.toString(rows[i][j]), bytes);
+ assertTrue(Bytes.equals(bytes, rows[i][j]));
+ }
+ }
+ }
+
+ /**
+ * Test merge tool.
+ * @throws Exception
+ */
+ public void testMergeTool() throws Exception {
+ // First verify we can read the rows from the source regions and that they
+ // contain the right data.
+ for (int i = 0; i < regions.length; i++) {
+ for (int j = 0; j < rows[i].length; j++) {
+ byte[] bytes = Cell.createSingleCellArray(regions[i].get(rows[i][j], COLUMN_NAME, -1, -1))[0].getValue();
+ assertNotNull(bytes);
+ assertTrue(Bytes.equals(bytes, rows[i][j]));
+ }
+ // Close the region and delete the log
+ regions[i].close();
+ regions[i].getLog().closeAndDelete();
+ }
+
+ // Create a log that we can reuse when we need to open regions
+ Path logPath = new Path("/tmp", HConstants.HREGION_LOGDIR_NAME + "_" +
+ System.currentTimeMillis());
+ LOG.info("Creating log " + logPath.toString());
+ HLog log = new HLog(this.fs, logPath, this.conf, null);
+ try {
+ // Merge Region 0 and Region 1
+ HRegion merged = mergeAndVerify("merging regions 0 and 1",
+ this.sourceRegions[0].getRegionNameAsString(),
+ this.sourceRegions[1].getRegionNameAsString(), log, 2);
+
+ // Merge the result of merging regions 0 and 1 with region 2
+ merged = mergeAndVerify("merging regions 0+1 and 2",
+ merged.getRegionInfo().getRegionNameAsString(),
+ this.sourceRegions[2].getRegionNameAsString(), log, 3);
+
+ // Merge the result of merging regions 0, 1 and 2 with region 3
+ merged = mergeAndVerify("merging regions 0+1+2 and 3",
+ merged.getRegionInfo().getRegionNameAsString(),
+ this.sourceRegions[3].getRegionNameAsString(), log, 4);
+
+ // Merge the result of merging regions 0, 1, 2 and 3 with region 4
+ merged = mergeAndVerify("merging regions 0+1+2+3 and 4",
+ merged.getRegionInfo().getRegionNameAsString(),
+ this.sourceRegions[4].getRegionNameAsString(), log, rows.length);
+ } finally {
+ log.closeAndDelete();
+ }
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/util/TestRootPath.java b/src/test/org/apache/hadoop/hbase/util/TestRootPath.java
new file mode 100644
index 0000000..3a684b9
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/util/TestRootPath.java
@@ -0,0 +1,63 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import junit.framework.TestCase;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import org.apache.hadoop.fs.Path;
+
+/**
+ * Test requirement that root directory must be a URI
+ */
+public class TestRootPath extends TestCase {
+ private static final Log LOG = LogFactory.getLog(TestRootPath.class);
+
+ /** The test */
+ public void testRootPath() {
+ try {
+ // Try good path
+ FSUtils.validateRootPath(new Path("file:///tmp/hbase/hbase"));
+ } catch (IOException e) {
+ LOG.fatal("Unexpected exception checking valid path:", e);
+ fail();
+ }
+ try {
+ // Try good path
+ FSUtils.validateRootPath(new Path("hdfs://a:9000/hbase"));
+ } catch (IOException e) {
+ LOG.fatal("Unexpected exception checking valid path:", e);
+ fail();
+ }
+ try {
+ // bad path
+ FSUtils.validateRootPath(new Path("/hbase"));
+ fail();
+ } catch (IOException e) {
+ // Expected.
+ LOG.info("Got expected exception when checking invalid path:", e);
+ }
+ }
+}
diff --git a/src/test/org/apache/hadoop/hbase/zookeeper/HQuorumPeerTest.java b/src/test/org/apache/hadoop/hbase/zookeeper/HQuorumPeerTest.java
new file mode 100644
index 0000000..7ebfd38
--- /dev/null
+++ b/src/test/org/apache/hadoop/hbase/zookeeper/HQuorumPeerTest.java
@@ -0,0 +1,99 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.zookeeper;
+
+import java.io.ByteArrayInputStream;
+import java.io.InputStream;
+import java.util.Map;
+import java.util.Properties;
+
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.zookeeper.server.ServerConfig;
+import org.apache.zookeeper.server.quorum.QuorumPeerConfig;
+import org.apache.zookeeper.server.quorum.QuorumPeer.QuorumServer;
+
+/**
+ * Test for HQuorumPeer.
+ */
+public class HQuorumPeerTest extends HBaseTestCase {
+ /** @throws Exception */
+ public void testConfigInjection() throws Exception {
+ String s =
+ "tickTime=2000\n" +
+ "initLimit=10\n" +
+ "syncLimit=5\n" +
+ "dataDir=${hbase.tmp.dir}/zookeeper\n" +
+ "clientPort=2181\n" +
+ "server.0=${hbase.master.hostname}:2888:3888\n";
+
+ InputStream is = new ByteArrayInputStream(s.getBytes());
+ Properties properties = HQuorumPeer.parseConfig(is);
+
+ String userName = System.getProperty("user.name");
+ String dataDir = "/tmp/hbase-" + userName + "/zookeeper";
+
+ assertEquals(Integer.valueOf(2000), Integer.valueOf(properties.getProperty("tickTime")));
+ assertEquals(Integer.valueOf(10), Integer.valueOf(properties.getProperty("initLimit")));
+ assertEquals(Integer.valueOf(5), Integer.valueOf(properties.getProperty("syncLimit")));
+ assertEquals(dataDir, properties.get("dataDir"));
+ assertEquals(Integer.valueOf(2181), Integer.valueOf(properties.getProperty("clientPort")));
+ assertEquals("localhost:2888:3888", properties.get("server.0"));
+
+ QuorumPeerConfig.parseProperties(properties);
+
+ int tickTime = QuorumPeerConfig.getTickTime();
+ assertEquals(2000, tickTime);
+ int initLimit = QuorumPeerConfig.getInitLimit();
+ assertEquals(10, initLimit);
+ int syncLimit = QuorumPeerConfig.getSyncLimit();
+ assertEquals(5, syncLimit);
+ assertEquals(dataDir, ServerConfig.getDataDir());
+ assertEquals(2181, ServerConfig.getClientPort());
+ Map<Long,QuorumServer> servers = QuorumPeerConfig.getServers();
+ assertEquals(1, servers.size());
+ assertTrue(servers.containsKey(Long.valueOf(0)));
+ QuorumServer server = servers.get(Long.valueOf(0));
+ assertEquals("localhost", server.addr.getHostName());
+
+ // Override with system property.
+ System.setProperty("hbase.master.hostname", "foo.bar");
+ is = new ByteArrayInputStream(s.getBytes());
+ properties = HQuorumPeer.parseConfig(is);
+ assertEquals("foo.bar:2888:3888", properties.get("server.0"));
+
+ QuorumPeerConfig.parseProperties(properties);
+
+ servers = QuorumPeerConfig.getServers();
+ server = servers.get(Long.valueOf(0));
+ assertEquals("foo.bar", server.addr.getHostName());
+
+ // Special case for property 'hbase.master.hostname' being 'local'
+ System.setProperty("hbase.master.hostname", "local");
+ is = new ByteArrayInputStream(s.getBytes());
+ properties = HQuorumPeer.parseConfig(is);
+ assertEquals("localhost:2888:3888", properties.get("server.0"));
+
+ QuorumPeerConfig.parseProperties(properties);
+
+ servers = QuorumPeerConfig.getServers();
+ server = servers.get(Long.valueOf(0));
+ assertEquals("localhost", server.addr.getHostName());
+ }
+}
diff --git a/src/testdata/HADOOP-2478-testdata-v0.1.zip b/src/testdata/HADOOP-2478-testdata-v0.1.zip
new file mode 100644
index 0000000..f5f3d89
--- /dev/null
+++ b/src/testdata/HADOOP-2478-testdata-v0.1.zip
Binary files differ
diff --git a/src/webapps/master/index.html b/src/webapps/master/index.html
new file mode 100644
index 0000000..6d301ab
--- /dev/null
+++ b/src/webapps/master/index.html
@@ -0,0 +1 @@
+<meta HTTP-EQUIV="REFRESH" content="0;url=master.jsp"/>
diff --git a/src/webapps/master/master.jsp b/src/webapps/master/master.jsp
new file mode 100644
index 0000000..81f2a09
--- /dev/null
+++ b/src/webapps/master/master.jsp
@@ -0,0 +1,105 @@
+<%@ page contentType="text/html;charset=UTF-8"
+ import="java.util.*"
+ import="java.net.URLEncoder"
+ import="org.apache.hadoop.io.Text"
+ import="org.apache.hadoop.hbase.util.Bytes"
+ import="org.apache.hadoop.hbase.master.HMaster"
+ import="org.apache.hadoop.hbase.HConstants"
+ import="org.apache.hadoop.hbase.master.MetaRegion"
+ import="org.apache.hadoop.hbase.client.HBaseAdmin"
+ import="org.apache.hadoop.hbase.HServerInfo"
+ import="org.apache.hadoop.hbase.HServerAddress"
+ import="org.apache.hadoop.hbase.HBaseConfiguration"
+ import="org.apache.hadoop.hbase.HTableDescriptor" %><%
+ HMaster master = (HMaster)getServletContext().getAttribute(HMaster.MASTER);
+ HBaseConfiguration conf = master.getConfiguration();
+ HServerAddress rootLocation = master.getRootRegionLocation();
+ Map<byte [], MetaRegion> onlineRegions = master.getOnlineMetaRegions();
+ Map<String, HServerInfo> serverToServerInfos =
+ master.getServersToServerInfo();
+ int interval = conf.getInt("hbase.regionserver.msginterval", 3000)/1000;
+ if (interval == 0) {
+ interval = 1;
+ }
+%><?xml version="1.0" encoding="UTF-8" ?>
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
+ "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head><meta http-equiv="Content-Type" content="text/html;charset=UTF-8"/>
+ <meta http-equiv="refresh" content="300"/>
+<title>HBase Master: <%= master.getMasterAddress().getHostname()%>:<%= master.getMasterAddress().getPort() %></title>
+<link rel="stylesheet" type="text/css" href="/static/hbase.css" />
+</head>
+
+<body>
+
+<a id="logo" href="http://wiki.apache.org/lucene-hadoop/Hbase"><img src="/static/hbase_logo_med.gif" alt="HBase Logo" title="HBase Logo" /></a>
+<h1 id="page_title">Master: <%=master.getMasterAddress().getHostname()%>:<%=master.getMasterAddress().getPort()%></h1>
+<p id="links_menu"><a href="/logs/">Local logs</a>, <a href="/stacks">Thread Dump</a>, <a href="/logLevel">Log Level</a></p>
+<hr id="head_rule" />
+
+<h2>Master Attributes</h2>
+<table>
+<tr><th>Attribute Name</th><th>Value</th><th>Description</th></tr>
+<tr><td>HBase Version</td><td><%= org.apache.hadoop.hbase.util.VersionInfo.getVersion() %>, r<%= org.apache.hadoop.hbase.util.VersionInfo.getRevision() %></td><td>HBase version and svn revision</td></tr>
+<tr><td>HBase Compiled</td><td><%= org.apache.hadoop.hbase.util.VersionInfo.getDate() %>, <%= org.apache.hadoop.hbase.util.VersionInfo.getUser() %></td><td>When HBase version was compiled and by whom</td></tr>
+<tr><td>Hadoop Version</td><td><%= org.apache.hadoop.util.VersionInfo.getVersion() %>, r<%= org.apache.hadoop.util.VersionInfo.getRevision() %></td><td>Hadoop version and svn revision</td></tr>
+<tr><td>Hadoop Compiled</td><td><%= org.apache.hadoop.util.VersionInfo.getDate() %>, <%= org.apache.hadoop.util.VersionInfo.getUser() %></td><td>When Hadoop version was compiled and by whom</td></tr>
+<tr><td>HBase Root Directory</td><td><%= master.getRootDir().toString() %></td><td>Location of HBase home directory</td></tr>
+<tr><td>Load average</td><td><%= master.getAverageLoad() %></td><td>Average load across all region servers. Naive computation.</td></tr>
+<tr><td>Regions On FS</td><td><%= master.countRegionsOnFS() %></td><td>The number of regions on the filesystem. Rough count.</td></tr>
+</table>
+
+<h2>Catalog Tables</h2>
+<%
+ if (rootLocation != null) { %>
+<table>
+<tr><th>Table</th><th>Description</th></tr>
+<tr><td><a href=/table.jsp?name=<%= URLEncoder.encode(Bytes.toString(HConstants.ROOT_TABLE_NAME), "UTF-8") %>><%= Bytes.toString(HConstants.ROOT_TABLE_NAME) %></a></td><td>The -ROOT- table holds references to all .META. regions.</td></tr>
+<%
+ if (onlineRegions != null && onlineRegions.size() > 0) { %>
+<tr><td><a href=/table.jsp?name=<%= URLEncoder.encode(Bytes.toString(HConstants.META_TABLE_NAME), "UTF-8") %>><%= Bytes.toString(HConstants.META_TABLE_NAME) %></a></td><td>The .META. table holds references to all User Table regions</td></tr>
+
+<% } %>
+</table>
+<%} %>
+
+<h2>User Tables</h2>
+<% HTableDescriptor[] tables = new HBaseAdmin(conf).listTables();
+ if(tables != null && tables.length > 0) { %>
+<table>
+<tr><th>Table</th><th>Description</th></tr>
+<% for(HTableDescriptor htDesc : tables ) { %>
+<tr><td><a href=/table.jsp?name=<%= URLEncoder.encode(htDesc.getNameAsString(), "UTF-8") %>><%= htDesc.getNameAsString() %></a> </td><td><%= htDesc.toString() %></td></tr>
+<% } %>
+</table>
+<p> <%= tables.length %> table(s) in set.</p>
+<% } %>
+
+<h2>Region Servers</h2>
+<% if (serverToServerInfos != null && serverToServerInfos.size() > 0) { %>
+<% int totalRegions = 0;
+ int totalRequests = 0;
+%>
+
+<table>
+<tr><th rowspan=<%= serverToServerInfos.size() + 1%>></th><th>Address</th><th>Start Code</th><th>Load</th></tr>
+<% String[] serverNames = serverToServerInfos.keySet().toArray(new String[serverToServerInfos.size()]);
+ Arrays.sort(serverNames);
+ for (String serverName: serverNames) {
+ HServerInfo hsi = serverToServerInfos.get(serverName);
+ String hostname = hsi.getName() + ":" + hsi.getInfoPort();
+ String url = "http://" + hostname + "/";
+ totalRegions += hsi.getLoad().getNumberOfRegions();
+ totalRequests += hsi.getLoad().getNumberOfRequests() / interval;
+ long startCode = hsi.getStartCode();
+%>
+<tr><td><a href="<%= url %>"><%= hostname %></a></td><td><%= startCode %></td><td><%= hsi.getLoad().toString(interval) %></td></tr>
+<% } %>
+<tr><th>Total: </th><td>servers: <%= serverToServerInfos.size() %></td><td> </td><td>requests=<%= totalRequests %>, regions=<%= totalRegions %></td></tr>
+</table>
+
+<p>Load is requests per second and count of regions loaded</p>
+<% } %>
+</body>
+</html>
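Note on the Region Servers table in master.jsp above: each server's reported request count is turned into an approximate per-second rate by dividing by the reporting interval, and the Total row sums regions and per-second requests across servers. The standalone Java sketch below reproduces that aggregation under stated assumptions: RegionServerStats is a hypothetical stand-in for HServerInfo/HServerLoad, and the host names and numbers are made-up sample data.

import java.util.Arrays;
import java.util.Map;
import java.util.TreeMap;

// Standalone sketch of the load aggregation in master.jsp.
// RegionServerStats is a hypothetical stand-in for HServerInfo/HServerLoad;
// the server names and numbers below are made-up sample data.
public class MasterLoadSummary {

  static class RegionServerStats {
    final int regions;   // regions currently served
    final int requests;  // requests reported since the last heartbeat
    RegionServerStats(int regions, int requests) {
      this.regions = regions;
      this.requests = requests;
    }
  }

  public static void main(String[] args) {
    // hbase.regionserver.msginterval is in milliseconds; the pages divide by 1000.
    int intervalSeconds = 3;

    Map<String, RegionServerStats> servers = new TreeMap<String, RegionServerStats>();
    servers.put("rs1.example.com:60030", new RegionServerStats(12, 900));
    servers.put("rs2.example.com:60030", new RegionServerStats(10, 600));

    // Sort the server names so rows render in a stable order, as the JSP does.
    String[] names = servers.keySet().toArray(new String[servers.size()]);
    Arrays.sort(names);

    int totalRegions = 0;
    int totalRequestsPerSecond = 0;
    for (String name : names) {
      RegionServerStats stats = servers.get(name);
      int requestsPerSecond = stats.requests / intervalSeconds;
      totalRegions += stats.regions;
      totalRequestsPerSecond += requestsPerSecond;
      System.out.println(name + " -> requests/sec=" + requestsPerSecond
          + ", regions=" + stats.regions);
    }
    System.out.println("Total: servers=" + servers.size()
        + ", requests=" + totalRequestsPerSecond + ", regions=" + totalRegions);
  }
}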
diff --git a/src/webapps/master/regionhistorian.jsp b/src/webapps/master/regionhistorian.jsp
new file mode 100644
index 0000000..efbc99f
--- /dev/null
+++ b/src/webapps/master/regionhistorian.jsp
@@ -0,0 +1,56 @@
+<%@ page contentType="text/html;charset=UTF-8"
+ import="java.util.List"
+ import="java.util.regex.*"
+ import="java.net.URLEncoder"
+ import="org.apache.hadoop.hbase.RegionHistorian"
+ import="org.apache.hadoop.hbase.master.HMaster"
+ import="org.apache.hadoop.hbase.RegionHistorian.RegionHistoryInformation"
+ import="org.apache.hadoop.hbase.HConstants"%><%
+ String regionName = request.getParameter("regionname");
+ HMaster master = (HMaster)getServletContext().getAttribute(HMaster.MASTER);
+ List<RegionHistoryInformation> informations = RegionHistorian.getInstance().getRegionHistory(regionName);
+ // Pattern used so we can wrap a regionname in an href.
+ Pattern pattern = Pattern.compile(RegionHistorian.SPLIT_PREFIX + "(.*)$");
+%><?xml version="1.0" encoding="UTF-8" ?>
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
+ "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head><meta http-equiv="Content-Type" content="text/html;charset=UTF-8"/>
+ <meta http-equiv="refresh" content="30"/>
+<title>Region in <%= regionName %></title>
+<link rel="stylesheet" type="text/css" href="/static/hbase.css" />
+</head>
+
+<body>
+<a id="logo" href="http://wiki.apache.org/lucene-hadoop/Hbase"><img src="/static/hbase_logo_med.gif" alt="HBase Logo" title="HBase Logo" /></a>
+<h1 id="page_title">Region <%= regionName %></h1>
+<p id="links_menu"><a href="/master.jsp">Master</a>, <a href="/logs/">Local logs</a>, <a href="/stacks">Thread Dump</a>, <a href="/logLevel">Log Level</a></p>
+<hr id="head_rule" />
+<%if(informations != null && informations.size() > 0) { %>
+<table><tr><th>Timestamp</th><th>Event</th><th>Description</th></tr>
+<% for( RegionHistoryInformation information : informations) {
+ String description = information.getDescription();
+ Matcher m = pattern.matcher(description);
+ if (m.matches()) {
+ // Wrap the region name in an href so user can click on it.
+ description = RegionHistorian.SPLIT_PREFIX +
+ "<a href=\"regionhistorian.jsp?regionname=" + URLEncoder.encode(m.group(1), "UTF-8") + "\">" +
+ m.group(1) + "</a>";
+ }
+
+ %>
+<tr><td><%= information.getTimestampAsString() %></td><td><%= information.getEvent() %></td><td><%= description %></td></tr>
+<% } %>
+</table>
+<p>
+The master is the source of the following events: creation, open, and assignment. Regions are the source of the following events: split, compaction, and flush.
+</p>
+<%} else {%>
+<p>
+This region is no longer available. This may be due to a split, a merge, or a name change.
+</p>
+<%} %>
+
+
+</body>
+</html>
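Note on regionhistorian.jsp above: split events are recognized by matching each history entry's description against a pattern built from RegionHistorian.SPLIT_PREFIX, and the captured region name is rewritten as a link back to the same page. The standalone sketch below shows that capture-and-wrap step; the literal prefix string used here is an assumption, the JSP uses the real RegionHistorian.SPLIT_PREFIX value.

import java.net.URLEncoder;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Standalone sketch of the description rewriting done in regionhistorian.jsp.
// The literal prefix below is an assumption; the JSP uses RegionHistorian.SPLIT_PREFIX.
public class SplitLinkExample {
  public static void main(String[] args) throws Exception {
    String splitPrefix = "Region split from: ";  // assumed value of SPLIT_PREFIX
    // Same construction as the JSP: capture everything after the prefix.
    Pattern pattern = Pattern.compile(splitPrefix + "(.*)$");

    String description = splitPrefix + "domains,apache.org,5464829424211263407";
    Matcher m = pattern.matcher(description);
    if (m.matches()) {
      // Wrap the captured region name in a link back to the historian page.
      String region = m.group(1);
      description = splitPrefix
          + "<a href=\"regionhistorian.jsp?regionname="
          + URLEncoder.encode(region, "UTF-8") + "\">" + region + "</a>";
    }
    System.out.println(description);
  }
}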
diff --git a/src/webapps/master/table.jsp b/src/webapps/master/table.jsp
new file mode 100644
index 0000000..45cb3b8
--- /dev/null
+++ b/src/webapps/master/table.jsp
@@ -0,0 +1,198 @@
+<%@ page contentType="text/html;charset=UTF-8"
+ import="java.io.IOException"
+ import="java.util.Map"
+ import="java.net.URLEncoder"
+ import="org.apache.hadoop.io.Text"
+ import="org.apache.hadoop.io.Writable"
+ import="org.apache.hadoop.hbase.HTableDescriptor"
+ import="org.apache.hadoop.hbase.client.HTable"
+ import="org.apache.hadoop.hbase.client.HBaseAdmin"
+ import="org.apache.hadoop.hbase.HRegionInfo"
+ import="org.apache.hadoop.hbase.HServerAddress"
+ import="org.apache.hadoop.hbase.HServerInfo"
+ import="org.apache.hadoop.hbase.HBaseConfiguration"
+ import="org.apache.hadoop.hbase.io.ImmutableBytesWritable"
+ import="org.apache.hadoop.hbase.master.HMaster"
+ import="org.apache.hadoop.hbase.master.MetaRegion"
+ import="org.apache.hadoop.hbase.util.Bytes"
+ import="org.apache.hadoop.hbase.HConstants"%><%
+ HMaster master = (HMaster)getServletContext().getAttribute(HMaster.MASTER);
+ HBaseConfiguration conf = master.getConfiguration();
+ HBaseAdmin hbadmin = new HBaseAdmin(conf);
+ String tableName = request.getParameter("name");
+ HTable table = new HTable(conf, tableName);
+ Map<HServerAddress, HServerInfo> serverAddressToServerInfos =
+ master.getServerAddressToServerInfo();
+ String tableHeader = "<h2>Table Regions</h2><table><tr><th>Name</th><th>Region Server</th><th>Encoded Name</th><th>Start Key</th><th>End Key</th></tr>";
+ HServerAddress rootLocation = master.getRootRegionLocation();
+%>
+
+<?xml version="1.0" encoding="UTF-8" ?>
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
+ "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+<html xmlns="http://www.w3.org/1999/xhtml">
+
+<%
+ String action = request.getParameter("action");
+ String key = request.getParameter("key");
+ if ( action != null ) {
+%>
+<head><meta http-equiv="Content-Type" content="text/html;charset=UTF-8"/>
+ <meta http-equiv="refresh" content="5; url=/"/>
+<link rel="stylesheet" type="text/css" href="/static/hbase.css" />
+</head>
+<body>
+<a id="logo" href="http://wiki.apache.org/lucene-hadoop/Hbase"><img src="/static/hbase_logo_med.gif" alt="HBase Logo" title="HBase Logo" /></a>
+<h1 id="page_title">Table action request accepted</h1>
+<hr />
+<%
+ if (action.equals("split")) {
+ if (key != null && key.length() > 0) {
+ Writable[] arr = new Writable[1];
+ arr[0] = new ImmutableBytesWritable(Bytes.toBytes(key));
+ master.modifyTable(Bytes.toBytes(tableName), HConstants.MODIFY_TABLE_SPLIT, arr);
+ } else {
+ master.modifyTable(Bytes.toBytes(tableName), HConstants.MODIFY_TABLE_SPLIT, null);
+ }
+ %> Split request accepted. <%
+ } else if (action.equals("compact")) {
+ if (key != null && key.length() > 0) {
+ Writable[] arr = new Writable[1];
+ arr[0] = new ImmutableBytesWritable(Bytes.toBytes(key));
+ master.modifyTable(Bytes.toBytes(tableName), HConstants.MODIFY_TABLE_COMPACT, arr);
+ } else {
+ master.modifyTable(Bytes.toBytes(tableName), HConstants.MODIFY_TABLE_COMPACT, null);
+ }
+ %> Compact request accepted. <%
+ }
+%>
+<p>This page will refresh in 5 seconds.</p>
+</body>
+<%
+} else {
+%>
+<head><meta http-equiv="Content-Type" content="text/html;charset=UTF-8"/>
+ <meta http-equiv="refresh" content="30"/>
+<title>Table: <%= tableName %></title>
+<link rel="stylesheet" type="text/css" href="/static/hbase.css" />
+</head>
+<body>
+<a id="logo" href="http://wiki.apache.org/lucene-hadoop/Hbase"><img src="/static/hbase_logo_med.gif" alt="HBase Logo" title="HBase Logo" /></a>
+<h1 id="page_title">Table: <%= tableName %></h1>
+<p id="links_menu"><a href="/master.jsp">Master</a>, <a href="/logs/">Local logs</a>, <a href="/stacks">Thread Dump</a>, <a href="/logLevel">Log Level</a></p>
+<hr id="head_rule" />
+<%
+ if(tableName.equals(Bytes.toString(HConstants.ROOT_TABLE_NAME))) {
+%>
+<%= tableHeader %>
+<%
+ int infoPort = serverAddressToServerInfos.get(rootLocation).getInfoPort();
+ String url = "http://" + rootLocation.getHostname() + ":" + infoPort + "/";
+%>
+<tr>
+ <td><%= tableName %></td>
+ <td><a href="<%= url %>"><%= rootLocation.getHostname() %>:<%= rootLocation.getPort() %></a></td>
+ <td>-</td>
+ <td></td>
+ <td>-</td>
+</tr>
+</table>
+<%
+ } else if(tableName.equals(Bytes.toString(HConstants.META_TABLE_NAME))) {
+%>
+<%= tableHeader %>
+<%
+ Map<byte [], MetaRegion> onlineRegions = master.getOnlineMetaRegions();
+ for (MetaRegion meta: onlineRegions.values()) {
+ int infoPort = serverAddressToServerInfos.get(meta.getServer()).getInfoPort();
+ String url = "http://" + meta.getServer().getHostname() + ":" + infoPort + "/";
+%>
+<tr>
+ <td><%= Bytes.toString(meta.getRegionName()) %></td>
+ <td><a href="<%= url %>"><%= meta.getServer().toString() %></a></td>
+ <td>-</td><td><%= Bytes.toString(meta.getStartKey()) %></td><td>-</td>
+</tr>
+<% } %>
+</table>
+<%} else {
+ try { %>
+<h2>Table Attributes</h2>
+<table>
+ <tr><th>Attribute Name</th><th>Value</th><th>Description</th></tr>
+ <tr><td>Enabled</td><td><%= hbadmin.isTableEnabled(table.getTableName()) %></td><td>Is the table enabled</td></tr>
+</table>
+<%
+ Map<HRegionInfo, HServerAddress> regions = table.getRegionsInfo();
+ if(regions != null && regions.size() > 0) { %>
+<%= tableHeader %>
+<%
+ for(Map.Entry<HRegionInfo, HServerAddress> hriEntry : regions.entrySet()) {
+
+ int infoPort = serverAddressToServerInfos.get(
+ hriEntry.getValue()).getInfoPort();
+
+ String urlRegionHistorian =
+ "/regionhistorian.jsp?regionname=" +
+ URLEncoder.encode(hriEntry.getKey().getRegionNameAsString(), "UTF-8");
+
+ String urlRegionServer =
+ "http://" + hriEntry.getValue().getHostname().toString() + ":" + infoPort + "/";
+%>
+<tr>
+ <td><a href="<%= urlRegionHistorian %>"><%= hriEntry.getKey().getRegionNameAsString()%></a></td>
+ <td><a href="<%= urlRegionServer %>"><%= hriEntry.getValue().toString() %></a></td>
+ <td><%= hriEntry.getKey().getEncodedName()%></td> <td><%= Bytes.toString(hriEntry.getKey().getStartKey())%></td>
+ <td><%= Bytes.toString(hriEntry.getKey().getEndKey())%></td>
+</tr>
+<% } %>
+</table>
+<% }
+} catch(Exception ex) {
+ ex.printStackTrace();
+}
+} // end else
+%>
+
+<hr />
+<p>Actions:</p>
+<center>
+<table style="border-style: none" width="90%">
+<tr>
+ <form method="get">
+ <input type="hidden" name="action" value="compact">
+ <input type="hidden" name="name" value="<%= tableName %>">
+ <td style="border-style: none; text-align: center">
+ <input style="font-size: 12pt; width: 10em" type="submit" value="Compact"></td>
+ <td style="border-style: none" width="5%"> </td>
+ <td style="border-style: none">Region Key (optional):<input type="text" name="key" size="40"></td>
+ <td style="border-style: none">This action will force a compaction of all
+ regions of the table, or, if a key is supplied, only the region containing the
+ given key.</td>
+ </form>
+</tr>
+<tr><td style="border-style: none" colspan="4"> </td></tr>
+<tr>
+ <form method="get">
+ <input type="hidden" name="action" value="split">
+ <input type="hidden" name="name" value="<%= tableName %>">
+ <td style="border-style: none; text-align: center">
+ <input style="font-size: 12pt; width: 10em" type="submit" value="Split"></td>
+ <td style="border-style: none" width="5%"> </td>
+ <td style="border-style: none">Region Key (optional):<input type="text" name="key" size="40"></td>
+  <td style="border-style: none">This action will force a split of all eligible
+  regions of the table, or, if a key is supplied, only the region containing the
+  given key. An eligible region is one that does not contain any references to
+  other regions. Split requests for ineligible regions will be ignored.</td>
+ </form>
+</tr>
+</table>
+</center>
+<p>
+
+<%
+}
+%>
+
+</body>
+</html>
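Note on table.jsp above: the Compact and Split forms are relayed to the master by calling HMaster.modifyTable with HConstants.MODIFY_TABLE_COMPACT or HConstants.MODIFY_TABLE_SPLIT, passing the optional region key as a one-element Writable array. The sketch below mirrors that call path only; how a caller obtains the HMaster reference is out of scope (the JSP takes it from the servlet context), and the declared exception type is an assumption.

import java.io.IOException;

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.master.HMaster;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.Writable;

// Sketch of how table.jsp forwards compact/split requests to the master.
// Obtaining the HMaster reference and the declared IOException are assumptions.
public class TableActionExample {

  // Ask the master to compact the whole table, or only the region holding 'key'.
  static void requestCompaction(HMaster master, String tableName, String key)
      throws IOException {
    if (key != null && key.length() > 0) {
      Writable[] args = { new ImmutableBytesWritable(Bytes.toBytes(key)) };
      master.modifyTable(Bytes.toBytes(tableName), HConstants.MODIFY_TABLE_COMPACT, args);
    } else {
      master.modifyTable(Bytes.toBytes(tableName), HConstants.MODIFY_TABLE_COMPACT, null);
    }
  }

  // Ask the master to split the whole table, or only the region holding 'key'.
  static void requestSplit(HMaster master, String tableName, String key)
      throws IOException {
    if (key != null && key.length() > 0) {
      Writable[] args = { new ImmutableBytesWritable(Bytes.toBytes(key)) };
      master.modifyTable(Bytes.toBytes(tableName), HConstants.MODIFY_TABLE_SPLIT, args);
    } else {
      master.modifyTable(Bytes.toBytes(tableName), HConstants.MODIFY_TABLE_SPLIT, null);
    }
  }
}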
diff --git a/src/webapps/regionserver/index.html b/src/webapps/regionserver/index.html
new file mode 100644
index 0000000..bdd3c6a
--- /dev/null
+++ b/src/webapps/regionserver/index.html
@@ -0,0 +1 @@
+<meta HTTP-EQUIV="REFRESH" content="0;url=regionserver.jsp"/>
diff --git a/src/webapps/regionserver/regionserver.jsp b/src/webapps/regionserver/regionserver.jsp
new file mode 100644
index 0000000..be1e60f
--- /dev/null
+++ b/src/webapps/regionserver/regionserver.jsp
@@ -0,0 +1,71 @@
+<%@ page contentType="text/html;charset=UTF-8"
+ import="java.util.*"
+ import="org.apache.hadoop.io.Text"
+ import="org.apache.hadoop.hbase.regionserver.HRegionServer"
+ import="org.apache.hadoop.hbase.regionserver.HRegion"
+ import="org.apache.hadoop.hbase.regionserver.metrics.RegionServerMetrics"
+ import="org.apache.hadoop.hbase.util.Bytes"
+ import="org.apache.hadoop.hbase.HConstants"
+ import="org.apache.hadoop.hbase.HServerInfo"
+ import="org.apache.hadoop.hbase.HServerLoad"
+ import="org.apache.hadoop.hbase.HRegionInfo" %><%
+ HRegionServer regionServer = (HRegionServer)getServletContext().getAttribute(HRegionServer.REGIONSERVER);
+ HServerInfo serverInfo = regionServer.getServerInfo();
+ RegionServerMetrics metrics = regionServer.getMetrics();
+ Collection<HRegionInfo> onlineRegions = regionServer.getSortedOnlineRegionInfos();
+ int interval = regionServer.getConfiguration().getInt("hbase.regionserver.msginterval", 3000)/1000;
+
+%><?xml version="1.0" encoding="UTF-8" ?>
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
+ "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head><meta http-equiv="Content-Type" content="text/html;charset=UTF-8"/>
+ <meta http-equiv="refresh" content="300"/>
+<title>HBase Region Server: <%= serverInfo.getServerAddress().getHostname() %>:<%= serverInfo.getServerAddress().getPort() %></title>
+<link rel="stylesheet" type="text/css" href="/static/hbase.css" />
+</head>
+
+<body>
+<a id="logo" href="http://wiki.apache.org/lucene-hadoop/Hbase"><img src="/static/hbase_logo_med.gif" alt="HBase Logo" title="HBase Logo" /></a>
+<h1 id="page_title">Region Server: <%= serverInfo.getServerAddress().getHostname() %>:<%= serverInfo.getServerAddress().getPort() %></h1>
+<p id="links_menu"><a href="/logs/">Local logs</a>, <a href="/stacks">Thread Dump</a>, <a href="/logLevel">Log Level</a></p>
+<hr id="head_rule" />
+
+<h2>Region Server Attributes</h2>
+<table>
+<tr><th>Attribute Name</th><th>Value</th><th>Description</th></tr>
+<tr><td>HBase Version</td><td><%= org.apache.hadoop.hbase.util.VersionInfo.getVersion() %>, r<%= org.apache.hadoop.hbase.util.VersionInfo.getRevision() %></td><td>HBase version and svn revision</td></tr>
+<tr><td>HBase Compiled</td><td><%= org.apache.hadoop.hbase.util.VersionInfo.getDate() %>, <%= org.apache.hadoop.hbase.util.VersionInfo.getUser() %></td><td>When HBase version was compiled and by whom</td></tr>
+<tr><td>Metrics</td><td><%= metrics.toString() %></td><td>RegionServer Metrics; file and heap sizes are in megabytes</td></tr>
+</table>
+
+<h2>Online Regions</h2>
+<% if (onlineRegions != null && onlineRegions.size() > 0) { %>
+<table>
+<tr><th>Region Name</th><th>Encoded Name</th><th>Start Key</th><th>End Key</th><th>Metrics</th></tr>
+<% for (HRegionInfo r: onlineRegions) {
+ HServerLoad.RegionLoad load = regionServer.createRegionLoad(r.getRegionName());
+ %>
+<tr><td><%= r.getRegionNameAsString() %></td><td><%= r.getEncodedName() %></td>
+ <td><%= Bytes.toString(r.getStartKey()) %></td><td><%= Bytes.toString(r.getEndKey()) %></td>
+ <td><%= load.toString() %></td>
+ </tr>
+<% } %>
+</table>
+<p>Region names are made of the containing table's name, a comma,
+the start key, a comma, and a randomly generated region id. To illustrate,
+the region named
+<em>domains,apache.org,5464829424211263407</em> belongs to the table
+<em>domains</em>, has an id of <em>5464829424211263407</em>, and the first key
+in the region is <em>apache.org</em>. The <em>-ROOT-</em>
+and <em>.META.</em> 'tables' are internal system tables (or 'catalog' tables in db-speak).
+The -ROOT- table keeps a list of all regions in the .META. table. The .META. table
+keeps a list of all regions in the system. The empty key is used to denote
+table start and table end. A region with an empty start key is the first region in a table.
+If a region has both an empty start and an empty end key, it is the only region in the table. See
+<a href="http://hbase.org">HBase Home</a> for further explanation.</p>
+<% } else { %>
+<p>Not serving regions</p>
+<% } %>
+</body>
+</html>
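Note on the region-name description in regionserver.jsp above: the naming scheme is table name, comma, start key, comma, region id. The short sketch below decomposes a sample region name along those rules; it only illustrates the convention described above, HRegionInfo provides the real accessors.

// Standalone sketch of decomposing a region name according to the naming
// scheme described above: <table name>,<start key>,<region id>.
// HRegionInfo provides the real accessors; this only illustrates the convention.
public class RegionNameExample {
  public static void main(String[] args) {
    String regionName = "domains,apache.org,5464829424211263407";
    // Table names cannot contain commas, so the first comma ends the table name;
    // the region id follows the last comma, leaving the start key in between.
    int firstComma = regionName.indexOf(',');
    int lastComma = regionName.lastIndexOf(',');
    String table = regionName.substring(0, firstComma);
    String startKey = regionName.substring(firstComma + 1, lastComma);
    String regionId = regionName.substring(lastComma + 1);
    System.out.println("table=" + table + ", startKey=" + startKey
        + ", regionId=" + regionId);
  }
}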
diff --git a/src/webapps/rest/META-INF/MANIFEST.MF b/src/webapps/rest/META-INF/MANIFEST.MF
new file mode 100644
index 0000000..38e550e
--- /dev/null
+++ b/src/webapps/rest/META-INF/MANIFEST.MF
@@ -0,0 +1,9 @@
+Manifest-Version: 1.0
+Class-Path:
+
+Manifest-Version: 1.0
+Class-Path:
+
+Manifest-Version: 1.0
+Class-Path:
+
diff --git a/src/webapps/rest/WEB-INF/web.xml b/src/webapps/rest/WEB-INF/web.xml
new file mode 100644
index 0000000..01aa3b7
--- /dev/null
+++ b/src/webapps/rest/WEB-INF/web.xml
@@ -0,0 +1,14 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<web-app id="WebApp_ID" version="2.4" xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd">
+ <display-name>jsonrest</display-name>
+ <servlet>
+    <description>HBase JSON REST Interface</description>
+ <display-name>api</display-name>
+ <servlet-name>api</servlet-name>
+ <servlet-class>org.apache.hadoop.hbase.rest.Dispatcher</servlet-class>
+ </servlet>
+ <servlet-mapping>
+ <servlet-name>api</servlet-name>
+ <url-pattern>/api/*</url-pattern>
+ </servlet-mapping>
+</web-app>
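Note on the web.xml above: it mounts the REST Dispatcher servlet at /api/*. The sketch below issues a plain HTTP GET against that path; the host, port, and the assumption that the root resource answers a GET are illustrative only and are not taken from this descriptor.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal HTTP probe against the REST servlet mapped at /api/* above.
// The host, port, and resource path are assumptions for illustration only.
public class RestApiProbe {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://localhost:8080/api/");  // assumed host and port
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("GET");
    conn.setRequestProperty("Accept", "application/json");
    System.out.println("HTTP " + conn.getResponseCode());
    BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
    for (String line; (line = in.readLine()) != null; ) {
      System.out.println(line);
    }
    in.close();
    conn.disconnect();
  }
}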
diff --git a/src/webapps/static/hbase.css b/src/webapps/static/hbase.css
new file mode 100644
index 0000000..ffa2584
--- /dev/null
+++ b/src/webapps/static/hbase.css
@@ -0,0 +1,8 @@
+h1, h2, h3 { color: DarkSlateBlue }
+table { border: thin solid DodgerBlue }
+tr { border: thin solid DodgerBlue }
+td { border: thin solid DodgerBlue }
+th { border: thin solid DodgerBlue }
+#logo {float: left;}
+#logo img {border: none;}
+#page_title {padding-top: 27px;}
diff --git a/src/webapps/static/hbase_logo_med.gif b/src/webapps/static/hbase_logo_med.gif
new file mode 100644
index 0000000..36d3e3c
--- /dev/null
+++ b/src/webapps/static/hbase_logo_med.gif
Binary files differ