merge up to 0.19.0 release
git-svn-id: https://svn.apache.org/repos/asf/hadoop/hbase/branches/0.19_on_hadoop_0.18@738681 13f79535-47bb-0310-9956-ffa450edef68
diff --git a/CHANGES.txt b/CHANGES.txt
index 2eee1f2..c22bc11 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -8,11 +8,12 @@
HBASE-852 Cannot scan all families in a row with a LIMIT, STARTROW, etc.
(Izaak Rubin via Stack)
HBASE-953 Enable BLOCKCACHE by default [WAS -> Reevaluate HBASE-288 block
- caching work....?] -- Update your hbad-default.xml file!
+ caching work....?] -- Update your hbase-default.xml file!
HBASE-636 java6 as a requirement
HBASE-994 IPC interfaces with different versions can cause problems
HBASE-1028 If key does not exist, return null in getRow rather than an
empty RowResult
+ HBASE-1134 OOME in HMaster when HBaseRPC is older than 0.19
BUG FIXES
HBASE-891 HRS.validateValuesLength throws IOE, gets caught in the retries
@@ -45,7 +46,6 @@
HBASE-945 Be consistent in use of qualified/unqualified mapfile paths
HBASE-946 Row with 55k deletes timesout scanner lease
HBASE-950 HTable.commit no longer works with existing RowLocks though it's still in API
- HBASE-728 Support for HLog appends
HBASE-952 Deadlock in HRegion.batchUpdate
HBASE-954 Don't reassign root region until ProcessServerShutdown has split
the former region server's log
@@ -63,12 +63,12 @@
HBASE-976 HADOOP 0.19.0 RC0 is broke; replace with HEAD of branch-0.19
HBASE-977 Arcane HStoreKey comparator bug
HBASE-979 REST web app is not started automatically
- HBASE-964 Startup stuck "waiting for root region"
HBASE-980 Undo core of HBASE-975, caching of start and end row
HBASE-982 Deleting a column in MapReduce fails (Doğacan Güney via Stack)
HBASE-984 Fix javadoc warnings
HBASE-985 Fix javadoc warnings
HBASE-951 Either shut down master or let it finish cleanup
+ HBASE-964 Startup stuck "waiting for root region"
HBASE-964, HBASE-678 provide for safe-mode without locking up HBase "waiting
for root region"
HBASE-990 NoSuchElementException in flushSomeRegions; took two attempts.
@@ -80,7 +80,7 @@
major compaction
HBASE-1005 Regex and string comparison operators for ColumnValueFilter
HBASE-910 Scanner misses columns / rows when the scanner is obtained
- durring a memcache flush
+ during a memcache flush
HBASE-1009 Master stuck in loop wanting to assign but regions are closing
   HBASE-1016 Fix example in javadoc overview
HBASE-1021 hbase metrics FileContext not working
@@ -137,6 +137,16 @@
               IllegalStateException: Cannot set a region to be closed if it was
               not already marked as closing, Does not recover if HRS carrying
               -ROOT- goes down
+ HBASE-1114 Weird NPEs compacting
+ HBASE-1116 generated web.xml and svn don't play nice together
+ HBASE-1119 ArrayOutOfBoundsException in HStore.compact
+ HBASE-1121 Cluster confused about where -ROOT- is
+ HBASE-1125 IllegalStateException: Cannot set a region to be closed if it was
+ not already marked as pending close
+ HBASE-1124 Balancer kicks in way too early
+ HBASE-1127 OOME running randomRead PE
+ HBASE-1132 Can't append to HLog, can't roll log, infinite cycle (another
+ spin on HBASE-930)
IMPROVEMENTS
HBASE-901 Add a limit to key length, check key and value length on client side
@@ -179,7 +189,7 @@
HBASE-983 Declare Perl namespace in Hbase.thrift
HBASE-987 We need a Hbase Partitioner for TableMapReduceUtil.initTableReduceJob
MR Jobs (Billy Pearson via Stack)
- HBASE-993 Turn of logging of every catalog table row entry on every scan
+ HBASE-993 Turn off logging of every catalog table row entry on every scan
HBASE-992 Up the versions kept by catalog tables; currently 1. Make it 10?
HBASE-998 Narrow getClosestRowBefore by passing column family
HBASE-999 Up versions on historian and keep history of deleted regions for a
@@ -222,6 +232,7 @@
(Andrzej Bialecki via Stack)
HBASE-625 Metrics support for cluster load history: emissions and graphs
HBASE-883 Secondary indexes (Clint Morgan via Andrew Purtell)
+ HBASE-728 Support for HLog appends
OPTIMIZATIONS
HBASE-748 Add an efficient way to batch update many rows
diff --git a/build.xml b/build.xml
index 766d8ea..96d5c68 100644
--- a/build.xml
+++ b/build.xml
@@ -18,7 +18,7 @@
-->
<project name="hbase" default="jar">
- <property name="version" value="0.19.0-dev"/>
+ <property name="version" value="0.19.0"/>
<property name="Name" value="HBase"/>
<property name="final.name" value="hbase-${version}"/>
<property name="year" value="2008"/>
@@ -174,7 +174,7 @@
<!--Conditionally generate the jsp java pages.
We do it once per ant invocation. See hbase-593.
-->
- <target name="jspc" unless="jspc.not.required">
+ <target name="jspc" depends="init" unless="jspc.not.required">
<path id="jspc.classpath">
<fileset dir="${basedir}/lib/jetty-ext/">
<include name="*jar" />
@@ -191,13 +191,13 @@
uriroot="${src.webapps}/master"
outputdir="${generated.webapps.src}"
package="org.apache.hadoop.hbase.generated.master"
- webxml="${src.webapps}/master/WEB-INF/web.xml">
+ webxml="${build.webapps}/master/WEB-INF/web.xml">
</jspcompiler>
<jspcompiler
uriroot="${src.webapps}/regionserver"
outputdir="${generated.webapps.src}"
package="org.apache.hadoop.hbase.generated.regionserver"
- webxml="${src.webapps}/regionserver/WEB-INF/web.xml">
+ webxml="${build.webapps}/regionserver/WEB-INF/web.xml">
</jspcompiler>
<property name="jspc.not.required" value="true" />
<echo message="Setting jspc.notRequired property. jsp pages generated once per ant session only" />
diff --git a/conf/hbase-default.xml b/conf/hbase-default.xml
index 6845ade..9a458dc 100644
--- a/conf/hbase-default.xml
+++ b/conf/hbase-default.xml
@@ -319,8 +319,7 @@
<description>The size of each block in the block cache.
Enable blockcaching on a per column family basis; see the BLOCKCACHE setting
in HColumnDescriptor. Blocks are kept in a java Soft Reference cache so are
- let go when high pressure on memory. Block caching is enabled by default
- as of hbase 0.19.0.
+ let go when high pressure on memory. Block caching is not enabled by default.
</description>
</property>
<property>
diff --git a/src/java/org/apache/hadoop/hbase/HColumnDescriptor.java b/src/java/org/apache/hadoop/hbase/HColumnDescriptor.java
index 5c06acb..183c3a6 100644
--- a/src/java/org/apache/hadoop/hbase/HColumnDescriptor.java
+++ b/src/java/org/apache/hadoop/hbase/HColumnDescriptor.java
@@ -103,7 +103,7 @@
/**
* Default setting for whether to use a block cache or not.
*/
- public static final boolean DEFAULT_BLOCKCACHE = true;
+ public static final boolean DEFAULT_BLOCKCACHE = false;
/**
* Default setting for whether or not to use bloomfilters.
diff --git a/src/java/org/apache/hadoop/hbase/client/HConnectionManager.java b/src/java/org/apache/hadoop/hbase/client/HConnectionManager.java
index 3caf4dc..4c0ef5e 100644
--- a/src/java/org/apache/hadoop/hbase/client/HConnectionManager.java
+++ b/src/java/org/apache/hadoop/hbase/client/HConnectionManager.java
@@ -211,13 +211,13 @@
}
} catch (IOException e) {
- if(tries == numRetries - 1) {
+ if (tries == numRetries - 1) {
// This was our last chance - don't bother sleeping
break;
}
- LOG.info("Attempt " + tries + " of " + this.numRetries +
- " failed with <" + e + ">. Retrying after sleep of " +
- getPauseTime(tries));
+ LOG.info("getMaster attempt " + tries + " of " + this.numRetries +
+ " failed; retrying after sleep of " +
+ getPauseTime(tries), e);
}
// Cannot connect to master or it is not running. Sleep & retry
@@ -550,9 +550,9 @@
}
if (tries < numRetries - 1) {
if (LOG.isDebugEnabled()) {
- LOG.debug("Attempt " + tries + " of " + this.numRetries +
- " failed with <" + e + ">. Retrying after sleep of " +
- getPauseTime(tries));
+ LOG.debug("locateRegionInMeta attempt " + tries + " of " +
+ this.numRetries + " failed; retrying after sleep of " +
+ getPauseTime(tries), e);
}
relocateRegion(parentTable, metaKey);
} else {
@@ -747,7 +747,6 @@
HServerAddress rootRegionAddress = null;
for (int tries = 0; tries < numRetries; tries++) {
int localTimeouts = 0;
-
// ask the master which server has the root region
while (rootRegionAddress == null && localTimeouts < numRetries) {
rootRegionAddress = master.findRootRegion();
@@ -772,13 +771,12 @@
// get a connection to the region server
HRegionInterface server = getHRegionConnection(rootRegionAddress);
-
try {
// if this works, then we're good, and we have an acceptable address,
// so we can stop doing retries and return the result.
server.getRegionInfo(HRegionInfo.ROOT_REGIONINFO.getRegionName());
if (LOG.isDebugEnabled()) {
- LOG.debug("Found ROOT " + HRegionInfo.ROOT_REGIONINFO);
+ LOG.debug("Found ROOT at " + rootRegionAddress);
}
break;
} catch (IOException e) {
@@ -875,7 +873,8 @@
private HRegionLocation
getRegionLocationForRowWithRetries(byte[] tableName, byte[] rowKey,
- boolean reload) throws IOException {
+ boolean reload)
+ throws IOException {
getMaster();
List<Throwable> exceptions = new ArrayList<Throwable>();
HRegionLocation location = null;
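The HConnectionManager hunks above retry failed master and meta lookups, sleeping `getPauseTime(tries)` between attempts. A minimal sketch of that backoff pattern (the multiplier table and base pause here are hypothetical stand-ins, not HBase's actual values):

```java
public class RetrySketch {
    // Hypothetical backoff table in the spirit of getPauseTime():
    // sleep = basePause * BACKOFF[tries], capped at the table's last entry.
    static final long[] BACKOFF = {1, 1, 1, 2, 2, 4, 4, 8, 16, 32};

    static long getPauseTime(long basePauseMillis, int tries) {
        int idx = Math.min(Math.max(tries, 0), BACKOFF.length - 1);
        return basePauseMillis * BACKOFF[idx];
    }

    public static void main(String[] args) {
        // Early attempts retry quickly; later attempts back off further.
        System.out.println(getPauseTime(1000, 0));  // 1000
        System.out.println(getPauseTime(1000, 5));  // 4000
        System.out.println(getPauseTime(1000, 99)); // 32000 (capped)
    }
}
```

The log-message changes in the same hunks also pass the exception as the final `LOG.info(msg, e)` argument instead of string-concatenating it, which preserves the stack trace.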
diff --git a/src/java/org/apache/hadoop/hbase/io/BlockFSInputStream.java b/src/java/org/apache/hadoop/hbase/io/BlockFSInputStream.java
index b79451b..d3fa956 100644
--- a/src/java/org/apache/hadoop/hbase/io/BlockFSInputStream.java
+++ b/src/java/org/apache/hadoop/hbase/io/BlockFSInputStream.java
@@ -53,10 +53,9 @@
public Thread newThread(Runnable r) {
Thread t = new Thread(r);
t.setDaemon(true);
- t.setName("BlockFSInputStream referenceQueue Checker");
+ t.setName("BlockFSInputStreamReferenceQueueChecker");
return t;
}
-
});
/*
@@ -92,7 +91,7 @@
}
this.fileLength = fileLength;
this.blockSize = blockSize;
- // a memory-sensitive map that has soft references to values
+ // A memory-sensitive map that has soft references to values
this.blocks = new SoftValueMap<Long, byte []>() {
private long hits, misses;
public byte [] get(Object key) {
@@ -111,14 +110,14 @@
};
// Register a Runnable that runs checkReferences on a period.
final int hashcode = hashCode();
- this.registration = EXECUTOR.scheduleAtFixedRate(new Runnable() {
+ this.registration = EXECUTOR.scheduleWithFixedDelay(new Runnable() {
public void run() {
int cleared = checkReferences();
if (LOG.isDebugEnabled() && cleared > 0) {
- LOG.debug("Cleared " + cleared + " in " + hashcode);
+ LOG.debug("Checker cleared " + cleared + " in " + hashcode);
}
}
- }, 10, 10, TimeUnit.SECONDS);
+ }, 1, 1, TimeUnit.SECONDS);
}
@Override
@@ -214,6 +213,10 @@
if (!this.registration.cancel(false)) {
LOG.warn("Failed cancel of " + this.registration);
}
+ int cleared = checkReferences();
+ if (LOG.isDebugEnabled() && cleared > 0) {
+ LOG.debug("Close cleared " + cleared + " in " + hashCode());
+ }
if (blockStream != null) {
blockStream.close();
blockStream = null;
@@ -246,7 +249,7 @@
* @return Count of references cleared.
*/
public synchronized int checkReferences() {
- if (closed || this.blocks == null) {
+ if (this.closed) {
return 0;
}
return this.blocks.checkReferences();
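The BlockFSInputStream change swaps `scheduleAtFixedRate` for `scheduleWithFixedDelay` and tightens the period to one second. The distinction matters when a run can take longer than its period: fixed delay measures the gap from the end of one run to the start of the next, so a slow checker never stacks up back-to-back runs. A self-contained sketch of the fixed-delay scheduling and `cancel(false)` cleanup (class and method names are illustrative):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class SchedulingSketch {
    // With scheduleWithFixedDelay, the next run is scheduled relative to the
    // *end* of the previous run, so an overlong run delays (rather than
    // piles up) subsequent runs, unlike scheduleAtFixedRate.
    static boolean runsAtLeastOnce() throws InterruptedException {
        ScheduledExecutorService exec = Executors.newScheduledThreadPool(1);
        CountDownLatch firstRun = new CountDownLatch(1);
        ScheduledFuture<?> handle =
            exec.scheduleWithFixedDelay(firstRun::countDown, 0, 1, TimeUnit.SECONDS);
        boolean ran = firstRun.await(5, TimeUnit.SECONDS);
        handle.cancel(false); // mirrors registration.cancel(false) in close()
        exec.shutdownNow();
        return ran;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runsAtLeastOnce()); // → true
    }
}
```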
diff --git a/src/java/org/apache/hadoop/hbase/ipc/HBaseServer.java b/src/java/org/apache/hadoop/hbase/ipc/HBaseServer.java
index 1395f4b..6daad60 100644
--- a/src/java/org/apache/hadoop/hbase/ipc/HBaseServer.java
+++ b/src/java/org/apache/hadoop/hbase/ipc/HBaseServer.java
@@ -75,7 +75,8 @@
public static final ByteBuffer HEADER = ByteBuffer.wrap("hrpc".getBytes());
// 1 : Introduce ping and server does not throw away RPCs
- public static final byte CURRENT_VERSION = 2;
+ // 3 : RPC was refactored in 0.19
+ public static final byte CURRENT_VERSION = 3;
/**
* How many calls/handler are allowed in the queue.
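Bumping `CURRENT_VERSION` to 3 lets the server reject pre-0.19 clients at the handshake instead of trying to parse their requests, which is how an old client could drive the master out of memory (HBASE-1134). A hypothetical sketch of that version gate (the check itself is an illustration, not the actual HBaseServer handshake code):

```java
public class VersionCheckSketch {
    // 1 : Introduce ping and server does not throw away RPCs
    // 3 : RPC was refactored in 0.19
    static final byte CURRENT_VERSION = 3;

    // Reject an incompatible client up front rather than misparsing its
    // requests into huge bogus allocations.
    static boolean accepts(byte clientVersion) {
        return clientVersion == CURRENT_VERSION;
    }

    public static void main(String[] args) {
        System.out.println(accepts((byte) 3)); // true
        System.out.println(accepts((byte) 2)); // false: 0.18-era client
    }
}
```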
diff --git a/src/java/org/apache/hadoop/hbase/master/RegionManager.java b/src/java/org/apache/hadoop/hbase/master/RegionManager.java
index 0db9f83..cffbd16 100644
--- a/src/java/org/apache/hadoop/hbase/master/RegionManager.java
+++ b/src/java/org/apache/hadoop/hbase/master/RegionManager.java
@@ -167,9 +167,7 @@
/*
* Assigns regions to region servers attempting to balance the load across
- * all region servers
- *
- * Note that no synchronization is necessary as the caller
+ * all region servers. Note that no synchronization is necessary as the caller
* (ServerManager.processMsgs) already owns the monitor for the RegionManager.
*
* @param info
@@ -1174,10 +1172,10 @@
}
synchronized void setClosed() {
- if (!pendingClose) {
+ if (!pendingClose && !pendingOpen) {
throw new IllegalStateException(
"Cannot set a region to be closed if it was not already marked as" +
- " pending close. State: " + toString());
+ " pending close or pending open. State: " + toString());
}
this.unassigned = false;
this.pendingOpen = false;
diff --git a/src/java/org/apache/hadoop/hbase/master/ServerManager.java b/src/java/org/apache/hadoop/hbase/master/ServerManager.java
index 5a68bb1..9968ae0 100644
--- a/src/java/org/apache/hadoop/hbase/master/ServerManager.java
+++ b/src/java/org/apache/hadoop/hbase/master/ServerManager.java
@@ -81,6 +81,11 @@
// Last time we logged average load.
private volatile long lastLogOfAverageLaod = 0;
private final long loggingPeriodForAverageLoad;
+
+  /* The regionserver will not be assigned or asked to close regions if it
+   * is currently opening >= this many regions.
+   */
+ private final int nobalancingCount;
/**
* @param master
@@ -92,6 +97,8 @@
15 * 1000));
this.loggingPeriodForAverageLoad = master.getConfiguration().
getLong("hbase.master.avgload.logging.period", 60000);
+ this.nobalancingCount = master.getConfiguration().
+ getInt("hbase.regions.nobalancing.count", 4);
}
/*
@@ -330,7 +337,14 @@
}
}
- /** RegionServer is checking in, no exceptional circumstances */
+ /* RegionServer is checking in, no exceptional circumstances
+ * @param serverName
+ * @param serverInfo
+ * @param mostLoadedRegions
+ * @param msgs
+ * @return
+ * @throws IOException
+ */
private HMsg[] processRegionServerAllsWell(String serverName,
HServerInfo serverInfo, HRegionInfo[] mostLoadedRegions, HMsg[] msgs)
throws IOException {
@@ -350,7 +364,6 @@
// and the load on this server has changed
synchronized (loadToServers) {
Set<String> servers = loadToServers.get(load);
-
// Note that servers should never be null because loadToServers
// and serversToLoad are manipulated in pairs
servers.remove(serverName);
@@ -370,6 +383,7 @@
servers.add(serverName);
loadToServers.put(load, servers);
}
+
// Next, process messages for this server
return processMsgs(serverName, serverInfo, mostLoadedRegions, msgs);
}
@@ -389,11 +403,13 @@
"hbase-958 debugging");
}
// Get reports on what the RegionServer did.
+ int openingCount = 0;
for (int i = 0; i < incomingMsgs.length; i++) {
HRegionInfo region = incomingMsgs[i].getRegionInfo();
LOG.info("Received " + incomingMsgs[i] + " from " + serverName);
switch (incomingMsgs[i].getType()) {
case MSG_REPORT_PROCESS_OPEN:
+ openingCount++;
break;
case MSG_REPORT_OPEN:
@@ -425,11 +441,15 @@
}
// Figure out what the RegionServer ought to do, and write back.
- master.regionManager.assignRegions(serverInfo, serverName,
+
+    // Should we tell it to close regions because it's overloaded? If it's
+    // currently opening regions, leave it alone till all are open.
+ if (openingCount < this.nobalancingCount) {
+ this.master.regionManager.assignRegions(serverInfo, serverName,
mostLoadedRegions, returnMsgs);
-
+ }
// Send any pending table actions.
- master.regionManager.applyActions(serverInfo, returnMsgs);
+ this.master.regionManager.applyActions(serverInfo, returnMsgs);
}
return returnMsgs.toArray(new HMsg[returnMsgs.size()]);
}
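The ServerManager change above (for HBASE-1124) counts `MSG_REPORT_PROCESS_OPEN` messages and skips balancing while a regionserver still has that many opens in flight. A reduced sketch of the gate, with a stand-in enum for `HMsg.Type`:

```java
import java.util.Arrays;

public class BalancingGateSketch {
    // Hypothetical stand-in for the relevant HMsg types.
    enum MsgType { MSG_REPORT_PROCESS_OPEN, MSG_REPORT_OPEN, MSG_REPORT_CLOSE }

    // Mirror the openingCount < nobalancingCount test: only balance a server
    // that is not busy opening regions.
    static boolean shouldBalance(MsgType[] incomingMsgs, int nobalancingCount) {
        long opening = Arrays.stream(incomingMsgs)
            .filter(m -> m == MsgType.MSG_REPORT_PROCESS_OPEN)
            .count();
        return opening < nobalancingCount;
    }

    public static void main(String[] args) {
        MsgType[] busy = {
            MsgType.MSG_REPORT_PROCESS_OPEN, MsgType.MSG_REPORT_PROCESS_OPEN,
            MsgType.MSG_REPORT_PROCESS_OPEN, MsgType.MSG_REPORT_PROCESS_OPEN};
        MsgType[] quiet = {MsgType.MSG_REPORT_OPEN};
        System.out.println(shouldBalance(busy, 4));  // false: leave it alone
        System.out.println(shouldBalance(quiet, 4)); // true
    }
}
```

The threshold defaults to 4 via the new `hbase.regions.nobalancing.count` configuration key.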
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java b/src/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
index a17910b..a3fc106 100644
--- a/src/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
+++ b/src/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
@@ -696,8 +696,9 @@
private boolean checkOOME(final Throwable e) {
boolean stop = false;
if (e instanceof OutOfMemoryError ||
- (e.getCause()!= null && e.getCause() instanceof OutOfMemoryError) ||
- e.getMessage().contains("java.lang.OutOfMemoryError")) {
+ (e.getCause() != null && e.getCause() instanceof OutOfMemoryError) ||
+ (e.getMessage() != null &&
+ e.getMessage().contains("java.lang.OutOfMemoryError"))) {
LOG.fatal("OutOfMemoryError, aborting.", e);
abort();
stop = true;
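The checkOOME fix guards `getMessage()` with a null check, since `Throwable.getMessage()` can legitimately return null and the old code would NPE on such exceptions. The patched predicate, extracted into a self-contained form:

```java
public class OomeCheckSketch {
    // Null-safe OOME detection: an exception may be an OutOfMemoryError, wrap
    // one as its cause, or merely mention one in its (possibly null) message.
    static boolean looksLikeOome(Throwable e) {
        return e instanceof OutOfMemoryError
            || (e.getCause() != null && e.getCause() instanceof OutOfMemoryError)
            || (e.getMessage() != null
                && e.getMessage().contains("java.lang.OutOfMemoryError"));
    }

    public static void main(String[] args) {
        System.out.println(looksLikeOome(new OutOfMemoryError()));              // true
        System.out.println(looksLikeOome(new RuntimeException((String) null))); // false, no NPE
        System.out.println(looksLikeOome(
            new java.io.IOException("java.lang.OutOfMemoryError: heap")));      // true
    }
}
```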
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/HStore.java b/src/java/org/apache/hadoop/hbase/regionserver/HStore.java
index 5509678..a9e86b3 100644
--- a/src/java/org/apache/hadoop/hbase/regionserver/HStore.java
+++ b/src/java/org/apache/hadoop/hbase/regionserver/HStore.java
@@ -866,8 +866,10 @@
return null;
}
int len = 0;
- for (FileStatus fstatus:fs.listStatus(path)) {
- len += fstatus.getLen();
+ // listStatus can come back null.
+ FileStatus [] fss = this.fs.listStatus(path);
+ for (int ii = 0; fss != null && ii < fss.length; ii++) {
+ len += fss[ii].getLen();
}
fileSizes[i] = len;
totalSize += len;
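The HStore fix (part of HBASE-1119's hardening) accounts for `FileSystem.listStatus` returning null rather than an empty array when a path is missing. A dependency-free sketch of the defensive summing loop, using plain longs in place of `FileStatus` objects:

```java
public class ListStatusSketch {
    // listStatus-style APIs may return null instead of an empty array for a
    // missing directory; the loop condition checks for null before indexing,
    // as the patched code does.
    static long totalLength(long[] fileLens) {
        long len = 0;
        for (int ii = 0; fileLens != null && ii < fileLens.length; ii++) {
            len += fileLens[ii];
        }
        return len;
    }

    public static void main(String[] args) {
        System.out.println(totalLength(new long[]{10, 32})); // 42
        System.out.println(totalLength(null));               // 0, no NPE
    }
}
```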
diff --git a/src/java/org/apache/hadoop/hbase/regionserver/LogRoller.java b/src/java/org/apache/hadoop/hbase/regionserver/LogRoller.java
index 5c1c4dc..f39df01 100644
--- a/src/java/org/apache/hadoop/hbase/regionserver/LogRoller.java
+++ b/src/java/org/apache/hadoop/hbase/regionserver/LogRoller.java
@@ -68,14 +68,18 @@
}
} catch (FailedLogCloseException e) {
LOG.fatal("Forcing server shutdown", e);
+ server.checkFileSystem();
server.abort();
} catch (java.net.ConnectException e) {
LOG.fatal("Forcing server shutdown", e);
+ server.checkFileSystem();
server.abort();
} catch (IOException ex) {
- LOG.error("Log rolling failed with ioe: ",
- RemoteExceptionHandler.checkIOException(ex));
+ LOG.fatal("Log rolling failed with ioe: ",
+ RemoteExceptionHandler.checkIOException(ex));
server.checkFileSystem();
+ // Abort if we get here. We probably won't recover an IOE. HBASE-1132
+ server.abort();
} catch (Exception ex) {
LOG.error("Log rolling failed", ex);
server.checkFileSystem();
@@ -123,4 +127,4 @@
rollLock.unlock();
}
}
-}
\ No newline at end of file
+}
diff --git a/src/webapps/master/WEB-INF/web.xml b/src/webapps/master/WEB-INF/web.xml
deleted file mode 100644
index 6f1d799..0000000
--- a/src/webapps/master/WEB-INF/web.xml
+++ /dev/null
@@ -1,43 +0,0 @@
-<?xml version="1.0" encoding="ISO-8859-1"?>
-
-<!DOCTYPE web-app
- PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
- "http://java.sun.com/dtd/web-app_2_3.dtd">
-<!--
-Automatically created by Tomcat JspC.
--->
-<web-app>
-
-
- <servlet>
- <servlet-name>org.apache.hadoop.hbase.generated.master.master_jsp</servlet-name>
- <servlet-class>org.apache.hadoop.hbase.generated.master.master_jsp</servlet-class>
- </servlet>
-
- <servlet>
- <servlet-name>org.apache.hadoop.hbase.generated.master.table_jsp</servlet-name>
- <servlet-class>org.apache.hadoop.hbase.generated.master.table_jsp</servlet-class>
- </servlet>
-
- <servlet>
- <servlet-name>org.apache.hadoop.hbase.generated.master.regionhistorian_jsp</servlet-name>
- <servlet-class>org.apache.hadoop.hbase.generated.master.regionhistorian_jsp</servlet-class>
- </servlet>
-
- <servlet-mapping>
- <servlet-name>org.apache.hadoop.hbase.generated.master.master_jsp</servlet-name>
- <url-pattern>/master.jsp</url-pattern>
- </servlet-mapping>
-
- <servlet-mapping>
- <servlet-name>org.apache.hadoop.hbase.generated.master.table_jsp</servlet-name>
- <url-pattern>/table.jsp</url-pattern>
- </servlet-mapping>
-
- <servlet-mapping>
- <servlet-name>org.apache.hadoop.hbase.generated.master.regionhistorian_jsp</servlet-name>
- <url-pattern>/regionhistorian.jsp</url-pattern>
- </servlet-mapping>
-
-</web-app>
-
diff --git a/src/webapps/regionserver/WEB-INF/web.xml b/src/webapps/regionserver/WEB-INF/web.xml
deleted file mode 100644
index e5ed96a..0000000
--- a/src/webapps/regionserver/WEB-INF/web.xml
+++ /dev/null
@@ -1,23 +0,0 @@
-<?xml version="1.0" encoding="ISO-8859-1"?>
-
-<!DOCTYPE web-app
- PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
- "http://java.sun.com/dtd/web-app_2_3.dtd">
-<!--
-Automatically created by Tomcat JspC.
--->
-<web-app>
-
-
- <servlet>
- <servlet-name>org.apache.hadoop.hbase.generated.regionserver.regionserver_jsp</servlet-name>
- <servlet-class>org.apache.hadoop.hbase.generated.regionserver.regionserver_jsp</servlet-class>
- </servlet>
-
- <servlet-mapping>
- <servlet-name>org.apache.hadoop.hbase.generated.regionserver.regionserver_jsp</servlet-name>
- <url-pattern>/regionserver.jsp</url-pattern>
- </servlet-mapping>
-
-</web-app>
-
diff --git a/src/webapps/rest/WEB-INF/web.xml b/src/webapps/rest/WEB-INF/web.xml
deleted file mode 100644
index f9db246..0000000
--- a/src/webapps/rest/WEB-INF/web.xml
+++ /dev/null
@@ -1,14 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<web-app id="WebApp_ID" version="2.4" xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd">
- <display-name>rest</display-name>
- <servlet>
- <description>Hbase REST Interface</description>
- <display-name>api</display-name>
- <servlet-name>api</servlet-name>
- <servlet-class>org.apache.hadoop.hbase.rest.Dispatcher</servlet-class>
- </servlet>
- <servlet-mapping>
- <servlet-name>api</servlet-name>
- <url-pattern>/*</url-pattern>
- </servlet-mapping>
-</web-app>