<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<META http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Hadoop 1.2.1 Release Notes</title>
<STYLE type="text/css">
H1 {font-family: sans-serif}
H2 {font-family: sans-serif; margin-left: 7mm}
TABLE {margin-left: 7mm}
</STYLE>
</head>
<body>
<h1>Hadoop 1.2.1 Release Notes</h1>
These release notes include new developer and user-facing incompatibilities, features, and major improvements.
<a name="changes"/>
<h2>Changes since Hadoop 1.2.0</h2>
<h3>Jiras with Release Notes (describe major or incompatible changes)</h3>
<ul>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3859">MAPREDUCE-3859</a>.
Major bug reported by sergeant and fixed by sergeant (capacity-sched)<br>
<b>CapacityScheduler incorrectly utilizes extra-resources of queue for high-memory jobs</b><br>
<blockquote> Fixed wrong CapacityScheduler resource allocation for high memory consumption jobs
</blockquote></li>
</ul>
<h3>Other Jiras (describe bug fixes and minor changes)</h3>
<ul>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9504">HADOOP-9504</a>.
Critical bug reported by xieliang007 and fixed by xieliang007 (metrics)<br>
<b>MetricsDynamicMBeanBase has concurrency issues in createMBeanInfo</b><br>
<blockquote>Please see HBASE-8416 for detailed information.<br>We need to take care of the synchronization for HashMap put(); otherwise it may lead to a spin loop.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9665">HADOOP-9665</a>.
Critical bug reported by zjshen and fixed by zjshen <br>
<b>BlockDecompressorStream#decompress will throw EOFException instead of return -1 when EOF</b><br>
<blockquote>BlockDecompressorStream#decompress ultimately calls rawReadInt, which will throw EOFException instead of returning -1 when encountering the end of a stream. Then, decompress will be called by read. However, InputStream#read is supposed to return -1 instead of throwing EOFException to indicate the end of a stream. This explains why in LineReader,<br>{code}<br> if (bufferPosn &gt;= bufferLength) {<br> startPosn = bufferPosn = 0;<br> if (prevCharCR)<br> ++bytesConsumed; //account for CR from ...</blockquote> (A minimal sketch of the read contract appears after this list.)</li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9730">HADOOP-9730</a>.
Major bug reported by gkesavan and fixed by gkesavan (build)<br>
<b>fix hadoop.spec to add task-log4j.properties </b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4261">HDFS-4261</a>.
Major bug reported by szetszwo and fixed by djp (balancer)<br>
<b>TestBalancerWithNodeGroup times out</b><br>
<blockquote>When I manually ran TestBalancerWithNodeGroup, it always timed out on my machine. Looking at the Jenkins report [build #3573|https://builds.apache.org/job/PreCommit-HDFS-Build/3573//testReport/org.apache.hadoop.hdfs.server.balancer/], TestBalancerWithNodeGroup somehow was skipped so that the problem was not detected.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4581">HDFS-4581</a>.
Major bug reported by rohit_kochar and fixed by rohit_kochar (datanode)<br>
<b>DataNode#checkDiskError should not be called on network errors</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4699">HDFS-4699</a>.
Major bug reported by cnauroth and fixed by cnauroth (test)<br>
<b>TestPipelinesFailover#testPipelineRecoveryStress fails sporadically</b><br>
<blockquote>I have seen {{TestPipelinesFailover#testPipelineRecoveryStress}} fail sporadically due to timeout during {{loopRecoverLease}}, which waits for up to 30 seconds before timing out.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4880">HDFS-4880</a>.
Major bug reported by arpitagarwal and fixed by sureshms (namenode)<br>
<b>Diagnostic logging while loading name/edits files</b><br>
<blockquote>Add some minimal diagnostic logging to help determine location of the files being loaded.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4838">MAPREDUCE-4838</a>.
Major improvement reported by acmurthy and fixed by zjshen <br>
<b>Add extra info to JH files</b><br>
<blockquote>It will be useful to add more task-info to JH for analytics.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5148">MAPREDUCE-5148</a>.
Major bug reported by yeshavora and fixed by acmurthy (tasktracker)<br>
<b>Syslog missing from Map/Reduce tasks</b><br>
<blockquote>MAPREDUCE-4970 introduced an incompatible change that causes the syslog to be missing from map/reduce tasks on old clusters which just have log4j.properties configured</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5206">MAPREDUCE-5206</a>.
Minor bug reported by acmurthy and fixed by acmurthy <br>
<b>JT can show the same job multiple times in Retired Jobs section</b><br>
<blockquote>JT can show the same job multiple times in the Retired Jobs section since the RetireJobs thread has a bug which adds the same job multiple times to the collection of retired jobs.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5256">MAPREDUCE-5256</a>.
Major bug reported by vinodkv and fixed by vinodkv <br>
<b>CombineInputFormat isn&apos;t thread safe affecting HiveServer</b><br>
<blockquote>This was originally fixed as part of MAPREDUCE-5038, but that got reverted now. Which uncovers this issue, breaking HiveServer. Originally reported by [~thejas].</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5260">MAPREDUCE-5260</a>.
Major bug reported by zhaoyunjiong and fixed by zhaoyunjiong (tasktracker)<br>
<b>Job failed because of JvmManager running into inconsistent state</b><br>
<blockquote>In our cluster, jobs failed due to randomly task initialization failed because of JvmManager running into inconsistent state and TaskTracker failed to exit:<br><br>java.lang.Throwable: Child Error<br> at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)<br>Caused by: java.lang.NullPointerException<br> at org.apache.hadoop.mapred.JvmManager$JvmManagerForType.getDetails(JvmManager.java:402)<br> at org.apache.hadoop.mapred.JvmManager$JvmManagerForType.reapJvm(JvmManager.java:387)<br> at org.apache.hadoop....</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5318">MAPREDUCE-5318</a>.
Minor bug reported by bohou and fixed by bohou (jobtracker)<br>
<b>Ampersand in JSPUtil.java is not escaped</b><br>
<blockquote>The malformed URLs cause Hue to crash. The malformed URLs are caused by the unescaped ampersand &quot;&amp;&quot;.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5351">MAPREDUCE-5351</a>.
Critical bug reported by sandyr and fixed by sandyr (jobtracker)<br>
<b>JobTracker memory leak caused by CleanupQueue reopening FileSystem</b><br>
<blockquote>When a job is completed, closeAllForUGI is called to close all the cached FileSystems in the FileSystem cache. However, the CleanupQueue may run after this occurs and call FileSystem.get() to delete the staging directory, adding a FileSystem to the cache that will never be closed.<br><br>People on the user-list have reported this causing their JobTrackers to OOME every two weeks.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5364">MAPREDUCE-5364</a>.
Major bug reported by kkambatl and fixed by kkambatl <br>
<b>Deadlock between RenewalTimerTask methods cancel() and run()</b><br>
<blockquote>MAPREDUCE-4860 introduced a local variable {{cancelled}} in {{RenewalTimerTask}} to fix the race where {{DelegationTokenRenewal}} attempts to renew a token even after the job is removed. However, the patch also makes {{run()}} and {{cancel()}} synchronized methods leading to a potential deadlock against {{run()}}&apos;s catch-block (error-path).<br><br>The deadlock stacks below:<br><br>{noformat}<br> - org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal$RenewalTimerTask.cancel() @bci=0, line=240 (I...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5368">MAPREDUCE-5368</a>.
Major improvement reported by zhaoyunjiong and fixed by zhaoyunjiong (mrv1)<br>
<b>Save memory by setting capacity, load factor and concurrency level for ConcurrentHashMaps in TaskInProgress</b><br>
<blockquote>Below is a heap histogram from our JobTracker:<br><br> num #instances #bytes class name<br>----------------------------------------------<br> 1: 136048824 11347237456 [C<br> 2: 124156992 5959535616 java.util.concurrent.locks.ReentrantLock$NonfairSync<br> 3: 124156973 5959534704 java.util.concurrent.ConcurrentHashMap$Segment<br> 4: 135887753 5435510120 java.lang.String<br> 5: 124213692 3975044400 [Ljava.util.concurrent.ConcurrentHashMap$HashEntry;<br> 6: 637...</blockquote> (A hedged ConcurrentHashMap sizing sketch appears after this list.)</li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5375">MAPREDUCE-5375</a>.
Critical bug reported by venkatnrangan and fixed by venkatnrangan <br>
<b>Delegation Token renewal exception in jobtracker logs</b><br>
<blockquote>Filing on behalf of [~venkatnrangan] who found this originally and provided a patch.<br><br>Saw this in the JT logs while oozie tests were running with Hadoop.<br><br>When Oozie java action is executed, the following shows up in the job tracker log.<br><br>{code}<br>ERROR org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal: Exception renewing tokenIdent: 00 07 68 64 70 75 73 65 72 06 6d 61 70 72 65 64 26 6f 6f 7a 69 65 2f 63 6f 6e 64 6f 72 2d 73 65 63 2e 76 65 6e 6b 61 74 2e 6f 72 67 40 76 65 6e 6b ...</blockquote></li>
</ul>
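<p>The HADOOP-9665 entry above hinges on the InputStream#read contract: a decompressor stream must report end-of-stream by returning -1, never by throwing EOFException. Below is a minimal sketch of a caller that relies on that contract; it uses plain java.io only, and the class and method names are illustrative, not Hadoop code.</p>
<pre>
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadContract {
  // Drains a stream, relying on read() returning -1 at end of stream
  // rather than throwing EOFException -- the behaviour HADOOP-9665
  // restores for callers of BlockDecompressorStream.
  static long drain(InputStream in) throws IOException {
    byte[] buf = new byte[4096];
    long total = 0;
    int n;
    while ((n = in.read(buf)) != -1) { // -1, not EOFException, at EOF
      total += n;
    }
    return total;
  }

  public static void main(String[] args) throws IOException {
    System.out.println(drain(new ByteArrayInputStream(new byte[] {1, 2, 3})));
  }
}
</pre>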
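<p>The MAPREDUCE-5368 entry above works because java.util.concurrent.ConcurrentHashMap allocates 16 lock-striped segments per instance by default, which is exactly the per-instance overhead the ReentrantLock$NonfairSync and Segment rows of the histogram show. A hedged, standalone sketch of the sizing idea (not the actual TaskInProgress code):</p>
<pre>
import java.util.concurrent.ConcurrentHashMap;

public class SmallMaps {
  public static void main(String[] args) {
    // Defaults: initial capacity 16, load factor 0.75, concurrency
    // level 16 -- i.e. 16 segments, each with its own lock and table.
    ConcurrentHashMap&lt;String, Long&gt; big = new ConcurrentHashMap&lt;String, Long&gt;();

    // For maps holding a handful of entries with little write
    // contention, a small explicit capacity and a concurrency level
    // of 1 cut the overhead sharply when millions of maps exist.
    ConcurrentHashMap&lt;String, Long&gt; small =
        new ConcurrentHashMap&lt;String, Long&gt;(2, 0.75f, 1);

    big.put("attempt_001", 1L);
    small.put("attempt_001", 1L);
    System.out.println(big.size() + " " + small.size());
  }
}
</pre>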
<h2>Changes since Hadoop 1.1.2</h2>
<h3>Jiras with Release Notes (describe major or incompatible changes)</h3>
<ul>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7698">HADOOP-7698</a>.
Critical bug reported by daryn and fixed by daryn (build)<br>
<b>jsvc target fails on x86_64</b><br>
<blockquote> The jsvc build target is now supported for Mac OSX and other platforms as well.
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8164">HADOOP-8164</a>.
Major sub-task reported by sureshms and fixed by daryn (fs)<br>
<b>Handle paths using back slash as path separator for windows only</b><br>
<blockquote> This jira only allows providing paths using backslash as the separator on Windows. On *nix systems the backslash will still be treated as an escape character. The support for paths using backslash as the path separator will be removed in <a href="/jira/browse/HADOOP-8139" title="Path does not allow metachars to be escaped">HADOOP-8139</a> in release 0.23.3.
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8817">HADOOP-8817</a>.
Major sub-task reported by djp and fixed by djp <br>
<b>Backport Network Topology Extension for Virtualization (HADOOP-8468) to branch-1</b><br>
<blockquote> A new 4-layer network topology, NetworkTopologyWithNodeGroup, is available to make Hadoop more robust and efficient in virtualized environments.
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8971">HADOOP-8971</a>.
Major improvement reported by gopalv and fixed by gopalv (util)<br>
<b>Backport: hadoop.util.PureJavaCrc32 cache hit-ratio is low for static data (HADOOP-8926)</b><br>
<blockquote> Backport cache-aware improvements for PureJavaCrc32 from trunk (<a href="/jira/browse/HADOOP-8926" title="hadoop.util.PureJavaCrc32 cache hit-ratio is low for static data"><strike>HADOOP-8926</strike></a>)
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-385">HDFS-385</a>.
Major improvement reported by dhruba and fixed by dhruba <br>
<b>Design a pluggable interface to place replicas of blocks in HDFS</b><br>
<blockquote> New experimental API BlockPlacementPolicy allows investigating alternate rules for locating block replicas.
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3697">HDFS-3697</a>.
Minor improvement reported by tlipcon and fixed by tlipcon (datanode, performance)<br>
<b>Enable fadvise readahead by default</b><br>
<blockquote> The datanode now performs 4MB readahead by default when reading data from its disks, if the native libraries are present. This has been shown to improve performance in many workloads. The feature may be disabled by setting dfs.datanode.readahead.bytes to &quot;0&quot;.
</blockquote> (A configuration sketch appears after this list.)</li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4071">HDFS-4071</a>.
Minor sub-task reported by jingzhao and fixed by jingzhao (datanode, namenode)<br>
<b>Add number of stale DataNodes to metrics for Branch-1</b><br>
<blockquote> This jira adds a new metric named &quot;StaleDataNodes&quot; under the metrics context &quot;dfs&quot;, of type Gauge. It tracks the number of DataNodes marked as stale. A DataNode is marked stale when the heartbeat message from the DataNode is not received within the configured time &quot;dfs.namenode.stale.datanode.interval&quot;. <br/>
<br/>
<br/>
Please see the hdfs-default.xml documentation for &quot;dfs.namenode.stale.datanode.interval&quot; for more details on how to configure this feature. When the feature is not configured, this metric returns zero.
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4122">HDFS-4122</a>.
Major bug reported by sureshms and fixed by sureshms (datanode, hdfs-client, namenode)<br>
<b>Cleanup HDFS logs and reduce the size of logged messages</b><br>
<blockquote> The change from this jira changes the content of some of the log messages. No log messages are removed. Only the content of the log messages is changed to reduce their size. If you have a tool that depends on the exact content of the logs, please look at the patch and make appropriate updates to the tool.
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4320">HDFS-4320</a>.
Major improvement reported by mostafae and fixed by mostafae (datanode, namenode)<br>
<b>Add a separate configuration for namenode rpc address instead of only using fs.default.name</b><br>
<blockquote> The namenode RPC address is currently identified from the configuration &quot;fs.default.name&quot;. In some setups where the default FS is other than HDFS, &quot;fs.default.name&quot; cannot be used to get the namenode address. When such a setup co-exists with HDFS, with this change the namenode can be identified using a separate configuration parameter &quot;dfs.namenode.rpc-address&quot;. <br/>
<br/>
&quot;dfs.namenode.rpc-address&quot;, when configured, overrides fs.default.name for identifying namenode RPC address. <br/>
</blockquote> (A hedged configuration sketch appears after this list.)</li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4337">HDFS-4337</a>.
Major bug reported by djp and fixed by mgong@vmware.com (namenode)<br>
<b>Backport HDFS-4240 to branch-1: make sure nodes are not chosen for replica placement if some replica is already under the same nodegroup.</b><br>
<blockquote> Backport <a href="/jira/browse/HDFS-4240" title="In nodegroup-aware case, make sure nodes are avoided to place replica if some replica are already under the same nodegroup"><strike>HDFS-4240</strike></a> to branch-1
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4350">HDFS-4350</a>.
Major bug reported by andrew.wang and fixed by andrew.wang <br>
<b>Make enabling of stale marking on read and write paths independent</b><br>
<blockquote> This patch makes an incompatible configuration change, as described below: <br/>
In releases 1.1.0 and other point releases 1.1.x, the configuration parameter &quot;dfs.namenode.check.stale.datanode&quot; could be used to turn on checking for stale nodes. This configuration is no longer supported from release 1.2.0 onwards and has been renamed &quot;dfs.namenode.avoid.read.stale.datanode&quot;. <br/>
<br/>
How the feature works and how to configure it: <br/>
As described in <a href="/jira/browse/HDFS-3703" title="Decrease the datanode failure detection time"><strike>HDFS-3703</strike></a> release notes, datanode stale period can be configured using parameter &quot;dfs.namenode.stale.datanode.interval&quot; in seconds (default value is 30 seconds). NameNode can be configured to use this staleness information for reads using configuration &quot;dfs.namenode.avoid.read.stale.datanode&quot;. When this parameter is set to true, namenode picks a stale datanode as the last target to read from when returning block locations for reads. Using staleness information for writes is as described in the releases notes of <a href="/jira/browse/HDFS-3912" title="Detecting and avoiding stale datanodes for writing"><strike>HDFS-3912</strike></a>. <br/>
</blockquote> (A configuration sketch appears after this list.)</li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4519">HDFS-4519</a>.
Major bug reported by cnauroth and fixed by cnauroth (datanode, scripts)<br>
<b>Support override of jsvc binary and log file locations when launching secure datanode.</b><br>
<blockquote> With this improvement the following options are available in release 1.2.0 and later on 1.x release stream: <br/>
1. jsvc location can be overridden by setting the environment variable JSVC_HOME. Defaults to the jsvc binary packaged within the Hadoop distro. <br/>
2. jsvc log output is directed to the file defined by JSVC_OUTFILE. Defaults to $HADOOP_LOG_DIR/jsvc.out. <br/>
3. jsvc error output is directed to the file defined by JSVC_ERRFILE. Defaults to $HADOOP_LOG_DIR/jsvc.err. <br/>
<br/>
With this improvement the following options are available in release 2.0.4 and later on 2.x release stream: <br/>
1. jsvc log output is directed to the file defined by JSVC_OUTFILE. Defaults to $HADOOP_LOG_DIR/jsvc.out. <br/>
2. jsvc error output is directed to the file defined by JSVC_ERRFILE. Defaults to $HADOOP_LOG_DIR/jsvc.err. <br/>
<br/>
For overriding the jsvc location on 2.x releases, here are the release notes from <a href="/jira/browse/HDFS-2303" title="Unbundle jsvc"><strike>HDFS-2303</strike></a>: <br/>
To run secure Datanodes users must install jsvc for their platform and set JSVC_HOME to point to the location of jsvc in their environment. <br/>
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3678">MAPREDUCE-3678</a>.
Major new feature reported by bejoyks and fixed by qwertymaniac (mrv1, mrv2)<br>
<b>The Map tasks logs should have the value of input split it processed</b><br>
<blockquote> A map-task&#39;s syslog now carries basic info on the InputSplit it processed.
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4415">MAPREDUCE-4415</a>.
Major improvement reported by qwertymaniac and fixed by qwertymaniac (mrv1)<br>
<b>Backport the Job.getInstance methods from MAPREDUCE-1505 to branch-1</b><br>
<blockquote> Backported new APIs to get a Job object to 1.2.0 from 2.0.0. Job API static methods Job.getInstance(), Job.getInstance(Configuration) and Job.getInstance(Configuration, jobName) are now available across both releases to avoid porting pain.
</blockquote> (A usage sketch appears after this list.)</li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4451">MAPREDUCE-4451</a>.
Major bug reported by erik.fang and fixed by erik.fang (contrib/fair-share)<br>
<b>fairscheduler fail to init job with kerberos authentication configured</b><br>
<blockquote> With FairScheduler and security configured, job initialization fails. The problem is that threads in JobInitializer run as the RPC user instead of the jobtracker user; pre-starting all the threads fixes this bug.
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4565">MAPREDUCE-4565</a>.
Major improvement reported by kkambatl and fixed by kkambatl <br>
<b>Backport MR-2855 to branch-1: ResourceBundle lookup during counter name resolution takes a lot of time</b><br>
<blockquote> Pass a cached class-loader to the ResourceBundle creator to minimize counter-name lookup time.
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4737">MAPREDUCE-4737</a>.
Major bug reported by daijy and fixed by acmurthy <br>
<b> Hadoop does not close output file / does not call Mapper.cleanup if exception in map</b><br>
<blockquote> Ensure that the mapreduce APIs are semantically consistent with the mapred APIs w.r.t. Mapper.cleanup and Reducer.cleanup, in the sense that cleanup is now called even if there is an error. The old mapred API already ensures that Mapper.close and Reducer.close are invoked during error handling. Note that this is an incompatible change; however, end-users can override Mapper.run and Reducer.run to get the old (inconsistent) behaviour. (A hedged override sketch appears after this list.)
</blockquote></li>
</ul>
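<p>For the HDFS-3697 entry above, the readahead amount is an ordinary configuration key. A minimal sketch of turning the feature off, assuming nothing beyond the key named in the release note:</p>
<pre>
import org.apache.hadoop.conf.Configuration;

public class DisableReadahead {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // 4MB readahead is the 1.2 default when the native libraries are
    // present; per the release note, "0" disables the feature.
    conf.setLong("dfs.datanode.readahead.bytes", 0L);
    System.out.println(conf.get("dfs.datanode.readahead.bytes"));
  }
}
</pre>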
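<p>For the HDFS-4320 entry above, a hedged sketch of the override the note describes; the host names are illustrative only:</p>
<pre>
import org.apache.hadoop.conf.Configuration;

public class NameNodeRpcAddress {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // The default FS is something other than HDFS...
    conf.set("fs.default.name", "viewfs://cluster/");
    // ...so the namenode RPC endpoint is named explicitly; when set,
    // this key overrides fs.default.name for locating the namenode.
    conf.set("dfs.namenode.rpc-address", "namenode.example.com:8020");
    System.out.println(conf.get("dfs.namenode.rpc-address"));
  }
}
</pre>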
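<p>For the HDFS-4071 and HDFS-4350 entries above, a minimal sketch of enabling stale-read avoidance under the renamed 1.2.0 key; it assumes nothing beyond the keys quoted in the notes:</p>
<pre>
import org.apache.hadoop.conf.Configuration;

public class StaleNodeConfig {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Replaces the pre-1.2 "dfs.namenode.check.stale.datanode" key:
    // when true, the namenode orders stale datanodes last in the block
    // locations it returns for reads. Staleness itself is governed by
    // "dfs.namenode.stale.datanode.interval" (see HDFS-4071 above).
    conf.setBoolean("dfs.namenode.avoid.read.stale.datanode", true);
    System.out.println(
        conf.getBoolean("dfs.namenode.avoid.read.stale.datanode", false));
  }
}
</pre>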
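<p>For the MAPREDUCE-4415 entry above, the three backported factory methods in use. A minimal sketch, not a complete job driver; the job name is illustrative:</p>
<pre>
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class GetInstanceDemo {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    Job a = Job.getInstance();                  // fresh Configuration
    Job b = Job.getInstance(conf);              // caller-supplied conf
    Job c = Job.getInstance(conf, "wordcount"); // conf plus a job name
    System.out.println(c.getJobName());
  }
}
</pre>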
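<p>For the MAPREDUCE-4737 entry above, the note says end-users can override Mapper.run to keep the old behaviour. A hedged sketch of such an override, mirroring the historical default run() body in which an exception thrown by map() skips cleanup(); the class name and key/value types are illustrative:</p>
<pre>
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Opts back into the pre-1.2 semantics: cleanup() is reached only when
// every map() call succeeds, instead of running in a finally block.
public class OldStyleMapper
    extends Mapper&lt;LongWritable, Text, Text, LongWritable&gt; {
  @Override
  public void run(Context context) throws IOException, InterruptedException {
    setup(context);
    while (context.nextKeyValue()) {  // an exception thrown here or in
      map(context.getCurrentKey(),   // map() propagates immediately,
          context.getCurrentValue(), // bypassing cleanup()
          context);
    }
    cleanup(context); // only on success
  }
}
</pre>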
<h3>Other Jiras (describe bug fixes and minor changes)</h3>
<ul>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6496">HADOOP-6496</a>.
Minor bug reported by lars_francke and fixed by ivanmi <br>
<b>HttpServer sends wrong content-type for CSS files (and others)</b><br>
<blockquote>CSS files are sent as text/html, causing problems if the HTML page is rendered in standards mode. The HDFS interface for example still works because it is rendered in quirks mode; the HBase interface doesn&apos;t work because it is rendered in standards mode. See HBASE-2110 for more details.<br><br>I&apos;ve had a quick look at HttpServer but I&apos;m too unfamiliar with it to see the problem. I think this started happening with HADOOP-6441 which would lead me to believe that the filter is called for every request...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7096">HADOOP-7096</a>.
Major improvement reported by ahmed.radwan and fixed by ahmed.radwan <br>
<b>Allow setting of end-of-record delimiter for TextInputFormat</b><br>
<blockquote>The patch for https://issues.apache.org/jira/browse/MAPREDUCE-2254 required minor changes to the LineReader class to allow extensions (see attached 2.patch). Description copied below:<br><br>It will be useful to allow setting the end-of-record delimiter for TextInputFormat. The current implementation hardcodes &apos;\n&apos;, &apos;\r&apos; or &apos;\r\n&apos; as the only possible record delimiters. This is a problem if users have embedded newlines in their data fields (which is pretty common). This is also a problem for other ...</blockquote> (A hedged configuration sketch appears at the end of this section.)</li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7101">HADOOP-7101</a>.
Blocker bug reported by tlipcon and fixed by tlipcon (security)<br>
<b>UserGroupInformation.getCurrentUser() fails when called from non-Hadoop JAAS context</b><br>
<blockquote>If a Hadoop client is run from inside a container like Tomcat, and the current AccessControlContext has a Subject associated with it that is not created by Hadoop, then UserGroupInformation.getCurrentUser() will throw NoSuchElementException, since it assumes that any Subject will have a hadoop User principal.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7688">HADOOP-7688</a>.
Major improvement reported by szetszwo and fixed by umamaheswararao <br>
<b>When a servlet filter throws an exception in init(..), the Jetty server fails silently.</b><br>
<blockquote>When a servlet filter throws a ServletException in init(..), the exception is logged by Jetty but not re-thrown to the caller. As a result, the Jetty server fails silently.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7754">HADOOP-7754</a>.
Major sub-task reported by tlipcon and fixed by tlipcon (native, performance)<br>
<b>Expose file descriptors from Hadoop-wrapped local FileSystems</b><br>
<blockquote>In HADOOP-7714, we determined that using fadvise inside of the MapReduce shuffle can yield very good performance improvements. But many parts of the shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and RawLocalFileSystems. This JIRA is to figure out how to allow RawLocalFileSystem to expose its FileDescriptor object without unnecessarily polluting the public APIs.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7827">HADOOP-7827</a>.
Trivial bug reported by davevr and fixed by davevr <br>
<b>jsp pages missing DOCTYPE</b><br>
<blockquote>The various jsp pages in the UI are all missing a DOCTYPE declaration. This causes the pages to render incorrectly on some browsers, such as IE9. Every UI page should have a valid tag, such as &lt;!DOCTYPE HTML&gt;, as their first line. There are 31 files that need to be changed, all in the core\src\webapps tree.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7836">HADOOP-7836</a>.
Minor bug reported by eli and fixed by daryn (ipc, test)<br>
<b>TestSaslRPC#testDigestAuthMethodHostBasedToken fails with hostname localhost.localdomain</b><br>
<blockquote>TestSaslRPC#testDigestAuthMethodHostBasedToken fails on branch-1 on some hosts.<br><br>null expected:&lt;localhost[]&gt; but was:&lt;localhost[.localdomain]&gt;<br>junit.framework.ComparisonFailure: null expected:&lt;localhost[]&gt; but was:&lt;localhost[.localdomain]&gt;<br><br>null expected:&lt;[localhost]&gt; but was:&lt;[eli-thinkpad]&gt;<br>junit.framework.ComparisonFailure: null expected:&lt;[localhost]&gt; but was:&lt;[eli-thinkpad]&gt;<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7868">HADOOP-7868</a>.
Major bug reported by javacruft and fixed by scurrilous (native)<br>
<b>Hadoop native fails to compile when default linker option is -Wl,--as-needed</b><br>
<blockquote>Recent releases of Ubuntu and Debian have switched to using --as-needed as default when linking binaries.<br><br>As a result the AC_COMPUTE_NEEDED_DSO fails to find the required DSO names during execution of configure resulting in a build failure.<br><br>Explicitly using &quot;-Wl,--no-as-needed&quot; in this macro when required resolves this issue.<br><br>See http://wiki.debian.org/ToolChain/DSOLinking for a few more details</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8023">HADOOP-8023</a>.
Critical new feature reported by tucu00 and fixed by tucu00 (conf)<br>
<b>Add unset() method to Configuration</b><br>
<blockquote>HADOOP-7001 introduced the *Configuration.unset(String)* method.<br><br>MAPREDUCE-3727 requires that method in order to be back-ported.<br><br>This is required to fix an issue manifested when running MR/Hive/Sqoop jobs from Oozie; details are in MAPREDUCE-3727.<br></blockquote> (A usage sketch appears at the end of this section.)</li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8249">HADOOP-8249</a>.
Major bug reported by bcwalrus and fixed by tucu00 (security)<br>
<b>invalid hadoop-auth cookies should trigger authentication if info is avail before returning HTTP 401</b><br>
<blockquote>WebHdfs gives out cookies. But when the client passes them back, it&apos;d sometimes reject them and return a HTTP 401 instead. (&quot;Sometimes&quot; as in after a restart.) The interesting thing is that if the client doesn&apos;t pass the cookie back, WebHdfs will be totally happy.<br><br>The correct behaviour should be to ignore the cookie if it looks invalid, and attempt to proceed with the request handling.<br><br>I haven&apos;t tried HttpFs to see whether it handles restart better.<br><br>Reproducing it with curl:<br>{noformat}<br>###...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8355">HADOOP-8355</a>.
Minor bug reported by tucu00 and fixed by tucu00 (security)<br>
<b>SPNEGO filter throws/logs exception when authentication fails</b><br>
<blockquote>If the auth-token is NULL, it means the authenticator has not authenticated the request and has already issued an UNAUTHORIZED response; there is no need to throw an exception and then immediately catch it and log it. The &apos;else throw&apos; can be removed.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8386">HADOOP-8386</a>.
Major bug reported by cberner and fixed by cberner (scripts)<br>
<b>hadoop script doesn&apos;t work if &apos;cd&apos; prints to stdout (default behavior in Ubuntu)</b><br>
<blockquote>if the &apos;hadoop&apos; script is run as &apos;bin/hadoop&apos; on a distro where the &apos;cd&apos; command prints to stdout, the script will fail due to this line: &apos;bin=`cd &quot;$bin&quot;; pwd`&apos;<br><br>Workaround: execute from the bin/ directory as &apos;./hadoop&apos;<br><br>Fix: change that line to &apos;bin=`cd &quot;$bin&quot; &gt; /dev/null; pwd`&apos;</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8423">HADOOP-8423</a>.
Major bug reported by jason98 and fixed by tlipcon (io)<br>
<b>MapFile.Reader.get() crashes jvm or throws EOFException on Snappy or LZO block-compressed data</b><br>
<blockquote>I am using Cloudera distribution cdh3u1.<br><br>When trying to check native codecs for better decompression<br>performance such as Snappy or LZO, I ran into issues with random<br>access using MapFile.Reader.get(key, value) method.<br>First call of MapFile.Reader.get() works but a second call fails.<br><br>Also I am getting different exceptions depending on number of entries<br>in a map file.<br>With LzoCodec and 10 record file, jvm gets aborted.<br><br>At the same time the DefaultCodec works fine for all cases, as well as<br>r...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8460">HADOOP-8460</a>.
Major bug reported by revans2 and fixed by revans2 (documentation)<br>
<b>Document proper setting of HADOOP_PID_DIR and HADOOP_SECURE_DN_PID_DIR</b><br>
<blockquote>We should document that in a properly setup cluster HADOOP_PID_DIR and HADOOP_SECURE_DN_PID_DIR should not point to /tmp, but should point to a directory that normal users do not have access to.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8512">HADOOP-8512</a>.
Minor bug reported by tucu00 and fixed by tucu00 (security)<br>
<b>AuthenticatedURL should reset the Token when the server returns other than OK on authentication</b><br>
<blockquote>Currently the token is not being reset and, if using AuthenticatedURL, it will keep sending the invalid token as a Cookie. There is no security concern with this; the main inconvenience is the logging being generated on the server side.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8580">HADOOP-8580</a>.
Major bug reported by ekoontz and fixed by <br>
<b>ant compile-native fails with automake version 1.11.3</b><br>
<blockquote>The following:<br><br>{code}<br>ant -d -v -DskipTests -Dcompile.native=true clean compile-native<br>{code}<br><br>works with GNU automake version 1.11.1, but fails with automake version 1.11.3. <br><br>Relevant lines of failure seem to be these:<br><br>{code}<br>[exec] make[1]: Leaving directory `/tmp/hadoop-common/build/native/Linux-amd64-64&apos;<br> [exec] Current OS is Linux<br> [exec] Executing &apos;sh&apos; with arguments:<br> [exec] &apos;/tmp/hadoop-common/build/native/Linux-amd64-64/libtool&apos;<br> [exec] &apos;--mode=install&apos;<br> [exec]...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8586">HADOOP-8586</a>.
Major bug reported by eli and fixed by eli <br>
<b>Fixup a bunch of SPNEGO misspellings</b><br>
<blockquote>SPNEGO is misspelled as &quot;SPENGO&quot; a bunch of places.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8587">HADOOP-8587</a>.
Minor bug reported by eli and fixed by eli (fs)<br>
<b>HarFileSystem access of harMetaCache isn&apos;t threadsafe</b><br>
<blockquote>HarFileSystem&apos;s use of the static harMetaCache map is not threadsafe. Credit to Todd for pointing this out.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8606">HADOOP-8606</a>.
Major bug reported by daryn and fixed by daryn (fs)<br>
<b>FileSystem.get may return the wrong filesystem</b><br>
<blockquote>{{FileSystem.get(URI, conf)}} will return the default fs if the scheme is null, regardless of whether the authority is null too. This causes URIs of &quot;//authority/path&quot; to _always_ refer to &quot;/path&quot; on the default fs. To the user, this appears to &quot;work&quot; if the authority in the null-scheme URI matches the authority of the default fs. When the authorities don&apos;t match, the user is very surprised that the default fs is used.</blockquote> (A sketch of the surprising case appears at the end of this section.)</li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8611">HADOOP-8611</a>.
Major bug reported by kihwal and fixed by robsparker (security)<br>
<b>Allow fall-back to the shell-based implementation when JNI-based users-group mapping fails</b><br>
<blockquote>When the JNI-based users-group mapping is enabled, the process/command will fail if the native library, libhadoop.so, cannot be found. This mostly happens at the client side, where users may use hadoop programmatically. Instead of failing, falling back to the shell-based implementation is desirable. Depending on how the cluster is configured, use of the native netgroup mapping cannot be substituted by the shell-based default. For this reason, this behavior must be configurable with the default bein...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8612">HADOOP-8612</a>.
Major bug reported by mattf and fixed by eli (fs)<br>
<b>Backport HADOOP-8599 to branch-1 (Non empty response when read beyond eof)</b><br>
<blockquote>When FileSystem.getFileBlockLocations(file,start,len) is called with &quot;start&quot; argument equal to the file size, the response is not empty. See HADOOP-8599 for details and tiny patch.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8613">HADOOP-8613</a>.
Critical bug reported by daryn and fixed by daryn <br>
<b>AbstractDelegationTokenIdentifier#getUser() should set token auth type</b><br>
<blockquote>{{AbstractDelegationTokenIdentifier#getUser()}} returns the UGI associated with a token. The UGI&apos;s auth type will either be SIMPLE for non-proxy tokens, or PROXY (effective user) and SIMPLE (real user). Instead of SIMPLE, it needs to be TOKEN.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8711">HADOOP-8711</a>.
Major improvement reported by brandonli and fixed by brandonli (ipc)<br>
<b>provide an option for IPC server users to avoid printing stack information for certain exceptions</b><br>
<blockquote>Currently it&apos;s hard coded in the server that it doesn&apos;t print the exception stack for StandbyException. <br><br>Similarly, other components may have their own exceptions which don&apos;t need to save the stack trace in log. One example is HDFS-3817.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8767">HADOOP-8767</a>.
Minor bug reported by surfercrs4 and fixed by surfercrs4 (bin)<br>
<b>secondary namenode on slave machines</b><br>
<blockquote>When the default value for HADOOP_SLAVES is changed in hadoop-env.sh, starting HDFS (with start-dfs.sh) creates secondary namenodes on all the machines listed in conf/slaves instead of conf/masters.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8781">HADOOP-8781</a>.
Major bug reported by tucu00 and fixed by tucu00 (scripts)<br>
<b>hadoop-config.sh should add JAVA_LIBRARY_PATH to LD_LIBRARY_PATH</b><br>
<blockquote>The Snappy shared object fails to load properly if LD_LIBRARY_PATH does not include the path where it is installed. This is observed in setups that don&apos;t have an independent Snappy installation (one not installed by Hadoop).</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8786">HADOOP-8786</a>.
Major bug reported by tlipcon and fixed by tlipcon <br>
<b>HttpServer continues to start even if AuthenticationFilter fails to init</b><br>
<blockquote>As seen in HDFS-3904, if the AuthenticationFilter fails to initialize, the web server will continue to start up. We need to check for context initialization errors after starting the server.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8791">HADOOP-8791</a>.
Major bug reported by bdechoux and fixed by jingzhao (documentation)<br>
<b>rm &quot;Only deletes non empty directory and files.&quot;</b><br>
<blockquote>The documentation (1.0.3) describes the opposite of what rm does.<br>It should be &quot;Only delete files and empty directories.&quot;<br><br>With regard to files, the size of the file should not matter, should it?<br><br>OR I am totally misunderstanding the semantics of this command, and I am not the only one.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8819">HADOOP-8819</a>.
Major bug reported by brandonli and fixed by brandonli (fs)<br>
<b>Should use &amp;&amp; instead of &amp; in a few places in FTPFileSystem,FTPInputStream,S3InputStream,ViewFileSystem,ViewFs</b><br>
<blockquote>Should use &amp;&amp; instead of &amp; in a few places in FTPFileSystem,FTPInputStream,S3InputStream,ViewFileSystem,ViewFs.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8820">HADOOP-8820</a>.
Major new feature reported by djp and fixed by djp (net)<br>
<b>Backport HADOOP-8469 and HADOOP-8470: add &quot;NodeGroup&quot; layer in new NetworkTopology (also known as NetworkTopologyWithNodeGroup)</b><br>
<blockquote>This patch backports HADOOP-8469 and HADOOP-8470 to branch-1 and includes:<br>1. Make the NetworkTopology class pluggable for extension.<br>2. Implement a 4-layer NetworkTopology class (named NetworkTopologyWithNodeGroup) for use in virtualized environments (or other situations with an additional layer between host and rack).</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8832">HADOOP-8832</a>.
Major bug reported by brandonli and fixed by brandonli <br>
<b>backport serviceplugin to branch-1</b><br>
<blockquote>The original patch was only partially backported to branch-1. This JIRA is to backport the rest of it.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8861">HADOOP-8861</a>.
Major bug reported by amareshwari and fixed by amareshwari (fs)<br>
<b>FSDataOutputStream.sync should call flush() if the underlying wrapped stream is not Syncable</b><br>
<blockquote>Currently FSDataOutputStream.sync is a no-op if the wrapped stream is not Syncable. Instead, it should call flush() if the wrapped stream is not Syncable.<br><br>This behavior is already present in trunk, but branch-1 does not have it.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8900">HADOOP-8900</a>.
Major bug reported by slavik_krassovsky and fixed by adi2 <br>
<b>BuiltInGzipDecompressor throws IOException - stored gzip size doesn&apos;t match decompressed size</b><br>
<blockquote>Encountered failure when processing a large GZIP file<br>Gz: Failed in 1hrs, 13mins, 57sec with the error:<br> java.io.IOException: IO error in map input file hdfs://localhost:9000/Halo4/json_m/gz/NewFileCat.txt.gz<br> at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:242)<br> at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:216)<br> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)<br> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.j...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8917">HADOOP-8917</a>.
Major bug reported by arpitgupta and fixed by arpitgupta <br>
<b>add LOCALE.US to toLowerCase in SecurityUtil.replacePattern</b><br>
<blockquote>Webhdfs and fsck, when getting the kerberos principal, use Locale.US in toLowerCase. We should do the same in replacePattern, as this method is used when service principals log in.<br><br>see https://issues.apache.org/jira/browse/HADOOP-8878?focusedCommentId=13472245&amp;page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13472245 for more details</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8931">HADOOP-8931</a>.
Trivial improvement reported by eli and fixed by eli <br>
<b>Add Java version to startup message</b><br>
<blockquote>I often look at logs and have to track down the java version they were run with; it would be useful if we logged this as part of the startup message.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8951">HADOOP-8951</a>.
Minor improvement reported by stevel@apache.org and fixed by stevel@apache.org (util)<br>
<b>RunJar to fail with user-comprehensible error message if jar missing</b><br>
<blockquote>When the RunJar JAR is missing or not a file, exit with a meaningful message.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8963">HADOOP-8963</a>.
Trivial bug reported by billie.rinaldi and fixed by arpitgupta <br>
<b>CopyFromLocal doesn&apos;t always create user directory</b><br>
<blockquote>When you use the command &quot;hadoop fs -copyFromLocal filename .&quot; before the /user/username directory has been created, the file is created with name /user/username instead of a directory being created with file /user/username/filename. The command &quot;hadoop fs -copyFromLocal filename filename&quot; works as expected, creating /user/username and /user/username/filename, and &quot;hadoop fs -copyFromLocal filename .&quot; works as expected if the /user/username directory already exists.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8968">HADOOP-8968</a>.
Major improvement reported by tucu00 and fixed by tucu00 <br>
<b>Add a flag to completely disable the worker version check</b><br>
<blockquote>The current logic in the TaskTracker and the DataNode allows a relaxed version check with the JobTracker and NameNode only if the versions of Hadoop are exactly the same.<br><br>We should add a switch to disable version checking completely, to enable rolling upgrades between compatible versions (typically patch versions).</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8988">HADOOP-8988</a>.
Major new feature reported by jingzhao and fixed by jingzhao (conf)<br>
<b>Backport HADOOP-8343 to branch-1</b><br>
<blockquote>Backport HADOOP-8343 to branch-1 so as to specifically control the authorization requirements for accessing /jmx, /metrics, and /conf in branch-1.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9036">HADOOP-9036</a>.
Major bug reported by ivanmi and fixed by sureshms <br>
<b>TestSinkQueue.testConcurrentConsumers fails intermittently (Backports HADOOP-7292)</b><br>
<blockquote>org.apache.hadoop.metrics2.impl.TestSinkQueue.testConcurrentConsumers<br> <br><br>Error Message<br><br>should&apos;ve thrown<br>Stacktrace<br><br>junit.framework.AssertionFailedError: should&apos;ve thrown<br> at org.apache.hadoop.metrics2.impl.TestSinkQueue.shouldThrowCME(TestSinkQueue.java:229)<br> at org.apache.hadoop.metrics2.impl.TestSinkQueue.testConcurrentConsumers(TestSinkQueue.java:195)<br>Standard Output<br><br>2012-10-03 16:51:31,694 INFO impl.TestSinkQueue (TestSinkQueue.java:consume(243)) - sleeping<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9071">HADOOP-9071</a>.
Major improvement reported by gkesavan and fixed by gkesavan (build)<br>
<b>configure ivy log levels for resolve/retrieve</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9090">HADOOP-9090</a>.
Minor new feature reported by mostafae and fixed by mostafae (metrics)<br>
<b>Support on-demand publish of metrics</b><br>
<blockquote>Updated description based on feedback:<br><br>We have a need to publish metrics out of some short-lived processes, which is not really well-suited to the current metrics system implementation, which periodically publishes metrics asynchronously (a behavior that works great for long-lived processes). Of course I could write my own metrics system, but it seems like such a waste to rewrite all the awesome code currently in MetricsSystemImpl and supporting classes.<br>The way this JIRA solves this pr...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9095">HADOOP-9095</a>.
Minor bug reported by szetszwo and fixed by jingzhao (net)<br>
<b>TestNNThroughputBenchmark fails in branch-1</b><br>
<blockquote>{noformat}<br>java.lang.StringIndexOutOfBoundsException: String index out of range: 0<br> at java.lang.String.charAt(String.java:686)<br> at org.apache.hadoop.net.NetUtils.normalizeHostName(NetUtils.java:539)<br> at org.apache.hadoop.net.NetUtils.normalizeHostNames(NetUtils.java:562)<br> at org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:88)<br> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1047)<br> ...<br> at org...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9098">HADOOP-9098</a>.
Blocker bug reported by tomwhite and fixed by arpitagarwal (build)<br>
<b>Add missing license headers</b><br>
<blockquote>There are missing license headers in some source files (e.g. TestUnderReplicatedBlocks.java is one) according to the RAT report.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9099">HADOOP-9099</a>.
Minor bug reported by ivanmi and fixed by ivanmi (test)<br>
<b>NetUtils.normalizeHostName fails on domains where UnknownHost resolves to an IP address</b><br>
<blockquote>I just hit this failure. We should use a more unique string for &quot;UnknownHost&quot;:<br><br>Testcase: testNormalizeHostName took 0.007 sec<br> FAILED<br>expected:&lt;[65.53.5.181]&gt; but was:&lt;[UnknownHost]&gt;<br>junit.framework.AssertionFailedError: expected:&lt;[65.53.5.181]&gt; but was:&lt;[UnknownHost]&gt;<br> at org.apache.hadoop.net.TestNetUtils.testNormalizeHostName(TestNetUtils.java:347)<br><br>Will post a patch in a bit.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9124">HADOOP-9124</a>.
Minor bug reported by phunt and fixed by snihalani (io)<br>
<b>SortedMapWritable violates contract of Map interface for equals() and hashCode()</b><br>
<blockquote>This issue is similar to HADOOP-7153. It was found when using MRUnit - see MRUNIT-158, specifically https://issues.apache.org/jira/browse/MRUNIT-158?focusedCommentId=13501985&amp;page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13501985<br><br>--<br>o.a.h.io.SortedMapWritable implements the java.util.Map interface, however it does not define an implementation of the equals() or hashCode() methods; instead the default implementations in java.lang.Object are used.<br><br>This violates...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9154">HADOOP-9154</a>.
Major bug reported by kkambatl and fixed by kkambatl (io)<br>
<b>SortedMapWritable#putAll() doesn&apos;t add key/value classes to the map</b><br>
<blockquote>In the following code from {{SortedMapWritable}}, #putAll() doesn&apos;t add key/value classes to the class-id maps.<br><br>{code}<br><br> @Override<br> public Writable put(WritableComparable key, Writable value) {<br> addToMap(key.getClass());<br> addToMap(value.getClass());<br> return instance.put(key, value);<br> }<br><br> @Override<br> public void putAll(Map&lt;? extends WritableComparable, ? extends Writable&gt; t){<br> for (Map.Entry&lt;? extends WritableComparable, ? extends Writable&gt; e:<br> t.entrySet()) {<br> <br> ...</blockquote> (A hedged sketch of the fix appears at the end of this section.)</li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9174">HADOOP-9174</a>.
Major test reported by arpitagarwal and fixed by arpitagarwal (test)<br>
<b>TestSecurityUtil fails on Open JDK 7</b><br>
<blockquote>TestSecurityUtil.TestBuildTokenServiceSockAddr fails due to implicit dependency on the test case execution order.<br><br>Testcase: testBuildTokenServiceSockAddr took 0.003 sec<br> Caused an ERROR<br>expected:&lt;[127.0.0.1]:123&gt; but was:&lt;[localhost]:123&gt;<br> at org.apache.hadoop.security.TestSecurityUtil.testBuildTokenServiceSockAddr(TestSecurityUtil.java:133)<br><br><br>Similar bug exists in TestSecurityUtil.testBuildDTServiceName.<br><br>The root cause is that a helper routine (verifyAddress) used by some test cases has a ...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9175">HADOOP-9175</a>.
Major test reported by arpitagarwal and fixed by arpitagarwal (test)<br>
<b>TestWritableName fails with Open JDK 7</b><br>
<blockquote>TestWritableName.testAddName fails due to a test order execution dependency on testSetName.<br><br>java.io.IOException: WritableName can&apos;t load class: mystring<br>at org.apache.hadoop.io.WritableName.getClass(WritableName.java:73)<br>at org.apache.hadoop.io.TestWritableName.testAddName(TestWritableName.java:92)<br>Caused by: java.lang.ClassNotFoundException: mystring<br>at java.net.URLClassLoader$1.run(URLClassLoader.java:366)<br>at java.net.URLClassLoader$1.run(URLClassLoader.java:355)<br>at java.security.AccessCon...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9179">HADOOP-9179</a>.
Major bug reported by brandonli and fixed by brandonli <br>
<b>TestFileSystem fails with open JDK7</b><br>
<blockquote>This is a test order-dependency bug as pointed out in HADOOP-8390. This JIRA is to track the fix in branch-1.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9191">HADOOP-9191</a>.
Major bug reported by arpitagarwal and fixed by arpitagarwal (test)<br>
<b>TestAccessControlList and TestJobHistoryConfig fail with JDK7</b><br>
<blockquote>Individual test cases have dependencies on a specific order of execution and fail when the order is changed.<br><br>TestAccessControlList.testNetGroups relies on Groups being initialized with a hard-coded test class that subsequent test cases depend on.<br><br>TestJobHistoryConfig.testJobHistoryLogging fails to shutdown the MiniDFSCluster on exit.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9253">HADOOP-9253</a>.
Major improvement reported by arpitgupta and fixed by arpitgupta <br>
<b>Capture ulimit info in the logs at service start time</b><br>
<blockquote>The output of ulimit -a is helpful when debugging issues on the system.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9349">HADOOP-9349</a>.
Major bug reported by sandyr and fixed by sandyr (tools)<br>
<b>Confusing output when running hadoop version from one hadoop installation when HADOOP_HOME points to another</b><br>
<blockquote>Hadoop version X is downloaded to ~/hadoop-x, and Hadoop version Y is downloaded to ~/hadoop-y. HADOOP_HOME is set to hadoop-x. A user running hadoop-y/bin/hadoop might expect to be running the hadoop-y jars, but, because of HADOOP_HOME, will actually be running hadoop-x jars.<br><br>&quot;hadoop version&quot; could help clear this up a little by reporting the current HADOOP_HOME.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9369">HADOOP-9369</a>.
Major bug reported by kkambatl and fixed by kkambatl (net)<br>
<b>DNS#reverseDns() can return hostname with . appended at the end</b><br>
<blockquote>DNS#reverseDns uses javax.naming.InitialDirContext to do a reverse DNS lookup. This can sometimes return hostnames with a . at the end.<br><br>Saw this happen on hadoop-1: two nodes with tasktracker.dns.interface set to eth0</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9375">HADOOP-9375</a>.
Trivial bug reported by teledriver and fixed by sureshms (test)<br>
<b>Port HADOOP-7290 to branch-1 to fix TestUserGroupInformation failure</b><br>
<blockquote>Unit test failure in TestUserGroupInformation.testGetServerSideGroups. Port HADOOP-7290 to branch-1.1.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9379">HADOOP-9379</a>.
Trivial improvement reported by arpitgupta and fixed by arpitgupta <br>
<b>capture the ulimit info after printing the log to the console</b><br>
<blockquote>Based on the discussions in HADOOP-9253, people prefer that we don&apos;t print the ulimit info to the console but still have it in the logs.<br><br>We just need to move the head statement to before the ulimit-capture code.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9434">HADOOP-9434</a>.
Minor improvement reported by carp84 and fixed by carp84 (bin)<br>
<b>Backport HADOOP-9267 to branch-1</b><br>
<blockquote>Currently in hadoop 1.1.2, if a user issues &quot;bin/hadoop help&quot; on the command line, it throws the exception below. We can improve this to print the usage message.<br>===============================================<br>Exception in thread &quot;main&quot; java.lang.NoClassDefFoundError: help<br>===============================================<br><br>This issue is already resolved by HADOOP-9267 in trunk, so we only need to backport it into branch-1</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9451">HADOOP-9451</a>.
Major bug reported by djp and fixed by djp (net)<br>
<b>Node with one topology layer should be handled as fault topology when NodeGroup layer is enabled</b><br>
<blockquote>Currently, nodes with a one-layer topology are allowed to join a cluster that has the NodeGroup layer enabled, which causes some exception cases. <br>When the NodeGroup layer is enabled, the cluster should assume that at least a two-layer (Rack/NodeGroup) topology is valid for each node, and should throw an exception when a one-layer node tries to join.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9458">HADOOP-9458</a>.
Critical bug reported by szetszwo and fixed by szetszwo (ipc)<br>
<b>In branch-1, RPC.getProxy(..) may call proxy.getProtocolVersion(..) without retry</b><br>
<blockquote>RPC.getProxy(..) may call proxy.getProtocolVersion(..) without retry even when client has specified retry in the conf.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9467">HADOOP-9467</a>.
Major bug reported by cnauroth and fixed by cnauroth (metrics)<br>
<b>Metrics2 record filtering (.record.filter.include/exclude) does not filter by name</b><br>
<blockquote>Filtering by record considers only the record&apos;s tag for filtering and not the record&apos;s name.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9473">HADOOP-9473</a>.
Trivial bug reported by gmazza and fixed by (fs)<br>
<b>typo in FileUtil copy() method</b><br>
<blockquote>typo:<br>{code}<br>Index: src/core/org/apache/hadoop/fs/FileUtil.java<br>===================================================================<br>--- src/core/org/apache/hadoop/fs/FileUtil.java (revision 1467295)<br>+++ src/core/org/apache/hadoop/fs/FileUtil.java (working copy)<br>@@ -178,7 +178,7 @@<br> // Check if dest is directory<br> if (!dstFS.exists(dst)) {<br> throw new IOException(&quot;`&quot; + dst +&quot;&apos;: specified destination directory &quot; +<br>- &quot;doest not exist&quot;);<br>+ ...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9492">HADOOP-9492</a>.
Trivial bug reported by jingzhao and fixed by jingzhao (test)<br>
<b>Fix the typo in testConf.xml to make it consistent with FileUtil#copy()</b><br>
<blockquote>HADOOP-9473 fixed a typo in FileUtil#copy(). We need to fix the same typo in testConf.xml accordingly. Otherwise TestCLI will fail in branch-1.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9502">HADOOP-9502</a>.
Minor bug reported by rramya and fixed by szetszwo (fs)<br>
<b>chmod does not return error exit codes for some exceptions</b><br>
<blockquote>When some dfs operations fail due to SnapshotAccessControlException, valid exit codes are not returned.<br><br>E.g:<br>{noformat}<br>-bash-4.1$ hadoop dfs -chmod -R 755 /user/foo/hdfs-snapshots/test0/.snapshot/s0<br>chmod: changing permissions of &apos;hdfs://&lt;namenode&gt;:8020/user/foo/hdfs-snapshots/test0/.snapshot/s0&apos;:org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotAccessControlException: Modification on read-only snapshot is disallowed<br><br>-bash-4.1$ echo $?<br>0<br><br>-bash-4.1$ hadoop dfs -chown -R hdfs:users ...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9537">HADOOP-9537</a>.
Major bug reported by arpitagarwal and fixed by arpitagarwal (security)<br>
<b>Backport AIX patches to branch-1</b><br>
<blockquote>Backport a couple of trivial jiras to branch-1.<br><br>HADOOP-9305 Add support for running the Hadoop client on 64-bit AIX<br>HADOOP-9283 Add support for running the Hadoop client on AIX<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9543">HADOOP-9543</a>.
Minor bug reported by szetszwo and fixed by szetszwo (test)<br>
<b>TestFsShellReturnCode may fail in branch-1</b><br>
<blockquote>There is a hardcoded username &quot;admin&quot; in TestFsShellReturnCode. If &quot;admin&quot; does not exist in the local fs, the test may fail. Before HADOOP-9502, the failure of the command is ignored silently, i.e. the command returns success even if it indeed failed.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9544">HADOOP-9544</a>.
Major bug reported by cnauroth and fixed by cnauroth (io)<br>
<b>backport UTF8 encoding fixes to branch-1</b><br>
<blockquote>The trunk code has received numerous bug fixes related to UTF8 encoding. I recently observed a branch-1-based cluster fail to load its fsimage due to these bugs. I&apos;ve confirmed that the bug fixes existing on trunk will resolve this, so I&apos;d like to backport to branch-1.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1957">HDFS-1957</a>.
Minor improvement reported by asrabkin and fixed by asrabkin (documentation)<br>
<b>Documentation for HFTP</b><br>
<blockquote>There should be some documentation for HFTP.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2533">HDFS-2533</a>.
Minor improvement reported by tlipcon and fixed by tlipcon (datanode, performance)<br>
<b>Remove needless synchronization on FSDataSet.getBlockFile</b><br>
<blockquote>HDFS-1148 discusses lock contention issues in FSDataset. It provides a more comprehensive fix, converting it all to RWLocks, etc. This JIRA is for one very specific fix which gives a decent performance improvement for TestParallelRead: getBlockFile() currently holds the lock which is completely unnecessary.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2757">HDFS-2757</a>.
Major bug reported by jdcryans and fixed by jdcryans <br>
<b>Cannot read a local block that&apos;s being written to when using the local read short circuit</b><br>
<blockquote>When testing the tail&apos;ing of a local file with the read short circuit on, I get:<br><br>{noformat}<br>2012-01-06 00:17:31,598 WARN org.apache.hadoop.hdfs.DFSClient: BlockReaderLocal requested with incorrect offset: Offset 0 and length 8230400 don&apos;t match block blk_-2842916025951313698_454072 ( blockLen 124 )<br>2012-01-06 00:17:31,598 WARN org.apache.hadoop.hdfs.DFSClient: BlockReaderLocal: Removing blk_-2842916025951313698_454072 from cache because local file /export4/jdcryans/dfs/data/blocksBeingWritt...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2827">HDFS-2827</a>.
Major bug reported by umamaheswararao and fixed by umamaheswararao (namenode)<br>
<b>Cannot save namespace after renaming a directory above a file with an open lease</b><br>
<blockquote>When I execute the following operations and wait for a checkpoint to complete:<br><br>fs.mkdirs(new Path(&quot;/test1&quot;));<br>FSDataOutputStream create = fs.create(new Path(&quot;/test/abc.txt&quot;)); //don&apos;t close<br>fs.rename(new Path(&quot;/test/&quot;), new Path(&quot;/test1/&quot;));<br><br>Check-pointing fails with the following exception.<br><br>2012-01-23 15:03:14,204 ERROR namenode.FSImage (FSImage.java:run(795)) - Unable to save image for E:\HDFS-1623\hadoop-hdfs-project\hadoop-hdfs\build\test\data\dfs\name3<br>java.io.IOException: saveLease...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3163">HDFS-3163</a>.
Trivial improvement reported by brandonli and fixed by brandonli (test)<br>
<b>TestHDFSCLI.testAll fails if the user name is not all lowercase</b><br>
<blockquote>In the test resource file testHDFSConf.xml, the test comparators expect the user name to be all lowercase. <br>If the user issuing the test has an uppercase letter in the username (e.g., Brandon instead of brandon), many RegexpComparator tests will fail. The following is one example:<br>{noformat} <br> &lt;comparator&gt;<br> &lt;type&gt;RegexpComparator&lt;/type&gt;<br> &lt;expected-output&gt;^-rw-r--r--( )*1( )*[a-z]*( )*supergroup( )*0( )*[0-9]{4,}-[0-9]{2,}-[0-9]{2,} [0-9]{2,}:[0-9]{2,}( )*/file1&lt;/expected-output&gt;<br>...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3402">HDFS-3402</a>.
Minor bug reported by benoyantony and fixed by benoyantony (scripts, security)<br>
<b>Fix hdfs scripts for secure datanodes</b><br>
<blockquote>Starting a secure datanode gives the following error:<br><br>09/04/2012 12:09:30 2524 jsvc error: Invalid option -server<br>09/04/2012 12:09:30 2524 jsvc error: Cannot parse command line arguments</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3479">HDFS-3479</a>.
Major improvement reported by cmccabe and fixed by cmccabe <br>
<b>backport HDFS-3335 (check for edit log corruption at the end of the log) to branch-1</b><br>
<blockquote>backport HDFS-3335 (check for edit log corruption at the end of the log) to branch-1</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3515">HDFS-3515</a>.
Major new feature reported by eli2 and fixed by eli (namenode)<br>
<b>Port HDFS-1457 to branch-1</b><br>
<blockquote>Let&apos;s port HDFS-1457 (configuration option to enable limiting the transfer rate used when sending the image and edits for checkpointing) to branch-1.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3521">HDFS-3521</a>.
Major improvement reported by szetszwo and fixed by szetszwo (namenode)<br>
<b>Allow namenode to tolerate edit log corruption</b><br>
<blockquote>HDFS-3479 adds checking for edit log corruption. It uses a fixed UNCHECKED_REGION_LENGTH (=PREALLOCATION_LENGTH) so that the bytes at the end of the log, within that length, are not checked. Instead of skipping those bytes, we should check everything and tolerate any corruption found.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3540">HDFS-3540</a>.
Major bug reported by szetszwo and fixed by szetszwo (namenode)<br>
<b>Further improvement on recovery mode and edit log toleration in branch-1</b><br>
<blockquote>*Recovery Mode*: HDFS-3479 backported HDFS-3335 to branch-1. However, the recovery mode feature in branch-1 is dramatically different from the recovery mode in trunk since the edit log implementations in these two branches are different. For example, there is UNCHECKED_REGION_LENGTH in branch-1 but not in trunk.<br><br>*Edit Log Toleration*: HDFS-3521 added this feature to branch-1 to remedy UNCHECKED_REGION_LENGTH and to tolerate edit log corruption.<br><br>There are overlaps between these two features....</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3595">HDFS-3595</a>.
Major bug reported by cmccabe and fixed by cmccabe (namenode)<br>
<b>TestEditLogLoading fails in branch-1</b><br>
<blockquote>TestEditLogLoading currently fails in branch-1, with this error message:<br>{code}<br>Testcase: testDisplayRecentEditLogOpCodes took 1.965 sec<br> FAILED<br>error message contains opcodes message<br>junit.framework.AssertionFailedError: error message contains opcodes message<br> at org.apache.hadoop.hdfs.server.namenode.TestEditLogLoading.testDisplayRecentEditLogOpCodes(TestEditLogLoading.java:75)<br>{code}</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3596">HDFS-3596</a>.
Minor improvement reported by cmccabe and fixed by cmccabe <br>
<b>Improve FSEditLog pre-allocation in branch-1</b><br>
<blockquote>Implement HDFS-3510 in branch-1. This will improve FSEditLog preallocation to decrease the incidence of corrupted logs after disk full conditions. (See HDFS-3510 for a longer description.)</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3604">HDFS-3604</a>.
Minor improvement reported by eli and fixed by eli <br>
<b>Add dfs.webhdfs.enabled to hdfs-default.xml</b><br>
<blockquote>Let&apos;s add {{dfs.webhdfs.enabled}} to hdfs-default.xml. A sample entry is sketched below.</blockquote>
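A minimal hdfs-site.xml entry of the kind this adds (the value shown simply illustrates enabling WebHDFS; the shipped default may differ):
<pre>
&lt;property&gt;
  &lt;name&gt;dfs.webhdfs.enabled&lt;/name&gt;
  &lt;value&gt;true&lt;/value&gt;
  &lt;description&gt;Enable the WebHDFS REST API in the NameNode and DataNodes.&lt;/description&gt;
&lt;/property&gt;
</pre></li>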
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3628">HDFS-3628</a>.
Blocker bug reported by qwertymaniac and fixed by qwertymaniac (datanode, namenode)<br>
<b>The dfsadmin -setBalancerBandwidth command on branch-1 does not check for superuser privileges</b><br>
<blockquote>The changes from HDFS-2202 for 0.20.x/1.x failed to add in a checkSuperuserPrivilege();, and hence any user (not admins alone) can reset the balancer bandwidth across the cluster if they wished to.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3647">HDFS-3647</a>.
Major improvement reported by hoffman60613 and fixed by qwertymaniac (datanode)<br>
<b>Backport HDFS-2868 (Add number of active transfer threads to the DataNode status) to branch-1</b><br>
<blockquote>Not sure if this is in a newer version of Hadoop, but in CDH3u3 it isn&apos;t there.<br><br>There is a lot of mystery surrounding how large to set dfs.datanode.max.xcievers. Most people say to just up it to 4096, but given that exceeding this will cause an HBase RegionServer shutdown (see Lars&apos; blog post here: http://www.larsgeorge.com/2012/03/hadoop-hbase-and-xceivers.html), it would be nice if we could expose the current count via the built-in metrics framework (most likely under dfs). In this way w...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3679">HDFS-3679</a>.
Minor bug reported by cmeyerisi and fixed by cmeyerisi (fuse-dfs)<br>
<b>fuse_dfs notrash option sets usetrash</b><br>
<blockquote>fuse_dfs sets usetrash option when the &quot;notrash&quot; flag is given. This is the exact opposite of the desired behavior. The &quot;usetrash&quot; flag sets usetrash as well, but this is correct. Here are the relevant lines from fuse_options.c, in latest HDFS HEAD[0]:<br><br>123 case KEY_USETRASH:<br>124 options.usetrash = 1;<br>125 break;<br>126 case KEY_NOTRASH:<br>127 options.usetrash = 1;<br>128 break;<br><br>This is a pretty trivial bug to fix. I&apos;m not familiar with the process here, but I can attach a patch i...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3698">HDFS-3698</a>.
Major bug reported by atm and fixed by atm (security)<br>
<b>TestHftpFileSystem is failing in branch-1 due to changed default secure port</b><br>
<blockquote>This test is failing since the default secure port changed to the HTTP port upon the commit of HDFS-2617.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3754">HDFS-3754</a>.
Major bug reported by eli and fixed by eli (datanode)<br>
<b>BlockSender doesn&apos;t shutdown ReadaheadPool threads</b><br>
<blockquote>The BlockSender doesn&apos;t shutdown the ReadaheadPool threads so when tests are run with native libraries some tests fail (time out) because shutdown hangs waiting for the outstanding threads to exit.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3817">HDFS-3817</a>.
Major improvement reported by brandonli and fixed by brandonli (namenode)<br>
<b>avoid printing stack information for SafeModeException</b><br>
<blockquote>When NN is in safemode, any namespace change request could cause a SafeModeException to be thrown and logged in the server log, which can make the server side log grow very quickly. <br><br>The server side log can be more concise if only the exception and error message are printed, not the stack trace.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3819">HDFS-3819</a>.
Minor improvement reported by jingzhao and fixed by jingzhao <br>
<b>Should check whether invalidate work percentage default value is not greater than 1.0f</b><br>
<blockquote>In DFSUtil#getInvalidateWorkPctPerIteration we should also check that the configured value is not greater than 1.0f, as in the sketch below.</blockquote>
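A minimal sketch of the kind of bounds check described, with illustrative names (not the committed patch):
<pre>
import org.apache.hadoop.conf.Configuration;

class InvalidateWorkPct {
  // Reject configured percentages outside (0, 1.0], per the JIRA.
  static float get(Configuration conf, String key, float dflt) {
    float pct = conf.getFloat(key, dflt);
    if (pct &lt;= 0f || pct &gt; 1.0f) {
      throw new IllegalArgumentException(
          key + " = " + pct + " must be in (0, 1.0]");
    }
    return pct;
  }
}
</pre></li>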
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3838">HDFS-3838</a>.
Trivial improvement reported by brandonli and fixed by brandonli (namenode)<br>
<b>fix the typo in FSEditLog.java: isToterationEnabled should be isTolerationEnabled</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3912">HDFS-3912</a>.
Major sub-task reported by jingzhao and fixed by jingzhao <br>
<b>Detecting and avoiding stale datanodes for writing</b><br>
<blockquote>1. Make stale timeout adaptive to the number of nodes marked stale in the cluster.<br>2. Consider having a separate configuration for write skipping the stale nodes.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3940">HDFS-3940</a>.
Minor improvement reported by eli and fixed by sureshms <br>
<b>Add Gset#clear method and clear the block map when namenode is shutdown</b><br>
<blockquote>Per HDFS-3936 it would be useful if GSet has a clear method so BM#close could clear out the LightWeightGSet.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3941">HDFS-3941</a>.
Major new feature reported by djp and fixed by djp (namenode)<br>
<b>Backport HDFS-3498 and HDFS-3601: update replica placement policy for newly added &quot;NodeGroup&quot; layer topology</b><br>
<blockquote>With the additional &quot;NodeGroup&quot; layer enabled, the replica placement policy used in BlockPlacementPolicyWithNodeGroup is updated to the following rules:<br>0. No more than one replica is placed within a NodeGroup (*)<br>1. The first replica is on the local node.<br>2. The second and third replicas are within the same rack, which is a remote rack relative to the first replica.<br>3. Other replicas are on random nodes, with the restriction that no more than two replicas are placed in the same rack, if there are enough racks.<br><br>Also, this patch abstract...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3942">HDFS-3942</a>.
Major new feature reported by djp and fixed by djp (balancer)<br>
<b>Backport HDFS-3495: Update balancer policy for Network Topology with additional &apos;NodeGroup&apos; layer</b><br>
<blockquote>This is the backport work for HDFS-3495 and HDFS-4234.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3961">HDFS-3961</a>.
Major bug reported by jingzhao and fixed by jingzhao <br>
<b>FSEditLog preallocate() needs to reset the position of PREALLOCATE_BUFFER when more than 1MB size is needed</b><br>
<blockquote>In the new preallocate() function, when the required size is larger than 1MB, we need to reset the position of PREALLOCATION_BUFFER every time we have allocated 1MB. Otherwise, it seems only 1MB can be allocated even if more is needed.</blockquote>
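A sketch of the reset-before-reuse pattern at issue, using plain java.nio (names are illustrative; the real code is FSEditLog#preallocate):
<pre>
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

class PreallocateSketch {
  private static final ByteBuffer FILL =
      ByteBuffer.allocateDirect(1024 * 1024);  // 1 MB of zeros

  static void preallocate(FileChannel fc, long pos, long need)
      throws IOException {
    while (need &gt; 0) {
      FILL.position(0);  // without this reset, every pass after the first writes 0 bytes
      FILL.limit((int) Math.min(FILL.capacity(), need));
      int written = fc.write(FILL, pos);
      pos += written;
      need -= written;
    }
  }
}
</pre></li>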
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3963">HDFS-3963</a>.
Major bug reported by brandonli and fixed by brandonli <br>
<b>backport namenode/datanode serviceplugin to branch-1</b><br>
<blockquote>backport namenode/datanode serviceplugin to branch-1</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4057">HDFS-4057</a>.
Minor improvement reported by brandonli and fixed by brandonli (namenode)<br>
<b>NameNode.namesystem should be private. Use getNamesystem() instead.</b><br>
<blockquote>NameNode.namesystem should be private. One should use NameNode.getNamesystem() to get it instead.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4062">HDFS-4062</a>.
Minor improvement reported by jingzhao and fixed by jingzhao <br>
<b>In branch-1, FSNameSystem#invalidateWorkForOneNode and FSNameSystem#computeReplicationWorkForBlock should print logs outside of the namesystem lock</b><br>
<blockquote>Similar to HDFS-4052 for trunk, both FSNameSystem#invalidateWorkForOneNode and FSNameSystem#computeReplicationWorkForBlock in branch-1 should print long info-level log messages outside of the namesystem lock. We create this separate jira since the description and code are different for 1.x.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4072">HDFS-4072</a>.
Minor bug reported by jingzhao and fixed by jingzhao (namenode)<br>
<b>On file deletion remove corresponding blocks pending replication</b><br>
<blockquote>Currently when deleting a file, blockManager does not remove records corresponding to the file&apos;s blocks from pendingReplications. These records can only be removed after a timeout (5~10 min).</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4168">HDFS-4168</a>.
Major bug reported by szetszwo and fixed by jingzhao (namenode)<br>
<b>TestDFSUpgradeFromImage fails in branch-1</b><br>
<blockquote>{noformat}<br>java.lang.NullPointerException<br> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.removeBlocks(FSNamesystem.java:2212)<br> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.removePathAndBlocks(FSNamesystem.java:2225)<br> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedDelete(FSDirectory.java:645)<br> at org.apache.hadoop.hdfs.server.namenode.FSEditLog.loadFSEdits(FSEditLog.java:833)<br> at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSEdits(FSImage.java:1024)<br>...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4180">HDFS-4180</a>.
Minor bug reported by szetszwo and fixed by jingzhao (test)<br>
<b>TestFileCreation fails in branch-1 but not branch-1.1</b><br>
<blockquote>{noformat}<br>Testcase: testFileCreation took 3.419 sec<br> Caused an ERROR<br>java.io.IOException: Cannot create /test_dir; already exists as a directory<br> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1374)<br> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1334)<br> ...<br> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)<br><br>org.apache.hadoop.ipc.RemoteException: java.io.IOException: Cannot create /test_dir; already e...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4207">HDFS-4207</a>.
Minor bug reported by stevel@apache.org and fixed by jingzhao (hdfs-client)<br>
<b>All hadoop fs operations fail if the default fs is down even if a different file system is specified in the command</b><br>
<blockquote>You can&apos;t do any {{hadoop fs}} commands against any hadoop filesystem (e.g. s3://, a remote hdfs://, webhdfs://) if the default FS of the client is offline. Only operations that need the local fs should be expected to fail in this situation.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4219">HDFS-4219</a>.
Major new feature reported by arpitgupta and fixed by arpitgupta <br>
<b>Port slive to branch-1</b><br>
<blockquote>Originally it was committed in HDFS-708 and MAPREDUCE-1804</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4222">HDFS-4222</a>.
Minor bug reported by teledriver and fixed by teledriver (namenode)<br>
<b>NN is unresponsive and loses heartbeats of DNs when Hadoop is configured to use LDAP and LDAP has issues</b><br>
<blockquote>For Hadoop clusters configured to access directory information by LDAP, the FSNamesystem calls made on behalf of DFS clients might hang due to LDAP issues (including LDAP access issues caused by networking issues) while holding the single lock of FSNamesystem. That results in the NN becoming unresponsive and in lost heartbeats from DNs.<br><br>The places LDAP gets accessed by FSNamesystem calls are the instantiation of FSPermissionChecker, which could be moved out of the lock scope since the instantiation...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4256">HDFS-4256</a>.
Major test reported by sureshms and fixed by sanjay.radia (namenode)<br>
<b>Backport concatenation of files into a single file to branch-1</b><br>
<blockquote>HDFS-222 added support for concatenation of multiple files in a directory into a single file. This helps several use cases where writes can be parallelized, and several folks have expressed interest in this functionality.<br><br>This jira intends to make the equivalent changes from HDFS-222 in branch-1, to be made available in release 1.2.0.</blockquote>
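A sketch of how the backported API is called from a client, assuming the usual concat restrictions hold (sources exist, same directory and block size); the paths here are made up:
<pre>
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

class ConcatExample {
  public static void main(String[] args) throws Exception {
    DistributedFileSystem dfs =
        (DistributedFileSystem) FileSystem.get(new Configuration());
    // Moves the source file's blocks onto the end of the target;
    // no data is copied, so this is a metadata-only operation.
    dfs.concat(new Path("/out/part-00000"),
               new Path[] { new Path("/out/part-00001") });
  }
}
</pre></li>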
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4351">HDFS-4351</a>.
Major bug reported by andrew.wang and fixed by andrew.wang (namenode)<br>
<b>Fix BlockPlacementPolicyDefault#chooseTarget when avoiding stale nodes</b><br>
<blockquote>There&apos;s a bug in {{BlockPlacementPolicyDefault#chooseTarget}} with stale node avoidance enabled (HDFS-3912). If a NotEnoughReplicasException is thrown in the call to {{chooseRandom()}}, {{numOfReplicas}} is not updated together with the partial result in {{result}}, since it is passed by value. The retry call to {{chooseTarget}} then uses this incorrect value.<br><br>This can be seen if you enable stale node detection for {{TestReplicationPolicy#testChooseTargetWithMoreThanAvaiableNodes()}}.</blockquote>
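The underlying Java pitfall, reduced to a toy example (nothing below is HDFS code): reassigning an int parameter only changes the callee's local copy, so a partially updated count has to be returned to the caller instead.
<pre>
class PassByValueDemo {
  static void choose(int numNeeded) {
    numNeeded -= 1;            // updates only the local copy
  }

  static int chooseFixed(int numNeeded) {
    return numNeeded - 1;      // caller must use the returned value
  }

  public static void main(String[] args) {
    int n = 3;
    choose(n);
    System.out.println(n);     // still 3 -- the bug pattern
    n = chooseFixed(n);
    System.out.println(n);     // 2 -- the fix pattern
  }
}
</pre></li>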
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4355">HDFS-4355</a>.
Major bug reported by brandonli and fixed by brandonli (test)<br>
<b>TestNameNodeMetrics.testCorruptBlock fails with open JDK7</b><br>
<blockquote>Argument(s) are different! Wanted:<br>metricsRecordBuilder.addGauge(<br>&quot;CorruptBlocks&quot;,<br>&lt;any&gt;,<br>1<br>);<br>-&gt; at org.apache.hadoop.test.MetricsAsserts.assertGauge(MetricsAsserts.java:96)<br>Actual invocation has different arguments:<br>metricsRecordBuilder.addGauge(<br>&quot;FilesTotal&quot;,<br>&quot;&quot;,<br>4<br>);<br>-&gt; at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getMetrics(FSNamesystem.java:5818)<br><br>at java.lang.reflect.Constructor.newInstance(Constructor.java:525)<br>at org.apache.hadoop.test.MetricsAsserts.assertGauge(MetricsAsse...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4358">HDFS-4358</a>.
Major bug reported by arpitagarwal and fixed by arpitagarwal (test)<br>
<b>TestCheckpoint failure with JDK7</b><br>
<blockquote>testMultipleSecondaryNameNodes doesn&apos;t shutdown the SecondaryNameNode which causes testCheckpoint to fail.<br><br>Testcase: testCheckpoint took 2.736 sec<br> Caused an ERROR<br>Cannot lock storage C:\hdp1-2\build\test\data\dfs\namesecondary1. The directory is already locked.<br>java.io.IOException: Cannot lock storage C:\hdp1-2\build\test\data\dfs\namesecondary1. The directory is already locked.<br> at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:602)<br> at org.apache.hadoop.hd...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4413">HDFS-4413</a>.
Major bug reported by mostafae and fixed by mostafae (namenode)<br>
<b>Secondary namenode won&apos;t start if HDFS isn&apos;t the default file system</b><br>
<blockquote>If HDFS is not the default file system (fs.default.name is something other than hdfs://...), then the secondary namenode throws an exception early in its initialization. This is a needless check as far as I can tell, and it blocks scenarios where HDFS services are up but HDFS is not the default file system.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4444">HDFS-4444</a>.
Trivial bug reported by schu and fixed by schu <br>
<b>Add space between total transaction time and number of transactions in FSEditLog#printStatistics</b><br>
<blockquote>Currently, when we log statistics, we see something like<br>{code}<br>13/01/25 23:16:59 INFO namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0<br>{code}<br><br>Notice how the value for total transactions time and &quot;Number of transactions batched in Syncs&quot; needs a space to separate them.<br><br>FSEditLog#printStatistics:<br>{code}<br> private void printStatistics(boolean force) {<br> long now = now();<br> if (...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4466">HDFS-4466</a>.
Major bug reported by brandonli and fixed by brandonli (namenode, security)<br>
<b>Remove the deadlock from AbstractDelegationTokenSecretManager</b><br>
<blockquote>In HDFS-3374, new synchronization in AbstractDelegationTokenSecretManager.ExpiredTokenRemover was added to make sure the ExpiredTokenRemover thread can be interrupted in time. Otherwise TestDelegation fails intermittently because the MiniDFScluster thread could be shut down before tokenRemover thread. <br>However, as Todd pointed out in HDFS-3374, a potential deadlock was introduced by its patch:<br>{quote}<br> * FSNamesystem.saveNamespace (holding FSN lock) calls DTSM.saveSecretManagerState (which ...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4479">HDFS-4479</a>.
Major bug reported by jingzhao and fixed by jingzhao <br>
<b>logSync() with the FSNamesystem lock held in commitBlockSynchronization</b><br>
<blockquote>In FSNamesystem#commitBlockSynchronization of branch-1, logSync() may be called while the FSNamesystem lock is held. Similar to HDFS-4186, this may cause performance issues.<br><br>The following issue was observed in a cluster that was running a Hive job and was writing to 100,000 temporary files (each task is writing to 1000s of files). When this job is killed, a large number of files are left open for write. Eventually when the lease for open files expires, lease recovery is started for all th...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4518">HDFS-4518</a>.
Major bug reported by arpitagarwal and fixed by arpitagarwal <br>
<b>Finer grained metrics for HDFS capacity</b><br>
<blockquote>Namenode should export disk usage metrics in bytes via FSNamesystemMetrics.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4544">HDFS-4544</a>.
Major bug reported by amareshwari and fixed by arpitagarwal <br>
<b>Error in deleting blocks should not do check disk, for all types of errors</b><br>
<blockquote>The following code in Datanode.java <br><br>{noformat}<br> try {<br> if (blockScanner != null) {<br> blockScanner.deleteBlocks(toDelete);<br> }<br> data.invalidate(toDelete);<br> } catch(IOException e) {<br> checkDiskError();<br> throw e;<br> }<br>{noformat}<br><br>causes a disk check to happen for any error during invalidate.<br><br>We have seen errors like:<br><br>2013-03-02 00:08:28,849 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Unexpected error trying to delete bloc...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4551">HDFS-4551</a>.
Major improvement reported by mwagner and fixed by mwagner (webhdfs)<br>
<b>Change WebHDFS buffersize behavior to improve default performance</b><br>
<blockquote>Currently on 1.X branch, the buffer size used to copy bytes to network defaults to io.file.buffer.size. This causes performance problems if that buffersize is large.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4558">HDFS-4558</a>.
Critical bug reported by gujilangzi and fixed by djp (balancer)<br>
<b>start balancer failed with NPE</b><br>
<blockquote>Starting the balancer failed with an NPE. Filing this issue to track it so QE and dev can take a look.<br><br>balancer.log:<br> 2013-03-06 00:19:55,174 ERROR org.apache.hadoop.hdfs.server.balancer.Balancer: java.lang.NullPointerException<br> at org.apache.hadoop.hdfs.server.namenode.BlockPlacementPolicy.getInstance(BlockPlacementPolicy.java:165)<br> at org.apache.hadoop.hdfs.server.balancer.Balancer.checkReplicationPolicyCompatibility(Balancer.java:799)<br> at org.apache.hadoop.hdfs.server.balancer.Balancer.&lt;init&gt;(Balancer.java:...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4597">HDFS-4597</a>.
Major new feature reported by szetszwo and fixed by szetszwo (webhdfs)<br>
<b>Backport WebHDFS concat to branch-1</b><br>
<blockquote>HDFS-3598 adds concat to WebHDFS. Let&apos;s also add it to branch-1.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4635">HDFS-4635</a>.
Major improvement reported by sureshms and fixed by sureshms (namenode)<br>
<b>Move BlockManager#computeCapacity to LightWeightGSet</b><br>
<blockquote>The computeCapacity in BlockManager that calculates the LightWeightGSet capacity as a percentage of total JVM memory should be moved to LightWeightGSet. This lets other maps that are based on the GSet make use of the same functionality.</blockquote>
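A simplified sketch of the kind of computation being moved (the real LightWeightGSet code differs in details such as reference-size handling):
<pre>
class CapacitySketch {
  // Size a map as a percentage of the JVM's max heap,
  // rounded down to a power of two.
  static int computeCapacity(double percentage, int bytesPerEntry) {
    double budget = Runtime.getRuntime().maxMemory() * percentage / 100.0;
    int capacity = 1;
    while (capacity * 2L * bytesPerEntry &lt;= budget) {
      capacity &lt;&lt;= 1;  // keep capacity a power of two
    }
    return capacity;
  }
}
</pre></li>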
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4651">HDFS-4651</a>.
Major improvement reported by cnauroth and fixed by cnauroth (tools)<br>
<b>Offline Image Viewer backport to branch-1</b><br>
<blockquote>This issue tracks backporting the Offline Image Viewer tool to branch-1.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4715">HDFS-4715</a>.
Major bug reported by szetszwo and fixed by mwagner (webhdfs)<br>
<b>Backport HDFS-3577 and other related WebHDFS issue to branch-1</b><br>
<blockquote>The related JIRAs are HDFS-3577, HDFS-3318, and HDFS-3788. Backporting them can fix some WebHDFS performance issues in branch-1.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4774">HDFS-4774</a>.
Major new feature reported by yuzhihong@gmail.com and fixed by yuzhihong@gmail.com (hdfs-client, namenode)<br>
<b>Backport HDFS-4525 &apos;Provide an API for knowing whether file is closed or not&apos; to branch-1</b><br>
<blockquote>HDFS-4525 complements the lease recovery API by allowing the user to know whether the recovery has completed.<br><br>This JIRA backports the API to branch-1.</blockquote>
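A sketch of how the backported call complements lease recovery (the polling loop and path are illustrative):
<pre>
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

class LeaseRecoveryWait {
  public static void main(String[] args) throws Exception {
    DistributedFileSystem dfs =
        (DistributedFileSystem) FileSystem.get(new Configuration());
    Path p = new Path("/data/file-under-recovery");
    dfs.recoverLease(p);            // kick off lease recovery
    while (!dfs.isFileClosed(p)) {  // the API backported here
      Thread.sleep(1000);           // poll until recovery completes
    }
  }
}
</pre></li>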
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4776">HDFS-4776</a>.
Minor new feature reported by szetszwo and fixed by szetszwo (namenode)<br>
<b>Backport SecondaryNameNode web ui to branch-1</b><br>
<blockquote>The related JIRAs are<br>- HADOOP-3741: SecondaryNameNode has http server on dfs.secondary.http.address but without any contents <br>- HDFS-1728: SecondaryNameNode.checkpointSize is in byte but not MB.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-461">MAPREDUCE-461</a>.
Minor new feature reported by fhedberg and fixed by fhedberg <br>
<b>Enable ServicePlugins for the JobTracker</b><br>
<blockquote>Allow ServicePlugins (see HADOOP-5257) for the JobTracker.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-987">MAPREDUCE-987</a>.
Minor new feature reported by philip and fixed by ahmed.radwan (build, test)<br>
<b>Exposing MiniDFS and MiniMR clusters as a single process command-line</b><br>
<blockquote>It&apos;s hard to test non-Java programs that rely on significant mapreduce functionality. The patch I&apos;m proposing shortly will let you just type &quot;bin/hadoop jar hadoop-hdfs-hdfswithmr-test.jar minicluster&quot; to start a cluster (internally, it&apos;s using Mini{MR,HDFS}Cluster) with a specified number of daemons, etc. A test that checks how some external process interacts with Hadoop might start minicluster as a subprocess, run through its thing, and then simply kill the java subprocess.<br><br>I&apos;ve been usi...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1684">MAPREDUCE-1684</a>.
Major bug reported by amareshwari and fixed by knoguchi (capacity-sched)<br>
<b>ClusterStatus can be cached in CapacityTaskScheduler.assignTasks()</b><br>
<blockquote>Currently, CapacityTaskScheduler.assignTasks() calls getClusterStatus() thrice: once in assignTasks(), once in MapTaskScheduler and once in ReduceTaskScheduler. It can be cached in assignTasks() and re-used.<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1806">MAPREDUCE-1806</a>.
Major bug reported by pauly and fixed by jira.shegalov (harchive)<br>
<b>CombineFileInputFormat does not work with paths not on default FS</b><br>
<blockquote>In generating the splits in CombineFileInputFormat, the scheme and authority are stripped out. This creates problems when trying to access the files while generating the splits, as without the har:/, the file won&apos;t be accessed through the HarFileSystem.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2217">MAPREDUCE-2217</a>.
Major bug reported by schen and fixed by kkambatl (jobtracker)<br>
<b>The expire launching task should cover the UNASSIGNED task</b><br>
<blockquote>The ExpireLaunchingTask thread kills tasks that are scheduled but have not responded.<br>Currently, if a task is scheduled on a tasktracker and for some reason the tasktracker cannot move it to RUNNING, the task will just hang in the UNASSIGNED status and the JobTracker will keep waiting for it.<br><br>JobTracker.ExpireLaunchingTask should be able to kill this task.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2264">MAPREDUCE-2264</a>.
Major bug reported by akramer and fixed by devaraj.k (jobtracker)<br>
<b>Job status exceeds 100% in some cases </b><br>
<blockquote>I&apos;m looking now at my jobtracker&apos;s list of running reduce tasks. One of them is 120.05% complete, the other is 107.28% complete.<br><br>I understand that these numbers are estimates, but there is no case in which an estimate of 100% for a non-complete task is better than an estimate of 99.99%, nor is there any case in which an estimate greater than 100% is valid.<br><br>I suggest that whatever logic computes these should set 99.99% as a hard maximum.</blockquote>
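The suggested guard, as a toy sketch (not the committed patch):
<pre>
class ProgressClamp {
  // An in-flight task's estimated completion can never reach 100%.
  static float clamp(float estimate) {
    return Math.min(estimate, 0.9999f);
  }
}
</pre></li>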
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2289">MAPREDUCE-2289</a>.
Major bug reported by tlipcon and fixed by ahmed.radwan (job submission)<br>
<b>Permissions race can make getStagingDir fail on local filesystem</b><br>
<blockquote>I&apos;ve observed the following race condition in TestFairSchedulerSystem which uses a MiniMRCluster on top of RawLocalFileSystem:<br>- two threads call getStagingDir at the same time<br>- Thread A checks fs.exists(stagingArea) and sees false<br>-- Calls mkdirs(stagingArea, JOB_DIR_PERMISSIONS)<br>--- mkdirs calls the Java mkdir API which makes the file with umask-based permissions<br>- Thread B runs, checks fs.exists(stagingArea) and sees true<br>-- checks permissions, sees the default permissions, and throws IOE...</blockquote>
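One race-free shape for this kind of code, sketched under the assumption that setting permissions explicitly after mkdirs is acceptable (an illustration, not the committed fix):
<pre>
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

class StagingDirSketch {
  static void ensureStagingDir(FileSystem fs, Path dir, FsPermission perms)
      throws IOException {
    // mkdirs succeeds if the directory already exists, so there is no
    // exists()/mkdirs() window; setting the permission afterwards makes
    // the result independent of the process umask.
    fs.mkdirs(dir, perms);
    fs.setPermission(dir, perms);
  }
}
</pre></li>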
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2770">MAPREDUCE-2770</a>.
Trivial improvement reported by eli and fixed by sandyr (documentation)<br>
<b>Improve hadoop.job.history.location doc in mapred-default.xml</b><br>
<blockquote>The documentation for hadoop.job.history.location in mapred-default.xml should indicate that this parameter can be a URI on any file system that Hadoop supports (e.g. hdfs and file).</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2931">MAPREDUCE-2931</a>.
Major improvement reported by forest520 and fixed by sandyr <br>
<b>CLONE - LocalJobRunner should support parallel mapper execution</b><br>
<blockquote>The LocalJobRunner currently supports only a single execution thread. Given the prevalence of multi-core CPUs, it makes sense to allow users to run multiple tasks in parallel for improved performance on small (local-only) jobs.<br><br>It is necessary to patch MAPREDUCE-1367 back into the Hadoop 0.20.X version. Also, MAPREDUCE-434 should be submitted together.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3727">MAPREDUCE-3727</a>.
Critical bug reported by tucu00 and fixed by tucu00 (security)<br>
<b>jobtoken location property in jobconf refers to wrong jobtoken file</b><br>
<blockquote>Oozie launcher job (for MR/Pig/Hive/Sqoop action) reads the location of the jobtoken file from the *HADOOP_TOKEN_FILE_LOCATION* ENV var and seeds it as the *mapreduce.job.credentials.binary* property in the jobconf that will be used to launch the real (MR/Pig/Hive/Sqoop) job.<br><br>The MR/Pig/Hive/Sqoop submission code (via Hadoop job submission) correctly uses the injected *mapreduce.job.credentials.binary* property to load the credentials and submit their MR jobs.<br><br>The problem is that the *mapre...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3993">MAPREDUCE-3993</a>.
Major bug reported by tlipcon and fixed by kkambatl (mrv1, mrv2)<br>
<b>Graceful handling of codec errors during decompression</b><br>
<blockquote>When using a compression codec for intermediate compression, some cases of corrupt data can cause the codec to throw exceptions other than IOException (e.g. java.lang.InternalError). This will currently cause the whole reduce task to fail, instead of simply treating it like another case of a failed fetch.</blockquote>
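The gist of the graceful handling, as a hedged sketch (the helper methods stand in for the real shuffle plumbing; only the catch-Throwable idea is the point):
<pre>
class ShuffleFetchSketch {
  // Hypothetical stand-ins for the real fetch/decompress code paths.
  static void decompressMapOutput(byte[] data) { /* codec.decompress(...) */ }
  static void reportFetchFailure(String mapId, Throwable cause) { /* re-fetch */ }

  static void fetch(String mapId, byte[] data) {
    try {
      decompressMapOutput(data);
    } catch (Throwable t) {  // not just IOException: codecs can throw InternalError
      reportFetchFailure(mapId, t);  // fail the fetch, not the whole reduce task
    }
  }
}
</pre></li>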
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4036">MAPREDUCE-4036</a>.
Major bug reported by tucu00 and fixed by tucu00 (test)<br>
<b>Streaming TestUlimit fails on CentOS 6</b><br>
<blockquote>CentOS 6 seems to have higher memory requirements than other distros, and together with the new MALLOC library this makes TestUlimit fail with exit status 134.<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4195">MAPREDUCE-4195</a>.
Critical bug reported by jira.shegalov and fixed by (jobtracker)<br>
<b>With invalid queueName request param, jobqueue_details.jsp shows NPE</b><br>
<blockquote>When you access /jobqueue_details.jsp manually, instead of via a link, it has queueName set to null internally, and this null goes into a lookup of the scheduling info maps as well.<br><br>As a result, if using FairScheduler, a Pool with String name = null gets created and this brings the scheduler down. I have not tested what happens to the CapacityScheduler, but ideally if no queueName is set in that jsp, it should fall back to &apos;default&apos;. Otherwise, this brings down the JobTracker completely.<br><br>FairSch...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4278">MAPREDUCE-4278</a>.
Major bug reported by araceli and fixed by sandyr <br>
<b>cannot run two local jobs in parallel from the same gateway.</b><br>
<blockquote>I cannot run two local mode jobs from Pig in parallel from the same gateway; this is a typical use case. If I re-run the tests sequentially, then the tests pass. This seems to be a problem in Hadoop.<br><br>Additionally, the pig harness expects to be able to run Pig-version-undertest against Pig-version-stable from the same gateway.<br><br><br>To replicate the error:<br><br>I have two clusters running from the same gateway.<br>If I run the Pig regression suite nightly.conf in local mode in parallel - once on each...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4315">MAPREDUCE-4315</a>.
Major bug reported by alo.alt and fixed by sandyr (jobhistoryserver)<br>
<b>jobhistory.jsp throws 500 when a .txt file is found in /done</b><br>
<blockquote>If a .txt file is located in /done, the parser throws a 500 error.<br>Trace:<br>java.lang.ArrayIndexOutOfBoundsException: 1<br> at org.apache.hadoop.mapred.jobhistory_jsp$2.compare(jobhistory_jsp.java:295)<br> at org.apache.hadoop.mapred.jobhistory_jsp$2.compare(jobhistory_jsp.java:279)<br> at java.util.Arrays.mergeSort(Arrays.java:1270)<br> at java.util.Arrays.mergeSort(Arrays.java:1282)<br> at java.util.Arrays.mergeSort(Arrays.java:1282)<br> at java.util.Arrays.mergeSort(Arra...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4317">MAPREDUCE-4317</a>.
Major bug reported by qwertymaniac and fixed by kkambatl (mrv1)<br>
<b>Job view ACL checks are too permissive</b><br>
<blockquote>The class that does view-based checks, JSPUtil.JobWithViewAccessCheck, has the following internal member:<br><br>{code}private boolean isViewAllowed = true;{code}<br><br>Note that it&apos;s true.<br><br>Now, the method that sets the proper view-allowed rights has:<br><br>{code}<br>if (user != null &amp;&amp; job != null &amp;&amp; jt.areACLsEnabled()) {<br> final UserGroupInformation ugi =<br> UserGroupInformation.createRemoteUser(user);<br> try {<br> ugi.doAs(new PrivilegedExceptionAction&lt;Void&gt;() {<br> public Void run() t...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4355">MAPREDUCE-4355</a>.
Major new feature reported by kkambatl and fixed by kkambatl (mrv1, mrv2)<br>
<b>Add RunningJob.getJobStatus()</b><br>
<blockquote>Use case: read the start/end-time of a particular job.<br><br>Currently, one has to fetch JobClient.getAllJobStatuses() and iterate through them. JobClient.getJob(JobID) returns RunningJob, which doesn&apos;t hold the job&apos;s start time.<br><br>Adding RunningJob.getJobStatus() solves the issue.</blockquote>
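Usage after the change, sketched (the job ID is made up):
<pre>
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.JobStatus;
import org.apache.hadoop.mapred.RunningJob;

class JobTimes {
  public static void main(String[] args) throws Exception {
    JobClient client = new JobClient(new JobConf());
    RunningJob job = client.getJob(JobID.forName("job_201301010000_0001"));
    JobStatus status = job.getJobStatus();  // the accessor this JIRA adds
    System.out.println("start: " + status.getStartTime());
  }
}
</pre></li>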
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4359">MAPREDUCE-4359</a>.
Major bug reported by tlipcon and fixed by tomwhite <br>
<b>Potential deadlock in Counters</b><br>
<blockquote>jcarder identified this deadlock in branch-1 (though it may also be present in trunk):<br>- Counters.size() is synchronized and locks Counters before Group<br>- Counters.Group.getCounterForName() is synchronized and calls through to Counters.size()<br><br>This creates a potential cycle which could cause a deadlock (though probably quite rare in practice)</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4385">MAPREDUCE-4385</a>.
Major bug reported by kkambatl and fixed by kkambatl <br>
<b>FairScheduler.maxTasksToAssign() should check for fairscheduler.assignmultiple.maps &lt; TaskTracker.availableSlots</b><br>
<blockquote>FairScheduler.maxTasksToAssign() can potentially return a value greater than the available slots. Currently, we rely on canAssignMaps()/canAssignReduces() to reject such requests.<br><br>These additional calls can be avoided by checking against the available slots in maxTasksToAssign().</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4408">MAPREDUCE-4408</a>.
Major improvement reported by tucu00 and fixed by rkanter (mrv1, mrv2)<br>
<b>allow jobs to set a JAR that is in the distributed cache</b><br>
<blockquote>Setting a job JAR with JobConf.setJar(String) and Job.setJar(String) assumes that the JAR is local to the client submitting the job, thus it triggers copying the JAR to HDFS and injecting it into the distributed cache.<br><br>AFAIK, this is the only way to use uber JARs (JARs with JARs inside) in MR jobs.<br><br>For jobs launched by Oozie, all JARs are already in HDFS. In order for Oozie to support uber JARs (OOZIE-654) there should be a way to specify as the job JAR a JAR that is already in HDFS.<br><br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4434">MAPREDUCE-4434</a>.
Major bug reported by kkambatl and fixed by kkambatl (mrv1)<br>
<b>Backport MR-2779 (JobSplitWriter.java can&apos;t handle large job.split file) to branch-1</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4463">MAPREDUCE-4463</a>.
Blocker bug reported by tomwhite and fixed by tomwhite (mrv1)<br>
<b>JobTracker recovery fails with HDFS permission issue</b><br>
<blockquote>Recovery fails when the job user is different to the JT owner (i.e. on anything bigger than a pseudo-distributed cluster).</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4464">MAPREDUCE-4464</a>.
Minor improvement reported by heathcd and fixed by heathcd (task)<br>
<b>Reduce tasks failing with NullPointerException in ConcurrentHashMap.get()</b><br>
<blockquote>If DNS does not resolve hostnames properly, reduce tasks can fail with a very misleading exception.<br><br>As per my peer Ahmed&apos;s diagnosis:<br><br>In ReduceTask, it seems that event.getTaskTrackerHttp() returns a malformed URI, and so host from:<br>{code}<br>String host = u.getHost();<br>{code}<br>is evaluated to null and the NullPointerException is thrown afterwards in the ConcurrentHashMap.<br><br>I have written a patch to check for a null hostname condition when getHost is called in the getMapCompletionEvents method a...</blockquote>
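The defensive check described, as a sketch (simplified; not the committed patch):
<pre>
import java.net.URI;

class TrackerHost {
  // Returns the tracker host, or null when the address is malformed in a
  // way that java.net.URI cannot extract a host from.
  static String hostOf(String taskTrackerHttp) {
    String host = URI.create(taskTrackerHttp).getHost();
    if (host == null) {
      // Skip the event rather than let a null key reach ConcurrentHashMap.get(),
      // which throws a misleading NullPointerException.
      System.err.println("Invalid tracker address: " + taskTrackerHttp);
    }
    return host;
  }
}
</pre></li>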
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4499">MAPREDUCE-4499</a>.
Major improvement reported by nroberts and fixed by knoguchi (mrv1, performance)<br>
<b>Looking for speculative tasks is very expensive in 1.x</b><br>
<blockquote>When there are lots of jobs and tasks active in a cluster, the process of figuring out whether or not to launch a speculative task becomes very expensive. <br><br>I could be missing something, but it certainly looks like on every heartbeat we could be scanning tens of thousands of tasks looking for something which might need to be speculatively executed. In most cases nothing gets chosen, so we completely trashed our data cache and didn&apos;t even find a task to schedule, just to do it all over again on...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4556">MAPREDUCE-4556</a>.
Minor improvement reported by kkambatl and fixed by kkambatl (contrib/fair-share)<br>
<b>FairScheduler: PoolSchedulable#updateDemand() has potential redundant computation</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4572">MAPREDUCE-4572</a>.
Major bug reported by ahmed.radwan and fixed by ahmed.radwan (tasktracker, webapps)<br>
<b>Can not access user logs - Jetty is not configured by default to serve aliases/symlinks</b><br>
<blockquote>The task log servlet can no longer access user logs because MAPREDUCE-2415 introduced symlinks to the logs, and jetty is not configured by default to serve symlinks.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4576">MAPREDUCE-4576</a>.
Major bug reported by revans2 and fixed by revans2 <br>
<b>Large dist cache can block tasktracker heartbeat</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4595">MAPREDUCE-4595</a>.
Critical bug reported by kkambatl and fixed by kkambatl <br>
<b>TestLostTracker failing - possibly due to a race in JobHistory.JobHistoryFilesManager#run()</b><br>
<blockquote>The source of the occasional failure of TestLostTracker seems to be the following:<br><br>On job completion, JobHistoryFilesManager#run() spawns another thread to move history files to the done folder. TestLostTracker waits for job completion before checking the file format of the history file. However, the history file move might be in progress or might not have started at all.<br><br>The attachment (force-TestLostTracker-failure.patch) helps reproduce the error locally, by increasing the cha...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4629">MAPREDUCE-4629</a>.
Major bug reported by kkambatl and fixed by kkambatl <br>
<b>Remove JobHistory.DEBUG_MODE</b><br>
<blockquote>Remove JobHistory.DEBUG_MODE for the following reasons:<br><br>1. No one seems to be using it - the config parameter corresponding to enabling it does not even exist in mapred-default.xml<br>2. The logging being done in DEBUG_MODE needs to move to LOG.debug() and LOG.trace()<br>3. Buggy handling of helper methods in DEBUG_MODE; e.g. directoryTime() and timestampDirectoryComponent().</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4643">MAPREDUCE-4643</a>.
Major bug reported by kkambatl and fixed by sandyr (jobhistoryserver)<br>
<b>Make job-history cleanup-period configurable</b><br>
<blockquote>Job history cleanup should be made configurable. Currently, it is set to 1 month by default. The DEBUG_MODE (to be removed, see MAPREDUCE-4629) sets it to 20 minutes, but it should be configurable.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4652">MAPREDUCE-4652</a>.
Major bug reported by ahmed.radwan and fixed by ahmed.radwan (examples, mrv1)<br>
<b>ValueAggregatorJob sets the wrong job jar</b><br>
<blockquote>Using branch-1 tarball, if the user tries to submit an example aggregatewordcount, the job fails with the following error:<br><br>{code}<br>ahmed@ubuntu:~/demo/deploy/hadoop-1.2.0-SNAPSHOT$ bin/hadoop jar hadoop-examples-1.2.0-SNAPSHOT.jar aggregatewordcount input examples-output/aggregatewordcount 2 textinputformat<br>12/09/12 17:09:46 INFO mapred.JobClient: originalJarPath: /home/ahmed/demo/deploy/hadoop-1.2.0-SNAPSHOT/hadoop-core-1.2.0-SNAPSHOT.jar<br>12/09/12 17:09:48 INFO mapred.JobClient: submitJarFil...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4660">MAPREDUCE-4660</a>.
Major new feature reported by djp and fixed by djp (jobtracker, mrv1, scheduler)<br>
<b>Update task placement policy for NetworkTopology with &apos;NodeGroup&apos; layer</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4662">MAPREDUCE-4662</a>.
Major bug reported by tgraves and fixed by kihwal (jobhistoryserver)<br>
<b>JobHistoryFilesManager thread pool never expands</b><br>
<blockquote>The job history file manager creates a threadpool with core size 1 thread, max pool size 3. It never goes beyond 1 thread, though, because it&apos;s using a LinkedBlockingQueue, which doesn&apos;t have a max size. <br><br> void start() {<br> executor = new ThreadPoolExecutor(1, 3, 1,<br> TimeUnit.HOURS, new LinkedBlockingQueue&lt;Runnable&gt;());<br> }<br><br>According to the ThreadPoolExecutor java doc page, it only increases the number of threads when the queue is full. Since the queue we are using has no max ...</blockquote>
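Why the pool stays at one thread, in standard java.util.concurrent terms: a ThreadPoolExecutor only grows past its core size when the work queue rejects an offer, and an unbounded LinkedBlockingQueue never rejects one. A sketch of the behavior and of one possible fix (bounding the queue) follows; it is not the committed patch:
<pre>
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class PoolGrowthDemo {
  public static void main(String[] args) {
    // As in JobHistoryFilesManager#start(): core 1, max 3, unbounded queue.
    // Every task is queued, so threads 2 and 3 are never created.
    ThreadPoolExecutor stuck = new ThreadPoolExecutor(1, 3, 1, TimeUnit.HOURS,
        new LinkedBlockingQueue&lt;Runnable&gt;());

    // With a bounded queue, an offer fails once 100 tasks are waiting,
    // and only then does the pool grow toward its max of 3 threads.
    ThreadPoolExecutor grows = new ThreadPoolExecutor(1, 3, 1, TimeUnit.HOURS,
        new LinkedBlockingQueue&lt;Runnable&gt;(100));

    stuck.shutdown();
    grows.shutdown();
  }
}
</pre></li>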
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4703">MAPREDUCE-4703</a>.
Major improvement reported by ahmed.radwan and fixed by ahmed.radwan (mrv1, mrv2, test)<br>
<b>Add the ability to start the MiniMRClientCluster using the configurations used before it is being stopped.</b><br>
<blockquote>The objective here is to enable starting back the cluster, after being stopped, using the same configurations/port numbers used before stopping.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4706">MAPREDUCE-4706</a>.
Critical bug reported by kkambatl and fixed by kkambatl (contrib/fair-share)<br>
<b>FairScheduler#dump(): Computing of # running maps and reduces is commented out</b><br>
<blockquote>In FairScheduler#dump(), the updating of the number of running maps and reduces is commented out. It needs to be fixed for the dump to emit meaningful information.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4765">MAPREDUCE-4765</a>.
Minor bug reported by rkanter and fixed by rkanter (jobtracker, mrv1)<br>
<b>Restarting the JobTracker programmatically can cause DelegationTokenRenewal to throw an exception</b><br>
<blockquote>The DelegationTokenRenewal class has a global Timer; when you stop the JobTracker by calling {{stopTracker()}} on it (or {{stopJobTracker()}} in MiniMRCluster), the JobTracker will call {{close()}} on DelegationTokenRenewal, which cancels the Timer. If you then start up the JobTracker again by calling {{startTracker()}} on it (or {{startJobTracker()}} in MiniMRCluster), the Timer won&apos;t necessarily be re-created; and DelegationTokenRenewal will later throw an exception when it tries to use th...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4778">MAPREDUCE-4778</a>.
Major bug reported by sandyr and fixed by sandyr (jobtracker, scheduler)<br>
<b>Fair scheduler event log is only written if directory exists on HDFS</b><br>
<blockquote>The fair scheduler event log is supposed to be written to the local filesystem, at {hadoop.log.dir}/fairscheduler. The event log will not be written unless this directory exists on HDFS.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4806">MAPREDUCE-4806</a>.
Major bug reported by kkambatl and fixed by kkambatl (mrv1)<br>
<b>Cleanup: Some (5) private methods in JobTracker.RecoveryManager are not used anymore after MAPREDUCE-3837</b><br>
<blockquote>MAPREDUCE-3837 re-organized the job recovery code, moving out the code that was using the methods in RecoveryManager.<br><br>Now, the following methods in {{JobTracker.RecoveryManager}} seem to be unused:<br># {{updateJob()}}<br># {{updateTip()}}<br># {{createTaskAttempt()}}<br># {{addSuccessfulAttempt()}}<br># {{addUnsuccessfulAttempt()}}</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4824">MAPREDUCE-4824</a>.
Major new feature reported by tomwhite and fixed by tomwhite (mrv1)<br>
<b>Provide a mechanism for jobs to indicate they should not be recovered on restart</b><br>
<blockquote>Some jobs (like Sqoop or HBase jobs) are not idempotent, so should not be recovered on jobtracker restart. MAPREDUCE-2702 solves this problem for MR2, however the approach there is not applicable for MR1, since even if we only use the job-level part of the patch and add a isRecoverySupported method to OutputCommitter, there is no way to use that information from the JT (which initiates recovery), since the JT does not instantiate OutputCommitters - and it shouldn&apos;t since they are user-level c...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4837">MAPREDUCE-4837</a>.
Major improvement reported by acmurthy and fixed by acmurthy <br>
<b>Add webservices for jobtracker</b><br>
<blockquote>Add MR-AM web-services to branch-1</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4838">MAPREDUCE-4838</a>.
Major improvement reported by acmurthy and fixed by zjshen <br>
<b>Add extra info to JH files</b><br>
<blockquote>It will be useful to add more task-info to JH for analytics.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4843">MAPREDUCE-4843</a>.
Critical bug reported by zhaoyunjiong and fixed by kkambatl (tasktracker)<br>
<b>When using DefaultTaskController, JobLocalizer not thread safe</b><br>
<blockquote>In our cluster, jobs sometimes fail due to the exception below:<br>2012-12-03 23:11:54,811 WARN org.apache.hadoop.mapred.TaskTracker: Error initializing attempt_201212031626_1115_r_000023_0:<br>org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find taskTracker/$username/jobcache/job_201212031626_1115/job.xml in any of the configured local directories<br> at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathToRead(LocalDirAllocator.java:424)<br> at org.apache.hadoop....</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4845">MAPREDUCE-4845</a>.
Major improvement reported by sandyr and fixed by sandyr (client)<br>
<b>ClusterStatus.getMaxMemory() and getUsedMemory() exist in MR1 but not MR2 </b><br>
<blockquote>For backwards compatibility, these methods should exist in both MR1 and MR2.<br><br>Confusingly, these methods return the max memory and used memory of the jobtracker, not the entire cluster.<br><br>I&apos;d propose to add them to MR2 and return -1, and deprecate them in both MR1 and MR2. Alternatively, I could add plumbing to get the resource manager memory stats.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4850">MAPREDUCE-4850</a>.
Major bug reported by tomwhite and fixed by tomwhite (mrv1)<br>
<b>Job recovery may fail if staging directory has been deleted</b><br>
<blockquote>The job staging directory is deleted in the job cleanup task, which happens before the job-info file is deleted from the system directory (by the JobInProgress garbageCollect() method). If the JT shuts down between these two operations, then when the JT restarts and tries to recover the job, it fails since the job.xml and splits are no longer available.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4860">MAPREDUCE-4860</a>.
Major bug reported by kkambatl and fixed by kkambatl (security)<br>
<b>DelegationTokenRenewal attempts to renew token even after a job is removed</b><br>
<blockquote>mapreduce.security.token.DelegationTokenRenewal synchronizes on removeDelegationToken, but fails to synchronize on addToken and on the renewing of tokens in run().<br><br>This inconsistency is exposed by frequent failures of TestDelegationTokenRenewal:<br>{noformat}<br>Error Message<br><br>renew wasn&apos;t called as many times as expected expected:&lt;4&gt; but was:&lt;5&gt;<br>Stacktrace<br><br>junit.framework.AssertionFailedError: renew wasn&apos;t called as many times as expected expected:&lt;4&gt; but was:&lt;5&gt;<br> at org.apache.hadoop.mapreduce.security....</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4904">MAPREDUCE-4904</a>.
Major bug reported by mgong@vmware.com and fixed by djp (test)<br>
<b>TestMultipleLevelCaching failed in branch-1</b><br>
<blockquote>TestMultipleLevelCaching fails:<br>{noformat}<br>Testcase: testMultiLevelCaching took 30.406 sec<br> FAILED<br>Number of local maps expected:&lt;0&gt; but was:&lt;1&gt;<br>junit.framework.AssertionFailedError: Number of local maps expected:&lt;0&gt; but was:&lt;1&gt;<br> at org.apache.hadoop.mapred.TestRackAwareTaskPlacement.launchJobAndTestCounters(TestRackAwareTaskPlacement.java:78)<br> at org.apache.hadoop.mapred.TestMultipleLevelCaching.testCachingAtLevel(TestMultipleLevelCaching.java:113)<br> at org.a...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4907">MAPREDUCE-4907</a>.
Major improvement reported by sandyr and fixed by sandyr (mrv1, tasktracker)<br>
<b>TrackerDistributedCacheManager issues too many getFileStatus calls</b><br>
<blockquote>TrackerDistributedCacheManager issues a number of redundant getFileStatus calls when determining the timestamps and visibilities of files in the distributed cache. 300 distributed cache files deep in the directory structure can hammer HDFS with a couple thousand requests.<br><br>A couple optimizations can reduce this load:<br>1. determineTimestamps and determineCacheVisibilities both call getFileStatus on every file. We could cache the results of the former and use them for the latter.<br>2. determineC...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4909">MAPREDUCE-4909</a>.
Major bug reported by arpitagarwal and fixed by arpitagarwal (test)<br>
<b>TestKeyValueTextInputFormat fails with Open JDK 7 on Windows</b><br>
<blockquote>TestKeyValueTextInputFormat.testFormat fails with Open JDK 7. The root cause appears to be a failure to delete in-use files via LocalFileSystem.delete (RawLocalFileSystem.delete).</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4914">MAPREDUCE-4914</a>.
Major bug reported by brandonli and fixed by brandonli (test)<br>
<b>TestMiniMRDFSSort fails with openJDK7</b><br>
<blockquote><br>{noformat}<br>Testcase: testJvmReuse took 0.063 sec<br> Caused an ERROR<br>Input path does not exist: hdfs://127.0.0.1:62473/sort/input<br>org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://127.0.0.1:62473/sort/input<br> at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:197)<br> at org.apache.hadoop.mapred.SequenceFileInputFormat.listStatus(SequenceFileInputFormat.java:40)<br> at org.apache.hadoop.mapred.FileInputFormat.getSplit...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4915">MAPREDUCE-4915</a>.
Major bug reported by brandonli and fixed by brandonli (test)<br>
<b>TestShuffleExceptionCount fails with open JDK7</b><br>
<blockquote>{noformat}<br>Testcase: testShuffleExceptionTrailingSize took 0.203 sec<br>Testcase: testExceptionCount took 0 sec<br>Testcase: testShuffleExceptionTrailing took 0 sec<br>Testcase: testCheckException took 0 sec<br> FAILED<br>abort called when set to off<br>junit.framework.AssertionFailedError: abort called when set to off<br> at org.apache.hadoop.mapred.TestShuffleExceptionCount.testCheckException(TestShuffleExceptionCount.java:57)<br>{noformat}<br><br>This is a test order-dependency bug. The static variable ab...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4916">MAPREDUCE-4916</a>.
Major bug reported by acmurthy and fixed by xgong <br>
<b>TestTrackerDistributedCacheManager is flaky due to other badly written tests in branch-1</b><br>
<blockquote>Credit to Xuan for figuring this out: TestTrackerDistributedCacheManager is flaky due to other badly written tests, since it checks for the existence of a directory upfront which might have bad perms.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4923">MAPREDUCE-4923</a>.
Minor bug reported by sandyr and fixed by sandyr (mrv1, mrv2, task)<br>
<b>Add toString method to TaggedInputSplit</b><br>
<blockquote>Per MAPREDUCE-3678, map task logs now contain information about the input split being processed. Because TaggedInputSplit has no overridden toString method, nothing useful gets printed out.</blockquote></li>
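<li> A minimal sketch of the kind of toString override the fix calls for; the class and field names below are illustrative stand-ins, not the actual TaggedInputSplit members.<br>
<pre>
// Illustrative stand-in for TaggedInputSplit, which wraps a delegate split.
class TaggedSplitExample {
  private final Object delegate;   // the wrapped InputSplit
  private final Class mapperClass; // the mapper tagged onto it

  TaggedSplitExample(Object delegate, Class mapperClass) {
    this.delegate = delegate;
    this.mapperClass = mapperClass;
  }

  // Without an override, task logs print Object's default
  // "ClassName@hashcode", which is the problem described above.
  @Override
  public String toString() {
    return "TaggedInputSplit{delegate=" + delegate
        + ", mapperClass=" + mapperClass.getName() + "}";
  }
}
</pre></li>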
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4924">MAPREDUCE-4924</a>.
Trivial bug reported by rkanter and fixed by rkanter (mrv1)<br>
<b>flakey test: org.apache.hadoop.mapred.TestClusterMRNotification.testMR</b><br>
<blockquote>I occasionally get a failure like this on {{org.apache.hadoop.mapred.TestClusterMRNotification.testMR}}<br><br>{code}<br>junit.framework.AssertionFailedError: expected:&lt;6&gt; but was:&lt;4&gt;<br> at junit.framework.Assert.fail(Assert.java:47)<br> at junit.framework.Assert.failNotEquals(Assert.java:283)<br> at junit.framework.Assert.assertEquals(Assert.java:64)<br> at junit.framework.Assert.assertEquals(Assert.java:195)<br> at junit.framework.Assert.assertEquals(Assert.java:201)<br> at org.apache.hadoop.mapred.NotificationTestC...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4929">MAPREDUCE-4929</a>.
Major bug reported by sandyr and fixed by sandyr (mrv1)<br>
<b>mapreduce.task.timeout is ignored</b><br>
<blockquote>In MR1, only mapred.task.timeout works. Both should be made to work.</blockquote></li>
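<li> Until both keys work, a hedged workaround is to set the old key, or both for portability (values in milliseconds; 600000 is illustrative):<br>
<pre>
import org.apache.hadoop.mapred.JobConf;

public class TimeoutConfig {
  public static void main(String[] args) {
    JobConf conf = new JobConf();
    // MR1 honors only the old key; setting both keeps a job portable.
    conf.setLong("mapred.task.timeout", 600000L);    // MR1 key, 10 minutes
    conf.setLong("mapreduce.task.timeout", 600000L); // MR2 key
  }
}
</pre></li>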
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4930">MAPREDUCE-4930</a>.
Major bug reported by kkambatl and fixed by kkambatl (examples)<br>
<b>Backport MAPREDUCE-4678 and MAPREDUCE-4925 to branch-1</b><br>
<blockquote>MAPREDUCE-4678 adds convenient arguments to Pentomino, which would be nice to have in other branches as well.<br><br>However, MR-4678 introduces a bug - MR-4925 addresses this bug for all branches.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4933">MAPREDUCE-4933</a>.
Major bug reported by sandyr and fixed by sandyr (mrv1, task)<br>
<b>MR1 final merge asks for length of file it just wrote before flushing it</b><br>
<blockquote>createKVIterator in ReduceTask contains the following code:<br>{code}<br><br> try {<br> Merger.writeFile(rIter, writer, reporter, job);<br> addToMapOutputFilesOnDisk(fs.getFileStatus(outputPath));<br> } catch (Exception e) {<br> if (null != outputPath) {<br> fs.delete(outputPath, true);<br> }<br> throw new IOException(&quot;Final merge failed&quot;, e);<br> } finally {<br> if (null != writer) {<br> writer.close();<br> ...</blockquote></li>
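<li> The gist of the problem, as a sketch: the stream must be flushed (closed) before the filesystem is asked for the file's length, otherwise the reported length can be stale. The helper below is hypothetical, not the ReduceTask code.<br>
<pre>
import java.io.IOException;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CloseBeforeStat {
  // Hypothetical helper: write, close, then stat - in that order.
  public static long writeThenMeasure(FileSystem fs, Path out, byte[] data)
      throws IOException {
    FSDataOutputStream stream = fs.create(out);
    try {
      stream.write(data);
    } finally {
      stream.close(); // flush first; a pre-close stat may see a stale length
    }
    return fs.getFileStatus(out).getLen();
  }
}
</pre></li>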
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4962">MAPREDUCE-4962</a>.
Major bug reported by sandyr and fixed by sandyr (jobtracker, mrv1)<br>
<b>jobdetails.jsp uses display name instead of real name to get counters</b><br>
<blockquote>jobdetails.jsp displays details for a job, including its counters. Counters may have different real names and display names, but the display names are used to look up the counter values, so counter values can incorrectly show up as 0.</blockquote></li>
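<li> A hedged example of the distinction: counters are registered under their real (internal) names, and lookups must use those rather than the display names. The group and counter names below are believed to be MR1's task counter names.<br>
<pre>
import org.apache.hadoop.mapred.Counters;

public class CounterLookup {
  // Looking a counter up by its display name can yield a fresh, zero-valued
  // counter, which is how values incorrectly show up as 0 in the UI.
  public static long mapInputRecords(Counters counters) {
    return counters
        .getGroup("org.apache.hadoop.mapred.Task$Counter") // real group name
        .getCounterForName("MAP_INPUT_RECORDS")            // real counter name
        .getValue();
  }
}
</pre></li>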
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4963">MAPREDUCE-4963</a>.
Major bug reported by rkanter and fixed by rkanter (mrv1)<br>
<b>StatisticsCollector improperly keeps track of &quot;Last Day&quot; and &quot;Last Hour&quot; statistics for new TaskTrackers</b><br>
<blockquote>The StatisticsCollector keeps track of updates to the &quot;Total Tasks Last Day&quot;, &quot;Succeed Tasks Last Day&quot;, &quot;Total Tasks Last Hour&quot;, and &quot;Succeeded Tasks Last Hour&quot; per Task Tracker which is displayed on the JobTracker web UI. It uses buckets to manage when to shift task counts from &quot;Last Hour&quot; to &quot;Last Day&quot; and out of &quot;Last Day&quot;. After the JT has been running for a while, the connected TTs will have the max number of buckets and will keep shifting them at each update. If a new TT connects (or...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4967">MAPREDUCE-4967</a>.
Major bug reported by cnauroth and fixed by kkambatl (tasktracker, test)<br>
<b>TestJvmReuse fails on assertion</b><br>
<blockquote>{{TestJvmReuse}} on branch-1 consistently fails on an assertion.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4969">MAPREDUCE-4969</a>.
Major bug reported by arpitagarwal and fixed by arpitagarwal (test)<br>
<b>TestKeyValueTextInputFormat test fails with Open JDK 7</b><br>
<blockquote>RawLocalFileSystem.delete fails on Windows even when the files are not expected to be in use. It does not reproduce with Sun JDK 6.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4970">MAPREDUCE-4970</a>.
Major bug reported by sandyr and fixed by sandyr <br>
<b>Child tasks (try to) create security audit log files</b><br>
<blockquote>After HADOOP-8552, MR child tasks will attempt to create security audit log files with their user names. On an insecure cluster, this has no effect, but on a secure cluster, log4j will try to create log files for tasks with names like SecurityAuth-joeuser.log.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5008">MAPREDUCE-5008</a>.
Major bug reported by sandyr and fixed by sandyr <br>
<b>Merger progress miscounts with respect to EOF_MARKER</b><br>
<blockquote>After MAPREDUCE-2264, a segment&apos;s raw data length is calculated without the EOF_MARKER bytes. However, when the merge is counting how many bytes it processed, it includes the marker. This can cause the merge progress to go above 100%.<br><br>Whether these EOF_MARKER bytes count should be consistent between the two.<br><br>This is a JIRA instead of an amendment because MAPREDUCE-2264 already went into 2.0.3.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5028">MAPREDUCE-5028</a>.
Critical bug reported by kkambatl and fixed by kkambatl <br>
<b>Maps fail when io.sort.mb is set to high value</b><br>
<blockquote>Verified the problem exists on branch-1 with the following configuration:<br><br>Pseudo-dist mode: 2 maps/ 1 reduce, mapred.child.java.opts=-Xmx2048m, io.sort.mb=1280, dfs.block.size=2147483648<br><br>Run teragen to generate 4 GB data<br>Maps fail when you run wordcount on this configuration with the following error: <br>{noformat}<br>java.io.IOException: Spill failed<br> at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1031)<br> at org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTa...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5035">MAPREDUCE-5035</a>.
Major bug reported by tomwhite and fixed by tomwhite (mrv1)<br>
<b>Update MR1 memory configuration docs</b><br>
<blockquote>The pmem/vmem settings in the docs (http://hadoop.apache.org/docs/r1.1.1/cluster_setup.html#Memory+monitoring) have not been supported for a long time. The docs should be updated to reflect the new settings (mapred.cluster.map.memory.mb etc).</blockquote></li>
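<li> The newer keys, for reference, set programmatically here for brevity (values are illustrative; see the updated docs for the semantics):<br>
<pre>
import org.apache.hadoop.mapred.JobConf;

public class MemoryLimits {
  public static void main(String[] args) {
    JobConf conf = new JobConf();
    // Cluster-wide slot sizes, in MB.
    conf.setLong("mapred.cluster.map.memory.mb", 1536);
    conf.setLong("mapred.cluster.reduce.memory.mb", 2048);
    // Per-job requests, in MB.
    conf.setLong("mapred.job.map.memory.mb", 1536);
    conf.setLong("mapred.job.reduce.memory.mb", 2048);
  }
}
</pre></li>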
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5049">MAPREDUCE-5049</a>.
Major bug reported by sandyr and fixed by sandyr <br>
<b>CombineFileInputFormat counts all compressed files as non-splittable</b><br>
<blockquote>In branch-1, CombineFileInputFormat doesn&apos;t take SplittableCompressionCodec into account and treats all compressed input files as non-splittable. This is a regression from when handling for non-splittable compression codecs was originally added in MAPREDUCE-1597, and it seems to have crept in when the code was pulled from 0.22 to branch-1.<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5066">MAPREDUCE-5066</a>.
Major bug reported by ivanmi and fixed by ivanmi <br>
<b>JobTracker should set a timeout when calling into job.end.notification.url</b><br>
<blockquote>In current code, timeout is not specified when JobTracker (JobEndNotifier) calls into the notification URL. When the given URL points to a server that will not respond for a long time, job notifications are completely stuck (given that we have only a single thread processing all notifications). We&apos;ve seen this cause noticeable delays in job execution in components that rely on job end notifications (like Oozie workflows). <br><br>I propose we introduce a configurable timeout option and set a defaul...</blockquote></li>
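<li> A minimal sketch of the proposed behavior; the timeout plumbing below is illustrative, not the committed patch, and the property wiring is left out.<br>
<pre>
import java.net.HttpURLConnection;
import java.net.URL;

public class NotificationProbe {
  // Bound the notification call; without timeouts a single unresponsive
  // endpoint stalls the lone notification thread.
  public static int notify(String notificationUrl, int timeoutMs)
      throws Exception {
    HttpURLConnection conn =
        (HttpURLConnection) new URL(notificationUrl).openConnection();
    conn.setConnectTimeout(timeoutMs); // bound connection establishment
    conn.setReadTimeout(timeoutMs);    // bound waiting for the response
    try {
      return conn.getResponseCode();
    } finally {
      conn.disconnect();
    }
  }
}
</pre></li>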
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5081">MAPREDUCE-5081</a>.
Major new feature reported by szetszwo and fixed by szetszwo (distcp)<br>
<b>Backport DistCpV2 and the related JIRAs to branch-1</b><br>
<blockquote>Here is a list of DistCpV2 JIRAs:<br>- MAPREDUCE-2765: DistCpV2 main jira<br>- HADOOP-8703: turn CRC checking off for 0 byte size <br>- HDFS-3054: distcp -skipcrccheck has no effect.<br>- HADOOP-8431: Running distcp without args throws IllegalArgumentException<br>- HADOOP-8775: non-positive value to -bandwidth<br>- MAPREDUCE-4654: TestDistCp is ignored<br>- HADOOP-9022: distcp fails to copy file if -m 0 specified<br>- HADOOP-9025: TestCopyListing failing<br>- MAPREDUCE-5075: DistCp leaks input file handles<br>- distcp par...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5129">MAPREDUCE-5129</a>.
Minor new feature reported by billie.rinaldi and fixed by billie.rinaldi <br>
<b>Add tag info to JH files</b><br>
<blockquote>It will be useful to add tags to the existing workflow info logged by JH. This will allow jobs to be filtered/grouped for analysis more easily.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5131">MAPREDUCE-5131</a>.
Major bug reported by acmurthy and fixed by acmurthy <br>
<b>Provide better handling of job status related apis during JT restart</b><br>
<blockquote>I&apos;ve seen pig/hive applications bork during JT restart since they get NPEs - this is due to the fact that jobs have been submitted but not yet initialized.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5154">MAPREDUCE-5154</a>.
Major bug reported by sandyr and fixed by sandyr (jobtracker)<br>
<b>staging directory deletion fails because delegation tokens have been cancelled</b><br>
<blockquote>In a secure setup, the jobtracker needs the job&apos;s delegation tokens to delete the staging directory. MAPREDUCE-4850 made it so that job cleanup staging directory deletion occurs asynchronously, so that it could order it with system directory deletion. This introduced the issue that a job&apos;s delegation tokens could be cancelled before the cleanup thread got around to deleting it, causing the deletion to fail.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5158">MAPREDUCE-5158</a>.
Major bug reported by yeshavora and fixed by mayank_bansal (jobtracker)<br>
<b>Cleanup required when mapreduce.job.restart.recover is set to false</b><br>
<blockquote>When mapred.jobtracker.restart.recover is set to true and mapreduce.job.restart.recover is set to false for an MR job, job cleanup never happens for that job if the JT restarts while the job is running.<br><br>The .staging directory and job-info file for that job remain on HDFS forever.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5166">MAPREDUCE-5166</a>.
Blocker bug reported by hagleitn and fixed by sandyr <br>
<b>ConcurrentModificationException in LocalJobRunner</b><br>
<blockquote>With the latest version hive unit tests fail in various places with the following stack trace. The problem seems related to: MAPREDUCE-2931<br><br>{noformat}<br> [junit] java.util.ConcurrentModificationException<br> [junit] at java.util.HashMap$HashIterator.nextEntry(HashMap.java:793)<br> [junit] at java.util.HashMap$ValueIterator.next(HashMap.java:822)<br> [junit] at org.apache.hadoop.mapred.Counters.incrAllCounters(Counters.java:505)<br> [junit] at org.apache.hadoop.mapred.Counters.sum(Counte...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5169">MAPREDUCE-5169</a>.
Major bug reported by arpitgupta and fixed by acmurthy <br>
<b>Job recovery fails if job tracker is restarted after the job is submitted but before its initialized</b><br>
<blockquote>This was noticed when the job tracker was restarted within 5 seconds of submitting a word count job. Upon restart, the job failed to recover.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5198">MAPREDUCE-5198</a>.
Major bug reported by arpitgupta and fixed by arpitgupta (tasktracker)<br>
<b>Race condition in cleanup during task tracker reinit with LinuxTaskController</b><br>
<blockquote>This was noticed when the job tracker was restarted while jobs were running and asked the task tracker to reinitialize.<br><br>The tasktracker would fail with an error like:<br><br>{code}<br>2013-04-27 20:19:09,627 INFO org.apache.hadoop.mapred.TaskTracker: Good mapred local directories are: /grid/0/hdp/mapred/local,/grid/1/hdp/mapred/local,/grid/2/hdp/mapred/local,/grid/3/hdp/mapred/local,/grid/4/hdp/mapred/local,/grid/5/hdp/mapred/local<br>2013-04-27 20:19:09,628 INFO org.apache.hadoop.ipc.Server: IPC Server...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5202">MAPREDUCE-5202</a>.
Major bug reported by owen.omalley and fixed by owen.omalley <br>
<b>Revert MAPREDUCE-4397 to avoid using incorrect config files</b><br>
<blockquote>MAPREDUCE-4397 added the capability to switch the location of the taskcontroller.cfg file, which weakens security.</blockquote></li>
</ul>
<h2>Changes since Hadoop 1.1.1</h2>
<h3>Jiras with Release Notes (describe major or incompatible changes)</h3>
<ul>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8567">HADOOP-8567</a>.
Major new feature reported by djp and fixed by jingzhao (conf)<br>
<b>Port conf servlet to dump running configuration to branch 1.x</b><br>
<blockquote> Users can use the conf servlet to get the server-side configuration. Users can <br/>
<br/>
1) connect to http_server_url/conf or http_server_url/conf?format=xml and get XML-based configuration description; <br/>
2) connect to http_server_url/conf?format=json and get JSON-based configuration description.
</blockquote></li>
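<li> For example, a client can read the running configuration as follows (host and port are placeholders):<br>
<pre>
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class ConfServletClient {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://namenode.example.com:50070/conf?format=json");
    BufferedReader in =
        new BufferedReader(new InputStreamReader(url.openStream(), "UTF-8"));
    String line;
    while ((line = in.readLine()) != null) {
      System.out.println(line); // JSON dump of the server-side configuration
    }
    in.close();
  }
}
</pre></li>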
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9115">HADOOP-9115</a>.
Blocker bug reported by arpitgupta and fixed by jingzhao <br>
<b>Deadlock in configuration when writing configuration to hdfs</b><br>
<blockquote> This fixes a bug where Hive could trigger a deadlock condition in the Hadoop configuration management code.
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4478">MAPREDUCE-4478</a>.
Major bug reported by liangly and fixed by liangly <br>
<b>TaskTracker&apos;s heartbeat is out of control</b><br>
<blockquote> Fixed a bug in TaskTracker&#39;s heartbeat to keep it under control.
</blockquote></li>
</ul>
<h3>Other Jiras (describe bug fixes and minor changes)</h3>
<ul>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8418">HADOOP-8418</a>.
Major bug reported by vicaya and fixed by crystal_gaoyu (security)<br>
<b>Fix UGI for IBM JDK running on Windows</b><br>
<blockquote>The login module and user principal classes are different for 32- and 64-bit Windows in IBM J9 JDK 6 SR10. Hadoop 1.0.3 does not run on either because it uses the 32-bit login module and the 64-bit user principal class.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8419">HADOOP-8419</a>.
Major bug reported by vicaya and fixed by carp84 (io)<br>
<b>GzipCodec NPE upon reset with IBM JDK</b><br>
<blockquote>The GzipCodec will NPE upon reset after finish when the native zlib codec is not loaded. When the native zlib is loaded the codec creates a CompressorOutputStream that doesn&apos;t have the problem, otherwise, the GZipCodec uses GZIPOutputStream which is extended to provide the resetState method. Since IBM JDK 6 SR9 FP2 including the current JDK 6 SR10, GZIPOutputStream#finish will release the underlying deflater, which causes NPE upon reset. This seems to be an IBM JDK quirk as Sun JDK and OpenJD...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8561">HADOOP-8561</a>.
Major improvement reported by vicaya and fixed by crystal_gaoyu (security)<br>
<b>Introduce HADOOP_PROXY_USER for secure impersonation in child hadoop client processes</b><br>
<blockquote>To solve the problem for an authenticated user to type hadoop shell commands in a web console, we can introduce an HADOOP_PROXY_USER environment variable to allow proper impersonation in the child hadoop client processes.</blockquote></li>
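<li> A hedged sketch of how a child client process could act on such a variable, built on the existing proxy-user API; the wiring below is illustrative, not the patch itself.<br>
<pre>
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;

public class ProxyUserClient {
  public static void main(String[] args) throws Exception {
    String proxyUser = System.getenv("HADOOP_PROXY_USER");
    UserGroupInformation ugi = (proxyUser == null)
        ? UserGroupInformation.getCurrentUser()
        : UserGroupInformation.createProxyUser(
              proxyUser, UserGroupInformation.getCurrentUser());
    // Run the client action as the (possibly impersonated) user.
    ugi.doAs(new PrivilegedExceptionAction&lt;FileSystem&gt;() {
      public FileSystem run() throws Exception {
        return FileSystem.get(new Configuration());
      }
    });
  }
}
</pre></li>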
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8880">HADOOP-8880</a>.
Major bug reported by gkesavan and fixed by gkesavan <br>
<b>Missing jersey jars as dependency in the pom causes hive tests to fail</b><br>
<blockquote>ivy.xml has the dependency included, whereas the same dependency is not updated in the pom template.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9051">HADOOP-9051</a>.
Minor test reported by mgong@vmware.com and fixed by vicaya (test)<br>
<b>&quot;ant test&quot; build fails when trying to delete a file</b><br>
<blockquote>Run &quot;ant test&quot; on branch-1 of hadoop-common.<br>When the test process reaches &quot;test-core-excluding-commit-and-smoke&quot;,<br><br>it invokes the &quot;macro-test-runner&quot; to clear and rebuild the test environment.<br>The ant task command &lt;delete dir=&quot;@{test.dir}/logs&quot; /&gt;<br>then fails trying to delete a non-existent file.<br><br>Following are the test result logs:<br>test-core-excluding-commit-and-smoke:<br> [delete] Deleting: /home/jdu/bdc/hadoop-topology-branch1-new/hadoop-common/build/test/testsfailed<br> [delete] Dele...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9111">HADOOP-9111</a>.
Minor improvement reported by jingzhao and fixed by jingzhao (test)<br>
<b>Fix failed testcases with @ignore annotation In branch-1</b><br>
<blockquote>Currently in branch-1, several failed testcases have @ignore annotation which does not take effect because these testcases are still using JUnit3. This jira plans to change these testcases to JUnit4 to let @ignore work.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3727">HDFS-3727</a>.
Major bug reported by atm and fixed by atm (namenode)<br>
<b>When using SPNEGO, NN should not try to log in using KSSL principal</b><br>
<blockquote>When performing a checkpoint with security enabled, the NN will attempt to relogin from its keytab before making an HTTP request back to the 2NN to fetch the newly-merged image. However, it always attempts to log in using the KSSL principal, even if SPNEGO is configured to be used.<br><br>This issue was discovered by Stephen Chu.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4208">HDFS-4208</a>.
Critical bug reported by brandonli and fixed by brandonli (namenode)<br>
<b>NameNode could be stuck in SafeMode due to never-created blocks</b><br>
<blockquote>In one test case, NameNode allocated a block and then was killed before the client got the addBlock response. After NameNode restarted, it couldn&apos;t get out of SafeMode waiting for the block which was never created. In trunk, NameNode can get out of SafeMode since it only counts complete blocks. However, branch-1 doesn&apos;t have a clear notion of under-construction blocks in the NameNode. <br><br>JIRA HDFS-4212 is to track the never-created-block issue and this JIRA is to fix NameNode in branch-1 so it c...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4252">HDFS-4252</a>.
Major improvement reported by sureshms and fixed by jingzhao (namenode)<br>
<b>Improve confusing log message that prints exception when editlog read is completed</b><br>
<blockquote>Namenode prints a log message with an exception to indicate successful completion of reading the editlog. This has caused misunderstanding, with people interpreting it as a failure to load the editlog. The log message could be better.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4423">HDFS-4423</a>.
Blocker bug reported by chenfolin and fixed by cnauroth (namenode)<br>
<b>Checkpoint exception causes fatal damage to fsimage.</b><br>
<blockquote>The impact of class is org.apache.hadoop.hdfs.server.namenode.FSImage.java<br>{code}<br>boolean loadFSImage(MetaRecoveryContext recovery) throws IOException {<br>...<br>latestNameSD.read();<br> needToSave |= loadFSImage(getImageFile(latestNameSD, NameNodeFile.IMAGE));<br> LOG.info(&quot;Image file of size &quot; + imageSize + &quot; loaded in &quot; <br> + (FSNamesystem.now() - startTime)/1000 + &quot; seconds.&quot;);<br> <br> // Load latest edits<br> if (latestNameCheckpointTime &gt; latestEditsCheckpointTime)<br> // the image i...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2374">MAPREDUCE-2374</a>.
Major bug reported by tlipcon and fixed by adi2 <br>
<b>&quot;Text File Busy&quot; errors launching MR tasks</b><br>
<blockquote>Some very small percentage of tasks fail with a &quot;Text file busy&quot; error.<br><br>The following was the original diagnosis:<br>{quote}<br>Our use of PrintWriter in TaskController.writeCommand is unsafe, since that class swallows all IO exceptions. We&apos;re not currently checking for errors, which I&apos;m seeing result in occasional task failures with the message &quot;Text file busy&quot; - assumedly because the close() call is failing silently for some reason.<br>{quote}<br>.. but turned out to be another issue as well (see below)</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4272">MAPREDUCE-4272</a>.
Major bug reported by vicaya and fixed by crystal_gaoyu (task)<br>
<b>SortedRanges.Range#compareTo is not spec compliant</b><br>
<blockquote>SortedRanges.Range#compareTo does not satisfy the requirement of Comparable#compareTo, where &quot;the implementor must ensure {noformat}sgn(x.compareTo(y)) == -sgn(y.compareTo(x)){noformat} for all x and y.&quot;<br><br>This is manifested as TestStreamingBadRecords failures in alternative JDKs.</blockquote></li>
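<li> For reference, a compareTo that satisfies the quoted antisymmetry requirement compares both endpoints deterministically. This is a sketch, not the actual Range fix.<br>
<pre>
// Illustrative range type; SortedRanges.Range in MapReduce is the real one.
class SimpleRange implements Comparable&lt;SimpleRange&gt; {
  final long start;
  final long end;

  SimpleRange(long start, long end) {
    this.start = start;
    this.end = end;
  }

  // Satisfies sgn(x.compareTo(y)) == -sgn(y.compareTo(x)) for all x, y.
  public int compareTo(SimpleRange other) {
    if (start != other.start) {
      return start &lt; other.start ? -1 : 1;
    }
    return end &lt; other.end ? -1 : (end &gt; other.end ? 1 : 0);
  }
}
</pre></li>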
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4396">MAPREDUCE-4396</a>.
Minor bug reported by vicaya and fixed by crystal_gaoyu (client)<br>
<b>Make LocalJobRunner work with private distributed cache</b><br>
<blockquote>Some LocalJobRunner-related unit tests fail if the user directory permissions and/or umask are too restrictive.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4397">MAPREDUCE-4397</a>.
Major improvement reported by vicaya and fixed by crystal_gaoyu (task-controller)<br>
<b>Introduce HADOOP_SECURITY_CONF_DIR for task-controller</b><br>
<blockquote>The linux task controller currently hard codes the directory in which to look for its config file at compile time (via the HADOOP_CONF_DIR macro). Adding a new environment variable to look for task-controller&apos;s conf dir (with strict permission checks) would make installation much more flexible.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4696">MAPREDUCE-4696</a>.
Minor bug reported by gopalv and fixed by gopalv <br>
<b>TestMRServerPorts throws NullReferenceException</b><br>
<blockquote>TestMRServerPorts throws <br><br>{code}<br>java.lang.NullPointerException<br> at org.apache.hadoop.mapred.TestMRServerPorts.canStartJobTracker(TestMRServerPorts.java:99)<br> at org.apache.hadoop.mapred.TestMRServerPorts.testJobTrackerPorts(TestMRServerPorts.java:152)<br>{code}<br><br>Use the JobTracker.startTracker(string, string, boolean initialize) factory method to get a pre-initialized JobTracker for the test.<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4697">MAPREDUCE-4697</a>.
Minor bug reported by gopalv and fixed by gopalv <br>
<b>TestMapredHeartbeat fails assertion on HeartbeatInterval</b><br>
<blockquote>TestMapredHeartbeat fails its assertion on the heartbeat interval:<br><br>{code}<br> FAILED<br>expected:&lt;300&gt; but was:&lt;500&gt;<br>junit.framework.AssertionFailedError: expected:&lt;300&gt; but was:&lt;500&gt;<br> at org.apache.hadoop.mapred.TestMapredHeartbeat.testJobDirCleanup(TestMapredHeartbeat.java:68)<br>{code}<br><br>Replicate the math for getNextHeartbeatInterval() in the test case to ensure MRConstants changes do not break the test case.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4699">MAPREDUCE-4699</a>.
Minor bug reported by gopalv and fixed by gopalv <br>
<b>TestFairScheduler &amp; TestCapacityScheduler fails due to JobHistory exception</b><br>
<blockquote>TestFairScheduler fails due to exception from mapred.JobHistory<br><br>{code}<br>null<br>java.lang.NullPointerException<br> at org.apache.hadoop.mapred.JobHistory$JobInfo.logJobPriority(JobHistory.java:1975)<br> at org.apache.hadoop.mapred.JobInProgress.setPriority(JobInProgress.java:895)<br> at org.apache.hadoop.mapred.TestFairScheduler.testFifoPool(TestFairScheduler.java:2617)<br>{code}<br><br>TestCapacityScheduler fails due to<br><br>{code}<br>java.lang.NullPointerException<br> at org.apache.hadoop.mapred.JobHistory$JobInfo.log...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4798">MAPREDUCE-4798</a>.
Minor bug reported by sam liu and fixed by (jobhistoryserver, test)<br>
<b>TestJobHistoryServer sometimes fails with &apos;java.lang.AssertionError: Address already in use&apos;</b><br>
<blockquote>UT Failure in IHC 1.0.3: org.apache.hadoop.mapred.TestJobHistoryServer. This UT fails sometimes.<br><br>The error message is:<br>&apos;Testcase: testHistoryServerStandalone took 5.376 sec<br> Caused an ERROR<br>Address already in use<br>java.lang.AssertionError: Address already in use<br> at org.apache.hadoop.mapred.TestJobHistoryServer.testHistoryServerStandalone(TestJobHistoryServer.java:113)&apos;</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4858">MAPREDUCE-4858</a>.
Major bug reported by acmurthy and fixed by acmurthy <br>
<b>TestWebUIAuthorization fails on branch-1</b><br>
<blockquote>TestWebUIAuthorization fails on branch-1</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4859">MAPREDUCE-4859</a>.
Major bug reported by acmurthy and fixed by acmurthy <br>
<b>TestRecoveryManager fails on branch-1</b><br>
<blockquote>Looks like the tests are extremely flaky and just hang.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4888">MAPREDUCE-4888</a>.
Blocker bug reported by revans2 and fixed by vinodkv (mrv1)<br>
<b>NLineInputFormat drops data in 1.1 and beyond</b><br>
<blockquote>When trying to root cause why MAPREDUCE-4782 did not cause us issues on 1.0.2, I found out that HADOOP-7823 introduced essentially the exact same error into org.apache.hadoop.mapred.lib.NLineInputFormat.<br><br>In 1.X org.apache.hadoop.mapred.lib.NLineInputFormat and org.apache.hadoop.mapreduce.lib.input.NLineInputFormat are separate implementations. The latter had an off by one error in it until MAPREDUCE-4782 fixed it. The former had no error in it until HADOOP-7823 introduced it in 1.1 and MAPR...</blockquote></li>
</ul>
<h2>Changes since Hadoop 1.1.0</h2>
<h3>Jiras with Release Notes (describe major or incompatible changes)</h3>
<ul>
None.
</ul>
<h3>Other Jiras (describe bug fixes and minor changes)</h3>
<ul>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8745">HADOOP-8745</a>.
Minor bug reported by mafr and fixed by mafr <br>
<b>Incorrect version numbers in hadoop-core POM</b><br>
<blockquote>The hadoop-core POM as published to Maven central has different dependency versions than Hadoop actually has on its runtime classpath. This can lead to client code working in unit tests but failing on the cluster and vice versa.<br><br>The following version numbers are incorrect: jackson-mapper-asl, kfs, and jets3t. There&apos;s also a duplicate dependency to commons-net.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8823">HADOOP-8823</a>.
Major improvement reported by szetszwo and fixed by szetszwo (build)<br>
<b>ant package target should not depend on cn-docs</b><br>
<blockquote>In branch-1, the package target depends on cn-docs but the doc is already outdated.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8878">HADOOP-8878</a>.
Major bug reported by arpitgupta and fixed by arpitgupta <br>
<b>uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem and fsck to fail when security is on</b><br>
<blockquote>This was noticed on a secure cluster where the namenode had an upper-case hostname and the following command was issued:<br><br>hadoop dfs -ls webhdfs://NN:PORT/PATH<br><br>The above command failed because delegation token retrieval failed.<br><br>Upon looking at the Kerberos logs, it was determined that we tried to get the ticket for a Kerberos principal with an upper-case hostname, and that host did not exist in Kerberos. We should convert the hostnames to lower case. Take a look at HADOOP-7988 where the same fix wa...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8882">HADOOP-8882</a>.
Major bug reported by arpitgupta and fixed by arpitgupta <br>
<b>uppercase namenode host name causes fsck to fail when useKsslAuth is on</b><br>
<blockquote>{code}<br> public static void fetchServiceTicket(URL remoteHost) throws IOException {<br> if(!UserGroupInformation.isSecurityEnabled())<br> return;<br> <br> String serviceName = &quot;host/&quot; + remoteHost.getHost();<br>{code}<br><br>the hostname should be converted to lower case. Saw this in branch 1, will look at trunk and update the bug accordingly.</blockquote></li>
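<li> The gist of the fix in the excerpt's terms, as a sketch (see the committed patch for the exact form):<br>
<pre>
import java.net.URL;
import java.util.Locale;

public class ServicePrincipal {
  // Kerberos principals are case-sensitive while hostnames are not,
  // so normalize before building the service name.
  public static String serviceName(URL remoteHost) {
    return "host/" + remoteHost.getHost().toLowerCase(Locale.US);
  }
}
</pre></li>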
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8995">HADOOP-8995</a>.
Minor bug reported by jingzhao and fixed by jingzhao <br>
<b>Remove unnecessary bogus exception log from Configuration</b><br>
<blockquote>In Configuration#Configuration(boolean) and Configuration#Configuration(Configuration), bogus exceptions are thrown when Log level is DEBUG.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9017">HADOOP-9017</a>.
Major bug reported by gkesavan and fixed by gkesavan (build)<br>
<b>fix hadoop-client-pom-template.xml and hadoop-client-pom-template.xml for version </b><br>
<blockquote>hadoop-client-pom-template.xml and hadoop-client-pom-template.xml reference the project.version variable; instead they should refer to the @version token.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-528">HDFS-528</a>.
Major new feature reported by tlipcon and fixed by tlipcon (scripts)<br>
<b>Add ability for safemode to wait for a minimum number of live datanodes</b><br>
<blockquote>When starting up a fresh cluster programmatically, users often want to wait until DFS is &quot;writable&quot; before continuing in a script. &quot;dfsadmin -safemode wait&quot; doesn&apos;t quite work for this on a completely fresh cluster, since when there are 0 blocks on the system, 100% of them are accounted for before any DNs have reported.<br><br>This JIRA is to add a command which waits until a certain number of DNs have reported as alive to the NN.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1108">HDFS-1108</a>.
Major sub-task reported by dhruba and fixed by tlipcon (ha, name-node)<br>
<b>Log newly allocated blocks</b><br>
<blockquote>The current HDFS design says that newly allocated blocks for a file are not persisted in the NN transaction log when the block is allocated. Instead, a hflush() or a close() on the file persists the blocks into the transaction log. It would be nice if we can immediately persist newly allocated blocks (as soon as they are allocated) for specific files.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1539">HDFS-1539</a>.
Major improvement reported by dhruba and fixed by dhruba (data-node, hdfs client, name-node)<br>
<b>prevent data loss when a cluster suffers a power loss</b><br>
<blockquote>We have seen an instance where an external outage caused many datanodes to reboot at around the same time. This resulted in many corrupted blocks. These were recently written blocks; the current implementation of the HDFS Datanode does not sync the data of a block file when the block is closed.<br><br>1. Have a cluster-wide config setting that causes the datanode to sync a block file when a block is finalized.<br>2. Introduce a new parameter to the FileSystem.create() to trigger the new behaviour, i.e. cau...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2815">HDFS-2815</a>.
Critical bug reported by umamaheswararao and fixed by umamaheswararao (name-node)<br>
<b>Namenode is not coming out of safemode when we perform (NN crash + restart). Also FSCK report shows missed blocks.</b><br>
<blockquote>When testing (internal) HA with continuous switches at roughly 5-minute intervals, we found some *blocks missed* and the namenode went into safemode after the next switch.<br> <br> After analysis, I found that these files had already been deleted by clients, but I don&apos;t see any delete command logs in the namenode log files. The namenode nevertheless added those blocks to invalidateSets and the DNs deleted the blocks.<br> When the namenode was restarted, it went into safemode, expecting some more blocks in order to come out of safemode.<br><br> Here the reaso...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3658">HDFS-3658</a>.
Major bug reported by eli and fixed by szetszwo <br>
<b>TestDFSClientRetries#testNamenodeRestart failed</b><br>
<blockquote>Saw the following fail on a jenkins run:<br><br>{noformat}<br>Error Message<br><br>expected:&lt;MD5-of-0MD5-of-512CRC32:f397fb3d9133d0a8f55854ea2bb268b0&gt; but was:&lt;MD5-of-0MD5-of-0CRC32:70bc8f4b72a86921468bf8e8441dce51&gt;<br>Stacktrace<br><br>junit.framework.AssertionFailedError: expected:&lt;MD5-of-0MD5-of-512CRC32:f397fb3d9133d0a8f55854ea2bb268b0&gt; but was:&lt;MD5-of-0MD5-of-0CRC32:70bc8f4b72a86921468bf8e8441dce51&gt;<br> at junit.framework.Assert.fail(Assert.java:47)<br> at junit.framework.Assert.failNotEquals(Assert.java:283)<br> at jun...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3791">HDFS-3791</a>.
Major bug reported by umamaheswararao and fixed by umamaheswararao (name-node)<br>
<b>Backport HDFS-173 to Branch-1 : Recursively deleting a directory with millions of files makes NameNode unresponsive for other commands until the deletion completes</b><br>
<blockquote>Backport HDFS-173. <br>see the [comment|https://issues.apache.org/jira/browse/HDFS-2815?focusedCommentId=13422007&amp;page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13422007] for more details</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3846">HDFS-3846</a>.
Major bug reported by szetszwo and fixed by brandonli (name-node)<br>
<b>Namenode deadlock in branch-1</b><br>
<blockquote>Jitendra found out the following problem:<br>1. Handler : Acquires namesystem lock waits on SafemodeInfo lock at SafeModeInfo.isOn()<br>2. SafemodeMonitor : Calls SafeModeInfo.canLeave() which is synchronized so SafemodeInfo lock is acquired, but this method also causes following call sequence needEnter() -&gt; getNumLiveDataNodes() -&gt; getNumberOfDatanodes() -&gt; getDatanodeListForReport() -&gt; getDatanodeListForReport() . The getDatanodeListForReport is synchronized with FSNamesystem lock.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4105">HDFS-4105</a>.
Major bug reported by arpitgupta and fixed by arpitgupta <br>
<b>the SPNEGO user for secondary namenode should use the web keytab</b><br>
<blockquote>This is similar to HDFS-3466 where we made sure the namenode checks for the web keytab before it uses the namenode keytab.<br><br>The same needs to be done for secondary namenode as well.<br><br>{code}<br>String httpKeytab = <br> conf.get(DFSConfigKeys.DFS_SECONDARY_NAMENODE_KEYTAB_FILE_KEY);<br> if (httpKeytab != null &amp;&amp; !httpKeytab.isEmpty()) {<br> params.put(&quot;kerberos.keytab&quot;, httpKeytab);<br> }<br>{code}</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4134">HDFS-4134</a>.
Minor bug reported by stevel@apache.org and fixed by (name-node)<br>
<b>hadoop namenode &amp; datanode entry points should return negative exit code on bad arguments</b><br>
<blockquote>When you run {{hadoop namenode start}} (or some other bad argument to the namenode), a usage message is generated, but the script returns 0.<br><br>This stops it from being a robust command to invoke from other scripts, and is inconsistent with the JT &amp; TT entry points, which do return -1 on a usage message.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4161">HDFS-4161</a>.
Major bug reported by sureshms and fixed by szetszwo (hdfs client)<br>
<b>HDFS keeps a thread open for every file writer</b><br>
<blockquote>In 1.0 release DFSClient uses a thread per file writer. In some use cases (dynamic partions in hive) that use a large number of file writers a large number of threads are created. The file writer thread has the following stack:<br>{noformat}<br>at java.lang.Thread.sleep(Native Method)<br>at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.run(DFSClient.java:1462)<br>at java.lang.Thread.run(Thread.java:662)<br>{noformat}<br><br>This problem has been fixed in later releases. This jira will post a consolidated patch fr...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-4174">HDFS-4174</a>.
Major improvement reported by jingzhao and fixed by jingzhao <br>
<b>Backport HDFS-1031 to branch-1: to list a few of the corrupted files in WebUI</b><br>
<blockquote>1. Add getCorruptFiles method to FSNamesystem (the getCorruptFiles method is in branch-0.21 but not in branch-1).<br><br>2. Backport HDFS-1031: display corrupt files in WebUI.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4749">MAPREDUCE-4749</a>.
Major bug reported by arpitgupta and fixed by arpitgupta <br>
<b>Killing multiple attempts of a task takes longer as more attempts are killed</b><br>
<blockquote>The following was noticed on a mr job running on hadoop 1.1.0<br><br>1. Start an mr job with 1 mapper<br><br>2. Wait for a min<br><br>3. Kill the first attempt of the mapper and then subsequently kill the other 3 attempts in order to fail the job<br><br>The time taken to kill the task grew exponentially.<br><br>1st attempt was killed immediately.<br>2nd attempt took a little over a min<br>3rd attempt took approx. 20 mins<br>4th attempt took around 3 hrs.<br><br>The command used to kill the attempt was &quot;hadoop job -fail-task&quot;<br><br>Note that ...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4782">MAPREDUCE-4782</a>.
Blocker bug reported by mark.fuhs and fixed by mark.fuhs (client)<br>
<b>NLineInputFormat skips first line of last InputSplit</b><br>
<blockquote>NLineInputFormat creates FileSplits that are then used by LineRecordReader to generate Text values. To deal with an idiosyncrasy of LineRecordReader, the begin and length fields of the FileSplit are constructed differently for the first FileSplit vs. the rest.<br><br>After looping through all lines of a file, the final FileSplit is created, but the creation does not respect the difference of how the first vs. the rest of the FileSplits are created.<br><br>This results in the first line of the final Input...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4792">MAPREDUCE-4792</a>.
Major bug reported by asanjar and fixed by asanjar (test)<br>
<b>Unit Test TestJobTrackerRestartWithLostTracker fails with ant-1.8.4</b><br>
<blockquote>Problem:<br>The JUnit tag @Ignore is not recognized because the testcase is JUnit3, not JUnit4.<br>Solution:<br>Migrate the testcase to JUnit4, including:<br>* Remove &quot;extends TestCase&quot;<br>* Remove import junit.framework.TestCase;<br>* Add import org.junit.*; <br>* Use appropriate annotations such as @After, @Before, @Test.</blockquote></li>
</ul>
<h2>Changes since Hadoop 1.0.3</h2>
<h3>Jiras with Release Notes (describe major or incompatible changes)</h3>
<ul>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5464">HADOOP-5464</a>.
Major bug reported by rangadi and fixed by rangadi <br>
<b>DFSClient does not treat write timeout of 0 properly</b><br>
<blockquote> Zero values for dfs.socket.timeout and dfs.datanode.socket.write.timeout are now respected. Previously zero values for these parameters resulted in a 5 second timeout.
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6995">HADOOP-6995</a>.
Minor improvement reported by tlipcon and fixed by tlipcon (security)<br>
<b>Allow wildcards to be used in ProxyUsers configurations</b><br>
<blockquote> When configuring proxy users and hosts, the special wildcard value &quot;*&quot; may be specified to match any host or any user.
</blockquote></li>
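<li> For example (the proxy user &quot;oozie&quot; is a placeholder; these keys normally live in core-site.xml):<br>
<pre>
import org.apache.hadoop.conf.Configuration;

public class ProxyUserWildcard {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // "*" now matches any group and any host for the given proxy user.
    conf.set("hadoop.proxyuser.oozie.groups", "*");
    conf.set("hadoop.proxyuser.oozie.hosts", "*");
  }
}
</pre></li>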
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8230">HADOOP-8230</a>.
Major improvement reported by eli2 and fixed by eli <br>
<b>Enable sync by default and disable append</b><br>
<blockquote> Append is not supported in Hadoop 1.x. Please upgrade to 2.x if you need append. If you enabled dfs.support.append for HBase, you&#39;re OK, as durable sync (why HBase required dfs.support.append) is now enabled by default. If you really need the previous functionality, to turn on the append functionality set the flag &quot;dfs.support.broken.append&quot; to true.
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8365">HADOOP-8365</a>.
Blocker improvement reported by eli2 and fixed by eli <br>
<b>Add flag to disable durable sync</b><br>
<blockquote> This patch enables durable sync by default. Installations that did not use HBase and used to run without setting &quot;dfs.support.append&quot; (or set it to false explicitly in the configuration) must now add the new flag &quot;dfs.durable.sync&quot; and set it to false to preserve the previous semantics.
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2465">HDFS-2465</a>.
Major improvement reported by tlipcon and fixed by tlipcon (data-node, performance)<br>
<b>Add HDFS support for fadvise readahead and drop-behind</b><br>
<blockquote> HDFS now has the ability to use posix_fadvise and sync_data_range syscalls to manage the OS buffer cache. This support is currently considered experimental, and may be enabled by configuring the following keys: <br/>
dfs.datanode.drop.cache.behind.writes - set to true to drop data out of the buffer cache after writing <br/>
dfs.datanode.drop.cache.behind.reads - set to true to drop data out of the buffer cache when performing sequential reads <br/>
dfs.datanode.sync.behind.writes - set to true to trigger dirty page writeback immediately after writing data <br/>
dfs.datanode.readahead.bytes - set to a non-zero value to trigger readahead for sequential reads
</blockquote></li>
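<li> For example, enabling the experimental keys programmatically (they normally belong in hdfs-site.xml; the readahead value is illustrative):<br>
<pre>
import org.apache.hadoop.conf.Configuration;

public class FadviseConfig {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setBoolean("dfs.datanode.drop.cache.behind.writes", true);
    conf.setBoolean("dfs.datanode.drop.cache.behind.reads", true);
    conf.setBoolean("dfs.datanode.sync.behind.writes", true);
    conf.setLong("dfs.datanode.readahead.bytes", 4L * 1024 * 1024); // 4 MB
  }
}
</pre></li>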
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2617">HDFS-2617</a>.
Major improvement reported by jghoman and fixed by jghoman (security)<br>
<b>Replaced Kerberized SSL for image transfer and fsck with SPNEGO-based solution</b><br>
<blockquote> Due to the requirement that KSSL use weak encryption types for Kerberos tickets, HTTP authentication to the NameNode will now use SPNEGO by default. This will require users of previous branch-1 releases with security enabled to modify their configurations and create new Kerberos principals in order to use SPNEGO. The old behavior of using KSSL can optionally be enabled by setting the configuration option &quot;hadoop.security.use-weak-http-crypto&quot; to &quot;true&quot;.
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2741">HDFS-2741</a>.
Minor bug reported by markus17 and fixed by <br>
<b>dfs.datanode.max.xcievers missing in 0.20.205.0</b><br>
<blockquote> Document and raise the maximum allowed transfer threads on a DataNode to 4096. This helps Apache HBase in particular.
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3044">HDFS-3044</a>.
Major improvement reported by eli2 and fixed by cmccabe (name-node)<br>
<b>fsck move should be non-destructive by default</b><br>
<blockquote> The fsck &quot;move&quot; option is no longer destructive. It copies the accessible blocks of corrupt files to lost and found as before, but no longer deletes the corrupt files after copying the blocks. The original, destructive behavior can be enabled by specifying both the &quot;move&quot; and &quot;delete&quot; options.
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3055">HDFS-3055</a>.
Minor new feature reported by cmccabe and fixed by cmccabe <br>
<b>Implement recovery mode for branch-1</b><br>
<blockquote> This is a new feature. It is documented in hdfs_user_guide.xml.
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3094">HDFS-3094</a>.
Major improvement reported by arpitgupta and fixed by arpitgupta <br>
<b>add -nonInteractive and -force option to namenode -format command</b><br>
<blockquote> The &#39;namenode -format&#39; command now supports the flags &#39;-nonInteractive&#39; and &#39;-force&#39; to improve usefulness without user input.
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3518">HDFS-3518</a>.
Major bug reported by bikassaha and fixed by szetszwo (hdfs client)<br>
<b>Provide API to check HDFS operational state</b><br>
<blockquote> Add a utility method HdfsUtils.isHealthy(uri) for checking if the given HDFS is healthy.
</blockquote></li>
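<li> Usage is a one-liner; the package and URI below follow the trunk API and are assumed to match the backport:<br>
<pre>
import java.net.URI;
import org.apache.hadoop.hdfs.client.HdfsUtils;

public class HealthCheck {
  public static void main(String[] args) {
    // true when the NameNode answers and reports it is out of safe mode.
    boolean ok = HdfsUtils.isHealthy(URI.create("hdfs://namenode:8020"));
    System.out.println("HDFS healthy: " + ok);
  }
}
</pre></li>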
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3522">HDFS-3522</a>.
Major bug reported by brandonli and fixed by brandonli (name-node)<br>
<b>If NN is in safemode, it should throw SafeModeException when getBlockLocations has zero locations</b><br>
<blockquote> getBlockLocations(), and hence open() for read, will now throw SafeModeException if the NameNode is still in safe mode and there are no replicas reported yet for one of the blocks in the file.
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3703">HDFS-3703</a>.
Major improvement reported by nkeywal and fixed by jingzhao (data-node, name-node)<br>
<b>Decrease the datanode failure detection time</b><br>
<blockquote> This jira adds a new DataNode state called &quot;stale&quot; at the NameNode. DataNodes are marked as stale if they do not send a heartbeat message to the NameNode within the timeout configured using the configuration parameter &quot;dfs.namenode.stale.datanode.interval&quot; in seconds (default value is 30 seconds). The NameNode picks a stale datanode as the last target to read from when returning block locations for reads. <br/>
<br/>
This feature is turned *off* by default. To turn on the feature, set the HDFS configuration &quot;dfs.namenode.check.stale.datanode&quot; to true. <br/>
</blockquote></li>
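<li> To turn the feature on programmatically (the keys normally belong in hdfs-site.xml; see the note above for the interval's unit and default):<br>
<pre>
import org.apache.hadoop.conf.Configuration;

public class StaleNodeConfig {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setBoolean("dfs.namenode.check.stale.datanode", true); // off by default
    conf.setLong("dfs.namenode.stale.datanode.interval", 30);   // see note above
  }
}
</pre></li>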
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3814">HDFS-3814</a>.
Major improvement reported by sureshms and fixed by jingzhao (name-node)<br>
<b>Make the replication monitor multipliers configurable in 1.x</b><br>
<blockquote> This change adds two new configuration parameters. <br/>
# {{dfs.namenode.invalidate.work.pct.per.iteration}} for controlling deletion rate of blocks. <br/>
# {{dfs.namenode.replication.work.multiplier.per.iteration}} for controlling replication rate. This in turn allows controlling the time it takes for decommissioning. <br/>
<br/>
Please see hdfs-default.xml for detailed description.
</blockquote></li>
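<li> For example (values are illustrative; see hdfs-default.xml for the defaults and detailed semantics):<br>
<pre>
import org.apache.hadoop.conf.Configuration;

public class ReplicationMonitorConfig {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Fraction of pending block deletions processed per iteration.
    conf.setFloat("dfs.namenode.invalidate.work.pct.per.iteration", 0.32f);
    // Replication streams scheduled per live datanode per iteration.
    conf.setInt("dfs.namenode.replication.work.multiplier.per.iteration", 2);
  }
}
</pre></li>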
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1906">MAPREDUCE-1906</a>.
Major improvement reported by scott_carey and fixed by tlipcon (jobtracker, performance, tasktracker)<br>
<b>Lower default minimum heartbeat interval for tasktracker &gt; Jobtracker</b><br>
<blockquote> The default minimum heartbeat interval has been dropped from 3 seconds to 300ms to increase scheduling throughput on small clusters. Users may tune mapreduce.jobtracker.heartbeats.in.second to adjust this value.
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2517">MAPREDUCE-2517</a>.
Major task reported by vinaythota and fixed by vinaythota (contrib/gridmix)<br>
<b>Porting Gridmix v3 system tests into trunk branch.</b><br>
<blockquote> Adds system tests to Gridmix. These system tests cover various features like job types (load and sleep), user resolvers (round-robin, submitter-user, echo) and submission modes (stress, replay and serial).
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3008">MAPREDUCE-3008</a>.
Major sub-task reported by amar_kamat and fixed by amar_kamat (contrib/gridmix)<br>
<b>[Gridmix] Improve cumulative CPU usage emulation for short running tasks</b><br>
<blockquote> Improves cumulative CPU emulation for short running tasks.
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3118">MAPREDUCE-3118</a>.
Major new feature reported by ravidotg and fixed by ravidotg (contrib/gridmix, tools/rumen)<br>
<b>Backport Gridmix and Rumen features from trunk to Hadoop 0.20 security branch</b><br>
<blockquote> Backports latest features from trunk to 0.20.206 branch.
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3597">MAPREDUCE-3597</a>.
Major improvement reported by ravidotg and fixed by ravidotg (tools/rumen)<br>
<b>Provide a way to access other info of history file from Rumentool</b><br>
<blockquote> Rumen now provides {{Parsed*}} objects. These objects provide extra information that are not provided by {{Logged*}} objects.
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4087">MAPREDUCE-4087</a>.
Major bug reported by ravidotg and fixed by ravidotg <br>
<b>[Gridmix] GenerateDistCacheData job of Gridmix can become slow in some cases</b><br>
<blockquote> Fixes the issue of GenerateDistCacheData job slowness.
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4673">MAPREDUCE-4673</a>.
Major bug reported by arpitgupta and fixed by arpitgupta (test)<br>
<b>make TestRawHistoryFile and TestJobHistoryServer more robust</b><br>
<blockquote> Fixed TestRawHistoryFile and TestJobHistoryServer to not write to /tmp.
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4675">MAPREDUCE-4675</a>.
Major bug reported by arpitgupta and fixed by bikassaha (test)<br>
<b>TestKillSubProcesses fails as the process is still alive after the job is done</b><br>
<blockquote> Fixed a race condition in TestKillSubProcesses caused by a recent commit.
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4698">MAPREDUCE-4698</a>.
Minor bug reported by gopalv and fixed by gopalv <br>
<b>TestJobHistoryConfig throws Exception in testJobHistoryLogging</b><br>
<blockquote> Optionally call initialize/initializeFileSystem in JobTracker::startTracker() to allow for proper initialization when offerService is not being called.
</blockquote></li>
</ul>
<h3>Other Jiras (describe bug fixes and minor changes)</h3>
<ul>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5836">HADOOP-5836</a>.
Major bug reported by nowland and fixed by nowland (fs/s3)<br>
<b>Bug in S3N handling of directory markers using an object with a trailing &quot;/&quot; causes jobs to fail</b><br>
<blockquote>Some tools which upload to S3 use an object whose key ends with a trailing &quot;/&quot; as a directory marker, for instance &quot;s3n://mybucket/mydir/&quot;. If asked to iterate that &quot;directory&quot; via listStatus(), the current code will return an empty file &quot;&quot;, which the InputFormatter happily assigns to a split, and which later causes a task to fail, and probably the job with it.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6527">HADOOP-6527</a>.
Major bug reported by jghoman and fixed by ivanmi (security)<br>
<b>UserGroupInformation::createUserForTesting clobbers already defined group mappings</b><br>
<blockquote>In UserGroupInformation::createUserForTesting the following code creates a new groups instance, obliterating any groups that have been previously defined in the static groups field.<br>{code} if (!(groups instanceof TestingGroups)) {<br> groups = new TestingGroups();<br> }<br>{code}<br>This becomes a problem in tests that start a Mini{DFS,MR}Cluster and then create a testing user. The user that started the cluster (generally the real user running the test) immediately has their groups wiped out and is...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6546">HADOOP-6546</a>.
Major bug reported by cjjefcoat and fixed by cjjefcoat (io)<br>
<b>BloomMapFile can return false negatives</b><br>
<blockquote>BloomMapFile can return false negatives when using keys of varying sizes. If the amount of data written by the write() method of your key class differs between instances of your key, your BloomMapFile may return false negatives.<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6947">HADOOP-6947</a>.
Major bug reported by tlipcon and fixed by tlipcon (security)<br>
<b>Kerberos relogin should set refreshKrb5Config to true</b><br>
<blockquote>In working on securing a daemon that uses two different principals from different threads, I found that I wasn&apos;t able to login from a second keytab after I&apos;d logged in from the first. This is because we don&apos;t set the refreshKrb5Config in the Configuration for the Krb5LoginModule - hence it won&apos;t switch over to the correct keytab file if it&apos;s different than the first.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7154">HADOOP-7154</a>.
Minor improvement reported by tlipcon and fixed by tlipcon (scripts)<br>
<b>Should set MALLOC_ARENA_MAX in hadoop-config.sh</b><br>
<blockquote>New versions of glibc present in RHEL6 include a new arena allocator design. In several clusters we&apos;ve seen this new allocator cause huge amounts of virtual memory to be used, since when multiple threads perform allocations, they each get their own memory arena. On a 64-bit system, these arenas are 64M mappings, and the maximum number of arenas is 8 times the number of cores. We&apos;ve observed a DN process using 14GB of vmem for only 300M of resident set. This causes all kinds of nasty issues fo...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7297">HADOOP-7297</a>.
Trivial bug reported by nonop92 and fixed by qwertymaniac (documentation)<br>
<b>Error in the documentation regarding Checkpoint/Backup Node</b><br>
<blockquote>On http://hadoop.apache.org/common/docs/r0.20.203.0/hdfs_user_guide.html#Checkpoint+Node: the command bin/hdfs namenode -checkpoint required to launch the backup/checkpoint node does not exist.<br>I have removed this from the docs.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7509">HADOOP-7509</a>.
Trivial improvement reported by raviprak and fixed by raviprak <br>
<b>Improve message when Authentication is required</b><br>
<blockquote>The message when security is enabled and authentication is configured to be simple is not explicit enough. It simply prints out &quot;Authentication is required&quot; and prints out a stack trace. The message should be &quot;Authorization (hadoop.security.authorization) is enabled but authentication (hadoop.security.authentication) is configured as simple. Please configure another method.&quot;</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7621">HADOOP-7621</a>.
Critical bug reported by tucu00 and fixed by atm (security)<br>
<b>alfredo config should be in a file not readable by users</b><br>
<blockquote>[thanks ATM for pointing this one out]<br><br>Alfredo configuration currently is stored in the core-site.xml file; this file is readable by users (it must be, as Configuration defaults must be loaded).<br><br>One of the Alfredo config values is a secret which is used by all nodes to sign/verify the authentication cookie.<br><br>A user could get hold of this secret and forge authentication cookies for other users.<br><br>Because of this, the Alfredo configuration should be moved to a file that is not readable by users.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7629">HADOOP-7629</a>.
Major bug reported by phunt and fixed by tlipcon <br>
<b>regression with MAPREDUCE-2289 - setPermission passed immutable FsPermission (rpc failure)</b><br>
<blockquote>MAPREDUCE-2289 introduced the following change:<br><br>{noformat}<br>+ fs.setPermission(stagingArea, JOB_DIR_PERMISSION);<br>{noformat}<br><br>JOB_DIR_PERMISSION is an immutable FsPermission which cannot be used in RPC calls, it results in the following exception:<br><br>{noformat}<br>2011-09-08 16:31:45,187 WARN org.apache.hadoop.ipc.Server: Unable to read call parameters for client 127.0.0.1<br>java.lang.RuntimeException: java.lang.NoSuchMethodException: org.apache.hadoop.fs.permission.FsPermission$2.&lt;init&gt;()<br> ...</blockquote></li>
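<blockquote>A minimal sketch of the kind of fix this implies, assuming fs and stagingArea are in scope: pass a fresh, mutable copy so the RPC machinery can instantiate the class on the remote side.<br>
{code}<br>
import org.apache.hadoop.fs.permission.FsPermission;<br>
<br>
// FsPermission.createImmutable returns an anonymous subclass with no no-arg<br>
// constructor, which RPC deserialization cannot instantiate; copying into a<br>
// plain FsPermission before the call avoids the failure.<br>
fs.setPermission(stagingArea, new FsPermission(JOB_DIR_PERMISSION.toShort()));<br>
{code}</blockquote>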
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7634">HADOOP-7634</a>.
Minor bug reported by eli and fixed by eli (documentation, security)<br>
<b>Cluster setup docs specify wrong owner for task-controller.cfg </b><br>
<blockquote>The cluster setup docs indicate task-controller.cfg must be owned by the user running TaskTracker but the code checks for root. We should update the docs to reflect the real requirement.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7653">HADOOP-7653</a>.
Minor bug reported by natty and fixed by natty (build)<br>
<b>tarball doesn&apos;t include .eclipse.templates</b><br>
<blockquote>The hadoop tarball doesn&apos;t include .eclipse.templates. This results in a failure to successfully run ant eclipse-files:<br><br>eclipse-files:<br><br>BUILD FAILED<br>/home/natty/Downloads/hadoop-0.20.2/build.xml:1606: /home/natty/Downloads/hadoop-0.20.2/.eclipse.templates not found.<br><br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7665">HADOOP-7665</a>.
Major bug reported by atm and fixed by atm (security)<br>
<b>branch-0.20-security doesn&apos;t include SPNEGO settings in core-default.xml</b><br>
<blockquote>Looks like back-port of HADOOP-7119 to branch-0.20-security missed the changes to {{core-default.xml}}.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7666">HADOOP-7666</a>.
Major bug reported by atm and fixed by atm (security)<br>
<b>branch-0.20-security doesn&apos;t include o.a.h.security.TestAuthenticationFilter</b><br>
<blockquote>Looks like the back-port of HADOOP-7119 to branch-0.20-security missed {{o.a.h.security.TestAuthenticationFilter}}.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7745">HADOOP-7745</a>.
Major bug reported by raviprak and fixed by raviprak <br>
<b>I switched variable names in HADOOP-7509</b><br>
<blockquote>As Aaron pointed out on https://issues.apache.org/jira/browse/HADOOP-7509?focusedCommentId=13126725&amp;page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13126725 I stupidly swapped CommonConfigurationKeys.HADOOP_SECURITY_AUTHENTICATION with CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION.<br><br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7753">HADOOP-7753</a>.
Major sub-task reported by tlipcon and fixed by tlipcon (io, native, performance)<br>
<b>Support fadvise and sync_data_range in NativeIO, add ReadaheadPool class</b><br>
<blockquote>This JIRA adds JNI wrappers for sync_data_range and posix_fadvise. It also implements a ReadaheadPool class for future use from HDFS and MapReduce.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7806">HADOOP-7806</a>.
Major new feature reported by qwertymaniac and fixed by qwertymaniac (util)<br>
<b>Support binding to sub-interfaces</b><br>
<blockquote>Right now, with the {{DNS}} class, we can look up IPs of provided interface names ({{eth0}}, {{vm1}}, etc.). However, it would be useful if the I/F -&gt; IP lookup also took a look at subinterfaces ({{eth0:1}}, etc.) and allowed binding to only a specified subinterface / virtual interface.<br><br>This should be fairly easy to add, by matching against all available interfaces&apos; subinterfaces via Java.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7823">HADOOP-7823</a>.
Major new feature reported by tbroberg and fixed by apurtell <br>
<b>port HADOOP-4012 to branch-1 (splitting support for bzip2)</b><br>
<blockquote>Please see HADOOP-4012 - Providing splitting support for bzip2 compressed files.<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7870">HADOOP-7870</a>.
Major bug reported by jmhsieh and fixed by jmhsieh <br>
<b>fix SequenceFile#createWriter with boolean createParent arg to respect createParent.</b><br>
<blockquote>After HBASE-6840, one set of calls to createNonRecursive(...) seems fishy - the new boolean createParent variable from the signature isn&apos;t used at all. <br><br>{code}<br>+ public static Writer<br>+ createWriter(FileSystem fs, Configuration conf, Path name,<br>+ Class keyClass, Class valClass, int bufferSize,<br>+ short replication, long blockSize, boolean createParent,<br>+ CompressionType compressionType, CompressionCodec codec,<br>+ Metadata meta...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7879">HADOOP-7879</a>.
Trivial bug reported by jmhsieh and fixed by jmhsieh <br>
<b>DistributedFileSystem#createNonRecursive should also incrementWriteOps statistics.</b><br>
<blockquote>This method:<br><br>{code}<br> public FSDataOutputStream createNonRecursive(Path f, FsPermission permission,<br> boolean overwrite,<br> int bufferSize, short replication, long blockSize, <br> Progressable progress) throws IOException {<br> return new FSDataOutputStream<br> (dfs.create(getPathName(f), permission, <br> overwrite, false, replication, blockSize, progress, bufferSize), <br> statistics);<br> }<br>{code}<br><br>Needs a statistics.incrementWriteOps(1);</blockquote></li>
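<blockquote>The description above spells out the fix; applied to the quoted method it is a one-line addition (a sketch, not the exact committed hunk):<br>
{code}<br>
public FSDataOutputStream createNonRecursive(Path f, FsPermission permission,<br>
    boolean overwrite, int bufferSize, short replication, long blockSize,<br>
    Progressable progress) throws IOException {<br>
  statistics.incrementWriteOps(1); // count this create like the other write paths<br>
  return new FSDataOutputStream<br>
      (dfs.create(getPathName(f), permission,<br>
          overwrite, false, replication, blockSize, progress, bufferSize),<br>
       statistics);<br>
}<br>
{code}</blockquote>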
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7898">HADOOP-7898</a>.
Minor bug reported by sureshms and fixed by sureshms (security)<br>
<b>Fix javadoc warnings in AuthenticationToken.java</b><br>
<blockquote>Fix the following javadoc warning:<br>[WARNING] /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationToken.java:33: warning - Tag @link: reference not found: HttpServletRequest<br>[WARNING] /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationToken.java...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7908">HADOOP-7908</a>.
Trivial bug reported by eli and fixed by eli (documentation)<br>
<b>Fix three javadoc warnings on branch-1</b><br>
<blockquote>Fix 3 javadoc warnings on branch-1:<br><br> [javadoc] /home/eli/src/hadoop-branch-1/src/core/org/apache/hadoop/io/Sequence<br>File.java:428: warning - @param argument &quot;progress&quot; is not a parameter name.<br><br> [javadoc] /home/eli/src/hadoop-branch-1/src/core/org/apache/hadoop/util/ChecksumUtil.java:32: warning - @param argument &quot;chunkOff&quot; is not a parameter name.<br><br> [javadoc] /home/eli/src/hadoop-branch-1/src/mapred/org/apache/hadoop/mapred/QueueAclsInfo.java:52: warning - @param argument &quot;queue&quot; is not ...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7942">HADOOP-7942</a>.
Major test reported by gkesavan and fixed by jnp <br>
<b>enabling clover coverage reports fails hadoop unit test compilation</b><br>
<blockquote>Enabling clover reports fails compilation of the following JUnit tests.<br>Link to the console output on Jenkins:<br>https://builds.apache.org/view/G-L/view/Hadoop/job/Hadoop-1-Code-Coverage/13/console<br><br>{noformat}<br>[javac] /tmp/clover50695626838999169.tmp/org/apache/hadoop/security/TestUserGroupInformation.java:224: cannot find symbol<br>......<br> [javac] /tmp/clover50695626838999169.tmp/org/apache/hadoop/security/TestUserGroupInformation.java:225: cannot find symbol<br>......<br><br> [javac] /tmp/clover50695626...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7982">HADOOP-7982</a>.
Major bug reported by tlipcon and fixed by tlipcon (security)<br>
<b>UserGroupInformation fails to login if thread&apos;s context classloader can&apos;t load HadoopLoginModule</b><br>
<blockquote>In a few hard-to-reproduce situations, we&apos;ve seen a problem where the UGI login call causes a failure to login exception with the following cause:<br><br>Caused by: javax.security.auth.login.LoginException: unable to find <br>LoginModule class: org.apache.hadoop.security.UserGroupInformation <br>$HadoopLoginModule<br><br>After a bunch of debugging, I determined that this happens when the login occurs in a thread whose Context ClassLoader has been set to null.</blockquote></li>
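<blockquote>A caller-side workaround sketch (the actual fix lives inside UGI itself): make sure the thread&apos;s context classloader is non-null before the first login.<br>
{code}<br>
import org.apache.hadoop.security.UserGroupInformation;<br>
<br>
Thread t = Thread.currentThread();<br>
if (t.getContextClassLoader() == null) {<br>
  t.setContextClassLoader(UserGroupInformation.class.getClassLoader());<br>
}<br>
UserGroupInformation ugi = UserGroupInformation.getCurrentUser(); // throws IOException<br>
{code}</blockquote>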
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7988">HADOOP-7988</a>.
Major bug reported by jnp and fixed by jnp <br>
<b>Upper case in hostname part of the principals doesn&apos;t work with kerberos.</b><br>
<blockquote>Kerberos doesn&apos;t like upper case in the hostname part of the principals.<br>This issue has been seen in 0.23 as well as 1.0.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8154">HADOOP-8154</a>.
Major bug reported by eli2 and fixed by eli (conf)<br>
<b>DNS#getIPs shouldn&apos;t silently return the local host IP for bogus interface names</b><br>
<blockquote>DNS#getIPs silently returns the local host IP for bogus interface names. In this case let&apos;s throw an UnknownHostException. This is technically an incompatible change. I suspect the current behavior was originally introduced so the interface name &quot;default&quot; works w/o explicitly checking for it. It may also be used in cases where someone is using a shared config file and an option like &quot;dfs.datanode.dns.interface&quot; or &quot;hbase.master.dns.interface&quot; and eg interface &quot;eth3&quot; that some hosts don&apos;t ha...</blockquote></li>
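<blockquote>A sketch of the stricter lookup using the standard java.net API (method name illustrative): fail fast on a bogus interface name instead of silently returning the local host address.<br>
{code}<br>
import java.net.NetworkInterface;<br>
import java.net.SocketException;<br>
import java.net.UnknownHostException;<br>
<br>
static NetworkInterface getInterface(String strInterface)<br>
    throws UnknownHostException, SocketException {<br>
  NetworkInterface netIf = NetworkInterface.getByName(strInterface);<br>
  if (netIf == null) {<br>
    throw new UnknownHostException(&quot;No such interface &quot; + strInterface);<br>
  }<br>
  return netIf;<br>
}<br>
{code}</blockquote>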
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8159">HADOOP-8159</a>.
Major bug reported by cmccabe and fixed by cmccabe <br>
<b>NetworkTopology: getLeaf should check for invalid topologies</b><br>
<blockquote>Currently, in NetworkTopology, getLeaf doesn&apos;t do much validation on the InnerNode object itself. As a result we sometimes get a ClassCastException when the network topology is invalid. We should provide a less confusing exception message for this case.<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8209">HADOOP-8209</a>.
Major improvement reported by eli2 and fixed by eli <br>
<b>Add option to relax build-version check for branch-1</b><br>
<blockquote>In 1.x DNs currently refuse to connect to NNs if their build *revision* (ie svn revision) does not match. TTs refuse to connect to JTs if their build *version* (version, revision, user, and source checksum) does not match.<br><br>This prevents rolling upgrades, which is intentional, see the discussion in HADOOP-5203. The primary motivation in that jira was (1) it&apos;s difficult to guarantee every build on a large cluster got deployed correctly, builds don&apos;t get rolled back to old versions by accident etc,...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8269">HADOOP-8269</a>.
Trivial bug reported by eli2 and fixed by eli (documentation)<br>
<b>Fix some javadoc warnings on branch-1</b><br>
<blockquote>There are some javadoc warnings on branch-1, let&apos;s fix them.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8314">HADOOP-8314</a>.
Major bug reported by tucu00 and fixed by tucu00 (security)<br>
<b>HttpServer#hasAdminAccess should return false if authorization is enabled but user is not authenticated</b><br>
<blockquote>If the user is not authenticated (request.getRemoteUser() returns NULL) or there is no authentication filter configured (thus also returning NULL), hasAdminAccess should return false. Note that a filter could allow anonymous access, hence the first case.<br></blockquote></li>
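<blockquote>A sketch of the intended behavior using the standard servlet API (the surrounding hasAdminAccess method is omitted):<br>
{code}<br>
String remoteUser = request.getRemoteUser();<br>
if (remoteUser == null) {<br>
  // Either no filter is configured or the filter allowed anonymous access.<br>
  response.sendError(HttpServletResponse.SC_FORBIDDEN,<br>
      &quot;Unauthenticated users are not authorized to access this page.&quot;);<br>
  return false;<br>
}<br>
{code}</blockquote>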
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8329">HADOOP-8329</a>.
Major bug reported by kumarr and fixed by eli (build)<br>
<b>Build fails with Java 7</b><br>
<blockquote>I am seeing the following message when running IBM Java 7 on branch-1.0 code.<br>compile:<br>[echo] contrib: gridmix<br>[javac] Compiling 31 source files to /home/hadoop/branch-1.0_0427/build/contrib/gridmix/classes<br>[javac] /home/hadoop/branch-1.0_0427/src/contrib/gridmix/src/java/org/apache/hadoop/mapred/gridmix/Gridmix.java:396: error: type argument ? extends T is not within bounds of type-variable E<br>[javac] private &lt;T&gt; String getEnumValues(Enum&lt;? extends T&gt;[] e) {<br>[javac] ^<br>[javac] where T,E are ty...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8399">HADOOP-8399</a>.
Major bug reported by cos and fixed by cos (build)<br>
<b>Remove JDK5 dependency from Hadoop 1.0+ line</b><br>
<blockquote>This issue has been fixed in Hadoop starting from 0.21 (see HDFS-1552).<br>I propose to make the same fix for the 1.0 line and get rid of the JDK5 dependency altogether.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8417">HADOOP-8417</a>.
Major bug reported by zhihyu@ebaysf.com and fixed by zhihyu@ebaysf.com <br>
<b>HADOOP-6963 didn&apos;t update hadoop-core-pom-template.xml</b><br>
<blockquote>HADOOP-6963 introduced commons-io 2.1 in ivy.xml but forgot to update the hadoop-core-pom-template.xml.<br><br>This has caused map reduce jobs in downstream projects to fail with:<br>{code}<br>Caused by: java.lang.ClassNotFoundException: org.apache.commons.io.FileUtils<br> at java.net.URLClassLoader$1.run(URLClassLoader.java:202)<br> at java.security.AccessController.doPrivileged(Native Method)<br> at java.net.URLClassLoader.findClass(URLClassLoader.java:190)<br> at java.lang.ClassLoader.loadClass(ClassLoader.java:3...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8430">HADOOP-8430</a>.
Major improvement reported by eli2 and fixed by eli <br>
<b>Backport new FileSystem methods introduced by HADOOP-8014 to branch-1 </b><br>
<blockquote>Per HADOOP-8422 let&apos;s backport the new FileSystem methods from HADOOP-8014 to branch-1 so users can transition over in Hadoop 1.x releases, which helps upstream projects like HBase work against federation (see HBASE-6067). </blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8445">HADOOP-8445</a>.
Major bug reported by raviprak and fixed by raviprak (security)<br>
<b>Token should not print the password in toString</b><br>
<blockquote>This JIRA is for porting HADOOP-6622 to branch-1 since 6622 is already closed.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8552">HADOOP-8552</a>.
Major bug reported by kkambatl and fixed by kkambatl (conf, security)<br>
<b>Conflict: Same security.log.file for multiple users. </b><br>
<blockquote>In log4j.properties, hadoop.security.log.file is set to SecurityAuth.audit. In the presence of multiple users, this can lead to a potential conflict.<br><br>Adding the username to the log file name would avoid this scenario.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8617">HADOOP-8617</a>.
Major bug reported by brandonli and fixed by brandonli (performance)<br>
<b>backport pure Java CRC32 calculator changes to branch-1</b><br>
<blockquote>Multiple efforts have been made gradually to improve the CRC performance in Hadoop. This JIRA is to back port these changes to branch-1, which include HADOOP-6166, HADOOP-6148, HADOOP-7333.<br><br>The related HDFS and MAPREDUCE patches are uploaded to their original JIRAs HDFS-496 and MAPREDUCE-782.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8656">HADOOP-8656</a>.
Minor improvement reported by stevel@apache.org and fixed by rvs (bin)<br>
<b>backport forced daemon shutdown of HADOOP-8353 into branch-1</b><br>
<blockquote>The init.d service shutdown code doesn&apos;t work if the daemon is hung. Backporting the portion of HADOOP-8353 that edits bin/hadoop-daemon.sh corrects this.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8748">HADOOP-8748</a>.
Minor improvement reported by acmurthy and fixed by acmurthy (io)<br>
<b>Move dfsclient retry to a util class</b><br>
<blockquote>HDFS-3504 introduced mechanisms to retry RPCs. I want to move that to common to allow MAPREDUCE-4603 to share it too. Should be a trivial patch.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-496">HDFS-496</a>.
Minor improvement reported by tlipcon and fixed by tlipcon (data-node, hdfs client, performance)<br>
<b>Use PureJavaCrc32 in HDFS</b><br>
<blockquote>Common now has a pure java CRC32 implementation which is more efficient than java.util.zip.CRC32. This issue is to make use of it.</blockquote></li>
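<blockquote>PureJavaCrc32 implements java.util.zip.Checksum, so the swap is mechanical (a minimal usage sketch):<br>
{code}<br>
import java.util.zip.Checksum;<br>
import org.apache.hadoop.util.PureJavaCrc32;<br>
<br>
Checksum sum = new PureJavaCrc32(); // instead of: new java.util.zip.CRC32()<br>
byte[] data = &quot;example&quot;.getBytes();<br>
sum.update(data, 0, data.length);<br>
long crc = sum.getValue();<br>
{code}</blockquote>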
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1378">HDFS-1378</a>.
Major improvement reported by tlipcon and fixed by cmccabe (name-node)<br>
<b>Edit log replay should track and report file offsets in case of errors</b><br>
<blockquote>Occasionally there are bugs or operational mistakes that result in corrupt edit logs which I end up having to repair by hand. In these cases it would be very handy to have the error message also print out the file offsets of the last several edit log opcodes so it&apos;s easier to find the right place to edit in the OP_INVALID marker. We could also use this facility to provide a rough estimate of how far along edit log replay the NN is during startup (handy when a 2NN has died and replay takes a w...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1910">HDFS-1910</a>.
Minor bug reported by slukog and fixed by (name-node)<br>
<b>when dfs.name.dir and dfs.name.edits.dir are same fsimage will be saved twice every time</b><br>
<blockquote>When the image and edits dirs are configured to be the same, the fsimage is flushed from memory to disk twice whenever saveNamespace is done. This may impact the performance of the BackupNode/SNN, which does a saveNamespace at every checkpoint.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2305">HDFS-2305</a>.
Major bug reported by atm and fixed by atm (name-node)<br>
<b>Running multiple 2NNs can result in corrupt file system</b><br>
<blockquote>Here&apos;s the scenario:<br><br>* You run the NN and 2NN (2NN A) on the same machine.<br>* You don&apos;t have the address of the 2NN configured, so it&apos;s defaulting to 127.0.0.1.<br>* There&apos;s another 2NN (2NN B) running on a second machine.<br>* When a 2NN is done checkpointing, it says &quot;hey NN, I have an updated fsimage for you. You can download it from this URL, which includes my IP address, which is x&quot;<br><br>And here&apos;s the steps that occur to cause this issue:<br><br># Some edits happen.<br># 2NN A (on the NN machine) does a c...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2332">HDFS-2332</a>.
Major test reported by tlipcon and fixed by tlipcon (test)<br>
<b>Add test for HADOOP-7629: using an immutable FsPermission as an IPC parameter</b><br>
<blockquote>HADOOP-7629 fixes a bug where an immutable FsPermission would throw an error if used as the argument to fs.setPermission(). This JIRA is to add a test case for the common bugfix.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2541">HDFS-2541</a>.
Major bug reported by qwertymaniac and fixed by qwertymaniac (data-node)<br>
<b>For a sufficiently large value of blocks, the DN Scanner may request a random number with a negative seed value.</b><br>
<blockquote>Running off 0.20-security, I noticed that one could get the following exception when scanners are used:<br><br>{code}<br>DataXceiver <br>java.lang.IllegalArgumentException: n must be positive <br>at java.util.Random.nextInt(Random.java:250) <br>at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.getNewBlockScanTime(DataBlockScanner.java:251) <br>at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.addBlock(DataBlockScanner.java:268) <br>at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(Da...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2547">HDFS-2547</a>.
Trivial bug reported by qwertymaniac and fixed by qwertymaniac (name-node)<br>
<b>ReplicationTargetChooser has incorrect block placement comments</b><br>
<blockquote>{code}<br>/** The class is responsible for choosing the desired number of targets<br> * for placing block replicas.<br> * The replica placement strategy is that if the writer is on a datanode,<br> * the 1st replica is placed on the local machine, <br> * otherwise a random datanode. The 2nd replica is placed on a datanode<br> * that is on a different rack. The 3rd replica is placed on a datanode<br> * which is on the same rack as the **first replca**.<br> */<br>{code}<br><br>That should read &quot;second replica&quot;. The test cases c...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2637">HDFS-2637</a>.
Major bug reported by eli and fixed by eli (hdfs client)<br>
<b>The rpc timeout for block recovery is too low </b><br>
<blockquote>The RPC timeout for block recovery does not take into account that it issues multiple RPCs itself. This can cause recovery to fail if the network is congested or DNs are busy.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2638">HDFS-2638</a>.
Minor improvement reported by eli and fixed by eli (name-node)<br>
<b>Improve a block recovery log</b><br>
<blockquote>It would be useful to know whether an attempt to recover a block is failing because the block was already recovered (has a new GS) or the block is missing.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2653">HDFS-2653</a>.
Major improvement reported by eli and fixed by eli (data-node)<br>
<b>DFSClient should cache whether addrs are non-local when short-circuiting is enabled</b><br>
<blockquote>Something Todd mentioned to me off-line.. currently DFSClient doesn&apos;t cache the fact that non-local reads are non-local, so if short-circuiting is enabled every time we create a block reader we&apos;ll go through the isLocalAddress code path. We should cache the fact that an addr is non-local as well.</blockquote></li>
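<blockquote>A sketch of the idea (names are illustrative): memoize both outcomes so non-local addresses also skip the interface scan on subsequent reads.<br>
{code}<br>
import java.net.InetAddress;<br>
import java.net.InetSocketAddress;<br>
import java.net.NetworkInterface;<br>
import java.net.SocketException;<br>
import java.util.concurrent.ConcurrentHashMap;<br>
<br>
private static final ConcurrentHashMap&lt;String, Boolean&gt; localAddrCache =<br>
    new ConcurrentHashMap&lt;String, Boolean&gt;();<br>
<br>
static boolean isLocalAddress(InetSocketAddress targetAddr) throws SocketException {<br>
  InetAddress addr = targetAddr.getAddress();<br>
  Boolean cached = localAddrCache.get(addr.getHostAddress());<br>
  if (cached != null) {<br>
    return cached.booleanValue();<br>
  }<br>
  boolean local = addr.isAnyLocalAddress() || addr.isLoopbackAddress()<br>
      || NetworkInterface.getByInetAddress(addr) != null;<br>
  localAddrCache.put(addr.getHostAddress(), local);<br>
  return local;<br>
}<br>
{code}</blockquote>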
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2654">HDFS-2654</a>.
Major improvement reported by eli and fixed by eli (data-node)<br>
<b>Make BlockReaderLocal not extend RemoteBlockReader2</b><br>
<blockquote>The BlockReaderLocal code paths are easier to understand (especially true on branch-1, where BlockReaderLocal inherits code from BlockReader and FSInputChecker) if the local and remote block reader implementations are independent, and they&apos;re not really sharing much code anyway. If for some reason they start to share significant code we can make the BlockReader interface an abstract class.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2728">HDFS-2728</a>.
Minor bug reported by qwertymaniac and fixed by qwertymaniac (name-node)<br>
<b>Remove dfsadmin -printTopology from branch-1 docs since it does not exist</b><br>
<blockquote>It is documented we have -printTopology but we do not really have it in this branch. Possible docs mixup from somewhere in security branch pre-merge?<br><br>{code}<br>? branch-1 grep printTopology -R .<br>./src/docs/src/documentation/content/xdocs/.svn/text-base/hdfs_user_guide.xml.svn-base: &lt;code&gt;-printTopology&lt;/code&gt;<br>./src/docs/src/documentation/content/xdocs/hdfs_user_guide.xml: &lt;code&gt;-printTopology&lt;/code&gt;<br>{code}<br><br>Let&apos;s remove the reference.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2751">HDFS-2751</a>.
Major bug reported by tlipcon and fixed by tlipcon (data-node)<br>
<b>Datanode drops OS cache behind reads even for short reads</b><br>
<blockquote>HDFS-2465 has some code which attempts to disable the &quot;drop cache behind reads&quot; functionality when the reads are &lt;256KB (eg HBase random access). But this check was missing in the {{close()}} function, so it always drops cache behind reads regardless of the size of the read. This hurts HBase random read performance when this patch is enabled.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2790">HDFS-2790</a>.
Minor bug reported by arpitgupta and fixed by arpitgupta <br>
<b>FSNamesystem.setTimes throws exception with wrong configuration name in the message</b><br>
<blockquote>The API throws this message when HDFS is not configured for accessTime:<br><br>&quot;Access time for hdfs is not configured. Please set dfs.support.accessTime configuration parameter.&quot;<br><br>The property name should be dfs.access.time.precision.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2869">HDFS-2869</a>.
Minor bug reported by qwertymaniac and fixed by qwertymaniac (webhdfs)<br>
<b>Error in Webhdfs documentation for mkdir</b><br>
<blockquote>Reported over the lists by user Stuti Awasthi:<br><br>{quote}<br><br>I have tried the webhdfs functionality of Hadoop-1.0.0 and it is working fine.<br>Just a small change is required in the documentation :<br><br>Make a Directory declaration in documentation:<br>curl -i -X PUT &quot;http://&lt;HOST&gt;:&lt;PORT&gt;/&lt;PATH&gt;?op=MKDIRS[&amp;permission=&lt;OCTAL&gt;]&quot;<br><br>Gives following error :<br>HTTP/1.1 405 HTTP method PUT is not supported by this URL<br>Content-Length: 0<br>Server: Jetty(6.1.26)<br><br>Correction Required : This works for me<br>curl -i -X PUT &quot;ht...</blockquote></li>
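<blockquote>The equivalent request from Java; note the documented /webhdfs/v1 path prefix, whose omission is the documentation error reported above (host and port are placeholders):<br>
{code}<br>
import java.net.HttpURLConnection;<br>
import java.net.URL;<br>
<br>
URL url = new URL(&quot;http://namenode:50070/webhdfs/v1/tmp/mydir?op=MKDIRS&quot;);<br>
HttpURLConnection conn = (HttpURLConnection) url.openConnection();<br>
conn.setRequestMethod(&quot;PUT&quot;);<br>
int rc = conn.getResponseCode(); // 200 with {&quot;boolean&quot;: true} on success<br>
{code}</blockquote>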
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2872">HDFS-2872</a>.
Major improvement reported by tlipcon and fixed by cmccabe (name-node)<br>
<b>Add sanity checks during edits loading that generation stamps are non-decreasing</b><br>
<blockquote>In 0.23 and later versions, we have a txid per edit, and the loading process verifies that there are no gaps. Lacking this in 1.0, we can use generation stamps as a proxy - the OP_SET_GENERATION_STAMP opcode should never result in a decreased genstamp. If it does, that would indicate that the edits are corrupt, or older edits are being applied to a newer checkpoint, for example.</blockquote></li>
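<blockquote>A sketch of the sanity check applied while replaying OP_SET_GENERATION_STAMP (field and method names are illustrative):<br>
{code}<br>
import java.io.IOException;<br>
<br>
private long lastGenStamp = 0; // highest generation stamp seen so far in this log<br>
<br>
void checkGenerationStamp(long newGenStamp) throws IOException {<br>
  if (newGenStamp &lt; lastGenStamp) {<br>
    throw new IOException(&quot;Corrupt edit log: generation stamp decreased from &quot;<br>
        + lastGenStamp + &quot; to &quot; + newGenStamp);<br>
  }<br>
  lastGenStamp = newGenStamp;<br>
}<br>
{code}</blockquote>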
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2877">HDFS-2877</a>.
Major bug reported by tlipcon and fixed by tlipcon (name-node)<br>
<b>If locking of a storage dir fails, it will remove the other NN&apos;s lock file on exit</b><br>
<blockquote>In {{Storage.tryLock()}}, we call {{lockF.deleteOnExit()}} regardless of whether we successfully lock the directory. So, if another NN has the directory locked, then we&apos;ll fail to lock it the first time we start another NN. But our failed start attempt will still remove the other NN&apos;s lockfile, and a second attempt will erroneously start.</blockquote></li>
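<blockquote>A sketch of the corrected ordering: only schedule deleteOnExit once the lock is actually held (lockF is the in-use lock file; error handling abbreviated).<br>
{code}<br>
import java.io.File;<br>
import java.io.IOException;<br>
import java.io.RandomAccessFile;<br>
import java.nio.channels.FileLock;<br>
<br>
FileLock tryLock(File lockF) throws IOException {<br>
  RandomAccessFile file = new RandomAccessFile(lockF, &quot;rws&quot;);<br>
  FileLock lock = file.getChannel().tryLock();<br>
  if (lock != null) {<br>
    lockF.deleteOnExit(); // safe: this process owns the lock<br>
  }<br>
  return lock; // null means another process holds the directory<br>
}<br>
{code}</blockquote>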
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3008">HDFS-3008</a>.
Major bug reported by eli2 and fixed by eli (hdfs client)<br>
<b>Negative caching of local addrs doesn&apos;t work</b><br>
<blockquote>HDFS-2653 added negative caching of local addrs, however it still goes through the fall through path every time if the address is non-local. </blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3078">HDFS-3078</a>.
Major bug reported by eli2 and fixed by eli <br>
<b>2NN https port setting is broken</b><br>
<blockquote>The code in SecondaryNameNode.java to set the https port is broken, if the port is set it sets the bind addr to &quot;addr:addr:port&quot; which is bogus. Even if it did work it uses port 0 instead of port 50490 (default listed in ./src/packages/templates/conf/hdfs-site.xml).<br><br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3129">HDFS-3129</a>.
Minor test reported by cmccabe and fixed by cmccabe <br>
<b>NetworkTopology: add test that getLeaf should check for invalid topologies</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3131">HDFS-3131</a>.
Minor improvement reported by szetszwo and fixed by brandonli <br>
<b>Improve TestStorageRestore</b><br>
<blockquote>Aaron has the following comments on TestStorageRestore in HDFS-3127.<br><br># removeStorageAccess, restoreAccess, and numStorageDirs can all be made private<br># numStorageDirs can be made static<br># Rather than do set(Readable/Executable/Writable), use FileUtil.chmod(...).<br># Please put the contents of the test in a try/finally, with the calls to shutdown the cluster and the 2NN in the finally block.<br># Some lines are over 80 chars.<br># No need for the numDatanodes variable - it&apos;s only used in one place.<br>#...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3148">HDFS-3148</a>.
Major new feature reported by eli2 and fixed by eli (hdfs client, performance)<br>
<b>The client should be able to use multiple local interfaces for data transfer</b><br>
<blockquote>HDFS-3147 covers using multiple interfaces on the server (Datanode) side. Clients should also be able to utilize multiple *local* interfaces for outbound connections instead of always using the interface for the local hostname. This can be accomplished with a new configuration parameter ({{dfs.client.local.interfaces}}) that accepts a list of interfaces the client should use. Acceptable configuration values are the same as the {{dfs.datanode.available.interfaces}} parameter. The client binds ...</blockquote></li>
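<blockquote>Example client-side usage of the new parameter described above (the interface names are illustrative):<br>
{code}<br>
import org.apache.hadoop.conf.Configuration;<br>
<br>
Configuration conf = new Configuration();<br>
// Outbound data-transfer connections will bind to these local interfaces.<br>
conf.set(&quot;dfs.client.local.interfaces&quot;, &quot;eth0,eth2&quot;);<br>
{code}</blockquote>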
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3150">HDFS-3150</a>.
Major new feature reported by eli2 and fixed by eli (data-node, hdfs client)<br>
<b>Add option for clients to contact DNs via hostname</b><br>
<blockquote>The DN listens on multiple IP addresses (the default {{dfs.datanode.address}} is the wildcard) however per HADOOP-6867 only the source address (IP) of the registration is given to clients. HADOOP-985 made clients access datanodes by IP primarily to avoid the latency of a DNS lookup, this had the side effect of breaking DN multihoming (the client can not route the IP exposed by the NN if the DN registers with an interface that has a cluster-private IP). To fix this let&apos;s add back the option fo...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3176">HDFS-3176</a>.
Major bug reported by kihwal and fixed by kihwal (hdfs client)<br>
<b>JsonUtil should not parse the MD5MD5CRC32FileChecksum bytes on its own.</b><br>
<blockquote>Currently JsonUtil used by webhdfs parses MD5MD5CRC32FileChecksum binary bytes on its own and constructs a MD5MD5CRC32FileChecksum. It should instead call MD5MD5CRC32FileChecksum.readFields().</blockquote></li>
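<blockquote>A sketch of the suggested approach: let the Writable implementation do the parsing (bytes holds the serialized checksum decoded from the JSON response).<br>
{code}<br>
import org.apache.hadoop.fs.MD5MD5CRC32FileChecksum;<br>
import org.apache.hadoop.io.DataInputBuffer;<br>
<br>
MD5MD5CRC32FileChecksum checksum = new MD5MD5CRC32FileChecksum();<br>
DataInputBuffer in = new DataInputBuffer();<br>
in.reset(bytes, bytes.length);<br>
checksum.readFields(in); // throws IOException on malformed input<br>
{code}</blockquote>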
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3330">HDFS-3330</a>.
Critical bug reported by tlipcon and fixed by tlipcon (name-node)<br>
<b>If GetImageServlet throws an Error or RTE, response has HTTP &quot;OK&quot; status</b><br>
<blockquote>Currently in GetImageServlet, we catch Exception but not other Errors or RTEs. So, if the code ends up throwing one of these exceptions, the &quot;response.sendError()&quot; code doesn&apos;t run, but the finally clause does run. This results in the servlet returning HTTP 200 OK and an empty response, which causes the client to think it got a successful image transfer.</blockquote></li>
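<blockquote>A sketch of the widened handler: catching Throwable ensures RuntimeExceptions and Errors also produce a non-200 status (doGetImage is a hypothetical helper for the transfer).<br>
{code}<br>
import java.io.IOException;<br>
import javax.servlet.http.HttpServletResponse;<br>
import org.apache.hadoop.util.StringUtils;<br>
<br>
try {<br>
  doGetImage(request, response);<br>
} catch (Throwable t) {<br>
  String errMsg = &quot;GetImage failed: &quot; + StringUtils.stringifyException(t);<br>
  response.sendError(HttpServletResponse.SC_GONE, errMsg);<br>
  throw new IOException(errMsg);<br>
}<br>
{code}</blockquote>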
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3453">HDFS-3453</a>.
Major bug reported by kihwal and fixed by kihwal (hdfs client)<br>
<b>HDFS does not use ClientProtocol in a backward-compatible way</b><br>
<blockquote>HDFS-617 was brought into branch-0.20-security/branch-1 to support non-recursive create, along with HADOOP-6840 and HADOOP-6886. However, the changes in HDFS were done in an incompatible way, making the client unusable against older clusters, even when plain old create() is called. This is because DFS now internally calls create() through the newly introduced method. By simply changing how the methods are wired internally, we can remove this limitation. We may eventually switch back to the app...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3461">HDFS-3461</a>.
Major bug reported by owen.omalley and fixed by owen.omalley <br>
<b>HFTP should use the same port &amp; protocol for getting the delegation token</b><br>
<blockquote>Currently, hftp uses http to the Namenode&apos;s https port, which doesn&apos;t work.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3466">HDFS-3466</a>.
Major bug reported by owen.omalley and fixed by owen.omalley (name-node, security)<br>
<b>The SPNEGO filter for the NameNode should come out of the web keytab file</b><br>
<blockquote>Currently, the spnego filter uses the DFS_NAMENODE_KEYTAB_FILE_KEY to find the keytab. It should use the DFS_WEB_AUTHENTICATION_KERBEROS_KEYTAB_KEY to do it.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3504">HDFS-3504</a>.
Major improvement reported by sseth and fixed by szetszwo <br>
<b>Configurable retry in DFSClient</b><br>
<blockquote>When NN maintenance is performed on a large cluster, jobs end up failing. This is particularly bad for long running jobs. The client retry policy could be made configurable so that jobs don&apos;t need to be restarted.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3516">HDFS-3516</a>.
Major improvement reported by szetszwo and fixed by szetszwo (hdfs client)<br>
<b>Check content-type in WebHdfsFileSystem</b><br>
<blockquote>WebHdfsFileSystem currently tries to parse the response as json. It may be a good idea to check the content-type before parsing it.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3551">HDFS-3551</a>.
Major bug reported by szetszwo and fixed by szetszwo (webhdfs)<br>
<b>WebHDFS CREATE does not use client location for redirection</b><br>
<blockquote>CREATE currently redirects the client to a random datanode without using the client location information.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3596">HDFS-3596</a>.
Minor improvement reported by cmccabe and fixed by cmccabe <br>
<b>Improve FSEditLog pre-allocation in branch-1</b><br>
<blockquote>Implement HDFS-3510 in branch-1. This will improve FSEditLog preallocation to decrease the incidence of corrupted logs after disk full conditions. (See HDFS-3510 for a longer description.)</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3617">HDFS-3617</a>.
Major improvement reported by mattf and fixed by qwertymaniac <br>
<b>Port HDFS-96 to branch-1 (support blocks greater than 2GB)</b><br>
<blockquote>Please see HDFS-96.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3652">HDFS-3652</a>.
Blocker bug reported by tlipcon and fixed by tlipcon (name-node)<br>
<b>1.x: FSEditLog failure removes the wrong edit stream when storage dirs have same name</b><br>
<blockquote>In {{FSEditLog.removeEditsForStorageDir}}, we iterate over the edits streams trying to find the stream corresponding to a given dir. To check equality, we currently use the following condition:<br>{code}<br> File parentDir = getStorageDirForStream(idx);<br> if (parentDir.getName().equals(sd.getRoot().getName())) {<br>{code}<br>... which is horribly incorrect. If two or more storage dirs happen to have the same terminal path component (eg /data/1/nn and /data/2/nn) then it will pick the wrong strea...</blockquote></li>
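<blockquote>Comparing full paths instead of terminal path components removes the ambiguity between /data/1/nn and /data/2/nn (a sketch applied to the quoted condition):<br>
{code}<br>
File parentDir = getStorageDirForStream(idx);<br>
if (parentDir.getAbsolutePath().equals(sd.getRoot().getAbsolutePath())) {<br>
  // this stream really belongs to the storage directory being removed<br>
}<br>
{code}</blockquote>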
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3667">HDFS-3667</a>.
Major improvement reported by szetszwo and fixed by szetszwo (webhdfs)<br>
<b>Add retry support to WebHdfsFileSystem</b><br>
<blockquote>DFSClient (i.e. DistributedFileSystem) has a configurable retry policy and it retries on exceptions such as connection failure, safemode. WebHdfsFileSystem should have similar retry support.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3696">HDFS-3696</a>.
Critical bug reported by kihwal and fixed by szetszwo <br>
<b>Create files with WebHdfsFileSystem goes OOM when file size is big</b><br>
<blockquote>When doing &quot;fs -put&quot; to a WebHdfsFileSystem (webhdfs://), the FsShell goes OOM if the file size is large. When I tested, 20MB files were fine, but 200MB didn&apos;t work. <br><br>I also tried reading a large file by issuing &quot;-cat&quot; and piping to a slow sink in order to force buffering. The read path didn&apos;t have this problem. The memory consumption stayed the same regardless of progress.<br></blockquote></li>
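<blockquote>One common remedy for this pattern (a hedged sketch, not necessarily the committed patch): chunked streaming stops HttpURLConnection from buffering the entire request body in memory before sending it.<br>
{code}<br>
import java.io.OutputStream;<br>
import java.net.HttpURLConnection;<br>
import java.net.URL;<br>
<br>
HttpURLConnection conn = (HttpURLConnection) url.openConnection();<br>
conn.setRequestMethod(&quot;PUT&quot;);<br>
conn.setDoOutput(true);<br>
conn.setChunkedStreamingMode(32 &lt;&lt; 10); // 32 KB chunks<br>
OutputStream out = conn.getOutputStream(); // stream the file here<br>
{code}</blockquote>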
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3698">HDFS-3698</a>.
Major bug reported by atm and fixed by atm (security)<br>
<b>TestHftpFileSystem is failing in branch-1 due to changed default secure port</b><br>
<blockquote>This test is failing since the default secure port changed to the HTTP port upon the commit of HDFS-2617.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3701">HDFS-3701</a>.
Critical bug reported by nkeywal and fixed by nkeywal (hdfs client)<br>
<b>HDFS may miss the final block when reading a file opened for writing if one of the datanode is dead</b><br>
<blockquote>When the file is opened for writing, the DFSClient calls one of the datanodes owning the last block to get its size. If this datanode is dead, the socket exception is swallowed and the size of this last block is taken to be zero. This seems to be fixed on trunk, but I didn&apos;t find a related Jira. On 1.0.3, it&apos;s not fixed. It&apos;s in the same area as HDFS-1950 or HDFS-3222.<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3871">HDFS-3871</a>.
Minor improvement reported by acmurthy and fixed by acmurthy (hdfs client)<br>
<b>Change NameNodeProxies to use HADOOP-8748</b><br>
<blockquote>Change NameNodeProxies to use util method introduced via HADOOP-8748.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3966">HDFS-3966</a>.
Minor bug reported by jingzhao and fixed by jingzhao <br>
<b>For branch-1, TestFileCreation should use JUnit4 to make assumeTrue work</b><br>
<blockquote>Currently in TestFileCreation for branch-1, assumeTrue() is used by two test cases in order to check if the OS is Linux. Thus JUnit 4 should be used to enable assumeTrue.</blockquote></li>
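<blockquote>Under JUnit 4 a false assumption skips the test rather than silently passing (hypothetical test name):<br>
{code}<br>
import static org.junit.Assume.assumeTrue;<br>
import org.junit.Test;<br>
<br>
@Test<br>
public void testFileCreationNonRecursive() throws Exception {<br>
  assumeTrue(System.getProperty(&quot;os.name&quot;).startsWith(&quot;Linux&quot;));<br>
  // ... body runs only on Linux ...<br>
}<br>
{code}</blockquote>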
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-782">MAPREDUCE-782</a>.
Minor improvement reported by tlipcon and fixed by tlipcon (performance)<br>
<b>Use PureJavaCrc32 in mapreduce spills</b><br>
<blockquote>HADOOP-6148 implemented a Pure Java implementation of CRC32 which performs better than the built-in one. This issue is to make use of it in the mapred package</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1740">MAPREDUCE-1740</a>.
Major bug reported by tlipcon and fixed by ahmed.radwan (jobtracker)<br>
<b>NPE in getMatchingLevelForNodes when node locations are variable depth</b><br>
<blockquote>In getMatchingLevelForNodes, we assume that both nodes have the same &quot;depth&quot; (ie number of path components). If the user provides a topology script that assigns one node a path like /foo/bar/baz and another node a path like /foo/blah, this function will throw an NPE.<br><br>I&apos;m not sure if there are other places where we assume that all node locations have a constant number of paths. If so we should check the output of the topology script aggressively to be sure this is the case. Otherwise I think ...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2073">MAPREDUCE-2073</a>.
Trivial test reported by tlipcon and fixed by tlipcon (distributed-cache, test)<br>
<b>TestTrackerDistributedCacheManager should be up-front about requirements on build environment</b><br>
<blockquote>TestTrackerDistributedCacheManager will fail on a system where the build directory is in any path where an ancestor doesn&apos;t have a+x permissions. On one of our hudson boxes, for example, hudson&apos;s workspace had 700 permissions and caused this test to fail reliably, but not in an obvious manner. It would be helpful if the test failed with a more obvious error message during setUp() when the build environment is misconfigured.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2103">MAPREDUCE-2103</a>.
Trivial improvement reported by tlipcon and fixed by tlipcon (task-controller)<br>
<b>task-controller shouldn&apos;t require o-r permissions</b><br>
<blockquote>The task-controller currently checks that &quot;other&quot; users don&apos;t have read permissions. This is unnecessary - we just need to make sure it&apos;s not executable. The debian policy manual explains it well:<br><br>{quote}<br>Setuid and setgid executables should be mode 4755 or 2755 respectively, and owned by the appropriate user or group. They should not be made unreadable (modes like 4711 or 2711 or even 4111); doing so achieves no extra security, because anyone can find the binary in the freely available Debian pa...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2129">MAPREDUCE-2129</a>.
Major bug reported by xiaokang and fixed by subrotosanyal (jobtracker)<br>
<b>Job may hang if mapreduce.job.committer.setup.cleanup.needed=false and mapreduce.map/reduce.failures.maxpercent&gt;0</b><br>
<blockquote>Job may hang at RUNNING state if mapreduce.job.committer.setup.cleanup.needed=false and mapreduce.map/reduce.failures.maxpercent&gt;0. It happens when some tasks fail but haven&apos;t reached failures.maxpercent.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2376">MAPREDUCE-2376</a>.
Major bug reported by tlipcon and fixed by tlipcon (task-controller, test)<br>
<b>test-task-controller fails if run as a userid &lt; 1000</b><br>
<blockquote>test-task-controller tries to verify that the task-controller won&apos;t run on behalf of users with uid &lt; 1000. This makes the test fail when running in some test environments - eg our hudson jobs internally run as a system user with uid 101.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2377">MAPREDUCE-2377</a>.
Major bug reported by tlipcon and fixed by benoyantony (task-controller)<br>
<b>task-controller fails to parse configuration if it doesn&apos;t end in \n</b><br>
<blockquote>If the task-controller.cfg file doesn&apos;t end in a newline, it fails to parse properly.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2835">MAPREDUCE-2835</a>.
Major improvement reported by tomwhite and fixed by tomwhite <br>
<b>Make per-job counter limits configurable</b><br>
<blockquote>The per-job counter limits introduced in MAPREDUCE-1943 are fixed, except for the total number allowed per job (mapreduce.job.counters.limit). It would be useful to make them all configurable.</blockquote></li>
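<blockquote>Example of raising the existing total limit from a job configuration (the additional per-job limits made configurable by this change are named in the patch; the default of 120 is an assumption from branch-1 defaults):<br>
{code}<br>
import org.apache.hadoop.conf.Configuration;<br>
<br>
Configuration conf = new Configuration();<br>
conf.setInt(&quot;mapreduce.job.counters.limit&quot;, 240); // default is 120<br>
{code}</blockquote>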
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2836">MAPREDUCE-2836</a>.
Minor improvement reported by jwfbean and fixed by ahmed.radwan (contrib/fair-share)<br>
<b>Provide option to fail jobs when submitted to non-existent pools.</b><br>
<blockquote>In some environments, it might be desirable to explicitly specify the fair scheduler pools and to explicitly fail jobs that are not submitted to any of the pools. <br><br>Current behavior of the fair scheduler is to submit jobs to a default pool if a pool name isn&apos;t specified or to create a pool with the new name if the pool name doesn&apos;t already exist. There should be a configuration option for the fair scheduler that causes it to noisily fail the job if it&apos;s submitted to a pool that isn&apos;t pre-spec...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2850">MAPREDUCE-2850</a>.
Major sub-task reported by eli and fixed by ravidotg (tasktracker)<br>
<b>Add test for TaskTracker disk failure handling (MR-2413)</b><br>
<blockquote>MR-2413 doesn&apos;t have any test coverage, e.g. a test that the TT can survive disk failure.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2903">MAPREDUCE-2903</a>.
Major bug reported by devaraj.k and fixed by devaraj.k (jobtracker)<br>
<b>Map Tasks graph is throwing XML Parse error when Job is executed with 0 maps</b><br>
<blockquote>{code:xml}<br>XML Parsing Error: no element found<br>Location: http://10.18.52.170:50030/taskgraph?type=map&amp;jobid=job_201108291536_0001<br>Line Number 1, Column 1:<br>^<br>{code}<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2905">MAPREDUCE-2905</a>.
Major bug reported by jwfbean and fixed by jwfbean (contrib/fair-share)<br>
<b>CapBasedLoadManager incorrectly allows assignment when assignMultiple is true (was: assignmultiple per job)</b><br>
<blockquote>We encountered a situation where in the same cluster, large jobs benefit from mapred.fairscheduler.assignmultiple, but small jobs with small numbers of mappers do not: the mappers all clump to fully occupy just a few nodes, which causes those nodes to saturate and bottleneck. The desired behavior is to spread the job across more nodes so that a relatively small job doesn&apos;t saturate any node in the cluster.<br><br>Testing has shown that setting mapred.fairscheduler.assignmultiple to false gives the ...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2919">MAPREDUCE-2919</a>.
Minor improvement reported by eli and fixed by qwertymaniac (jobtracker)<br>
<b>The JT web UI should show job start times </b><br>
<blockquote>It would be helpful if the list of jobs in the main JT web UI (running, completed, failed..) had a column with the start time. Clicking into each job detail can get tedious.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2932">MAPREDUCE-2932</a>.
Trivial bug reported by qwertymaniac and fixed by qwertymaniac (tasktracker)<br>
<b>Missing instrumentation plugin class shouldn&apos;t crash the TT startup per design</b><br>
<blockquote>Per the implementation of the TaskTracker instrumentation plugin (from 2008), a ClassNotFoundException during loading of a configured TaskTracker instrumentation class shouldn&apos;t have hampered TT startup at all.<br><br>But there is one class-fetching call outside the try/catch, which makes the TT fall down with a RuntimeException if the class is not found. It would be good to include this line in the try/catch itself.<br><br>The stack trace would appear as:<br><br>{code}<br>2011-08-25 11:45:38,470 ERROR org....</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2957">MAPREDUCE-2957</a>.
Major sub-task reported by eli and fixed by eli (tasktracker)<br>
<b>The TT should not re-init if it has no good local dirs</b><br>
<blockquote>The TT will currently try to re-init itself on disk failure even if it has no good local dirs. It should shutdown instead.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3015">MAPREDUCE-3015</a>.
Major sub-task reported by eli and fixed by eli (tasktracker)<br>
<b>Add local dir failure info to metrics and the web UI</b><br>
<blockquote>Like HDFS-811/HDFS-1850 but for the TT.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3278">MAPREDUCE-3278</a>.
Major improvement reported by tlipcon and fixed by tlipcon (mrv1, performance, task)<br>
<b>0.20: avoid a busy-loop in ReduceTask scheduling</b><br>
<blockquote>Looking at profiling results, it became clear that the ReduceTask has the following busy-loop which was causing it to suck up 100% of CPU in the fetch phase in some configurations:<br>- the number of reduce fetcher threads is configured to more than the number of hosts<br>- therefore &quot;busyEnough()&quot; never returns true<br>- the &quot;scheduling&quot; portion of the code can&apos;t schedule any new fetches, since all of the pending fetches in the mapLocations buffer correspond to hosts that are already being fetched (t...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3289">MAPREDUCE-3289</a>.
Major improvement reported by tlipcon and fixed by tlipcon (mrv2, nodemanager, performance)<br>
<b>Make use of fadvise in the NM&apos;s shuffle handler</b><br>
<blockquote>Using the new NativeIO fadvise functions, we can make the NodeManager prefetch map output before it&apos;s sent over the socket, and drop it from the fs cache once it&apos;s been sent (since it&apos;s very rare for an output to have to be re-sent). This improves IO efficiency and reduces cache pollution.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3365">MAPREDUCE-3365</a>.
Trivial improvement reported by sho.shimauchi and fixed by sho.shimauchi (contrib/fair-share)<br>
<b>Uncomment eventlog settings from the documentation</b><br>
<blockquote>Two fair scheduler debug options, &quot;mapred.fairscheduler.eventlog.enabled&quot; and &quot;mapred.fairscheduler.dump.interval&quot;, are commented out in the fair scheduler doc file.<br>They&apos;re useful for debugging.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3394">MAPREDUCE-3394</a>.
Trivial improvement reported by tlipcon and fixed by tlipcon (task)<br>
<b>Add log guard for a debug message in ReduceTask</b><br>
<blockquote>There&apos;s a LOG.debug message in ReduceTask that stringifies a task ID and uses a non-negligible amount of CPU in some cases. We should guard it with {{isDebugEnabled}}</blockquote></li>
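<blockquote>The standard guard; the string concatenation (and the task ID&apos;s toString) now only runs when debug logging is enabled (the message is illustrative):<br>
{code}<br>
if (LOG.isDebugEnabled()) {<br>
  LOG.debug(&quot;Assigning &quot; + mapTaskId + &quot; to &quot; + host);<br>
}<br>
{code}</blockquote>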
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3395">MAPREDUCE-3395</a>.
Trivial improvement reported by eli and fixed by eli (documentation)<br>
<b>Add mapred.disk.healthChecker.interval to mapred-default.xml</b><br>
<blockquote>Let&apos;s add mapred.disk.healthChecker.interval to mapred-default.xml.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3405">MAPREDUCE-3405</a>.
Critical bug reported by tlipcon and fixed by tlipcon (capacity-sched, contrib/fair-share)<br>
<b>MAPREDUCE-3015 broke compilation of contrib scheduler tests</b><br>
<blockquote>MAPREDUCE-3015 added a new argument to the TaskTrackerStatus constructor, which is used by a few of the scheduler tests, but didn&apos;t update those tests. So, the contrib test build is now failing on 0.20-security</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3419">MAPREDUCE-3419</a>.
Major bug reported by eli and fixed by eli (tasktracker, test)<br>
<b>Don&apos;t mark exited TT threads as dead in MiniMRCluster </b><br>
<blockquote>MAPREDUCE-2850 flagged all TT threads that exited in the MiniMRCluster as dead; this breaks a number of the other tests that use MiniMRCluster across restarts.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3424">MAPREDUCE-3424</a>.
Minor sub-task reported by eli and fixed by eli (tasktracker)<br>
<b>Some LinuxTaskController cleanup</b><br>
<blockquote>MR-2415 had some tabs and weird indenting and spacing. Also would be more clear if LTC explicitly overrides createLogDir. Let&apos;s clean this up. </blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3674">MAPREDUCE-3674</a>.
Critical bug reported by qwertymaniac and fixed by qwertymaniac (jobtracker)<br>
<b>If invoked with no queueName request param, jobqueue_details.jsp injects a null queue name into schedulers.</b><br>
<blockquote>When you access /jobqueue_details.jsp manually, instead of via a link, it has queueName set to null internally and this goes for a lookup into the scheduling info maps as well.<br><br>As a result, if using FairScheduler, a Pool with String name = null gets created and this brings the scheduler down. I have not tested what happens to the CapacityScheduler, but ideally if no queueName is set in that jsp, it should fall back to &apos;default&apos;. Otherwise, this brings down the JobTracker completely.<br><br>FairSch...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3789">MAPREDUCE-3789</a>.
Critical bug reported by qwertymaniac and fixed by qwertymaniac (capacity-sched, scheduler)<br>
<b>CapacityTaskScheduler may perform unnecessary reservations in heterogenous tracker environments</b><br>
<blockquote>Briefly, to reproduce:<br><br>* Run JT with CapacityTaskScheduler [Say, Cluster max map = 8G, Cluster map = 2G]<br>* Run two TTs but with varied capacity, say, one with 4 map slot, another with 3 map slots.<br>* Run a job with two tasks, each demanding mem worth 4 slots at least (Map mem = 7G or so).<br>* Job will begin running on TT #1, but will also end up reserving the 3 slots on TT #2 cause it does not check for the maximum limit of slots when reserving (as it goes greedy, and hopes to gain more slots i...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3837">MAPREDUCE-3837</a>.
Major new feature reported by mayank_bansal and fixed by mayank_bansal <br>
<b>Job tracker is not able to recover job in case of crash and after that no user can submit job.</b><br>
<blockquote>If the JobTracker crashes while jobs are running, and the JobTracker property mapreduce.jobtracker.restart.recover is true, then it should recover those jobs.<br><br>However, the current behavior is as follows: the JobTracker tries to restore the jobs but cannot, and after that it closes its handle to HDFS and nobody else can submit jobs.<br><br>Thanks,<br>Mayank</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3992">MAPREDUCE-3992</a>.
Major bug reported by tlipcon and fixed by tlipcon (mrv1)<br>
<b>Reduce fetcher doesn&apos;t verify HTTP status code of response</b><br>
<blockquote>Currently, the reduce fetch code doesn&apos;t check the HTTP status code of the response. This can lead to the following situation:<br>- the map output servlet gets an IOException after setting the headers but before the first call to flush()<br>- this causes it to send a response with a non-OK result code, including the exception text as the response body (response.sendError() does this if the response isn&apos;t committed)<br>- it will still include the response headers indicating it&apos;s a valid response<br><br>In th...</blockquote></li>
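<p><i>A minimal sketch of the missing guard, assuming a plain HttpURLConnection rather than the real fetcher code: check the status code before trusting headers or reading the body as map output.</i></p>
<pre>
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class MapOutputFetch {
  static InputStream openMapOutput(URL url) throws IOException {
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    int rc = conn.getResponseCode();
    if (rc != HttpURLConnection.HTTP_OK) {
      // A non-OK body is servlet error text, not map output; fail the
      // fetch so the reducer retries instead of parsing garbage.
      throw new IOException("Fetch failed with HTTP " + rc + " from " + url);
    }
    return conn.getInputStream();
  }
}
</pre>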
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4001">MAPREDUCE-4001</a>.
Minor improvement reported by qwertymaniac and fixed by qwertymaniac (capacity-sched)<br>
<b>Improve MAPREDUCE-3789&apos;s fix logic by looking at job&apos;s slot demands instead</b><br>
<blockquote>In MAPREDUCE-3789, the fix unfortunately only covered the first-time assignment scenario, and the test did not catch the mistake of using a condition that looks at available TT slots (instead of looking at how many slots a job&apos;s tasks demand).<br><br>We should change the condition of reservation in such a manner:<br><br>{code}<br> if ((getPendingTasks(j) != 0 &amp;&amp;<br> !hasSufficientReservedTaskTrackers(j)) &amp;&amp;<br>- (taskTracker.getAvailableSlots(type) !=<br>+ ...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4088">MAPREDUCE-4088</a>.
Critical bug reported by raviprak and fixed by raviprak (mrv1)<br>
<b>Task stuck in JobLocalizer prevented other tasks on the same node from committing</b><br>
<blockquote>We saw that as a result of HADOOP-6963, one task was stuck in this<br><br>Thread 23668: (state = IN_NATIVE)<br> - java.io.UnixFileSystem.getBooleanAttributes0(java.io.File) @bci=0 (Compiled frame; information may be imprecise)<br> - java.io.UnixFileSystem.getBooleanAttributes(java.io.File) @bci=2, line=228 (Compiled frame)<br> - java.io.File.exists() @bci=20, line=733 (Compiled frame)<br> - org.apache.hadoop.fs.FileUtil.getDU(java.io.File) @bci=3, line=446 (Compiled frame)<br> - org.apache.hadoop.fs.FileUtil.getD...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4095">MAPREDUCE-4095</a>.
Major bug reported by eli2 and fixed by cmccabe <br>
<b>TestJobInProgress#testLocality uses a bogus topology</b><br>
<blockquote>The following in TestJobInProgress#testLocality:<br><br>{code}<br> Node r2n4 = new NodeBase(&quot;/default/rack2/s1/node4&quot;);<br> nt.add(r2n4);<br>{code}<br><br>violates the check introduced by HADOOP-8159:<br><br>{noformat}<br>Testcase: testLocality took 0.005 sec<br> Caused an ERROR<br>Invalid network topology. You cannot have a rack and a non-rack node at the same level of the network topology.<br>org.apache.hadoop.net.NetworkTopology$InvalidTopologyException: Invalid network topology. You cannot have a rack and a non-ra...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4241">MAPREDUCE-4241</a>.
Major bug reported by abayer and fixed by abayer (build, examples)<br>
<b>Pipes examples do not compile on Ubuntu 12.04</b><br>
<blockquote>-lssl alone won&apos;t work for compiling the pipes examples on 12.04. -lcrypto needs to be added explicitly.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4328">MAPREDUCE-4328</a>.
Major improvement reported by acmurthy and fixed by acmurthy (mrv1)<br>
<b>Add the option to quiesce the JobTracker</b><br>
<blockquote>In several failure scenarios it would be very handy to have an option to quiesce the JobTracker.<br><br>Recently, we saw a case where the NameNode had to be rebooted at a customer due to a random hardware failure - in such a case it would have been nice to not lose jobs by quiescing the JobTracker.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4399">MAPREDUCE-4399</a>.
Major bug reported by vicaya and fixed by vicaya (performance, tasktracker)<br>
<b>Fix performance regression in shuffle </b><br>
<blockquote>There is a significant (up to 3x) performance regression in shuffle (vs 0.20.2) in the Hadoop 1.x series. Most noticeable with high-end switches.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4400">MAPREDUCE-4400</a>.
Major bug reported by vicaya and fixed by vicaya (performance, task)<br>
<b>Fix performance regression for small jobs/workflows</b><br>
<blockquote>There is a significant performance regression for small jobs/workflows (vs 0.20.2) in the Hadoop 1.x series. Most noticeable with Hive and Pig jobs. PigMix has an average 40% regression against 0.20.2.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4511">MAPREDUCE-4511</a>.
Major improvement reported by ahmed.radwan and fixed by ahmed.radwan (mrv1, mrv2, performance)<br>
<b>Add IFile readahead</b><br>
<blockquote>This ticket is to add IFile readahead as part of HADOOP-7714.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4558">MAPREDUCE-4558</a>.
Major bug reported by sseth and fixed by sseth <br>
<b>TestJobTrackerSafeMode is failing</b><br>
<blockquote>MAPREDUCE-1906 exposed an issue with this unit test. It has 3 TTs running, but has a check for the TT count to reach exactly 2 (which would be reached with a higher heartbeat interval).<br><br>The test ends up getting stuck, with the following message repeated multiple times.<br>{code}<br> [junit] 2012-08-15 11:26:46,299 INFO mapred.TestJobTrackerSafeMode (TestJobTrackerSafeMode.java:checkTrackers(201)) - Waiting for Initialize all Task Trackers<br> [junit] 2012-08-15 11:26:47,301 INFO mapred.TestJo...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4603">MAPREDUCE-4603</a>.
Major improvement reported by acmurthy and fixed by acmurthy <br>
<b>Allow JobClient to retry job-submission when JT is in safemode</b><br>
<blockquote>Similar to HDFS-3504, it would be useful to allow JobClient to retry job-submission when JT is in safemode (via MAPREDUCE-4328).<br><br>This way applications like Pig/Hive don&apos;t bork midway when the NN/JT are not operational.</blockquote></li>
</ul>
<h2>Changes since Hadoop 1.0.2</h2>
<h3>Jiras with Release Notes (describe major or incompatible changes)</h3>
<ul>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5528">HADOOP-5528</a>.
Major new feature reported by klbostee and fixed by klbostee <br>
<b>Binary partitioner</b><br>
<blockquote> New BinaryPartitioner that partitions BinaryComparable keys by hashing a configurable part of the bytes array corresponding to the key.
</blockquote></li>
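<p><i>A usage sketch, assuming the new-API location org.apache.hadoop.mapreduce.lib.partition.BinaryPartitioner and its setOffsets helper:</i></p>
<pre>
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.partition.BinaryPartitioner;

public class BinaryPartitionerSetup {
  // Partition BinaryComparable keys (e.g. BytesWritable) by hashing only
  // bytes 0..3 of each key; negative offsets count from the end.
  public static void configure(Job job) {
    job.setPartitionerClass(BinaryPartitioner.class);
    BinaryPartitioner.setOffsets(job.getConfiguration(), 0, 3);
  }
}
</pre>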
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8352">HADOOP-8352</a>.
Major improvement reported by owen.omalley and fixed by owen.omalley <br>
<b>We should always generate a new configure script for the c++ code</b><br>
<blockquote>If you are compiling c++, the configure script will now be automatically regenerated as it should be.<br>This requires autoconf version 2.61 or greater.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4017">MAPREDUCE-4017</a>.
Trivial improvement reported by knoguchi and fixed by tgraves (jobhistoryserver, jobtracker)<br>
<b>Add jobname to jobsummary log</b><br>
<blockquote> The Job Summary log may contain commas in values that are escaped by a &#39;\&#39; character. This was true before, but is more likely to be exposed now.
</blockquote></li>
</ul>
<h3>Other Jiras (describe bug fixes and minor changes)</h3>
<ul>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6924">HADOOP-6924</a>.
Major bug reported by wattsteve and fixed by devaraj <br>
<b>Build fails with non-Sun JREs due to different pathing to the operating system architecture shared libraries</b><br>
<blockquote>The src/native/configure script used to build the native libraries has an environment variable called JNI_LDFLAGS which is set as follows:<br><br>JNI_LDFLAGS=&quot;-L$JAVA_HOME/jre/lib/$OS_ARCH/server&quot;<br><br>This pathing convention to the shared libraries for the operating system architecture is unique to Oracle/Sun Java and thus on other flavors of Java the path will not exist and will result in a build failure with the following exception:<br><br> [exec] gcc -shared ../src/org/apache/hadoop/io/compress/zlib...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6941">HADOOP-6941</a>.
Major bug reported by wattsteve and fixed by devaraj <br>
<b>Support non-SUN JREs in UserGroupInformation</b><br>
<blockquote>Attempting to format the namenode or attempting to start Hadoop using Apache Harmony or the IBM Java JREs results in the following exception:<br><br>10/09/07 16:35:05 ERROR namenode.NameNode: java.lang.NoClassDefFoundError: com.sun.security.auth.UnixPrincipal<br> at org.apache.hadoop.security.UserGroupInformation.&lt;clinit&gt;(UserGroupInformation.java:223)<br> at java.lang.J9VMInternals.initializeImpl(Native Method)<br> at java.lang.J9VMInternals.initialize(J9VMInternals.java:200)<br> at org.apache.hadoop.hdfs.ser...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6963">HADOOP-6963</a>.
Critical bug reported by owen.omalley and fixed by raviprak (fs)<br>
<b>Fix FileUtil.getDU. It should not include the size of the directory or follow symbolic links</b><br>
<blockquote>The getDU method should not include the size of the directory itself. The Java interface says the value is undefined; on Linux with the Sun JDK it returns 4096 for the inode, which clearly isn&apos;t useful.<br>It also recursively calls itself. If the directory has a symbolic link forming a cycle, getDU keeps spinning in the cycle. In our case, we saw this in the org.apache.hadoop.mapred.JobLocalizer.downloadPrivateCacheObjects call. This prevented other tasks on the same node from committing, causing the T...</blockquote></li>
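<p><i>A sketch of the corrected behavior, not FileUtil&apos;s actual code; symlink detection via java.nio is an assumption for illustration:</i></p>
<pre>
import java.io.File;
import java.nio.file.Files;

public class DiskUsage {
  // Sum file sizes under 'root', excluding each directory's own length()
  // (e.g. the 4096-byte inode entry) and never following symbolic links,
  // so a link cycle cannot cause infinite recursion.
  public static long getDU(File root) {
    if (Files.isSymbolicLink(root.toPath())) {
      return 0;
    }
    if (!root.isDirectory()) {
      return root.length();
    }
    long size = 0;
    File[] children = root.listFiles();
    if (children != null) {
      for (File child : children) {
        size += getDU(child);
      }
    }
    return size;
  }
}
</pre>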
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7381">HADOOP-7381</a>.
Major bug reported by jrottinghuis and fixed by jrottinghuis (build)<br>
<b>FindBugs OutOfMemoryError</b><br>
<blockquote>When running the findbugs target from Jenkins, I get an OutOfMemory error.<br>The &quot;effort&quot; in FindBugs is set to Max which ends up using a lot of memory to go through all the classes. The jvmargs passed to FindBugs is hardcoded to 512 MB max.<br><br>We can leave the default to 512M, as long as we pass this as an ant parameter which can be overwritten in individual cases through -D, or in the build.properties file (either basedir, or user&apos;s home directory).<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8027">HADOOP-8027</a>.
Minor improvement reported by qwertymaniac and fixed by atm (metrics)<br>
<b>Visiting /jmx on the daemon web interfaces may print unnecessary error in logs</b><br>
<blockquote>Logs that follow a {{/jmx}} servlet visit:<br><br>{code}<br>11/11/22 12:09:52 ERROR jmx.JMXJsonServlet: getting attribute UsageThreshold of java.lang:type=MemoryPool,name=Par Eden Space threw an exception<br>javax.management.RuntimeMBeanException: java.lang.UnsupportedOperationException: Usage threshold is not supported<br> at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:856)<br>...<br>{code}</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8151">HADOOP-8151</a>.
Major bug reported by tlipcon and fixed by mattf (io, native)<br>
<b>Error handling in snappy decompressor throws invalid exceptions</b><br>
<blockquote>SnappyDecompressor.c has the following code in a few places:<br>{code}<br> THROW(env, &quot;Ljava/lang/InternalError&quot;, &quot;Could not decompress data. Buffer length is too small.&quot;);<br>{code}<br>this is incorrect, though, since the THROW macro doesn&apos;t need the &quot;L&quot; before the class name. This results in a ClassNotFoundException for Ljava.lang.InternalError being thrown, instead of the intended exception.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8188">HADOOP-8188</a>.
Major improvement reported by devaraj and fixed by devaraj <br>
<b>Fix the build process to do with jsvc, with IBM&apos;s JDK as the underlying jdk</b><br>
<blockquote>When IBM JDK is used as the underlying JDK for the build process, the build of jsvc fails. I just needed to add an extra &quot;os arch&quot; expression in the condition that sets os-arch.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8251">HADOOP-8251</a>.
Blocker bug reported by tlipcon and fixed by tlipcon (security)<br>
<b>SecurityUtil.fetchServiceTicket broken after HADOOP-6941</b><br>
<blockquote>HADOOP-6941 replaced direct references to some classes with reflective access so as to support other JDKs. Unfortunately there was a mistake in the name of the Krb5Util class, which broke fetchServiceTicket. This manifests itself as the inability to run checkpoints or other krb5-SSL HTTP-based transfers:<br><br>java.lang.ClassNotFoundException: sun.security.jgss.krb5</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8293">HADOOP-8293</a>.
Major bug reported by owen.omalley and fixed by owen.omalley (build)<br>
<b>The native library&apos;s Makefile.am doesn&apos;t include JNI path</b><br>
<blockquote>When compiling on centos 6, I get the following error when compiling the native library:<br><br>{code}<br> [exec] /usr/bin/ld: cannot find -ljvm<br>{code}<br><br>The problem is simply that the Makefile.am libhadoop_la_LDFLAGS doesn&apos;t include AM_LDFLAGS.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8294">HADOOP-8294</a>.
Critical bug reported by kihwal and fixed by kihwal (ipc)<br>
<b>IPC Connection becomes unusable even if server address was temporarily unresolvable</b><br>
<blockquote>This is the same as HADOOP-7428, but was observed on 1.x data nodes. It can happen more frequently after HADOOP-7472, which allows IPC Connection to re-resolve the name. HADOOP-7428 needs to be back-ported.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8338">HADOOP-8338</a>.
Major bug reported by owen.omalley and fixed by owen.omalley (security)<br>
<b>Can&apos;t renew or cancel HDFS delegation tokens over secure RPC</b><br>
<blockquote>The fetchdt tool is failing for secure deployments when given --renew or --cancel on tokens fetched using RPC. (The tokens fetched over HTTP can be renewed and canceled fine.)</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8346">HADOOP-8346</a>.
Blocker bug reported by tucu00 and fixed by devaraj (security)<br>
<b>Changes to support Kerberos with non Sun JVM (HADOOP-6941) broke SPNEGO</b><br>
<blockquote>Before HADOOP-6941, hadoop-auth testcases with Kerberos ON pass: *mvn test -PtestKerberos*<br><br>After HADOOP-6941 the tests fail with the error below.<br><br>Doing some IDE debugging I&apos;ve found out that the changes in HADOOP-6941 cause the JVM Kerberos libraries to append an extra element to the Kerberos principal of the server (on the client side when creating the token), so *HTTP/localhost* ends up being *HTTP/localhost/localhost*. Then, when contacting the KDC to get the granting ticket, the serv...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-119">HDFS-119</a>.
Major bug reported by shv and fixed by sureshms (name-node)<br>
<b>logSync() may block NameNode forever.</b><br>
<blockquote># {{FSEditLog.logSync()}} first waits until {{isSyncRunning}} is false and then performs syncing to file streams by calling {{EditLogOutputStream.flush()}}.<br>If an exception is thrown after {{isSyncRunning}} is set to {{true}} all threads will always wait on this condition.<br>An {{IOException}} may be thrown by {{EditLogOutputStream.setReadyToFlush()}} or a {{RuntimeException}} may be thrown by {{EditLogOutputStream.flush()}} or by {{processIOError()}}.<br># The loop that calls {{eStream.flush()}} ...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1041">HDFS-1041</a>.
Major bug reported by szetszwo and fixed by szetszwo (hdfs client)<br>
<b>DFSClient does not retry in getFileChecksum(..)</b><br>
<blockquote>If connection to the first datanode fails, DFSClient does not retry in getFileChecksum(..).</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3061">HDFS-3061</a>.
Blocker bug reported by alex.holmes and fixed by kihwal (name-node)<br>
<b>Cached directory size in INodeDirectory can get permanently out of sync with computed size, causing quota issues</b><br>
<blockquote>It appears that there&apos;s a condition under which an HDFS directory with a space quota set can get to a point where the cached size for the directory permanently differs from the computed value. When this happens the following command:<br><br>{code}<br>hadoop fs -count -q /tmp/quota-test<br>{code}<br><br>results in the following output in the NameNode logs:<br><br>{code}<br>WARN org.apache.hadoop.hdfs.server.namenode.NameNode: Inconsistent diskspace for directory quota-test. Cached: 6000 Computed: 6072<br>{code}<br><br>I&apos;ve ob...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3127">HDFS-3127</a>.
Major bug reported by brandonli and fixed by brandonli (name-node)<br>
<b>failure in recovering removed storage directories should not stop checkpoint process</b><br>
<blockquote>When a restore fails, rollEditLog() also fails even if there are healthy directories. Exceptions from recovering the removed directories should not fail the checkpoint process.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3265">HDFS-3265</a>.
Major bug reported by kumarr and fixed by kumarr (build)<br>
<b>PowerPc Build error.</b><br>
<blockquote>When attempting to build branch-1, the following error is seen and ant exits.<br>[exec] configure: error: Unsupported CPU architecture &quot;powerpc64&quot;<br><br>The following command was used to build hadoop-common<br><br>ant -Dlibhdfs=true -Dcompile.native=true -Dfusedfs=true -Dcompile.c++=true -Dforrest.home=$FORREST_HOME compile-core-native compile-c++ compile-c++-examples task-controller tar record-parser compile-hdfs-classes package -Djava5.home=/opt/ibm/ibm-java2-ppc64-50/ </blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3310">HDFS-3310</a>.
Major bug reported by cmccabe and fixed by cmccabe <br>
<b>Make sure that we abort when no edit log directories are left</b><br>
<blockquote>We should make sure to abort when there are no edit log directories left to write to. It seems that there is at least one case that is slipping through the cracks right now in branch-1.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3374">HDFS-3374</a>.
Major bug reported by owen.omalley and fixed by owen.omalley (name-node)<br>
<b>hdfs&apos; TestDelegationToken fails intermittently with a race condition</b><br>
<blockquote>The testcase is failing because the MiniDFSCluster is shutdown before the secret manager can change the key, which calls system.exit with no edit streams available.<br><br>{code}<br><br> [junit] 2012-05-04 15:03:51,521 WARN common.Storage (FSImage.java:updateRemovedDirs(224)) - Removing storage dir /home/horton/src/hadoop/build/test/data/dfs/name1<br> [junit] 2012-05-04 15:03:51,522 FATAL namenode.FSNamesystem (FSEditLog.java:fatalExit(388)) - No edit streams are accessible<br> [junit] java.lang.Exce...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1238">MAPREDUCE-1238</a>.
Major bug reported by rramya and fixed by tgraves (jobtracker)<br>
<b>mapred metrics shows negative count of waiting maps and reduces </b><br>
<blockquote>Negative waiting_maps and waiting_reduces count is observed in the mapred metrics</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3377">MAPREDUCE-3377</a>.
Major bug reported by jxchen and fixed by jxchen <br>
<b>Compatibility issue with 0.20.203.</b><br>
<blockquote>I have an OutputFormat which implements Configurable. I set new config entries to a job configuration during checkOutputSpec() so that the tasks will get the config entries through the job configuration. This works fine in 0.20.2, but stopped working starting from 0.20.203. With 0.20.203, my OutputFormat still has the configuration set, but the copy a task gets does not have the new entries that are set as part of checkOutputSpec(). <br><br>I believe that the problem is with JobClient. The job...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3857">MAPREDUCE-3857</a>.
Major bug reported by jeagles and fixed by jeagles (examples)<br>
<b>Grep example ignores mapred.job.queue.name</b><br>
<blockquote>Grep example creates two jobs as part of its implementation. The first job correctly uses the configuration settings. The second job ignores configuration settings.</blockquote></li>
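<p><i>A sketch of the general shape of such a fix: both chained jobs must be built from the caller&apos;s Configuration so settings like mapred.job.queue.name reach the second job too.</i></p>
<pre>
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class TwoStageExample {
  public static void run(Configuration conf) throws Exception {
    Job searchJob = new Job(conf, "grep-search");
    // ... configure and run the first (search) job ...

    // Reuse the same conf here; building the second job from a fresh
    // Configuration() is what drops the user's queue setting.
    Job sortJob = new Job(conf, "grep-sort");
    // ... configure and run the second (sort) job ...
  }
}
</pre>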
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4003">MAPREDUCE-4003</a>.
Major bug reported by zaozaowang and fixed by knoguchi (task-controller, tasktracker)<br>
<b>log.index (No such file or directory) AND Task process exit with nonzero status of 126</b><br>
<blockquote>I have been stuck on this hadoop (cdhu3) problem for 2 days and have tried every method I could find. This is the issue: when running the hadoop example &quot;wordcount&quot;, the tasktracker&apos;s log on one slave node presented these errors:<br><br> 1. WARN org.apache.hadoop.mapred.DefaultTaskController: Task wrapper stderr: bash: /var/tmp/mapred/local/ttprivate/taskTracker/hdfs/jobcache/job_201203131751_0003/attempt_201203131751_0003_m_000006_0/taskjvm.sh: Permission denied<br><br>2. WARN org.apache.hadoop.mapred.TaskRunner: attempt_...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4012">MAPREDUCE-4012</a>.
Minor bug reported by knoguchi and fixed by tgraves <br>
<b>Hadoop Job setup error leaves no useful info to users (when LinuxTaskController is used)</b><br>
<blockquote>When a distributed cache pull fails on the TaskTracker, the job web UI only shows <br>{noformat}<br>Job initialization failed (255)<br>{noformat}<br>leaving users confused. <br><br>The TaskTracker log, however, contains useful info:<br>{noformat}<br>2012-03-14 21:44:17,083 INFO org.apache.hadoop.mapred.TaskController: org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: <br>Permission denied: user=user1, access=READ, inode=&quot;testfile&quot;:user3:users:rw-------<br>...<br>2012-03-14 21...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4154">MAPREDUCE-4154</a>.
Major bug reported by thejas and fixed by devaraj <br>
<b>streaming MR job succeeds even if the streaming command fails</b><br>
<blockquote>Hadoop 1.0.1 behaves as expected: the task for a streaming MR job fails if the streaming command fails. But it succeeds in Hadoop 1.0.2.<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4207">MAPREDUCE-4207</a>.
Major bug reported by kihwal and fixed by kihwal (mrv1)<br>
<b>Remove System.out.println() in FileInputFormat</b><br>
<blockquote>MAPREDUCE-3607 accidentally left the println statement in. </blockquote></li>
</ul>
<h2>Changes since Hadoop 1.0.1</h2>
<h3>Jiras with Release Notes (describe major or incompatible changes)</h3>
<ul>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-1722">HADOOP-1722</a>.
Major improvement reported by runping and fixed by klbostee <br>
<b>Make streaming to handle non-utf8 byte array</b><br>
<blockquote> Streaming allows binary (or other non-UTF8) streams.
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3851">MAPREDUCE-3851</a>.
Major bug reported by kihwal and fixed by tgraves (tasktracker)<br>
<b>Allow more aggressive action on detection of the jetty issue</b><br>
<blockquote> added new configuration variables to control when TT aborts if it sees a certain number of exceptions: <br/>
<br/>
&nbsp;&nbsp;&nbsp;&nbsp;// Percent of shuffle exceptions (out of sample size) seen before it&#39;s <br/>
&nbsp;&nbsp;&nbsp;&nbsp;// fatal - acceptable values are from 0 to 1.0, 0 disables the check. <br/>
&nbsp;&nbsp;&nbsp;&nbsp;// ie. 0.3 = 30% of the last X number of requests matched the exception, <br/>
&nbsp;&nbsp;&nbsp;&nbsp;// so abort. <br/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;conf.getFloat( <br/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&quot;mapreduce.reduce.shuffle.catch.exception.percent.limit.fatal&quot;, 0); <br/>
<br/>
&nbsp;&nbsp;&nbsp;&nbsp;// The number of trailing requests we track, used for the fatal <br/>
&nbsp;&nbsp;&nbsp;&nbsp;// limit calculation <br/>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;conf.getInt(&quot;mapreduce.reduce.shuffle.catch.exception.sample.size&quot;, 1000);
</blockquote></li>
</ul>
<h3>Other Jiras (describe bug fixes and minor changes)</h3>
<ul>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5450">HADOOP-5450</a>.
Blocker improvement reported by klbostee and fixed by klbostee <br>
<b>Add support for application-specific typecodes to typed bytes</b><br>
<blockquote>For serializing objects of types that are not supported by typed bytes serialization, applications might want to use a custom serialization format. Right now, typecode 0 has to be used for the bytes resulting from this custom serialization, which could lead to problems when deserializing the objects because the application cannot know if a byte sequence following typecode 0 is a customly serialized object or just a raw sequence of bytes. Therefore, a range of typecodes that are treated as ali...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7206">HADOOP-7206</a>.
Major new feature reported by eli and fixed by tucu00 <br>
<b>Integrate Snappy compression</b><br>
<blockquote>Google released Zippy as an open source (APLv2) project called Snappy (http://code.google.com/p/snappy). This tracks integrating it into Hadoop.<br><br>{quote}<br>Snappy is a compression/decompression library. It does not aim for maximum compression, or compatibility with any other compression library; instead, it aims for very high speeds and reasonable compression. For instance, compared to the fastest mode of zlib, Snappy is an order of magnitude faster for most inputs, but the resulting compressed ...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8050">HADOOP-8050</a>.
Major bug reported by kihwal and fixed by kihwal (metrics)<br>
<b>Deadlock in metrics</b><br>
<blockquote>The metrics serving thread and the periodic snapshot thread can deadlock.<br>It happened a few times on one of our namenodes. When it happens, RPC works but the web UI and hftp stop working. I haven&apos;t looked at trunk too closely, but it might happen there too.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8088">HADOOP-8088</a>.
Major bug reported by kihwal and fixed by (security)<br>
<b>User-group mapping cache incorrectly does negative caching on transient failures</b><br>
<blockquote>We&apos;ve seen a case where some getGroups() calls fail when the ldap server or the network is having transient failures. Looking at the code, the shell-based and the JNI-based implementations swallow exceptions and return an empty or partial list. The caller, Groups#getGroups() adds this likely empty list into the mapping cache for the user. This will function as negative caching until the cache expires. I don&apos;t think we want negative caching here, but even if we do, it should be intelligent eno...</blockquote></li>
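<p><i>A minimal sketch of the intended behavior; the GroupMapping interface is a hypothetical stand-in for the shell- or JNI-based mapping:</i></p>
<pre>
import java.io.IOException;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class GroupsCacheSketch {
  interface GroupMapping {                     // hypothetical provider
    List&lt;String&gt; lookup(String user) throws IOException;
  }

  private final Map&lt;String, List&lt;String&gt;&gt; cache =
      new ConcurrentHashMap&lt;String, List&lt;String&gt;&gt;();

  List&lt;String&gt; getGroups(String user, GroupMapping mapping) throws IOException {
    List&lt;String&gt; cached = cache.get(user);
    if (cached != null) {
      return cached;
    }
    List&lt;String&gt; groups = mapping.lookup(user);
    if (groups == null || groups.isEmpty()) {
      // Do not cache the failure: the next call should retry the lookup.
      throw new IOException("No groups found for user " + user);
    }
    cache.put(user, groups);
    return groups;
  }
}
</pre>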
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8090">HADOOP-8090</a>.
Major improvement reported by gkesavan and fixed by gkesavan <br>
<b>rename hadoop 64 bit rpm/deb package name</b><br>
<blockquote>Change the hadoop rpm/deb name from hadoop-&lt;version&gt;.amd64.rpm/deb to hadoop-&lt;version&gt;.x86_64.rpm/deb.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8132">HADOOP-8132</a>.
Major bug reported by arpitgupta and fixed by arpitgupta <br>
<b>64bit secure datanodes do not start as the jsvc path is wrong</b><br>
<blockquote>64bit secure datanodes were looking for /usr/libexec/../libexec/jsvc instead of /usr/libexec/../libexec/jsvc.amd64.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8201">HADOOP-8201</a>.
Blocker bug reported by gkesavan and fixed by gkesavan <br>
<b>create the configure script for native compilation as part of the build</b><br>
<blockquote>The configure script is checked into svn and is not regenerated during the build. Ideally the configure script should not be checked into svn; instead it should be generated during the build using autoreconf.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2701">HDFS-2701</a>.
Major improvement reported by eli and fixed by eli (name-node)<br>
<b>Cleanup FS* processIOError methods</b><br>
<blockquote>Let&apos;s rename the various &quot;processIOError&quot; methods to be more descriptive. The current code makes it difficult to identify and reason about bug fixes. While we&apos;re at it let&apos;s remove &quot;Fatal&quot; from the &quot;Unable to sync the edit log&quot; log since it&apos;s not actually a fatal error (this is confusing to users). And 2NN &quot;Checkpoint done&quot; should be info, not a warning (also confusing to users).<br><br>Thanks to HDFS-1073 these issues don&apos;t exist on trunk or 23.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2702">HDFS-2702</a>.
Critical bug reported by eli and fixed by eli (name-node)<br>
<b>A single failed name dir can cause the NN to exit </b><br>
<blockquote>There&apos;s a bug in FSEditLog#rollEditLog which results in the NN process exiting if a single name dir has failed. Here&apos;s the relevant code:<br><br>{code}<br>close() // So editStreams.size() is 0 <br>foreach edits dir {<br> ..<br> eStream = new ... // Might get an IOE here<br> editStreams.add(eStream);<br>} catch (IOException ioe) {<br> removeEditsForStorageDir(sd); // exits if editStreams.size() &lt;= 1 <br>}<br>{code}<br><br>If we get an IOException before we&apos;ve added two edits streams to the list we&apos;ll exit, eg if there&apos;s an ...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2703">HDFS-2703</a>.
Major bug reported by eli and fixed by eli (name-node)<br>
<b>removedStorageDirs is not updated everywhere we remove a storage dir</b><br>
<blockquote>There are a number of places (FSEditLog#open, purgeEditLog, and rollEditLog) where we remove a storage directory but don&apos;t add it to the removedStorageDirs list. This means a storage dir may have been removed but we don&apos;t see it in the log or Web UI. This doesn&apos;t affect trunk/23 since the code there is totally different.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2978">HDFS-2978</a>.
Major new feature reported by atm and fixed by atm (name-node)<br>
<b>The NameNode should expose name dir statuses via JMX</b><br>
<blockquote>We currently display this info on the NN web UI, so users who wish to monitor this must either do it manually or parse HTML. We should publish this information via JMX.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3006">HDFS-3006</a>.
Major bug reported by bcwalrus and fixed by szetszwo (name-node)<br>
<b>Webhdfs &quot;SETOWNER&quot; call returns incorrect content-type</b><br>
<blockquote>The SETOWNER call returns an empty body. But the header has &quot;Content-Type: application/json&quot;, which is a contradiction (empty string is not valid json). This appears to happen for SETTIMES and SETPERMISSION as well.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3075">HDFS-3075</a>.
Major improvement reported by brandonli and fixed by brandonli (name-node)<br>
<b>Backport HADOOP-4885 to branch-1</b><br>
<blockquote>When a storage directory is inaccessible, the namenode moves it from the valid storage dir list to a removedStorageDirs list. Those storage directories will not be restored when they become healthy again. <br><br>The proposed solution is to restore the previously failed directories at the beginning of checkpointing, say, rollEdits, by copying the necessary metadata files from a healthy directory to the unhealthy ones. In this way, whenever a failed storage directory is recovered by the administrator, he/she can ...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-3101">HDFS-3101</a>.
Major bug reported by wangzw and fixed by szetszwo (hdfs client)<br>
<b>cannot read empty file using webhdfs</b><br>
<blockquote>Steps:<br>1. Create a new EMPTY file.<br>2. Read it using webhdfs.<br><br>Result:<br>Expected: an empty file.<br>Got: {&quot;RemoteException&quot;:{&quot;exception&quot;:&quot;IOException&quot;,&quot;javaClassName&quot;:&quot;java.io.IOException&quot;,&quot;message&quot;:&quot;Offset=0 out of the range [0, 0); OPEN, path=/testFile&quot;}}<br><br>First of all, [0, 0) is not a valid range, and reading an empty file should be OK.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-764">MAPREDUCE-764</a>.
Blocker bug reported by klbostee and fixed by klbostee (contrib/streaming)<br>
<b>TypedBytesInput&apos;s readRaw() does not preserve custom type codes</b><br>
<blockquote>The typed bytes format supports byte sequences of the form {{&lt;custom type code&gt; &lt;length&gt; &lt;bytes&gt;}}. When reading such a sequence via {{TypedBytesInput}}&apos;s {{readRaw()}} method, however, the returned sequence currently is {{0 &lt;length&gt; &lt;bytes&gt;}} (0 is the type code for a bytes array), which leads to bugs such as the one described [here|http://dumbo.assembla.com/spaces/dumbo/tickets/54].</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3583">MAPREDUCE-3583</a>.
Critical bug reported by zhihyu@ebaysf.com and fixed by zhihyu@ebaysf.com <br>
<b>ProcfsBasedProcessTree#constructProcessInfo() may throw NumberFormatException</b><br>
<blockquote>HBase PreCommit builds frequently gave us NumberFormatException.<br><br>From https://builds.apache.org/job/PreCommit-HBASE-Build/553//testReport/org.apache.hadoop.hbase.mapreduce/TestHFileOutputFormat/testMRIncrementalLoad/:<br>{code}<br>2011-12-20 01:44:01,180 WARN [main] mapred.JobClient(784): No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).<br>java.lang.NumberFormatException: For input string: &quot;18446743988060683582&quot;<br> at java.lang.NumberFormatException.fo...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3773">MAPREDUCE-3773</a>.
Major new feature reported by owen.omalley and fixed by owen.omalley (jobtracker)<br>
<b>Add queue metrics with buckets for job run times</b><br>
<blockquote>It would be nice to have queue metrics that reflect the number of jobs in each queue that have been running for different ranges of time.<br><br>Reasonable time ranges are probably 0-1 hr, 1-5 hr, 5-24 hr, 24+ hrs; but they should be configurable.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3824">MAPREDUCE-3824</a>.
Critical bug reported by aw and fixed by tgraves (distributed-cache)<br>
<b>Distributed caches are not removed properly</b><br>
<blockquote>Distributed caches are not being properly removed by the TaskTracker when they are expected to be expired. </blockquote></li>
</ul>
<h2>Changes since Hadoop 1.0.0</h2>
<h3>Jiras with Release Notes (describe major or incompatible changes)</h3>
<ul>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8009">HADOOP-8009</a>.
Critical improvement reported by tucu00 and fixed by tucu00 (build)<br>
<b>Create hadoop-client and hadoop-minicluster artifacts for downstream projects </b><br>
<blockquote> Generate integration artifacts &quot;org.apache.hadoop:hadoop-client&quot; and &quot;org.apache.hadoop:hadoop-minicluster&quot; containing all the jars needed to use Hadoop client APIs, and to run Hadoop MiniClusters, respectively. Push these artifacts to the maven repository when mvn-deploy, along with existing artifacts.
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8037">HADOOP-8037</a>.
Blocker bug reported by mattf and fixed by gkesavan (build)<br>
<b>Binary tarball does not preserve platform info for native builds, and RPMs fail to provide needed symlinks for libhadoop.so</b><br>
<blockquote> This fix is marked &quot;incompatible&quot; only because it changes the bin-tarball directory structure to be consistent with the source tarball directory structure. The source tarball is unchanged. RPMs and DEBs now use an intermediate bin-tarball with an &quot;${os.arch}&quot; tag (like the packages themselves). The un-tagged bin-tarball is now multi-platform and retains the structure of the source tarball; it is in fact generated by target &quot;tar&quot;, not by target &quot;binary&quot;. Finally, in the 64-bit RPMs and DEBs, the native libs go in the &quot;lib64&quot; directory instead of &quot;lib&quot;.
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3184">MAPREDUCE-3184</a>.
Major improvement reported by tlipcon and fixed by tlipcon (jobtracker)<br>
<b>Improve handling of fetch failures when a tasktracker is not responding on HTTP</b><br>
<blockquote> The TaskTracker now has a thread which monitors for a known Jetty bug in which the selector thread starts spinning and map output can no longer be served. If the bug is detected, the TaskTracker will shut itself down. This feature can be disabled by setting mapred.tasktracker.jetty.cpu.check.enabled to false.
</blockquote></li>
</ul>
<h3>Other Jiras (describe bug fixes and minor changes)</h3>
<ul>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7470">HADOOP-7470</a>.
Minor improvement reported by stevel@apache.org and fixed by enis (util)<br>
<b>move up to Jackson 1.8.8</b><br>
<blockquote>I see that hadoop-core still depends on Jackson 1.0.1 -but that project is now up to 1.8.2 in releases. Upgrading will make it easier for other Jackson-using apps that are more up to date to keep their classpath consistent.<br><br>The patch would be updating the ivy file to pull in the later version; no test</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7960">HADOOP-7960</a>.
Major bug reported by gkesavan and fixed by mattf <br>
<b>Port HADOOP-5203 to branch-1, build version comparison is too restrictive</b><br>
<blockquote>Hadoop services should not use the build timestamp to verify version differences in the cluster installation. Instead they should use the source checksum, as in HADOOP-5203.<br> </blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7964">HADOOP-7964</a>.
Blocker bug reported by kihwal and fixed by daryn (security, util)<br>
<b>Deadlock in class init.</b><br>
<blockquote>After HADOOP-7808, client-side commands hang occasionally. There are cyclic dependencies in the NetUtils and SecurityUtil class initialization. An initial look at the stack trace shows two threads deadlocking when they hit either of the class initializations at the same time.</blockquote></li>
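<p><i>A minimal illustration of the deadlock pattern with stand-in classes (not NetUtils and SecurityUtil): each static initializer calls into the other class, so two threads triggering the initializations concurrently can block forever on the JVM&apos;s class-init locks.</i></p>
<pre>
public class InitDeadlock {
  static class A { static { B.touch(); } static void touch() {} }
  static class B { static { A.touch(); } static void touch() {} }

  public static void main(String[] args) {
    // Each thread starts one class's initialization; each init then waits
    // for the other class, which the other thread is still initializing.
    new Thread(new Runnable() { public void run() { A.touch(); } }).start();
    new Thread(new Runnable() { public void run() { B.touch(); } }).start();
  }
}
</pre>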
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7987">HADOOP-7987</a>.
Major improvement reported by devaraj and fixed by jnp (security)<br>
<b>Support setting the run-as user in unsecure mode</b><br>
<blockquote>Some applications need to be able to perform actions (such as launch MR jobs) from map or reduce tasks. In earlier unsecure versions of hadoop (20.x), it was possible to do this by setting user.name in the configuration. But in 20.205 and 1.0, when running in unsecure mode, this does not work. (In secure mode, you can do this using the kerberos credentials).</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7988">HADOOP-7988</a>.
Major bug reported by jnp and fixed by jnp <br>
<b>Upper case in hostname part of the principals doesn&apos;t work with kerberos.</b><br>
<blockquote>Kerberos doesn&apos;t like upper case in the hostname part of the principals.<br>This issue has been seen in 23 as well as 1.0.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8010">HADOOP-8010</a>.
Minor bug reported by rvs and fixed by rvs (scripts)<br>
<b>hadoop-config.sh spews error message when HADOOP_HOME_WARN_SUPPRESS is set to true and HADOOP_HOME is present</b><br>
<blockquote>Running hadoop daemon commands when HADOOP_HOME_WARN_SUPPRESS is set to true and HADOOP_HOME is present produces:<br>{noformat}<br> [: 76: true: unexpected operator<br>{noformat}</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8052">HADOOP-8052</a>.
Major bug reported by reznor and fixed by reznor (metrics)<br>
<b>Hadoop Metrics2 should emit Float.MAX_VALUE (instead of Double.MAX_VALUE) to avoid making Ganglia&apos;s gmetad core</b><br>
<blockquote>Ganglia&apos;s gmetad converts the doubles emitted by Hadoop&apos;s Metrics2 system to strings, and the buffer it uses is 256 bytes wide.<br><br>When the SampleStat.MinMax class (in org.apache.hadoop.metrics2.util) emits its default min value (currently initialized to Double.MAX_VALUE), it ends up causing a buffer overflow in gmetad, which causes it to core, effectively rendering Ganglia useless (for some, the core is continuous; for others who are more fortunate, it&apos;s only a one-time Hadoop-startup-time thi...</blockquote></li>
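<p><i>A small demonstration of why the default matters: formatted without an exponent, Double.MAX_VALUE needs about 309 characters and overflows a 256-byte buffer, while Float.MAX_VALUE stays well under it. Illustration only, not Hadoop code.</i></p>
<pre>
public class GmetadBufferDemo {
  public static void main(String[] args) {
    // gmetad prints doubles into a 256-byte buffer without an exponent.
    String asDouble = String.format("%.0f", Double.MAX_VALUE);
    String asFloat = String.format("%.0f", (double) Float.MAX_VALUE);
    System.out.println("Double.MAX_VALUE digits: " + asDouble.length()); // ~309
    System.out.println("Float.MAX_VALUE digits:  " + asFloat.length());  // ~39
  }
}
</pre>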
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2379">HDFS-2379</a>.
Critical bug reported by tlipcon and fixed by tlipcon (data-node)<br>
<b>0.20: Allow block reports to proceed without holding FSDataset lock</b><br>
<blockquote>As disks are getting larger and more plentiful, we&apos;re seeing DNs with multiple millions of blocks on a single machine. When page cache space is tight, block reports can take multiple minutes to generate. Currently, during the scanning of the data directories to generate a report, the FSVolumeSet lock is held. This causes writes and reads to block, timeout, etc, causing big problems especially for clients like HBase.<br><br>This JIRA is to explore some of the ideas originally discussed in HADOOP-458...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2814">HDFS-2814</a>.
Minor improvement reported by hitesh and fixed by hitesh <br>
<b>NamenodeMXBean does not account for svn revision in the version information</b><br>
<blockquote>Unlike the jobtracker where both the UI and jmx information report the version as &quot;x.y.z, r&lt;svn revision&quot;, in case of the namenode, the UI displays x.y.z and svn revision info but the jmx output only contains the x.y.z version.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3343">MAPREDUCE-3343</a>.
Major bug reported by ahmed.radwan and fixed by zhaoyunjiong (mrv1)<br>
<b>TaskTracker Out of Memory because of distributed cache</b><br>
<blockquote>This Out of Memory happens when you run a large number of jobs (using the distributed cache) on a TaskTracker. <br><br>The basic issue seems to be with the distributedCacheManager (an instance of TrackerDistributedCacheManager in TaskTracker.java): it gets created during TaskTracker.initialize(), and it keeps references to TaskDistributedCacheManager for every submitted job via the jobArchives map, and references to CacheStatus via the cachedArchives map. I am not seeing these cleaned up between jobs, so th...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3607">MAPREDUCE-3607</a>.
Major improvement reported by tomwhite and fixed by tomwhite (client)<br>
<b>Port missing new API mapreduce lib classes to 1.x</b><br>
<blockquote>There are a number of classes under mapreduce.lib that are not present in the 1.x series. Including these would help users and downstream projects using the new MapReduce API migrate to later versions of Hadoop in the future.<br><br>A few examples of where this would help:<br>* Sqoop uses mapreduce.lib.db.DBWritable and mapreduce.lib.input.CombineFileInputFormat (SQOOP-384).<br>* Mahout uses mapreduce.lib.output.MultipleOutputs (MAHOUT-822).<br>* HBase has a backport of mapreduce.lib.partition.InputSampler ...</blockquote></li>
</ul>
<h2>Changes since Hadoop 0.20.205.0</h2>
<h3>Jiras with Release Notes (describe major or incompatible changes)</h3>
<ul>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7728">HADOOP-7728</a>.
Major bug reported by rramya and fixed by rramya (conf)<br>
<b>hadoop-setup-conf.sh should be modified to enable task memory manager</b><br>
<blockquote> Enable task memory management to be configurable via hadoop config setup script.
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7740">HADOOP-7740</a>.
Minor bug reported by arpitgupta and fixed by arpitgupta (conf)<br>
<b>security audit logger is not on by default, fix the log4j properties to enable the logger</b><br>
<blockquote> Fixed security audit logger configuration. (Arpit Gupta via Eric Yang)
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7923">HADOOP-7923</a>.
Major task reported by szetszwo and fixed by szetszwo (build, documentation)<br>
<b>Update doc versions from 0.20 to 1.0</b><br>
<blockquote> Docs version number is now automatically updated by reference to the build number.
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-617">HDFS-617</a>.
Major improvement reported by kzhang and fixed by kzhang (hdfs client, name-node)<br>
<b>Support for non-recursive create() in HDFS</b><br>
<blockquote> New DFSClient.create(...) allows option of not creating missing parent(s).
</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2246">HDFS-2246</a>.
Major improvement reported by sanjay.radia and fixed by jnp <br>
<b>Shortcut a local client reads to a Datanodes files directly</b><br>
<blockquote> 1. New configurations <br/>
a. dfs.block.local-path-access.user is the key in datanode configuration to specify the user allowed to do short circuit read. <br/>
b. dfs.client.read.shortcircuit is the key to enable short circuit read at the client side configuration. <br/>
c. dfs.client.read.shortcircuit.skip.checksum is the key to bypass checksum check at the client side. <br/>
2. By default none of the above are enabled and short circuit read will not kick in. <br/>
3. If security is on, the feature can be used only for user that has kerberos credentials at the client, therefore map reduce tasks cannot benefit from it in general. <br/>
</blockquote></li>
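<p><i>A client-side configuration sketch using the keys quoted above (values are illustrative; the datanode must also whitelist the user via dfs.block.local-path-access.user):</i></p>
<pre>
import org.apache.hadoop.conf.Configuration;

public class ShortCircuitReadConf {
  public static Configuration clientConf() {
    Configuration conf = new Configuration();
    // Enable short-circuit (direct local file) reads on the client.
    conf.setBoolean("dfs.client.read.shortcircuit", true);
    // Optionally skip checksum verification for local reads; off by
    // default, trading safety for speed when enabled.
    conf.setBoolean("dfs.client.read.shortcircuit.skip.checksum", false);
    return conf;
  }
}
</pre>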
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2316">HDFS-2316</a>.
Major new feature reported by szetszwo and fixed by szetszwo <br>
<b>[umbrella] webhdfs: a complete FileSystem implementation for accessing HDFS over HTTP</b><br>
<blockquote> Provide webhdfs as a complete FileSystem implementation for accessing HDFS over HTTP. <br/>
The previous hftp feature was a read-only FileSystem and did not provide &quot;write&quot; access.
</blockquote></li>
</ul>
<h3>Other Jiras (describe bug fixes and minor changes)</h3>
<ul>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5124">HADOOP-5124</a>.
Major improvement reported by hairong and fixed by hairong <br>
<b>A few optimizations to FsNamesystem#RecentInvalidateSets</b><br>
<blockquote>This jira proposes a few optimizations to FsNamesystem#RecentInvalidateSets:<br>1. When removing all replicas of a block, do not traverse all nodes in the map; traverse only the nodes on which the block is located.<br>2. When dispatching blocks to datanodes in ReplicationMonitor, randomly choose a predefined number of datanodes and dispatch blocks to those datanodes. This strategy provides fairness to all datanodes. The current strategy always starts from the first datanode.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6840">HADOOP-6840</a>.
Minor improvement reported by nspiegelberg and fixed by jnp (fs, io)<br>
<b>Support non-recursive create() in FileSystem &amp; SequenceFile.Writer</b><br>
<blockquote>The proposed solution for HBASE-2312 requires the sequence file to handle a non-recursive create. This is already supported by HDFS, but needs to have an equivalent FileSystem &amp; SequenceFile.Writer API.</blockquote></li>
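<p><i>A sketch of the FileSystem-level call; the exact branch-1 signature (overwrite flag, buffer size, replication, block size, progress callback) is assumed here:</i></p>
<pre>
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class NonRecursiveCreate {
  // Create 'file' but fail if its parent directory does not exist,
  // instead of silently creating the missing parents.
  public static FSDataOutputStream create(Configuration conf, Path file)
      throws Exception {
    FileSystem fs = FileSystem.get(conf);
    return fs.createNonRecursive(file, true, 4096, (short) 3,
        64L * 1024 * 1024, null);
  }
}
</pre>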
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6886">HADOOP-6886</a>.
Minor improvement reported by nspiegelberg and fixed by (fs)<br>
<b>LocalFileSystem Needs createNonRecursive API</b><br>
<blockquote>While running sanity check tests for HBASE-2312, I noticed that HDFS-617 did not include createNonRecursive() support for the LocalFileSystem. This is a problem for HBase, which allows the user to run over the LocalFS instead of HDFS for local cluster testing. I think this only affects 0.20-append, but may affect the trunk based upon how exactly FileContext handles non-recursive creates.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7461">HADOOP-7461</a>.
Major bug reported by rbodkin and fixed by gkesavan (build)<br>
<b>Jackson Dependency Not Declared in Hadoop POM</b><br>
<blockquote>(COMMENT: This bug still affects 0.20.205.0, four months after the bug was filed. This causes total failure, and the fix is trivial for whoever manages the POM -- just add the missing dependency! --ben)<br><br>This issue was identified and the fix &amp; workaround was documented at <br><br>https://issues.cloudera.org/browse/DISTRO-44<br><br>The issue affects use of Hadoop 0.20.203.0 from the Maven central repo. I built a job using that maven repo and ran it, resulting in this failure:<br><br>Exception in thread &quot;main&quot; ...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7664">HADOOP-7664</a>.
Minor improvement reported by raviprak and fixed by raviprak (conf)<br>
<b>o.a.h.conf.Configuration complains of overriding final parameter even if the value with which its attempting to override is the same. </b><br>
<blockquote>o.a.h.conf.Configuration complains of overriding final parameter even if the value with which its attempting to override is the same. </blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7765">HADOOP-7765</a>.
Major bug reported by eyang and fixed by eyang (build)<br>
<b>Debian package contain both system and tar ball layout</b><br>
<blockquote>When packaging is invoked as &quot;ant clean tar deb&quot;, the system creates both the system layout and the tarball layout in the same build directory, and the Debian packaging target picks up files for both layouts. The end result of using a Debian package built this way is that README.txt, LICENSE.txt, and jar files end up in /usr.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7784">HADOOP-7784</a>.
Major bug reported by arpitgupta and fixed by eyang <br>
<b>secure datanodes fail to come up stating jsvc not found </b><br>
<blockquote>building 205.1 and trying to startup a secure dn leads to the following<br><br>/usr/libexec/../bin/hadoop: line 386: /usr/libexec/../libexec/jsvc.amd64: No such file or directory<br>/usr/libexec/../bin/hadoop: line 386: exec: /usr/libexec/../libexec/jsvc.amd64: cannot execute: No such file or directory</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7804">HADOOP-7804</a>.
Major improvement reported by arpitgupta and fixed by arpitgupta (conf)<br>
<b>enable hadoop config generator to set dfs.block.local-path-access.user to enable short circuit read</b><br>
<blockquote>We have a new config that selects which user has access to short-circuit read. We should make that configurable through the config generator scripts.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7815">HADOOP-7815</a>.
Minor bug reported by rramya and fixed by rramya (conf)<br>
<b>Map memory mb is being incorrectly set by hadoop-setup-conf.sh</b><br>
<blockquote>HADOOP-7728 enabled task memory management to be configurable in the hadoop-setup-conf.sh. However, the default value for mapred.job.map.memory.mb is being set incorrectly.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7816">HADOOP-7816</a>.
Major bug reported by davet and fixed by davet <br>
<b>Allow HADOOP_HOME deprecated warning suppression based on config specified in hadoop-env.sh</b><br>
<blockquote>Move suppression check for &quot;Warning: $HADOOP_HOME is deprecated&quot; to after sourcing of hadoop-env.sh so that people can set HADOOP_HOME_WARN_SUPPRESS inside the config.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7853">HADOOP-7853</a>.
Blocker bug reported by daryn and fixed by daryn (security)<br>
<b>multiple javax security configurations cause conflicts</b><br>
<blockquote>Both UGI and the SPNEGO KerberosAuthenticator set the global javax security configuration. SPNEGO stomps on UGI&apos;s security config which leads to kerberos/SASL authentication errors.<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7854">HADOOP-7854</a>.
Critical bug reported by daryn and fixed by daryn (security)<br>
<b>UGI getCurrentUser is not synchronized</b><br>
<blockquote>Sporadic {{ConcurrentModificationExceptions}} are originating from {{UGI.getCurrentUser}} when it needs to create a new instance. The problem was specifically observed in a JT under heavy load when a post-job cleanup is accessing the UGI while a new job is being processed.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7865">HADOOP-7865</a>.
Major bug reported by jnp and fixed by jnp <br>
<b>Test Failures in 1.0.0 hdfs/common</b><br>
<blockquote>The following tests in hdfs and common are failing:<br>1. TestFileAppend2<br>2. TestFileConcurrentReader<br>3. TestDoAsEffectiveUser </blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7869">HADOOP-7869</a>.
Critical bug reported by owen.omalley and fixed by owen.omalley (scripts)<br>
<b>HADOOP_HOME warning happens all of the time</b><br>
<blockquote>With HADOOP-7816, the check for HADOOP_HOME moved to after it is set by hadoop-config, so the warning always happens unless HADOOP_HOME_WARN_SUPPRESS is set in hadoop-env or the environment.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-611">HDFS-611</a>.
Major bug reported by dhruba and fixed by zshao (data-node)<br>
<b>Heartbeats times from Datanodes increase when there are plenty of blocks to delete</b><br>
<blockquote>I am seeing that when we delete a large directory that has plenty of blocks, the heartbeat times from datanodes increase significantly from the normal value of 3 seconds to as large as 50 seconds or so. The heartbeat thread in the Datanode deletes a bunch of blocks sequentially, this causes the heartbeat times to increase.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1257">HDFS-1257</a>.
Major bug reported by rvadali and fixed by eepayne (name-node)<br>
<b>Race condition on FSNamesystem#recentInvalidateSets introduced by HADOOP-5124</b><br>
<blockquote>HADOOP-5124 provided some improvements to FSNamesystem#recentInvalidateSets. But it introduced unprotected access to the data structure recentInvalidateSets. Specifically, FSNamesystem.computeInvalidateWork accesses recentInvalidateSets without read-lock protection. If there is concurrent activity (like reducing replication on a file) that adds to recentInvalidateSets, the name-node crashes with a ConcurrentModificationException.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1943">HDFS-1943</a>.
Blocker bug reported by weiyj and fixed by mattf (scripts)<br>
<b>fail to start datanode while start-dfs.sh is executed by root user</b><br>
<blockquote>When start-dfs.sh is run by root user, we got the following error message:<br># start-dfs.sh<br>Starting namenodes on [localhost ]<br>localhost: namenode running as process 2556. Stop it first.<br>localhost: starting datanode, logging to /usr/hadoop/hadoop-common-0.23.0-SNAPSHOT/bin/../logs/hadoop-root-datanode-cspf01.out<br>localhost: Unrecognized option: -jvm<br>localhost: Could not create the Java virtual machine.<br><br>The -jvm options should be passed to jsvc when we starting a secure<br>datanode, but it still pa...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2065">HDFS-2065</a>.
Major bug reported by bharathm and fixed by umamaheswararao <br>
<b>Fix NPE in DFSClient.getFileChecksum</b><br>
<blockquote>The following code can throw an NPE if callGetBlockLocations returns null, i.e. if the server returns null:<br><br>{code}<br> List&lt;LocatedBlock&gt; locatedblocks<br> = callGetBlockLocations(namenode, src, 0, Long.MAX_VALUE).getLocatedBlocks();<br>{code}<br><br>The right fix is for the server to throw the proper exception.<br><br></blockquote></li>
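<blockquote><i>The immediate null guard the report points at, sketched with the names from the snippet above (the committed patch may differ, e.g. by fixing the server side instead):</i><br>
<pre>
LocatedBlocks blockLocations =
    callGetBlockLocations(namenode, src, 0, Long.MAX_VALUE);
if (blockLocations == null) {
  // The server handed back null instead of throwing; fail with a clear
  // exception here rather than an NPE on the chained call.
  throw new FileNotFoundException("File does not exist: " + src);
}
List&lt;LocatedBlock&gt; locatedblocks = blockLocations.getLocatedBlocks();
</pre></blockquote>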
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2346">HDFS-2346</a>.
Blocker bug reported by umamaheswararao and fixed by lakshman (test)<br>
<b>TestHost2NodesMap &amp; TestReplicasMap will fail depending upon execution order of test methods</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2416">HDFS-2416</a>.
Major sub-task reported by arpitgupta and fixed by jnp <br>
<b>distcp with a webhdfs uri on a secure cluster fails</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2424">HDFS-2424</a>.
Major sub-task reported by arpitgupta and fixed by szetszwo <br>
<b>webhdfs liststatus json does not convert to a valid xml document</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2427">HDFS-2427</a>.
Major sub-task reported by arpitgupta and fixed by szetszwo <br>
<b>webhdfs mkdirs api call creates path with 777 permission, we should default it to 755</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2428">HDFS-2428</a>.
Major sub-task reported by arpitgupta and fixed by szetszwo <br>
<b>webhdfs api parameter validation should be better</b><br>
<blockquote>PUT Request: http://localhost:50070/webhdfs/some_path?op=MKDIRS&amp;permission=955<br><br>Exception returned:<br><br><br>HTTP/1.1 500 Internal Server Error<br>{&quot;RemoteException&quot;:{&quot;className&quot;:&quot;com.sun.jersey.api.ParamException$QueryParamException&quot;,&quot;message&quot;:&quot;java.lang.NumberFormatException: For input string: \&quot;955\&quot;&quot;}} <br><br><br>We should return a 400 with an appropriate error message</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2432">HDFS-2432</a>.
Major sub-task reported by arpitgupta and fixed by szetszwo <br>
<b>webhdfs setreplication api should return a 403 when called on a directory</b><br>
<blockquote>Currently the set replication api on a directory leads to a 200.<br><br>Request URI http://NN:50070/webhdfs/tmp/webhdfs_data/dir_replication_tests?op=SETREPLICATION&amp;replication=5<br>Request Method: PUT<br>Status Line: HTTP/1.1 200 OK<br>Response Content: {&quot;boolean&quot;:false}<br><br>Since we can determine that this call did not succeed (boolean=false), we should rather just return a 403</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2439">HDFS-2439</a>.
Major sub-task reported by arpitgupta and fixed by szetszwo <br>
<b>webhdfs open on an invalid path leads to a 500 stating an NPE; we should return a 404 with an appropriate error message</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2441">HDFS-2441</a>.
Major sub-task reported by arpitgupta and fixed by szetszwo <br>
<b>webhdfs returns two content-type headers</b><br>
<blockquote>$ curl -i &quot;http://localhost:50070/webhdfs/path?op=GETFILESTATUS&quot;<br>HTTP/1.1 200 OK<br>Content-Type: text/html; charset=utf-8<br>Expires: Thu, 01-Jan-1970 00:00:00 GMT<br>........<br>Content-Type: application/json<br>Transfer-Encoding: chunked<br>Server: Jetty(6.1.26)<br><br><br>It should only return one content type header = application/json</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2450">HDFS-2450</a>.
Major bug reported by rajsaha and fixed by daryn <br>
<b>Only complete hostname is supported to access data via hdfs://</b><br>
<blockquote>If my complete hostname is host1.abc.xyz.com, only complete hostname must be used to access data via hdfs://<br><br>I am running following in .20.205 Client to get data from .20.205 NN (host1)<br>$hadoop dfs -copyFromLocal /etc/passwd hdfs://host1/tmp<br>copyFromLocal: Wrong FS: hdfs://host1/tmp, expected: hdfs://host1.abc.xyz.com<br>Usage: java FsShell [-copyFromLocal &lt;localsrc&gt; ... &lt;dst&gt;]<br><br>$hadoop dfs -copyFromLocal /etc/passwd hdfs://host1.abc/tmp/<br>copyFromLocal: Wrong FS: hdfs://host1.blue/tmp/1, exp...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2453">HDFS-2453</a>.
Major sub-task reported by arpitgupta and fixed by szetszwo <br>
<b>tail using a webhdfs uri throws an error</b><br>
<blockquote>/usr//bin/hadoop --config /etc/hadoop dfs -tail webhdfs://NN:50070/file <br>tail: HTTP_PARTIAL expected, received 200<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2494">HDFS-2494</a>.
Major sub-task reported by umamaheswararao and fixed by umamaheswararao (data-node)<br>
<b>[webhdfs] When getting a file using OP=OPEN with the DN http address, ESTABLISHED sockets keep growing.</b><br>
<blockquote>As part of the reliable test,<br>Scenario:<br>Initially check the socket count. --- there are around 42 sockets.<br>Open the file with the DataNode http address using the op=OPEN request parameter about 500 times in a loop.<br>Wait for some time and check the socket count. --- Thousands of ESTABLISHED sockets have accumulated, ~2052.<br><br>Here is the netstat result:<br><br>C:\Users\uma&gt;netstat | grep 127.0.0.1 | grep ESTABLISHED |wc -l<br>2042<br>C:\Users\uma&gt;netstat | grep 127.0.0.1 | grep ESTABLISHED |wc -l<br>2042<br>C:\...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2501">HDFS-2501</a>.
Major sub-task reported by szetszwo and fixed by szetszwo <br>
<b>add version prefix and root methods to webhdfs</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2527">HDFS-2527</a>.
Major sub-task reported by szetszwo and fixed by szetszwo <br>
<b>Remove the use of Range header from webhdfs</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2528">HDFS-2528</a>.
Major sub-task reported by arpitgupta and fixed by szetszwo <br>
<b>webhdfs rest call to a secure dn fails when a token is sent</b><br>
<blockquote>curl -L -u : --negotiate -i &quot;http://NN:50070/webhdfs/v1/tmp/webhdfs_data/file_small_data.txt?op=OPEN&quot;<br><br>the following exception is thrown by the datanode when the redirect happens.<br>{&quot;RemoteException&quot;:{&quot;exception&quot;:&quot;IOException&quot;,&quot;javaClassName&quot;:&quot;java.io.IOException&quot;,&quot;message&quot;:&quot;Call to failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]&quot;}}<br>...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2539">HDFS-2539</a>.
Major sub-task reported by szetszwo and fixed by szetszwo <br>
<b>Support doAs and GETHOMEDIRECTORY in webhdfs</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2540">HDFS-2540</a>.
Major sub-task reported by szetszwo and fixed by szetszwo <br>
<b>Change WebHdfsFileSystem to two-step create/append</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2552">HDFS-2552</a>.
Major task reported by szetszwo and fixed by szetszwo (documentation)<br>
<b>Add WebHdfs Forrest doc</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2589">HDFS-2589</a>.
Major bug reported by daryn and fixed by daryn (security)<br>
<b>unnecessary hftp token fetch and renewal thread</b><br>
<blockquote>Instantiation of the hftp filesystem is causing a token to be implicitly created and added to a custom token renewal thread. With the new token renewal feature in the JT, this causes the mapreduce {{obtainTokensForNamenodes}} to fetch two tokens (an implicit and uncancelled token, and an explicit token) and leave a spurious renewal thread running. This thread should not be running in the JT.<br><br>After speaking with Owen, the quick solution is to lazy fetch the token, and to lazy start the rene...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2590">HDFS-2590</a>.
Major bug reported by szetszwo and fixed by szetszwo (documentation)<br>
<b>Some links in WebHDFS forrest doc do not work</b><br>
<blockquote>Some links are pointing to DistributedFileSystem javadoc but the javadoc of DistributedFileSystem is not generated by default.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2604">HDFS-2604</a>.
Minor improvement reported by szetszwo and fixed by szetszwo (data-node, documentation, name-node)<br>
<b>Add a log message to show if WebHDFS is enabled</b><br>
<blockquote>WebHDFS can be enabled/disabled by the conf key {{dfs.webhdfs.enabled}}. Let&apos;s add a log message to show if it is enabled.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2673">HDFS-2673</a>.
Trivial bug reported by umamaheswararao and fixed by umamaheswararao (name-node)<br>
<b>While the Namenode processes the blocksBeingWrittenReport, it logs an incorrect block count</b><br>
<blockquote>In NameNode#blocksBeingWrittenReport<br> we have the following stateChangeLog<br>{code}<br>stateChangeLog.info(&quot;*BLOCK* NameNode.blocksBeingWrittenReport: &quot;<br> +&quot;from &quot;+nodeReg.getName()+&quot; &quot;+blocks.length +&quot; blocks&quot;);<br>{code}<br><br>Here blocks is a long array. Every three consecutive elements represent a block (length, blockid, genstamp).<br><br>So, in the log message, blocks.length should be blocks.length/3.<br><br> </blockquote></li>
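<blockquote><i>The corrected log statement the report itself spells out (each block is encoded as three longs, so the count is blocks.length/3):</i><br>
<pre>
stateChangeLog.info("*BLOCK* NameNode.blocksBeingWrittenReport: "
    + "from " + nodeReg.getName() + " " + (blocks.length / 3) + " blocks");
</pre></blockquote>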
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3169">MAPREDUCE-3169</a>.
Major improvement reported by tlipcon and fixed by ahmed.radwan (mrv1, mrv2, test)<br>
<b>Create a new MiniMRCluster equivalent which only provides client APIs cross MR1 and MR2</b><br>
<blockquote>Many dependent projects like HBase, Hive, Pig, etc, depend on MiniMRCluster for writing tests. Many users do as well. MiniMRCluster, however, exposes MR implementation details like the existence of TaskTrackers, JobTrackers, etc, since it was used by MR1 for testing the server implementations as well.<br><br>This JIRA is to create a new interface which could be implemented either by MR1 or MR2 that exposes only the client-side portions of the MR framework. Ideally it would be &quot;recompile-compatible&quot;...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3319">MAPREDUCE-3319</a>.
Blocker bug reported by rvs and fixed by subrotosanyal (examples)<br>
<b>multifilewc from hadoop examples seems to be broken in 0.20.205.0</b><br>
<blockquote>{noformat}<br>/usr/lib/hadoop/bin/hadoop jar /usr/lib/hadoop/hadoop-examples-0.20.205.0.22.jar multifilewc examples/text examples-output/multifilewc<br>11/10/31 16:50:26 INFO mapred.FileInputFormat: Total input paths to process : 2<br>11/10/31 16:50:26 INFO mapred.JobClient: Running job: job_201110311350_0220<br>11/10/31 16:50:27 INFO mapred.JobClient: map 0% reduce 0%<br>11/10/31 16:50:42 INFO mapred.JobClient: Task Id : attempt_201110311350_0220_m_000000_0, Status : FAILED<br>java.lang.ClassCastException: ...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3374">MAPREDUCE-3374</a>.
Major bug reported by rvs and fixed by (task-controller)<br>
<b>src/c++/task-controller/configure is not set executable in the tarball and that prevents task-controller from rebuilding</b><br>
<blockquote>ant task-controller fails because src/c++/task-controller/configure is not set executable</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3475">MAPREDUCE-3475</a>.
Major bug reported by daryn and fixed by daryn (jobtracker)<br>
<b>JT can&apos;t renew its own tokens</b><br>
<blockquote>When external systems submit jobs whose tasks need to submit additional jobs (such as oozie/pig), they include their own MR token used to submit the job. The token&apos;s renewer may not allow the JT to renew the token. The JT log will include very long SASL/GSSAPI exceptions when the job is submitted. It is also dubious for the JT to renew its token because it renders the expiry as meaningless since the JT will renew its own token until the max lifetime is exceeded.<br><br>After speaking with Owen &amp;...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3480">MAPREDUCE-3480</a>.
Major bug reported by jnp and fixed by jnp <br>
<b>TestJvmReuse fails in 1.0</b><br>
<blockquote>TestJvmReuse is failing in apache builds, although it passes in my local machine.</blockquote></li>
</ul>
<h2>Changes since Hadoop 0.20.204.0</h2>
<ul>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6722">HADOOP-6722</a>.
Major bug reported by tlipcon and fixed by tlipcon (util)<br>
<b>NetUtils.connect should check that it hasn&apos;t connected a socket to itself</b><br>
<blockquote>I had no idea this was possible, but it turns out that a TCP connection will be established in the rare case that the local side of the socket binds to the ephemeral port that you later try to connect to. This can present itself on very rare occasions when an RPC client is trying to connect to a daemon running on the same node, but that daemon is down. To see what I&apos;m talking about, run &quot;while true ; do telnet localhost 60020 ; done&quot; on a multicore box and wait several minutes.<br><br>This can ...</blockquote></li>
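<blockquote><i>A plain-JDK sketch of the guard the summary calls for; the helper and message text are illustrative, not the committed NetUtils code:</i><br>
<pre>
import java.io.IOException;
import java.net.ConnectException;
import java.net.InetSocketAddress;
import java.net.Socket;

final class SelfConnectGuard {
  static void connect(Socket socket, InetSocketAddress endpoint, int timeoutMs)
      throws IOException {
    socket.connect(endpoint, timeoutMs);
    // In a TCP self-connection the local and remote endpoints are
    // identical: the ephemeral port chosen for the client side happens
    // to be the very port it dialed.
    if (socket.getLocalSocketAddress().equals(socket.getRemoteSocketAddress())) {
      socket.close();
      throw new ConnectException(
          "Connected to self; no daemon is listening on " + endpoint);
    }
  }
}
</pre></blockquote>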
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6833">HADOOP-6833</a>.
Blocker bug reported by tlipcon and fixed by tlipcon <br>
<b>IPC leaks call parameters when exceptions thrown</b><br>
<blockquote>HADOOP-6498 moved the calls.remove() call lower into the SUCCESS clause of receiveResponse(), but didn&apos;t put a similar calls.remove into the ERROR clause. So, any RPC call that throws an exception ends up orphaning the Call object in the connection&apos;s &quot;calls&quot; hashtable. This prevents cleanup of the connection and is a memory leak for the call parameters.</blockquote></li>
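<blockquote><i>A simplified sketch of the invariant described above (the real IPC client differs in detail): the pending Call must leave the table on the error path as well as the success path.</i><br>
<pre>
// 'calls' is the id -&gt; pending-Call table from the description above.
private void receiveResponse(int id, boolean success, Object payload) {
  Call call = calls.remove(id);   // remove on SUCCESS *and* on ERROR
  if (call == null) {
    return;                       // already cleaned up elsewhere
  }
  if (success) {
    call.setValue(payload);
  } else {
    call.setException((IOException) payload);
  }
}
</pre></blockquote>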
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6889">HADOOP-6889</a>.
Major new feature reported by hairong and fixed by johnvijoe (ipc)<br>
<b>Make RPC to have an option to timeout</b><br>
<blockquote>Currently Hadoop RPC does not timeout when the RPC server is alive. What it currently does is that an RPC client sends a ping to the server whenever a socket timeout happens. If the server is still alive, it continues to wait instead of throwing a SocketTimeoutException. This avoids having a client retry when a server is busy, which would make the server even busier. This works great if the RPC server is the NameNode.<br><br>But Hadoop RPC is also used for some client-to-DataNode communications, for e...</blockquote></li>
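<blockquote><i>A plain-socket illustration of the option being added; the parameter name is hypothetical:</i><br>
<pre>
import java.net.Socket;
import java.net.SocketException;

final class RpcTimeoutSketch {
  // rpcTimeoutMs == 0 keeps the old behavior (ping the server and keep
  // waiting); a positive value lets a stalled read fail with a
  // SocketTimeoutException after that many milliseconds.
  static void applyTimeout(Socket socket, int rpcTimeoutMs)
      throws SocketException {
    socket.setSoTimeout(rpcTimeoutMs);
  }
}
</pre></blockquote>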
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7119">HADOOP-7119</a>.
Major new feature reported by tucu00 and fixed by tucu00 (security)<br>
<b>add Kerberos HTTP SPNEGO authentication support to Hadoop JT/NN/DN/TT web-consoles</b><br>
<blockquote> Adding support for Kerberos HTTP SPNEGO authentication to the Hadoop web-consoles<br><br> <br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7314">HADOOP-7314</a>.
Major improvement reported by naisbitt and fixed by naisbitt <br>
<b>Add support for throwing UnknownHostException when a host doesn&apos;t resolve</b><br>
<blockquote>As part of MAPREDUCE-2489, we need support for having the resolve methods (for DNS mapping) throw UnknownHostExceptions. (Currently, they hide the exception). Since the existing &apos;resolve&apos; method is ultimately used by several other locations/components, I propose we add a new &apos;resolveValidHosts&apos; method.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7343">HADOOP-7343</a>.
Minor improvement reported by tgraves and fixed by tgraves (test)<br>
<b>backport HADOOP-7008 and HADOOP-7042 to branch-0.20-security</b><br>
<blockquote>backport HADOOP-7008 and HADOOP-7042 to branch-0.20-security so that we can enable test-patch.sh to have a configured number of acceptable findbugs and javadoc warnings</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7400">HADOOP-7400</a>.
Major bug reported by gkesavan and fixed by gkesavan (build)<br>
<b>HdfsProxyTests fails when -Dtest.build.dir and -Dbuild.test are set</b><br>
<blockquote>HdfsProxyTests fails when -Dtest.build.dir and -Dbuild.test are set to a dir other than the build dir<br><br>test-junit:<br> [copy] Copying 1 file to /home/y/var/builds/thread2/workspace/Cloud-Hadoop-0.20.1xx-Secondary/src/contrib/hdfsproxy/src/test/resources/proxy-config<br> [junit] Running org.apache.hadoop.hdfsproxy.TestHdfsProxy<br> [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0 sec<br> [junit] Test org.apache.hadoop.hdfsproxy.TestHdfsProxy FAILED</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7432">HADOOP-7432</a>.
Major improvement reported by sherri_chen and fixed by sherri_chen <br>
<b>Back-port HADOOP-7110 to 0.20-security</b><br>
<blockquote>HADOOP-7110 implemented chmod in the NativeIO library so we can have good performance (ie not fork) and still not be prone to races. This should fix build failures (and probably task failures too).</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7472">HADOOP-7472</a>.
Minor improvement reported by kihwal and fixed by kihwal (ipc)<br>
<b>RPC client should deal with the IP address changes</b><br>
<blockquote>The current RPC client implementation and the client-side callers assume that the hostname-address mappings of servers never change. The resolved address is stored in an immutable InetSocketAddress object above/outside RPC, and the reconnect logic in the RPC Connection implementation also trusts the resolved address that was passed down.<br><br>If the NN suffers a failure that requires migration, it may be started on a different node with a different IP address. In this case, even if the name-addre...</blockquote></li>
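<blockquote><i>A plain-JDK illustration of the re-resolution step the report asks for; where this hook sits in the reconnect path is not shown here:</i><br>
<pre>
import java.net.InetSocketAddress;

final class AddressRefresher {
  // InetSocketAddress is immutable, so picking up a changed DNS mapping
  // means building a fresh address from the hostname before reconnecting.
  static InetSocketAddress reResolve(InetSocketAddress stale) {
    return new InetSocketAddress(stale.getHostName(), stale.getPort());
  }
}
</pre></blockquote>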
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7510">HADOOP-7510</a>.
Major improvement reported by daryn and fixed by daryn (security)<br>
<b>Tokens should use original hostname provided instead of ip</b><br>
<blockquote>Tokens currently store the ip:port of the remote server. This precludes tokens from being used after a host&apos;s ip is changed. Tokens should store the hostname used to make the RPC connection. This will enable new processes to use their existing tokens.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7539">HADOOP-7539</a>.
Major bug reported by johnvijoe and fixed by johnvijoe <br>
<b>merge hadoop archive goodness from trunk to .20</b><br>
<blockquote>hadoop archive in branch-0.20-security is outdated. When run recently, it exhibited several bugs that have all been fixed in trunk. This JIRA aims to bring all of those fixes into branch-0.20-security.<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7594">HADOOP-7594</a>.
Major new feature reported by szetszwo and fixed by szetszwo <br>
<b>Support HTTP REST in HttpServer</b><br>
<blockquote>Provide an API in HttpServer for supporting HTTP REST.<br><br>This is a part of HDFS-2284.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7596">HADOOP-7596</a>.
Major bug reported by eyang and fixed by eyang (build)<br>
<b>Enable jsvc to work with Hadoop RPM package</b><br>
<blockquote>For a secure Hadoop 0.20.2xx cluster, the datanode can only run with a 32 bit jvm because Hadoop only packages 32 bit jsvc. The build process should download the proper jsvc version based on the build architecture. In addition, the shell script should be enhanced to locate hadoop jar files in the proper location.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7599">HADOOP-7599</a>.
Major bug reported by eyang and fixed by eyang (scripts)<br>
<b>Improve hadoop setup conf script to setup secure Hadoop cluster</b><br>
<blockquote>Setting up a secure Hadoop cluster requires a lot of manual setup. The motivation of this jira is to provide setup scripts that automate setting up a secure Hadoop cluster.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7602">HADOOP-7602</a>.
Major bug reported by johnvijoe and fixed by johnvijoe <br>
<b>wordcount, sort etc on har files fails with NPE</b><br>
<blockquote>wordcount, sort etc on har files fails with NPE@createSocketAddr(NetUtils.java:137). </blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7603">HADOOP-7603</a>.
Major bug reported by eyang and fixed by eyang <br>
<b>Set default hdfs, mapred uid, and hadoop group gid for RPM packages</b><br>
<blockquote> Set hdfs, mapred uid, and hadoop uid to fixed numbers. (Eric Yang)<br><br> <br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7610">HADOOP-7610</a>.
Major bug reported by eyang and fixed by eyang (scripts)<br>
<b>/etc/profile.d does not exist on Debian</b><br>
<blockquote>As part of post installation script, there is a symlink created in /etc/profile.d/hadoop-env.sh to source /etc/hadoop/hadoop-env.sh. Therefore, users do not need to configure HADOOP_* environment. Unfortunately, /etc/profile.d only exists in Ubuntu. [Section 9.9 of the Debian Policy|http://www.debian.org/doc/debian-policy/ch-opersys.html#s9.9] states:<br><br>{quote}<br>A program must not depend on environment variables to get reasonable defaults. (That&apos;s because these environment variables would ha...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7615">HADOOP-7615</a>.
Major bug reported by eyang and fixed by eyang (scripts)<br>
<b>Binary layout does not put share/hadoop/contrib/*.jar into the class path</b><br>
<blockquote>For contrib projects, contrib jar files are not included in HADOOP_CLASSPATH in the binary layout. Several projects&apos; jar files should be copied to $HADOOP_PREFIX/share/hadoop/lib for binary deployment. The interesting jar files to include in $HADOOP_PREFIX/share/hadoop/lib are: capacity-scheduler, thriftfs, fairscheduler.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7625">HADOOP-7625</a>.
Major bug reported by owen.omalley and fixed by owen.omalley <br>
<b>TestDelegationToken is failing in 205</b><br>
<blockquote>After the patches on Friday, org.apache.hadoop.hdfs.security.TestDelegationToken is failing.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7626">HADOOP-7626</a>.
Major bug reported by eyang and fixed by eyang (scripts)<br>
<b>Allow overwrite of HADOOP_CLASSPATH and HADOOP_OPTS</b><br>
<blockquote>Quote email from Ashutosh Chauhan:<br><br>bq. There is a bug in hadoop-env.sh which prevents hcatalog server to start in secure settings. Instead of adding classpath, it overrides them. I was not able to verify where the bug belongs to, in HMS or in hadoop scripts. Looks like hadoop-env.sh is generated from hadoop-env.sh.template in installation process by HMS. Hand crafted patch follows:<br><br>bq. - export HADOOP_CLASSPATH=$f<br>bq. +export HADOOP_CLASSPATH=${HADOOP_CLASSPATH}:$f<br><br>bq. -export HADOOP_OPTS=...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7630">HADOOP-7630</a>.
Major bug reported by arpitgupta and fixed by eyang (conf)<br>
<b>hadoop-metrics2.properties should have a property *.period set to a default value for metrics</b><br>
<blockquote>currently the hadoop-metrics2.properties file does not have a value set for *.period<br><br>This property determines how often metrics are refreshed. We should set it to a default of 60</blockquote></li>
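<blockquote><i>The default the report proposes, as it would appear in hadoop-metrics2.properties:</i><br>
<pre>
# Default refresh period, in seconds, for all metrics sinks.
*.period=60
</pre></blockquote>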
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7631">HADOOP-7631</a>.
Major bug reported by rramya and fixed by eyang (conf)<br>
<b>In mapred-site.xml, stream.tmpdir is mapped to ${mapred.temp.dir} which is undeclared.</b><br>
<blockquote>Streaming jobs seem to fail with the following exception:<br><br>{noformat}<br>Exception in thread &quot;main&quot; java.io.IOException: No such file or directory<br> at java.io.UnixFileSystem.createFileExclusively(Native Method)<br> at java.io.File.checkAndCreate(File.java:1704)<br> at java.io.File.createTempFile(File.java:1792)<br> at org.apache.hadoop.streaming.StreamJob.packageJobJar(StreamJob.java:603)<br> at org.apache.hadoop.streaming.StreamJob.setJobConf(StreamJob.java:798)<br> a...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7633">HADOOP-7633</a>.
Major bug reported by arpitgupta and fixed by eyang (conf)<br>
<b>log4j.properties should be added to the hadoop conf on deploy</b><br>
<blockquote>currently the log4j properties are not present in the hadoop conf dir. We should add them so that log rotation happens appropriately and also define other logs that hadoop can generate for example the audit and the auth logs as well as the mapred summary logs etc.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7637">HADOOP-7637</a>.
Major bug reported by eyang and fixed by eyang (build)<br>
<b>Fair scheduler configuration file is not bundled in RPM</b><br>
<blockquote>205 build of tar is fine, but rpm failed with:<br><br>{noformat}<br> [rpm] Processing files: hadoop-0.20.205.0-1<br> [rpm] warning: File listed twice: /usr/libexec<br> [rpm] warning: File listed twice: /usr/libexec/hadoop-config.sh<br> [rpm] warning: File listed twice: /usr/libexec/jsvc.i386<br> [rpm] Checking for unpackaged file(s): /usr/lib/rpm/check-files /tmp/hadoop_package_build_hortonfo/BUILD<br> [rpm] error: Installed (but unpackaged) file(s) found:<br> [rpm] /etc/hadoop/fai...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7644">HADOOP-7644</a>.
Blocker bug reported by owen.omalley and fixed by owen.omalley (security)<br>
<b>Fix the delegation token tests to use the new style renewers</b><br>
<blockquote>Currently, TestDelegationTokenRenewal and TestDelegationTokenFetcher use the old style renewal and fail.<br><br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7645">HADOOP-7645</a>.
Blocker bug reported by atm and fixed by jnp (security)<br>
<b>HTTP auth tests requiring Kerberos infrastructure are not disabled on branch-0.20-security</b><br>
<blockquote>The back-port of HADOOP-7119 to branch-0.20-security included tests which require Kerberos infrastructure in order to run. In trunk and 0.23, these are disabled unless one enables the {{testKerberos}} maven profile. In branch-0.20-security, these tests are always run regardless, and so fail most of the time.<br><br>See this Jenkins build for an example: https://builds.apache.org/view/G-L/view/Hadoop/job/Hadoop-0.20-security/26/</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7649">HADOOP-7649</a>.
Blocker bug reported by kihwal and fixed by jnp (security, test)<br>
<b>TestMapredGroupMappingServiceRefresh and TestRefreshUserMappings fail after HADOOP-7625</b><br>
<blockquote>TestMapredGroupMappingServiceRefresh and TestRefreshUserMappings fail after HADOOP-7625.<br>The classpath has been changed, so they try to create the rsrc file in a jar and fail.<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7655">HADOOP-7655</a>.
Major improvement reported by arpitgupta and fixed by arpitgupta <br>
<b>provide a small validation script that smoke tests the installed cluster</b><br>
<blockquote>currently we have scripts that will setup a hadoop cluster, create users etc. We should add a script that will smoke test the installed cluster. The script could run 3 small mr jobs teragen, terasort and teravalidate and cleanup once its done.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7658">HADOOP-7658</a>.
Major bug reported by gkesavan and fixed by eyang <br>
<b>fix hadoop config template</b><br>
<blockquote>hadoop rpm config template by default sets the HADOOP_SECURE_DN_USER, HADOOP_SECURE_DN_LOG_DIR &amp; HADOOP_SECURE_DN_PID_DIR<br>the above values should only be set for secure deployments;<br># On secure datanodes, user to run the datanode as after dropping privileges<br>export HADOOP_SECURE_DN_USER=${HADOOP_HDFS_USER}<br><br># Where log files are stored. $HADOOP_HOME/logs by default.<br>export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER<br><br># Where log files are stored in the secure data environment.<br>export HADOOP_SE...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7661">HADOOP-7661</a>.
Major bug reported by jnp and fixed by jnp <br>
<b>FileSystem.getCanonicalServiceName throws NPE for any file system uri that doesn&apos;t have an authority.</b><br>
<blockquote>FileSystem.getCanonicalServiceName throws NPE for any file system uri that doesn&apos;t have an authority. <br><br>....<br>java.lang.NullPointerException<br>at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:138)<br>at org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:261)<br>at org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:174)<br>....</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7674">HADOOP-7674</a>.
Major bug reported by jnp and fixed by jnp <br>
<b>TestKerberosName fails in the 20 branch.</b><br>
<blockquote>TestKerberosName fails in the 20 branch. In fact this test has been duplicated in 20, with a small change to the rules.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7676">HADOOP-7676</a>.
Major bug reported by gkesavan and fixed by gkesavan <br>
<b>add rules to the core-site.xml template</b><br>
<blockquote>add rules for master and region in core-site.xml template.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7679">HADOOP-7679</a>.
Major bug reported by rramya and fixed by rramya (conf)<br>
<b>log4j.properties templates does not define mapred.jobsummary.logger</b><br>
<blockquote>In templates/conf/hadoop-env.sh, HADOOP_JOBTRACKER_OPTS is defined as -Dsecurity.audit.logger=INFO,DRFAS -Dmapred.audit.logger=INFO,MRAUDIT -Dmapred.jobsummary.logger=INFO,JSA ${HADOOP_JOBTRACKER_OPTS}<br>However, in templates/conf/log4j.properties, instead of mapred.jobsummary.logger, hadoop.mapreduce.jobsummary.logger is defined as follows:<br>hadoop.mapreduce.jobsummary.logger=${hadoop.root.logger}<br>This is preventing collection of jobsummary logs.<br><br>We have to consistently use mapred.jobsummary.logg...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7681">HADOOP-7681</a>.
Minor bug reported by arpitgupta and fixed by arpitgupta (conf)<br>
<b>log4j.properties is missing properties for security audit and hdfs audit should be changed to info</b><br>
<blockquote>(Arpit Gupta via Eric Yang)<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7683">HADOOP-7683</a>.
Minor bug reported by arpitgupta and fixed by arpitgupta <br>
<b>hdfs-site.xml template has properties that are not used in 20</b><br>
<blockquote>properties dfs.namenode.http-address and dfs.namenode.https-address should be removed</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7684">HADOOP-7684</a>.
Major bug reported by eyang and fixed by eyang (scripts)<br>
<b>jobhistory server and secondarynamenode should have init.d script</b><br>
<blockquote> Added init.d script for jobhistory server and secondary namenode. (Eric Yang)<br><br> <br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7685">HADOOP-7685</a>.
Major bug reported by devaraj.k and fixed by eyang (scripts)<br>
<b>Issues with hadoop-common-project\hadoop-common\src\main\packages\hadoop-setup-conf.sh file </b><br>
<blockquote>hadoop-common-project\hadoop-common\src\main\packages\hadoop-setup-conf.sh has following issues<br>1. check_permission does not work as expected if there are two folders with $NAME as part of their name inside $PARENT<br>e.g. /home/hadoop/conf, /home/hadoop/someconf, <br>The result of `ls -ln $PARENT | grep -w $NAME| awk &apos;{print $3}&apos;` is non zero..it is 0 0 and hence the following if check becomes true.<br>{code:xml}<br>if [ &quot;$OWNER&quot; != &quot;0&quot; ]; then<br>RESULT=1<br>break<br>fi <br>{code}<br><br>2. Spelling mistake<br>{code:xml}<br>H...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7691">HADOOP-7691</a>.
Major bug reported by gkesavan and fixed by eyang <br>
<b>hadoop deb pkg should take a diff group id</b><br>
<blockquote> Fixed conflict uid for install packages. (Eric Yang)<br><br> <br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7707">HADOOP-7707</a>.
Major improvement reported by arpitgupta and fixed by arpitgupta (conf)<br>
<b>improve config generator to allow users to specify proxy user, turn append on or off, turn webhdfs on or off</b><br>
<blockquote> Added toggle for dfs.support.append, webhdfs and hadoop proxy user to setup config script. (Arpit Gupta via Eric Yang)<br><br> <br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7708">HADOOP-7708</a>.
Critical bug reported by arpitgupta and fixed by eyang (conf)<br>
<b>config generator does not update the properties file if one exists already</b><br>
<blockquote> Fixed hadoop-setup-conf.sh to handle config file consistently. (Eric Yang)<br><br> <br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7710">HADOOP-7710</a>.
Major improvement reported by arpitgupta and fixed by arpitgupta <br>
<b>create a script to set up applications in order to create root directories for applications such as hbase, hcat, hive, etc.</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7711">HADOOP-7711</a>.
Major bug reported by arpitgupta and fixed by arpitgupta (conf)<br>
<b>hadoop-env.sh generated from templates has duplicate info</b><br>
<blockquote> Fixed recursive sourcing of HADOOP_OPTS environment variables (Arpit Gupta via Eric Yang)<br><br> <br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7715">HADOOP-7715</a>.
Major bug reported by arpitgupta and fixed by eyang (conf)<br>
<b>see log4j Error when running mr jobs and certain dfs calls</b><br>
<blockquote> Removed unnecessary security logger configuration. (Eric Yang)<br><br> <br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7720">HADOOP-7720</a>.
Major improvement reported by arpitgupta and fixed by arpitgupta (conf)<br>
<b>improve the hadoop-setup-conf.sh to read in the hbase user and setup the configs</b><br>
<blockquote> Added parameter for HBase user to setup config script. (Arpit Gupta via Eric Yang)<br><br> <br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7721">HADOOP-7721</a>.
Major bug reported by arpitgupta and fixed by jnp <br>
<b>dfs.web.authentication.kerberos.principal expects the full hostname and does not replace _HOST with the hostname</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7724">HADOOP-7724</a>.
Major bug reported by gkesavan and fixed by arpitgupta <br>
<b>hadoop-setup-conf.sh should put proxy user info into the core-site.xml </b><br>
<blockquote> Fixed hadoop-setup-conf.sh to put proxy user in core-site.xml. (Arpit Gupta via Eric Yang)<br><br> <br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-142">HDFS-142</a>.
Blocker bug reported by rangadi and fixed by dhruba <br>
<b>In 0.20, move blocks being written into a blocksBeingWritten directory</b><br>
<blockquote>Before 0.18, when the Datanode restarts, it deletes files under the data-dir/tmp directory since these files are not valid anymore. But in 0.18 it moves these files to the normal directory, incorrectly making them valid blocks. One of the following would work:<br><br>- remove the tmp files during upgrade, or<br>- if the files under /tmp are in pre-18 format (i.e. no generation), delete them.<br><br>Currently the effect of this bug is that these files end up failing block verification and eventually get deleted. But cause...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-200">HDFS-200</a>.
Blocker new feature reported by szetszwo and fixed by dhruba <br>
<b>In HDFS, sync() does not yet guarantee data is available to new readers</b><br>
<blockquote>In the append design doc (https://issues.apache.org/jira/secure/attachment/12370562/Appends.doc), it says<br>* A reader is guaranteed to be able to read data that was &apos;flushed&apos; before the reader opened the file<br><br>However, this feature is not yet implemented. Note that the operation &apos;flushed&apos; is now called &quot;sync&quot;.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-561">HDFS-561</a>.
Major sub-task reported by kzhang and fixed by kzhang (data-node, hdfs client)<br>
<b>Fix write pipeline READ_TIMEOUT</b><br>
<blockquote>When writing a file, the pipeline status read timeouts for datanodes are not set up properly.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-606">HDFS-606</a>.
Major bug reported by shv and fixed by shv (name-node)<br>
<b>ConcurrentModificationException in invalidateCorruptReplicas()</b><br>
<blockquote>{{BlockManager.invalidateCorruptReplicas()}} iterates over DatanodeDescriptor-s while removing corrupt replicas from the descriptors. This causes {{ConcurrentModificationException}} if there is more than one replica of the block. I ran into this exception debugging different scenarios in append, but it should be fixed in the trunk too.</blockquote></li>
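<blockquote><i>The bug class in miniature (the types here are stand-ins, not the BlockManager code): removal during iteration must go through the iterator itself.</i><br>
<pre>
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

final class SafeRemoval {
  static void removeCorrupt(List&lt;String&gt; replicas) {
    for (Iterator&lt;String&gt; it = replicas.iterator(); it.hasNext(); ) {
      if (it.next().startsWith("corrupt")) {
        it.remove();   // replicas.remove(...) here would throw a
                       // ConcurrentModificationException on the next it.next()
      }
    }
  }

  public static void main(String[] args) {
    List&lt;String&gt; replicas =
        new ArrayList&lt;String&gt;(Arrays.asList("ok-1", "corrupt-2", "ok-3"));
    removeCorrupt(replicas);
    System.out.println(replicas);   // [ok-1, ok-3]
  }
}
</pre></blockquote>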
<li> <a href="https://issues.apache.org/jira/browse/HDFS-630">HDFS-630</a>.
Major improvement reported by mry.maillist and fixed by clehene (hdfs client, name-node)<br>
<b>In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific datanodes when locating the next block.</b><br>
<blockquote>created from hdfs-200.<br><br>If during a write, the dfsclient sees that a block replica location for a newly allocated block is not-connectable, it re-requests the NN to get a fresh set of replica locations of the block. It tries this dfs.client.block.write.retries times (default 3), sleeping 6 seconds between each retry ( see DFSClient.nextBlockOutputStream).<br><br>This setting works well when you have a reasonable size cluster; if you have few datanodes in the cluster, every retry may pick the dead-d...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-724">HDFS-724</a>.
Blocker bug reported by szetszwo and fixed by hairong (data-node, hdfs client)<br>
<b>Pipeline close hangs if one of the datanodes is not responsive.</b><br>
<blockquote>In the new pipeline design, pipeline close is implemented by sending an additional empty packet. If one of the datanodes does not respond to this empty packet, the pipeline hangs. It seems that there is no timeout.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-826">HDFS-826</a>.
Major improvement reported by dhruba and fixed by dhruba (hdfs client)<br>
<b>Allow a mechanism for an application to detect that datanode(s) have died in the write pipeline</b><br>
<blockquote>HDFS does not replicate the last block of a file that is being written to by an application. Every datanode death in the write pipeline decreases the reliability of the last block of the file currently being written. This situation can be improved if the application can be notified of a datanode death in the write pipeline. Then, the application can decide what is the right course of action to be taken on this event.<br><br>In our use-case, the application can close the file on the fir...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-895">HDFS-895</a>.
Major improvement reported by dhruba and fixed by tlipcon (hdfs client)<br>
<b>Allow hflush/sync to occur in parallel with new writes to the file</b><br>
<blockquote>In the current trunk, the HDFS client methods writeChunk() and hflush/sync are synchronized. This means that if a hflush/sync is in progress, an application cannot write data to the HDFS client buffer. This reduces the write throughput of the transaction log in HBase. <br><br>The hflush/sync should allow new writes to happen to the HDFS client even when a hflush/sync is in progress. It can record the seqno of the message for which it should receive the ack, indicate to the DataStream thread to sta...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-988">HDFS-988</a>.
Blocker bug reported by dhruba and fixed by eli (name-node)<br>
<b>saveNamespace race can corrupt the edits log</b><br>
<blockquote>The administrator puts the namenode in safemode and then issues the savenamespace command. This can corrupt the edits log. The problem is that when the NN enters safemode, there could still be pending logSyncs occurring from other threads. Now, the saveNamespace command, when executed, would save an edits log with partial writes. I have seen this happen on 0.20.<br><br>https://issues.apache.org/jira/browse/HDFS-909?focusedCommentId=12828853&amp;page=com.atlassian.jira.plugin.system.issuetabpanels:comment-...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1054">HDFS-1054</a>.
Major improvement reported by tlipcon and fixed by tlipcon (hdfs client)<br>
<b>Remove unnecessary sleep after failure in nextBlockOutputStream</b><br>
<blockquote>If DFSOutputStream fails to create a pipeline, it currently sleeps 6 seconds before retrying. I don&apos;t see a great reason to wait at all, much less 6 seconds (especially now that HDFS-630 ensures that a retry won&apos;t go back to the bad node). We should at least make it configurable, and perhaps something like backoff makes some sense.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1057">HDFS-1057</a>.
Blocker sub-task reported by tlipcon and fixed by rash37 (data-node)<br>
<b>Concurrent readers hit ChecksumExceptions if following a writer to very end of file</b><br>
<blockquote>In BlockReceiver.receivePacket, it calls replicaInfo.setBytesOnDisk before calling flush(). Therefore, if there is a concurrent reader, it&apos;s possible to race here - the reader will see the new length while those bytes are still in the buffers of BlockReceiver. Thus the client will potentially see checksum errors or EOFs. Additionally, the last checksum chunk of the file is made accessible to readers even though it is not stable.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1118">HDFS-1118</a>.
Major bug reported by zshao and fixed by zshao <br>
<b>DFSOutputStream socket leak when cannot connect to DataNode</b><br>
<blockquote>The offending code is in {{DFSOutputStream.nextBlockOutputStream}}<br><br>This function retries several times to call {{createBlockOutputStream}}. Each time it fails, it leaves a {{Socket}} object in {{DFSOutputStream.s}}.<br>That object is never closed, but is overwritten the next time {{createBlockOutputStream}} is called.<br></blockquote></li>
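<blockquote><i>A minimal sketch of the leak-free retry shape the report implies; 'handshake' is a hypothetical stand-in for the part of createBlockOutputStream that can fail after the socket is already connected:</i><br>
<pre>
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

final class RetrySketch {
  static Socket connect(InetSocketAddress addr, int retries) throws IOException {
    IOException last = new IOException("no attempts made");
    for (int attempt = 0; attempt &lt; retries; attempt++) {
      Socket s = new Socket();
      try {
        s.connect(addr, 3000);
        handshake(s);               // may fail with the socket already open
        return s;
      } catch (IOException e) {
        last = e;
        // Without this close, every failed attempt leaks an open socket
        // when the reference is dropped on the next iteration.
        try { s.close(); } catch (IOException ignored) { }
      }
    }
    throw last;
  }

  private static void handshake(Socket s) throws IOException {
    // illustrative stub
  }
}
</pre></blockquote>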
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1122">HDFS-1122</a>.
Major sub-task reported by rash37 and fixed by rash37 <br>
<b>client block verification may result in blocks being added to the DataBlockScanner prematurely</b><br>
<blockquote>Found that when the DN uses client verification of a block that is open for writing, it adds the block to the DataBlockScanner prematurely.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1141">HDFS-1141</a>.
Blocker bug reported by tlipcon and fixed by tlipcon (name-node)<br>
<b>completeFile does not check lease ownership</b><br>
<blockquote>completeFile should check that the caller still owns the lease of the file that it&apos;s completing. This is for the &apos;testCompleteOtherLeaseHoldersFile&apos; case in HDFS-1139.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1164">HDFS-1164</a>.
Major bug reported by eli and fixed by tlipcon (contrib/hdfsproxy)<br>
<b>TestHdfsProxy is failing</b><br>
<blockquote>TestHdfsProxy is failing on trunk, seen in HDFS-1132 and HDFS-1143. It doesn&apos;t look like hudson posts test results for contrib and it&apos;s hard to see what&apos;s going on from the raw console output. Can someone with access to hudson upload the individual test output for TestHdfsProxy so we can see what the issue is?</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1186">HDFS-1186</a>.
Blocker bug reported by tlipcon and fixed by tlipcon (data-node)<br>
<b>0.20: DNs should interrupt writers at start of recovery</b><br>
<blockquote>When block recovery starts (eg due to NN recovering lease) it needs to interrupt any writers currently writing to those blocks. Otherwise, an old writer (who hasn&apos;t realized he lost his lease) can continue to write+sync to the blocks, and thus recovery ends up truncating data that has been sync()ed.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1197">HDFS-1197</a>.
Major bug reported by tlipcon and fixed by (data-node, hdfs client, name-node)<br>
<b>Blocks are considered &quot;complete&quot; prematurely after commitBlockSynchronization or DN restart</b><br>
<blockquote>I saw this failure once on my internal Hudson job that runs the append tests 48 times a day:<br>junit.framework.AssertionFailedError: expected:&lt;114688&gt; but was:&lt;98304&gt;<br> at org.apache.hadoop.hdfs.AppendTestUtil.check(AppendTestUtil.java:112)<br> at org.apache.hadoop.hdfs.TestFileAppend3.testTC2(TestFileAppend3.java:116)<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1202">HDFS-1202</a>.
Major bug reported by tlipcon and fixed by tlipcon (data-node)<br>
<b>DataBlockScanner throws NPE when updated before initialized</b><br>
<blockquote>Missing an isInitialized() check in updateScanStatusInternal</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1204">HDFS-1204</a>.
Major bug reported by tlipcon and fixed by rash37 <br>
<b>0.20: Lease expiration should recover single files, not entire lease holder</b><br>
<blockquote>This was brought up in HDFS-200 but didn&apos;t make it into the branch on Apache.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1207">HDFS-1207</a>.
Major bug reported by tlipcon and fixed by tlipcon (name-node)<br>
<b>0.20-append: stallReplicationWork should be volatile</b><br>
<blockquote>the stallReplicationWork member in FSNamesystem is accessed by multiple threads without synchronization, but isn&apos;t marked volatile. I believe this is responsible for about 1% failure rate on TestFileAppend4.testAppendSyncChecksum* on my 8-core test boxes (looking at logs I see replication happening even though we&apos;ve supposedly disabled it)</blockquote></li>
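<blockquote><i>The entire class of fix named in the title, in miniature (the surrounding class is illustrative):</i><br>
<pre>
class ReplicationStallFlag {
  // Written by one thread (e.g. a test disabling replication) and read
  // by the replication monitor thread: without 'volatile' the reader
  // may keep seeing a cached value and go on replicating.
  private volatile boolean stallReplicationWork = false;

  void setStall(boolean stall) { stallReplicationWork = stall; }

  boolean shouldStall() { return stallReplicationWork; }
}
</pre></blockquote>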
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1210">HDFS-1210</a>.
Trivial improvement reported by tlipcon and fixed by tlipcon (hdfs client)<br>
<b>DFSClient should log exception when block recovery fails</b><br>
<blockquote>Right now we just retry without necessarily showing the exception. It can be useful to see what the error was that prevented the recovery RPC from succeeding.<br>(I believe this only applies in 0.20 style of block recovery)</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1211">HDFS-1211</a>.
Minor improvement reported by tlipcon and fixed by tlipcon (data-node)<br>
<b>0.20 append: Block receiver should not log &quot;rewind&quot; packets at INFO level</b><br>
<blockquote>In the 0.20 append implementation, it logs an INFO level message for every packet that &quot;rewinds&quot; the end of the block file. This is really noisy for applications like HBase which sync every edit.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1218">HDFS-1218</a>.
Critical bug reported by tlipcon and fixed by tlipcon (data-node)<br>
<b>20 append: Blocks recovered on startup should be treated with lower priority during block synchronization</b><br>
<blockquote>When a datanode experiences power loss, it can come back up with truncated replicas (due to local FS journal replay). Those replicas should not be allowed to truncate the block during block synchronization if there are other replicas from DNs that have _not_ restarted.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1242">HDFS-1242</a>.
Major test reported by tlipcon and fixed by tlipcon <br>
<b>0.20 append: Add test for appendFile() race solved in HDFS-142</b><br>
<blockquote>This is a unit test that didn&apos;t make it into branch-0.20-append, but worth having in TestFileAppend4.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1252">HDFS-1252</a>.
Major test reported by tlipcon and fixed by tlipcon (test)<br>
<b>TestDFSConcurrentFileOperations broken in 0.20-append</b><br>
<blockquote>This test currently has several flaws:<br> - It calls DN.updateBlock with a BlockInfo instance, which then causes java.lang.RuntimeException: java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.server.namenode.BlocksMap$BlockInfo.&lt;init&gt;() in the logs when the DN tries to send blockReceived for the block<br> - It assumes that getBlockLocations returns an up-to-date length block after a sync, which is false. It happens to work because it calls getBlockLocations directly on the NN, and thus gets a...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1260">HDFS-1260</a>.
Critical bug reported by tlipcon and fixed by tlipcon <br>
<b>0.20: Block lost when multiple DNs trying to recover it to different genstamps</b><br>
<blockquote>Saw this issue on a cluster where some ops people were doing network changes without shutting down DNs first. So, recovery ended up getting started at multiple different DNs at the same time, and some race condition occurred that caused a block to get permanently stuck in recovery mode. What seems to have happened is the following:<br>- FSDataset.tryUpdateBlock called with old genstamp 7091, new genstamp 7094, while the block in the volumeMap (and on filesystem) was genstamp 7093<br>- we find the b...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1346">HDFS-1346</a>.
Major bug reported by hairong and fixed by hairong (data-node, hdfs client)<br>
<b>DFSClient receives out of order packet ack</b><br>
<blockquote>When running 0.20 patched with HDFS-101, we sometimes see an error as follows:<br>WARN hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block blk_-2871223654872350746_21421120java.io.IOException: Responseprocessor: Expecting seq<br>no for block blk_-2871223654872350746_21421120 10280 but received 10281<br>at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2570)<br><br>This indicates that the DFS client expects an ack for packet N, but receives an ack for packe...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1520">HDFS-1520</a>.
Major new feature reported by hairong and fixed by hairong (name-node)<br>
<b>HDFS 20 append: Lightweight NameNode operation to trigger lease recovery</b><br>
<blockquote>Currently HBase uses append to trigger the close of HLog during HLog split. Append is a very expensive operation, which involves not only NameNode operations but creating a write pipeline. If one of the datanodes on the pipeline has a problem, this recovery may take minutes. I&apos;d like to implement a lightweight NameNode operation to trigger lease recovery and make HBase use this instead.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1554">HDFS-1554</a>.
Major improvement reported by hairong and fixed by hairong <br>
<b>Append 0.20: New semantics for recoverLease</b><br>
<blockquote> Change the recoverLease API to return whether the file is closed. It also changes the semantics of recoverLease to start lease recovery immediately.<br><br> <br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1555">HDFS-1555</a>.
Major improvement reported by hairong and fixed by hairong <br>
<b>HDFS 20 append: Disallow pipeline recovery if a file is already being lease recovered</b><br>
<blockquote>When a file is under lease recovery and the writer is still alive, the write pipeline will be killed and then the writer will start a pipeline recovery. Sometimes the pipeline recovery may race before the lease recovery and as a result fail the lease recovery. This is very bad if we want to support the strong recoverLease semantics in HDFS-1554. So it would be nice if we could disallow a file&apos;s pipeline recovery while its lease recovery is in progress.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1779">HDFS-1779</a>.
Major bug reported by umamaheswararao and fixed by umamaheswararao (data-node, name-node)<br>
<b>After NameNode restart, clients cannot read partial files even after the client invokes sync.</b><br>
<blockquote>In the Append HDFS-200 issue,<br>If a file has 10 blocks and the client invokes the sync method after writing 5 blocks, then the NN will persist the block information in edits. <br>After this if we restart the NN, all the DataNodes will reregister with the NN. But the DataNodes do not send the blocks-being-written information to the NN when they reregister; DNs send the blocksBeingWritten information at DN startup. So, here the NameNode cannot find which datanodes the 5 persisted blocks belong to. This information can build based o...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1836">HDFS-1836</a>.
Major bug reported by hkdennis2k and fixed by bharathm (hdfs client)<br>
<b>Thousands of CLOSE_WAIT sockets</b><br>
<blockquote>$ /usr/sbin/lsof -i TCP:50010 | grep -c CLOSE_WAIT<br>4471<br><br>All is well when everything runs normally. <br>However, from time to time there are some &quot;DataStreamer Exception: java.net.SocketTimeoutException&quot; and &quot;DFSClient.processDatanodeError(2507) | Error Recovery for&quot; entries to be found in the log file, and the number of CLOSE_WAIT sockets just keeps increasing.<br><br>The CLOSE_WAIT handles may remain for hours and days; then &quot;Too many open files&quot; some day.<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2053">HDFS-2053</a>.
Minor bug reported by miguno and fixed by miguno (name-node)<br>
<b>Bug in INodeDirectory#computeContentSummary warning</b><br>
<blockquote>*How to reproduce*<br><br>{code}<br># create test directories<br>$ hadoop fs -mkdir /hdfs-1377/A<br>$ hadoop fs -mkdir /hdfs-1377/B<br>$ hadoop fs -mkdir /hdfs-1377/C<br><br># ...add some test data (few kB or MB) to all three dirs...<br><br># set space quota for subdir C only<br>$ hadoop dfsadmin -setSpaceQuota 1g /hdfs-1377/C<br><br># the following two commands _on the parent dir_ trigger the warning<br>$ hadoop fs -dus /hdfs-1377<br>$ hadoop fs -count -q /hdfs-1377<br>{code}<br><br>Warning message in the namenode logs:<br><br>{code}<br>2011-06-09 09:42...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2117">HDFS-2117</a>.
Minor bug reported by eli and fixed by eli (data-node)<br>
<b>DiskChecker#mkdirsWithExistsAndPermissionCheck may return true even when the dir is not created</b><br>
<blockquote>In branch-0.20-security as part of HADOOP-6566, DiskChecker#mkdirsWithExistsAndPermissionCheck will return true even if it wasn&apos;t able to create the directory, which means instead of throwing a DiskErrorException the code will proceed to getFileStatus and throw a FNF exception. Post HADOOP-7040, which modified makeInstance to catch not just DiskErrorExceptions but IOExceptions as well, this is not an issue since now the exception is caught either way. But for future modifications we should st...</blockquote></li>
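<blockquote><i>The check the report is driving at, as a plain-JDK sketch (not the DiskChecker code itself):</i><br>
<pre>
import java.io.File;
import java.io.IOException;

final class MkdirsCheck {
  static void ensureDir(File dir) throws IOException {
    // mkdirs() returns false both when creation failed and when the
    // directory already existed, so the second test disambiguates.
    if (!dir.mkdirs() &amp;&amp; !dir.isDirectory()) {
      throw new IOException("Cannot create directory: " + dir);
    }
  }
}
</pre></blockquote>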
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2190">HDFS-2190</a>.
Major bug reported by atm and fixed by atm (name-node)<br>
<b>NN fails to start if it encounters an empty or malformed fstime file</b><br>
<blockquote>On startup, the NN reads the fstime file of all the configured dfs.name.dirs to determine which one to load. However, if any of the searched directories contain an empty or malformed fstime file, the NN will fail to start. The NN should be able to just proceed with starting and ignore the directory containing the bad fstime file.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2202">HDFS-2202</a>.
Major new feature reported by eepayne and fixed by eepayne (balancer, data-node)<br>
<b>Changes to balancer bandwidth should not require datanode restart.</b><br>
<blockquote> New dfsadmin command added: [-setBalancerBandwidth &lt;bandwidth&gt;], where bandwidth is the maximum network bandwidth, in bytes per second, that the balancer is allowed to use on each datanode during balancing.<br><br>This is an incompatible change in 0.23. The versions of ClientProtocol and DatanodeProtocol are changed.<br></blockquote>
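A minimal sketch of applying the setting from client code, assuming the DistributedFileSystem#setBalancerBandwidth(long) method behind the new dfsadmin command; the bandwidth value is illustrative.
<pre>
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class SetBalancerBandwidth {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    if (fs instanceof DistributedFileSystem) {
      // Push a 1 MB/s per-datanode balancer limit to the live datanodes,
      // with no datanode restart required.
      ((DistributedFileSystem) fs).setBalancerBandwidth(1024 * 1024);
    }
  }
}
</pre>
</li>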
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2259">HDFS-2259</a>.
Minor bug reported by eli and fixed by eli (data-node)<br>
<b>DN web-UI doesn&apos;t work with paths that contain html </b><br>
<blockquote>The 20-based DN web UI doesn&apos;t work with paths that contain html. The paths need to be unescaped when used to access the file and escaped when printed for navigation.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2284">HDFS-2284</a>.
Major sub-task reported by sanjay.radia and fixed by szetszwo <br>
<b>Write Http access to HDFS</b><br>
<blockquote>HFTP allows only read access to HDFS via HTTP. Add write access to HDFS over HTTP.</blockquote>
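For illustration, a minimal sketch (the hostname and port are assumptions) of writing over HTTP once this feature is available: the ordinary FileSystem API is used with a webhdfs:// URI.
<pre>
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WebHdfsWrite {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // The webhdfs:// scheme routes reads and writes over the NameNode's
    // HTTP port (50070 by default).
    FileSystem fs = FileSystem.get(URI.create("webhdfs://namenode:50070/"), conf);
    FSDataOutputStream out = fs.create(new Path("/tmp/hello.txt"));
    out.writeBytes("hello over HTTP\n");
    out.close();
  }
}
</pre>
</li>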
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2300">HDFS-2300</a>.
Major bug reported by jnp and fixed by jnp <br>
<b>TestFileAppend4 and TestMultiThreadedSync fail on 20.append and 20-security.</b><br>
<blockquote>TestFileAppend4 and TestMultiThreadedSync fail on the 20.append and 20-security branch.<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2309">HDFS-2309</a>.
Major bug reported by jnp and fixed by jnp <br>
<b>TestRenameWhileOpen fails in branch-0.20-security</b><br>
<blockquote>TestRenameWhileOpen is failing in branch-0.20-security.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2317">HDFS-2317</a>.
Major sub-task reported by szetszwo and fixed by szetszwo <br>
<b>Read access to HDFS using HTTP REST</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2318">HDFS-2318</a>.
Major sub-task reported by szetszwo and fixed by szetszwo <br>
<b>Provide authentication to webhdfs using SPNEGO</b><br>
<blockquote> Added two new conf properties dfs.web.authentication.kerberos.principal and dfs.web.authentication.kerberos.keytab for the SPNEGO servlet filter.<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2320">HDFS-2320</a>.
Major bug reported by sureshms and fixed by sureshms (data-node, hdfs client, name-node)<br>
<b>Make merged protocol changes from 0.20-append to 0.20-security compatible with previous releases.</b><br>
<blockquote>0.20-append changes have been merged to 0.20-security. The merge has changes to version numbers in several protocols. This jira makes the protocol changes compatible with older releases, allowing clients running an older version to talk to servers running the 205 version, and clients running the 205 version to talk to older servers running 203 or 204.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2325">HDFS-2325</a>.
Blocker bug reported by charlescearl and fixed by kihwal (contrib/fuse-dfs, libhdfs)<br>
<b>Fuse-DFS fails to build on Hadoop 20.203.0</b><br>
<blockquote>In building fuse-dfs, the compile fails due to an argument mismatch between call to hdfsConnectAsUser on line 40 of src/contrib/fuse-dfs/src/fuse_connect.c and an earlier definition of hdfsConnectAsUser given in src/c++/libhdfs/hdfs.h.<br>I suggest changing hdfs.h. I made the following change in hdfs.h in my local copy:<br><br>106c106,107<br>&lt; hdfsFS hdfsConnectAsUser(const char* host, tPort port, const char *user);<br>---<br>&gt; // hdfsFS hdfsConnectAsUser(const char* host, tPort port, const char *us...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2328">HDFS-2328</a>.
Critical bug reported by daryn and fixed by owen.omalley <br>
<b>hftp throws NPE if security is not enabled on remote cluster</b><br>
<blockquote>If hftp cannot locate either a hdfs or hftp token in the ugi, it will call {{getDelegationToken}} to acquire one from the remote nn. This method may return a null {{Token}} if security is disabled(*) on the remote nn. Hftp will internally call its {{setDelegationToken}} which will throw a NPE when the token is {{null}}.<br><br>(*) Actually, if any problem happens while acquiring the token it assumes security is disabled! However, it&apos;s a pre-existing issue beyond the scope of the token renewal c...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2331">HDFS-2331</a>.
Major bug reported by abhijit.shingate and fixed by abhijit.shingate (hdfs client)<br>
<b>Hdfs compilation fails</b><br>
<blockquote>I am trying to perform complete build from trunk folder but the compilation fails.<br><br>*Commandline:*<br>mvn clean install <br><br>*Error Message:*<br><br>[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:2.<br>3.2:compile (default-compile) on project hadoop-hdfs: Compilation failure<br>[ERROR] \Hadoop\SVN\trunk\hadoop-hdfs-project\hadoop-hdfs\src\main\java\org<br>\apache\hadoop\hdfs\web\WebHdfsFileSystem.java:[209,21] type parameters of &lt;T&gt;T<br>cannot be determined; no unique maximal instance...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2333">HDFS-2333</a>.
Major bug reported by ikelly and fixed by szetszwo <br>
<b>HDFS-2284 introduced 2 findbugs warnings on trunk</b><br>
<blockquote>When HDFS-2284 was submitted it made DFSOutputStream public which triggered two SC_START_IN_CTOR findbug warnings.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2338">HDFS-2338</a>.
Major sub-task reported by jnp and fixed by jnp <br>
<b>Configuration option to enable/disable webhdfs.</b><br>
<blockquote> Added a conf property dfs.webhdfs.enabled for enabling/disabling webhdfs.<br></blockquote>
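A small sketch of toggling the property programmatically; in a real deployment it would normally be set in hdfs-site.xml instead.
<pre>
import org.apache.hadoop.conf.Configuration;

public class EnableWebHdfs {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Enable the webhdfs REST endpoint on the cluster's HTTP servers.
    conf.setBoolean("dfs.webhdfs.enabled", true);
    System.out.println(conf.getBoolean("dfs.webhdfs.enabled", false));
  }
}
</pre>
</li>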
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2340">HDFS-2340</a>.
Major sub-task reported by szetszwo and fixed by szetszwo <br>
<b>Support getFileBlockLocations and getDelegationToken in webhdfs</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2342">HDFS-2342</a>.
Blocker bug reported by kihwal and fixed by szetszwo (build)<br>
<b>TestSleepJob and TestHdfsProxy broken after HDFS-2284</b><br>
<blockquote>After HDFS-2284, TestSleepJob and TestHdfsProxy are failing.<br>The both work in rev 1167444 and fail in rev 1167663.<br>It will be great if they can be fixed for 205.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2348">HDFS-2348</a>.
Major sub-task reported by szetszwo and fixed by szetszwo <br>
<b>Support getContentSummary and getFileChecksum in webhdfs</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2356">HDFS-2356</a>.
Major sub-task reported by szetszwo and fixed by szetszwo <br>
<b>webhdfs: support case insensitive query parameter names</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2358">HDFS-2358</a>.
Major bug reported by rajsaha and fixed by daryn (name-node)<br>
<b>NPE when the default filesystem&apos;s uri has no authority</b><br>
<blockquote> Give a meaningful error message instead of an NPE.<br></blockquote>
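A minimal sketch of the triggering condition (the URIs are illustrative): a default filesystem URI with no authority component, which java.net.URI reports as null.
<pre>
import java.net.URI;

public class AuthorityCheck {
  public static void main(String[] args) {
    // A default-filesystem URI without an authority used to surface as an
    // NPE in the namenode rather than a clear error message.
    URI noAuthority = URI.create("hdfs:///");
    URI withAuthority = URI.create("hdfs://namenode:8020/");
    System.out.println(noAuthority.getAuthority());   // null
    System.out.println(withAuthority.getAuthority()); // namenode:8020
  }
}
</pre>
</li>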
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2359">HDFS-2359</a>.
Major bug reported by rajsaha and fixed by jeagles (data-node)<br>
<b>NPE found in Datanode log while Disk failed during different HDFS operation</b><br>
<blockquote>Scenario:<br>I have a cluster of 4 DNs, each of which has 12 disks.<br><br>In hdfs-site.xml I have &quot;dfs.datanode.failed.volumes.tolerated=3&quot;.<br><br>During the execution of distcp (hdfs-&gt;hdfs), I fail 3 disks in one Datanode by setting the data directory permissions to 000. The distcp job is successful, but I am getting some NullPointerExceptions in the Datanode log.<br><br>In one thread:<br>$hadoop distcp /user/$HADOOPQA_USER/data1 /user/$HADOOPQA_USER/data3<br><br>In another thread on a datanode:<br>$ chmod 000 /xyz/{0,1,2}/hadoop/v...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2361">HDFS-2361</a>.
Critical bug reported by rajsaha and fixed by jnp (name-node)<br>
<b>hftp is broken</b><br>
<blockquote>Distcp with hftp is failing.<br><br>{noformat}<br>$hadoop distcp hftp://&lt;NNhostname&gt;:50070/user/hadoopqa/1316814737/newtemp 1316814737/as<br>11/09/23 21:52:33 INFO tools.DistCp: srcPaths=[hftp://&lt;NNhostname&gt;:50070/user/hadoopqa/1316814737/newtemp]<br>11/09/23 21:52:33 INFO tools.DistCp: destPath=1316814737/as<br>Retrieving token from: https://&lt;NN IP&gt;:50470/getDelegationToken<br>Retrieving token from: https://&lt;NN IP&gt;:50470/getDelegationToken?renewer=mapred<br>11/09/23 21:52:34 INFO security.TokenCache: Got dt for h...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2366">HDFS-2366</a>.
Major bug reported by arpitgupta and fixed by szetszwo <br>
<b>webhdfs throws an NPE when ugi is null from getDelegationToken</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2368">HDFS-2368</a>.
Major bug reported by arpitgupta and fixed by szetszwo <br>
<b>defaults created for web keytab and principal, these properties should not have defaults</b><br>
<blockquote>the following defaults are set in hdfs-defaults.xml<br><br>&lt;property&gt;<br> &lt;name&gt;dfs.web.authentication.kerberos.principal&lt;/name&gt;<br> &lt;value&gt;HTTP/${dfs.web.hostname}@${kerberos.realm}&lt;/value&gt;<br> &lt;description&gt;<br> The HTTP Kerberos principal used by Hadoop-Auth in the HTTP endpoint.<br><br> The HTTP Kerberos principal MUST start with &apos;HTTP/&apos; per Kerberos<br> HTTP SPENGO specification.<br> &lt;/description&gt;<br>&lt;/property&gt;<br><br>&lt;property&gt;<br> &lt;name&gt;dfs.web.authentication.kerberos.keytab&lt;/name&gt;<br> &lt;value&gt;${user.home}/dfs.web....</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2373">HDFS-2373</a>.
Major bug reported by arpitgupta and fixed by arpitgupta <br>
<b>Commands using webhdfs and hftp print unnecessary debug information on the console with security enabled</b><br>
<blockquote>Run an hdfs command using either hftp or webhdfs and it prints the following line to the console (system out):<br><br>Retrieving token from: https://NN_HOST:50470/getDelegationToken<br><br><br>This probably happens in the code where we get the delegation token. It should be removed: people using dfs commands such as dfs -cat to get a handle on the content will now get an extra line that is not part of the actual content. This should either go only to the log or not be logged at all.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2375">HDFS-2375</a>.
Blocker bug reported by sureshms and fixed by sureshms (hdfs client)<br>
<b>TestFileAppend4 fails in 0.20.205 branch</b><br>
<blockquote>TestFileAppend4 fails due to a change from HDFS-2333. The test uses reflection to get to the method DFSOutputStream#getNumCurrentReplicas(). Since the HDFS-2333 patch changed this method from public to private, the reflective lookup fails, resulting in test failures.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2385">HDFS-2385</a>.
Major sub-task reported by szetszwo and fixed by szetszwo <br>
<b>Support delegation token renewal in webhdfs</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2392">HDFS-2392</a>.
Critical bug reported by rajsaha and fixed by daryn (name-node)<br>
<b>Distcp with hftp is failing again</b><br>
<blockquote>$ hadoop distcp hftp://&lt;NN Hostname&gt;:50070/user/hadoopqa/input1/part-00000 /user/hadoopqa/out3<br>11/09/30 18:57:59 INFO tools.DistCp: srcPaths=[hftp://&lt;NN Hostname&gt;:50070/user/hadoopqa/input1/part-00000]<br>11/09/30 18:57:59 INFO tools.DistCp: destPath=/user/hadoopqa/out3<br>11/09/30 18:58:00 INFO security.TokenCache: Got dt for<br>hftp://&lt;NN Hostname&gt;:50070/user/hadoopqa/input1/part-00000;uri=&lt;NN IP&gt;:50470;t.service=&lt;NN IP&gt;:50470<br>11/09/30 18:58:00 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN toke...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2395">HDFS-2395</a>.
Critical bug reported by arpitgupta and fixed by szetszwo <br>
<b>webhdfs api&apos;s should return a root element in the json response</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2403">HDFS-2403</a>.
Major bug reported by szetszwo and fixed by szetszwo <br>
<b>The renewer in NamenodeWebHdfsMethods.generateDelegationToken(..) is not used</b><br>
<blockquote>Below are some suggestions from Suresh.<br># renewer not used in #generateDelegationToken<br># put() does not use InputStream in and should not throw URISyntaxException<br># post() does not use InputStream in and should not throw URISyntaxException<br># get() should not throw URISyntaxException<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2404">HDFS-2404</a>.
Major bug reported by arpitgupta and fixed by sureshms <br>
<b>webhdfs liststatus json response is not correct</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2408">HDFS-2408</a>.
Blocker bug reported by stack and fixed by stack (hdfs client)<br>
<b>DFSClient#getNumCurrentReplicas is package private in 205 but public in branch-0.20-append</b><br>
<blockquote>The commit below broke hdfs-826 for hbase in 205 rc1. It changes the accessibility of getNumCurrentReplicas from public to package private, and currently shipping hbase versions can no longer get at this method.<br><br>{code}<br>Revision 1174483 - (view) (download) (annotate) - [select for diffs] <br>Modified Fri Sep 23 01:30:18 2011 UTC (13 days, 4 hours ago) by szetszwo <br>File length: 136876 byte(s) <br>Diff to previous 1174479 (colored)<br>svn merge -c 1171137 from branch-0.20-security for HDFS-2333.<br>{code}<br><br>Her...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2411">HDFS-2411</a>.
Major bug reported by arpitgupta and fixed by jnp <br>
<b>with webhdfs enabled in secure mode the auth to local mappings are not being respected.</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1734">MAPREDUCE-1734</a>.
Blocker improvement reported by tomwhite and fixed by tlipcon (documentation)<br>
<b>Un-deprecate the old MapReduce API in the 0.20 branch</b><br>
<blockquote>This issue is to un-deprecate the &quot;old&quot; MapReduce API (in o.a.h.mapred) in the next 0.20 release, as discussed at http://www.mail-archive.com/mapreduce-dev@hadoop.apache.org/msg01833.html</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2187">MAPREDUCE-2187</a>.
Major bug reported by azaroth and fixed by anupamseth <br>
<b>map tasks timeout during sorting</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2324">MAPREDUCE-2324</a>.
Major bug reported by tlipcon and fixed by revans2 <br>
<b>Job should fail if a reduce task can&apos;t be scheduled anywhere</b><br>
<blockquote>If there&apos;s a reduce task that needs more disk space than is available on any mapred.local.dir in the cluster, that task will stay pending forever. For example, we produced this in a QA cluster by accidentally running terasort with one reducer - since no mapred.local.dir had 1T free, the job remained in pending state for several days. The reason for the &quot;stuck&quot; task wasn&apos;t clear from a user perspective until we looked at the JT logs.<br><br>Probably better to just fail the job if a reduce task goes ...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2489">MAPREDUCE-2489</a>.
Major bug reported by naisbitt and fixed by naisbitt (jobtracker)<br>
<b>Jobsplits with random hostnames can make the queue unusable</b><br>
<blockquote>We saw an issue where a custom InputSplit was returning invalid hostnames for the splits that were then causing the JobTracker to attempt to excessively resolve host names. This caused a major slowdown for the JobTracker. We should prevent invalid InputSplit hostnames from affecting everyone else.<br><br>I propose we implement some verification for the hostnames to try to ensure that we only do DNS lookups on valid hostnames (and fail otherwise). We could also fail the job after a certain number...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2494">MAPREDUCE-2494</a>.
Major improvement reported by revans2 and fixed by revans2 (distributed-cache)<br>
<b>Make the distributed cache delete entries using LRU priority</b><br>
<blockquote> Added config option mapreduce.tasktracker.cache.local.keep.pct to the TaskTracker. It is the target percentage of the local distributed cache that should be kept between garbage collection runs. In practice it will delete unused distributed cache entries in LRU order until the size of the cache is less than mapreduce.tasktracker.cache.local.keep.pct of the maximum cache size. This is a floating point value between 0.0 and 1.0. The default is 0.95.<br></blockquote>
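Not the TaskTracker implementation, just a minimal self-contained illustration of the LRU ordering the cleanup policy uses, here via java.util.LinkedHashMap in access order.
<pre>
import java.util.LinkedHashMap;
import java.util.Map;

public class LruIllustration {
  public static void main(String[] args) {
    final int maxEntries = 3;
    // Access-ordered LinkedHashMap: iteration order is least recently
    // used first, and the eldest entry is evicted past the limit.
    Map&lt;String, Long&gt; cache =
        new LinkedHashMap&lt;String, Long&gt;(16, 0.75f, true) {
          protected boolean removeEldestEntry(Map.Entry&lt;String, Long&gt; eldest) {
            return size() &gt; maxEntries;
          }
        };
    cache.put("a.jar", 1L);
    cache.put("b.jar", 2L);
    cache.get("a.jar");      // touch a.jar: it becomes most recently used
    cache.put("c.jar", 3L);
    cache.put("d.jar", 4L);  // evicts b.jar, the least recently used entry
    System.out.println(cache.keySet()); // [a.jar, c.jar, d.jar]
  }
}
</pre>
</li>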
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2549">MAPREDUCE-2549</a>.
Major bug reported by devaraj.k and fixed by devaraj.k (contrib/eclipse-plugin, contrib/streaming)<br>
<b>Potential resource leaks in HadoopServer.java, RunOnHadoopWizard.java and Environment.java</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2610">MAPREDUCE-2610</a>.
Major bug reported by jrottinghuis and fixed by jrottinghuis (client)<br>
<b>Inconsistent API JobClient.getQueueAclsForCurrentUser</b><br>
<blockquote>Clients need access to the current user&apos;s queue name.<br>The public method JobClient.getQueueAclsForCurrentUser() returns QueueAclsInfo[].<br>The QueueAclsInfo class has default access. A public method should not return a package-private class.<br><br>The QueueAclsInfo class, its two constructors, and its getQueueName and getOperations methods should be public.</blockquote>
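A toy illustration of the API problem (the names here are hypothetical, not the MapReduce API): a public method that returns a package-private type is unusable from other packages, because callers cannot even declare a variable of the return type.
<pre>
package example;

class Info {                       // package-private: invisible outside "example"
  public String getQueueName() { return "default"; }
}

public class Api {
  // Public method with a hidden return type: from another package,
  // example.Api.getInfo() returns a value the caller cannot name,
  // so Info (and its accessors) must be made public.
  public static Info getInfo() {
    return new Info();
  }
}
</pre>
</li>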
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2650">MAPREDUCE-2650</a>.
Major bug reported by sherri_chen and fixed by sherri_chen <br>
<b>back-port MAPREDUCE-2238 to 0.20-security</b><br>
<blockquote>Dev had seen the attempt directory permission getting set to 000 or 111 in the CI builds and tests run on dev desktops with 0.20-security.<br>MAPREDUCE-2238 reported and fixed the issue for 0.22.0, back-port to 0.20-security is needed.<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2705">MAPREDUCE-2705</a>.
Major bug reported by tgraves and fixed by tgraves (tasktracker)<br>
<b>tasks localized and launched serially by TaskLauncher - causing other tasks to be delayed</b><br>
<blockquote>The current TaskLauncher serially launches new tasks one at a time. During the launch it does the localization and then starts the map/reduce task. This can cause any other tasks to be blocked waiting for the current task to be localized and started. In some instances we have seen a task that has a large file to localize (1.2MB) block another task for about 40 minutes. This particular task being blocked was a cleanup task which caused the job to be delayed finishing for the 40 minutes.<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2729">MAPREDUCE-2729</a>.
Major improvement reported by sherri_chen and fixed by sherri_chen <br>
<b>Reducers are always counted having &quot;pending tasks&quot; even if they can&apos;t be scheduled yet because not enough of their mappers have completed</b><br>
<blockquote>In capacity scheduler, number of users in a queue needing slots are calculated based on whether users&apos; jobs have any pending tasks.<br>This works fine for map tasks. However, for reduce tasks, jobs do not need reduce slots until the minimum number of map tasks have been completed.<br><br>Here, we add checking whether reduce is ready to schedule (i.e. if a job has completed enough map tasks) when we increment number of users in a queue needing reduce slots.<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2764">MAPREDUCE-2764</a>.
Major bug reported by daryn and fixed by owen.omalley <br>
<b>Fix renewal of dfs delegation tokens</b><br>
<blockquote> Generalizes token renewal and canceling to a common interface and provides a plugin interface for adding renewers for new kinds of tokens. Hftp changed to store the tokens as HFTP and renew them over http.<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2777">MAPREDUCE-2777</a>.
Major new feature reported by jeagles and fixed by amar_kamat <br>
<b>Backport MAPREDUCE-220 to Hadoop 20 security branch</b><br>
<blockquote> Adds cumulative cpu usage and total heap usage to task counters. This is a backport of MAPREDUCE-220 and MAPREDUCE-2469.<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2780">MAPREDUCE-2780</a>.
Major sub-task reported by daryn and fixed by daryn <br>
<b>Standardize the value of token service</b><br>
<blockquote>The token&apos;s service field must (currently) be set to &quot;ip:port&quot;. All the producers of a token are independently building the service string. This should be done via a common method, to reduce the chance of error and to make it easy to change the field value in the (near) future.</blockquote>
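A hedged sketch of the centralization being proposed; buildService() is a hypothetical stand-in for the common method, not the actual Hadoop API.
<pre>
import java.net.InetSocketAddress;

public class TokenServiceUtil {
  // Build the "ip:port" service string in exactly one place, so a future
  // format change touches a single method.
  public static String buildService(InetSocketAddress addr) {
    return addr.getAddress().getHostAddress() + ":" + addr.getPort();
  }

  public static void main(String[] args) {
    InetSocketAddress addr = new InetSocketAddress("10.0.0.1", 8020);
    System.out.println(buildService(addr)); // 10.0.0.1:8020
  }
}
</pre>
</li>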
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2852">MAPREDUCE-2852</a>.
Major bug reported by eli and fixed by kihwal (tasktracker)<br>
<b>Jira for YDH bug 2854624 </b><br>
<blockquote>The DefaultTaskController and LinuxTaskController reference Yahoo! internal bug 2854624:<br><br>{code}<br>FileSystem rawFs = FileSystem.getLocal(getConf()).getRaw();<br>long logSize = 0; //TODO: Ref BUG:2854624<br>{code}<br><br>This jira tracks this TODO. If someone w/ access to Yahoo&apos;s bugzilla could update this jira with what the bug is that would be great.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2915">MAPREDUCE-2915</a>.
Major bug reported by kihwal and fixed by kihwal (task-controller)<br>
<b>LinuxTaskController does not work when JniBasedUnixGroupsNetgroupMapping or JniBasedUnixGroupsMapping is enabled</b><br>
<blockquote>When a job is submitted, LinuxTaskController launches the native task-controller binary for job initialization. The native program does a series of prep work and calls execv() to run JobLocalizer. It was observed that JobLocalizer fails to run when JniBasedUnixGroupsNetgroupMapping or JniBasedUnixGroupsMapping is enabled, resulting in 100% job failures.<br><br>JobLocalizer normally does not need the native library (libhadoop) for its functioning, but enabling a JNI user-to-group mapping functi...</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2928">MAPREDUCE-2928</a>.
Major sub-task reported by eli and fixed by eli (tasktracker)<br>
<b>MR-2413 improvements</b><br>
<blockquote>Tracks improvements to MR-2413. See [this comment|https://issues.apache.org/jira/browse/MAPREDUCE-2413?focusedCommentId=13095073&amp;page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13095073].</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2981">MAPREDUCE-2981</a>.
Major improvement reported by matei and fixed by matei (contrib/fair-share)<br>
<b>Backport trunk fairscheduler to 0.20-security branch</b><br>
<blockquote>A lot of improvements have been made to the fair scheduler in 0.21, 0.22 and trunk, but have not been ported back to the new 0.20.20X releases that are currently considered the stable branch of Hadoop.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3076">MAPREDUCE-3076</a>.
Blocker bug reported by acmurthy and fixed by acmurthy (test)<br>
<b>TestSleepJob fails </b><br>
<blockquote>TestSleepJob fails; it was intended to be used in other tests for MAPREDUCE-2981.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3081">MAPREDUCE-3081</a>.
Major bug reported by vitthal_gogate and fixed by (contrib/vaidya)<br>
<b>Change the name format for hadoop core and vaidya jar to be hadoop-{core/vaidya}-{version}.jar in vaidya.sh</b><br>
<blockquote> contrib/vaidya/bin/vaidya.sh script fixed to use appropriate jars and classpath.<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3112">MAPREDUCE-3112</a>.
Major bug reported by eyang and fixed by eyang (contrib/streaming)<br>
<b>Calling hadoop cli inside mapreduce job leads to errors</b><br>
<blockquote> Removed inheritance of certain server environment variables (HADOOP_OPTS and HADOOP_ROOT_LOGGER) in the task attempt process.<br></blockquote></li>
</ul>
<h2>Changes since Hadoop 0.20.203.0</h2>
<ul>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2846">MAPREDUCE-2846</a>.
Blocker bug reported by aw and fixed by owen.omalley (task, task-controller, tasktracker)<br>
<b>a small % of all tasks fail with DefaultTaskController</b><br>
<blockquote>Fixed a race condition in writing the log index file that caused tasks to &apos;fail&apos;.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2804">MAPREDUCE-2804</a>.
Blocker bug reported by aw and fixed by owen.omalley <br>
<b>&quot;Creation of symlink to attempt log dir failed.&quot; message is not useful</b><br>
<blockquote>Removed duplicate chmods of job log dir that were vulnerable to race conditions between tasks. Also improved the messages when the symlinks failed to be created.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2651">MAPREDUCE-2651</a>.
Major bug reported by bharathm and fixed by bharathm (task-controller)<br>
<b>Race condition in Linux Task Controller for job log directory creation</b><br>
<blockquote>There is a rare race condition in the Linux task controller when concurrent task processes try to create the job log directory at the same time.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2621">MAPREDUCE-2621</a>.
Minor bug reported by sherri_chen and fixed by sherri_chen <br>
<b>TestCapacityScheduler fails with &quot;Queue &quot;q1&quot; does not exist&quot;</b><br>
<blockquote>{quote}<br>Error Message<br><br>Queue &quot;q1&quot; does not exist<br><br>Stacktrace<br><br>java.io.IOException: Queue &quot;q1&quot; does not exist<br> at org.apache.hadoop.mapred.JobInProgress.&lt;init&gt;(JobInProgress.java:354)<br> at org.apache.hadoop.mapred.TestCapacityScheduler$FakeJobInProgress.&lt;init&gt;(TestCapacityScheduler.java:172)<br> at org.apache.hadoop.mapred.TestCapacityScheduler.submitJob(TestCapacityScheduler.java:794)<br> at org.apache.hadoop.mapred.TestCapacityScheduler.submitJob(TestCapacityScheduler.java:818)<br> at org.apache.hadoop.mapred.TestCapacityScheduler.submitJobAndInit(TestCapacityScheduler.java:825)<br> at org.apache.hadoop.mapred.TestCapacityScheduler.testMultiTaskAssignmentInMultipleQueues(TestCapacityScheduler.java:1109)<br>{quote}<br><br>When queue name is invalid, an exception is thrown now. <br><br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2558">MAPREDUCE-2558</a>.
Major new feature reported by naisbitt and fixed by naisbitt (jobtracker)<br>
<b>Add queue-level metrics 0.20-security branch</b><br>
<blockquote>We would like to record and present the jobtracker metrics on a per-queue basis.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2555">MAPREDUCE-2555</a>.
Minor bug reported by tgraves and fixed by tgraves (tasktracker)<br>
<b>JvmInvalidate errors in the gridmix TT logs</b><br>
<blockquote>Observing a lot of jvmValidate exceptions in TT logs for grid mix run<br><br><br><br>************************<br>2011-04-28 02:00:37,578 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 46121, call<br>statusUpdate(attempt_201104270735_5993_m_003305_0, org.apache.hadoop.mapred.MapTaskStatus@1840a9c,<br>org.apache.hadoop.mapred.JvmContext@1d4ab6b) from 127.0.0.1:50864: error: java.io.IOException: JvmValidate Failed.<br>Ignoring request from task: attempt_201104270735_5993_m_003305_0, with JvmId:<br>jvm_201104270735_5993_m_103399012gsbl20430: java.io.IOException: JvmValidate Failed. Ignoring request from task:<br>attempt_201104270735_5993_m_003305_0, with JvmId: jvm_201104270735_5993_m_103399012gsbl20430: --<br> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1386)<br> at java.security.AccessController.doPrivileged(Native Method)<br> at javax.security.auth.Subject.doAs(Subject.java:396)<br> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)<br> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1384)<br><br><br>*********************<br><br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2529">MAPREDUCE-2529</a>.
Major bug reported by tgraves and fixed by tgraves (tasktracker)<br>
<b>Recognize Jetty bug 1342 and handle it</b><br>
<blockquote>Added 2 new config parameters:<br><br>mapreduce.reduce.shuffle.catch.exception.stack.regex<br>mapreduce.reduce.shuffle.catch.exception.message.regex</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2524">MAPREDUCE-2524</a>.
Minor improvement reported by tgraves and fixed by tgraves (tasktracker)<br>
<b>Backport trunk heuristics for failing maps when we get fetch failures retrieving map output during shuffle</b><br>
<blockquote>Added a new configuration option: mapreduce.reduce.shuffle.maxfetchfailures, and removed a no longer used option: mapred.reduce.copy.backoff.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2514">MAPREDUCE-2514</a>.
Trivial bug reported by jeagles and fixed by jeagles (tasktracker)<br>
<b>ReinitTrackerAction class name misspelled RenitTrackerAction in task tracker log</b><br>
<blockquote></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2495">MAPREDUCE-2495</a>.
Minor improvement reported by revans2 and fixed by revans2 (distributed-cache)<br>
<b>The distributed cache cleanup thread has no monitoring to check to see if it has died for some reason</b><br>
<blockquote>The cleanup thread in the distributed cache handles IOExceptions and the like correctly, but, to be a bit more defensive, it would be good to monitor the thread and regularly check that it is still alive, so that the distributed cache does not fill up the entire disk on the node.</blockquote>
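Not the actual TaskTracker code, just a minimal sketch of the kind of liveness check the report asks for: periodically verify the cleanup thread is alive, and restart it if it has died.
<pre>
public class CleanupMonitor {
  private Thread cleanupThread;

  private Thread startCleanup() {
    Thread t = new Thread(new Runnable() {
      public void run() {
        // scan and delete unused cache entries here
      }
    }, "cache-cleanup");
    t.setDaemon(true);
    t.start();
    return t;
  }

  // Invoked periodically (e.g. from a timer): restart a dead cleanup thread.
  public synchronized void check() {
    if (cleanupThread == null || !cleanupThread.isAlive()) {
      cleanupThread = startCleanup();
    }
  }
}
</pre>
</li>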
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2490">MAPREDUCE-2490</a>.
Trivial improvement reported by jeagles and fixed by jeagles (jobtracker)<br>
<b>Log blacklist debug count</b><br>
<blockquote>Gain some insight into blacklist increments/decrements by enhancing the debug logging</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2479">MAPREDUCE-2479</a>.
Major improvement reported by revans2 and fixed by revans2 (tasktracker)<br>
<b>Backport MAPREDUCE-1568 to hadoop security branch</b><br>
<blockquote>Added mapreduce.tasktracker.distributedcache.checkperiod to the task tracker; it defines the period to wait between distributed cache cleanups. The default is 1 min.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2456">MAPREDUCE-2456</a>.
Trivial improvement reported by naisbitt and fixed by naisbitt (jobtracker)<br>
<b>Show the reducer taskid and map/reduce tasktrackers for &quot;Failed fetch notification #_ for task attempt...&quot; log messages</b><br>
<blockquote>This jira is to provide more useful log information for debugging the &quot;Too many fetch-failures&quot; error.<br><br>Looking at the JobTracker node, we see messages like this:<br>&quot;2010-12-14 00:00:06,911 INFO org.apache.hadoop.mapred.JobInProgress: Failed fetch notification #8 for task<br>attempt_201011300729_189729_m_007458_0&quot;.<br><br>It would be useful to see which reducer is reporting the error here.<br><br>So, I propose we add the following to these log messages:<br> 1. reduce task ID<br> 2. TaskTracker nodenames for both the mapper and the reducer<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2451">MAPREDUCE-2451</a>.
Trivial bug reported by tgraves and fixed by tgraves (jobtracker)<br>
<b>Log the reason string of healthcheck script</b><br>
<blockquote>The information on why a specific TaskTracker got blacklisted is not stored anywhere. The jobtracker web ui will show the detailed reason string until the TT gets unblacklisted. After that it is lost.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2447">MAPREDUCE-2447</a>.
Minor bug reported by sseth and fixed by sseth <br>
<b>Set JvmContext sooner for a task - MR2429</b><br>
<blockquote>TaskTracker.validateJVM() throws an NPE when setupWorkDir() throws an IOException, because<br>taskFinal.setJvmContext() has not been executed yet.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2443">MAPREDUCE-2443</a>.
Minor bug reported by sseth and fixed by sseth (test)<br>
<b>Fix FI build - broken after MR-2429</b><br>
<blockquote>src/test/system/aop/org/apache/hadoop/mapred/TaskAspect.aj:72 [warning] advice defined in org.apache.hadoop.mapred.TaskAspect has not been applied [Xlint:adviceDidNotMatch]<br><br>After the fix in MR-2429, the call to ping in TaskAspect needs to be fixed.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2429">MAPREDUCE-2429</a>.
Major bug reported by acmurthy and fixed by sseth (tasktracker)<br>
<b>Check jvmid during task status report</b><br>
<blockquote>Currently the TT doesn&apos;t check to ensure the jvmid is relevant during communication with the Child via TaskUmbilicalProtocol.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2418">MAPREDUCE-2418</a>.
Minor bug reported by sseth and fixed by sseth <br>
<b>Errors not shown in the JobHistory servlet (specifically Counter Limit Exceeded)</b><br>
<blockquote>Job error details are not displayed in the JobHistory servlet, e.g. errors like &apos;Counter limit exceeded for a job&apos;. <br>jobdetails.jsp has &apos;Failure Info&apos;, but this is missing in jobdetailshistory.jsp.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2415">MAPREDUCE-2415</a>.
Major sub-task reported by bharathm and fixed by bharathm (task-controller, tasktracker)<br>
<b>Distribute TaskTracker userlogs onto multiple disks</b><br>
<blockquote>Currently, userlogs directory in TaskTracker is placed under hadoop.log.dir like &lt;hadoop.log.dir&gt;/userlogs. I am proposing to spread these userlogs onto multiple configured mapred.local.dirs to strengthen TaskTracker reliability w.r.t disk failures. </blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2413">MAPREDUCE-2413</a>.
Major sub-task reported by bharathm and fixed by ravidotg (task-controller, tasktracker)<br>
<b>TaskTracker should handle disk failures at both startup and runtime</b><br>
<blockquote>At present, TaskTracker doesn&apos;t handle disk failures properly both at startup and runtime.<br><br>(1) Currently TaskTracker doesn&apos;t come up if any of the mapred-local-dirs is on a bad disk. TaskTracker should ignore that particular mapred-local-dir and start up and use only the remaining good mapred-local-dirs.<br>(2) If a disk goes bad while TaskTracker is running, currently TaskTracker doesn&apos;t do anything special. This results in either<br> (a) TaskTracker continues to &quot;try to use that bad disk&quot; and this results in lots of task failures and possibly job failures(because of multiple TTs having bad disks) and eventually these TTs getting graylisted for all jobs. And this needs manual restart of TT with modified configuration of mapred-local-dirs avoiding the bad disk. OR<br> (b) Health check script identifying the disk as bad and the TT gets blacklisted. And this also needs manual restart of TT with modified configuration of mapred-local-dirs avoiding the bad disk.<br><br>This JIRA is to make TaskTracker more fault-tolerant to disk failures solving (1) and (2). i.e. TT should start even if at least one of the mapred-local-dirs is on a good disk and TT should adjust its in-memory list of mapred-local-dirs and avoid using bad mapred-local-dirs.<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2411">MAPREDUCE-2411</a>.
Minor bug reported by dking and fixed by dking <br>
<b>When you submit a job to a queue with no ACLs you get an inscrutable NPE</b><br>
<blockquote>With this patch we&apos;ll check for that, and print a message in the logs. Then at submission time you find out about it.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2409">MAPREDUCE-2409</a>.
Major bug reported by sseth and fixed by sseth (distributed-cache)<br>
<b>Distributed Cache does not differentiate between file /archive for files with the same path</b><br>
<blockquote>If a &apos;global&apos; file is specified as a &apos;file&apos; by one job, subsequent jobs cannot override this source file to be an &apos;archive&apos; (until the TT cleans up its cache or the TT restarts).<br>The same holds the other way around: &apos;archive&apos; to &apos;file&apos;.<br><br>In case of an accidental submission using the wrong type, some of the tasks for the second job will end up seeing the source file as an archive, others as a file.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2366">MAPREDUCE-2366</a>.
Major bug reported by owen.omalley and fixed by dking (tasktracker)<br>
<b>TaskTracker can&apos;t retrieve stdout and stderr from web UI</b><br>
<blockquote>Problem where the task browser UI can&apos;t retrieve the stdxxx printouts of streaming jobs that abend in the unix code, in the common case where the containing job doesn&apos;t reuse JVMs.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2364">MAPREDUCE-2364</a>.
Major bug reported by owen.omalley and fixed by devaraj (tasktracker)<br>
<b>Shouldn&apos;t hold lock on rjob while localizing resources.</b><br>
<blockquote>There is a deadlock while localizing resources on the TaskTracker.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2362">MAPREDUCE-2362</a>.
Major bug reported by owen.omalley and fixed by roelofs (test)<br>
<b>Unit test failures: TestBadRecords and TestTaskTrackerMemoryManager</b><br>
<blockquote>Fix unit-test failures: TestBadRecords (NPE due to rearranged MapTask code) and TestTaskTrackerMemoryManager (need hostname in output-string pattern).</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2360">MAPREDUCE-2360</a>.
Major bug reported by owen.omalley and fixed by (client)<br>
<b>Pig fails when using non-default FileSystem</b><br>
<blockquote>The job client strips the file system from the user&apos;s job jar, which causes breakage when it isn&apos;t the default file system.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2359">MAPREDUCE-2359</a>.
Major bug reported by owen.omalley and fixed by ramach <br>
<b>Distributed cache doesn&apos;t use non-default FileSystems correctly</b><br>
<blockquote>We are passing fs.default.name as viewfs:/// in core-site.xml on the oozie server.<br>The default name node in the configuration is also viewfs:///.<br><br>We are using an hdfs:// path for the application.<br>It gives the following error:<br><br>IllegalArgumentException: Wrong FS:<br>hdfs://nn/user/strat_ci/oozie-oozi/0000002-110217014830452-oozie-oozi-W/hadoop1--map-reduce/map-reduce-launcher.jar,<br>expected: viewfs:/</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2358">MAPREDUCE-2358</a>.
Major bug reported by owen.omalley and fixed by ramach <br>
<b>MapReduce assumes HDFS as the default filesystem</b><br>
<blockquote>Mapred assumes hdfs as the default fs even when defined otherwise.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2357">MAPREDUCE-2357</a>.
Major bug reported by owen.omalley and fixed by vicaya (task)<br>
<b>When extending inputsplit (non-FileSplit), all exceptions are ignored</b><br>
<blockquote>if you&apos;re using a custom RecordReader/InputFormat setup and using an<br>InputSplit that does NOT extend FileSplit, then any exceptions you throw in your RecordReader.nextKeyValue() function<br>are silently ignored.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2356">MAPREDUCE-2356</a>.
Major bug reported by owen.omalley and fixed by vicaya <br>
<b>A task succeeded even though there were errors on all attempts.</b><br>
<blockquote>From Luke Lu:<br><br>Here is a summary of why the failed map task was considered &quot;successful&quot; (thanks to Mahadev, Arun and Devaraj<br>for insightful discussions).<br><br>1. The map task was hanging BEFORE being initialized (probably in localization, but it doesn&apos;t matter in this case).<br>Its state is UNASSIGNED.<br><br>2. The jt decided to kill it due to timeout and scheduled a cleanup task on the same node.<br><br>3. The cleanup task has the same attempt id (by design) but runs in a different JVM. Its initial state is<br>FAILED_UNCLEAN.<br><br>4. The JVM of the original attempt was getting killed while proceeding to setupWorkDir, and threw an<br>IllegalStateException from FileSystem.getLocal, which caused taskFinal.taskCleanup to be called in Child and<br>triggered the NPE, because the task was not yet initialized (committer is null). Before the NPE, however, it sent a<br>statusUpdate to the TT, and in tip.reportProgress changed the task state (at that point FAILED_UNCLEAN) to UNASSIGNED.<br><br>5. The cleanup attempt succeeded and reported done to the TT. In tip.reportDone, the isCleanup() check returned false due to<br>the UNASSIGNED state and set the task state as SUCCEEDED.<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-517">MAPREDUCE-517</a>.
Critical bug reported by acmurthy and fixed by acmurthy <br>
<b>The capacity-scheduler should assign multiple tasks per heartbeat</b><br>
<blockquote>HADOOP-3136 changed the default o.a.h.mapred.JobQueueTaskScheduler to assign multiple tasks per TaskTracker heartbeat, the capacity-scheduler should do the same.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-118">MAPREDUCE-118</a>.
Blocker bug reported by amar_kamat and fixed by amareshwari (client)<br>
<b>Job.getJobID() will always return null</b><br>
<blockquote>JobContext is used for a read-only view of a job&apos;s info, hence all the read-only fields in JobContext are set in the constructor. Job extends JobContext. When a Job is created, the job id is not yet known, and there is no way to set the JobID once the Job has been created. The JobID is obtained only when the JobClient queries the JobTracker for a job id, which happens later, i.e. upon job submission.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2218">HDFS-2218</a>.
Blocker test reported by mattf and fixed by mattf (contrib/hdfsproxy, test)<br>
<b>Disable TestHdfsProxy.testHdfsProxyInterface in automated test suite for 0.20-security-204 release</b><br>
<blockquote>Test case TestHdfsProxy.testHdfsProxyInterface has been temporarily disabled for this release, due to failure in the Hudson automated test environment.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2057">HDFS-2057</a>.
Major bug reported by bharathm and fixed by bharathm (data-node)<br>
<b>Wait time to terminate the threads causing unit tests to take longer time</b><br>
<blockquote>As part of the fix for a datanode process hang, this code was introduced in 0.20.204 to clean up all the waiting threads:<br><br>- try {<br>- readPool.awaitTermination(10, TimeUnit.SECONDS);<br>- } catch (InterruptedException e) {<br>- LOG.info(&quot;Exception occured in doStop:&quot; + e.getMessage());<br>- }<br>- readPool.shutdownNow();<br><br>This was clearly meant for production, but all the unit tests use minidfscluster and minimrcluster, whose shutdown waits on this part of the code. Due to this, we saw an increase in unit test run times, so this code is being removed. <br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2044">HDFS-2044</a>.
Major test reported by mattf and fixed by mattf (test)<br>
<b>TestQueueProcessingStatistics failing automatic test due to timing issues</b><br>
<blockquote>The test makes assumptions about timing issues that hold true in workstation environments but not in Hudson auto-test.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-2023">HDFS-2023</a>.
Major bug reported by bharathm and fixed by bharathm (data-node)<br>
<b>Backport of NPE for File.list and File.listFiles</b><br>
<blockquote>Since we have multiple JIRAs in trunk for common and hdfs, I am creating another JIRA for this issue. <br><br>This patch addresses the following:<br><br>1. Provides a FileUtil API for list and listFiles which throws an IOException for null cases. <br>2. Replaces most of the places where the JDK file API was used with the FileUtil API. </blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1878">HDFS-1878</a>.
Minor bug reported by mattf and fixed by mattf (name-node)<br>
<b>TestHDFSServerPorts unit test failure - race condition in FSNamesystem.close() causes NullPointerException without serious consequence</b><br>
<blockquote>In 20.204, TestHDFSServerPorts was observed to intermittently throw a NullPointerException. This only happens when FSNamesystem.close() is called, which means system termination for the Namenode, so this is not a serious bug for .204. TestHDFSServerPorts is more likely than normal execution to stimulate the race, because it runs two Namenodes in the same JVM, causing more interleaving and more potential to see a race condition.<br><br>The race is in FSNamesystem.close(); at line 566 we have:<br> if (replthread != null) replthread.interrupt();<br> if (replmon != null) replmon = null;<br><br>Since the interrupted replthread is not waited on, there is a potential race condition with replmon being nulled before replthread is dead, but replthread references replmon in computeDatanodeWork() where the NullPointerException occurs.<br><br>The solution is either to wait on replthread or just don&apos;t null replmon. The latter is preferred, since none of the sibling Namenode processing threads are waited on in close().<br><br>I&apos;ll attach a patch for .205.<br></blockquote>
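A sketch of the two candidate fixes described above (the field names follow the report; this is illustrative, not the actual FSNamesystem code).
<pre>
public class CloseSketch {
  private Thread replthread;
  private Object replmon;

  // Fix 1: wait for the thread, so it can no longer touch replmon.
  void closeWithJoin() throws InterruptedException {
    if (replthread != null) {
      replthread.interrupt();
      replthread.join();
    }
    replmon = null;
  }

  // Fix 2 (preferred by the report): interrupt but leave replmon alone,
  // since the dying thread may still read it.
  void closePreferred() {
    if (replthread != null) replthread.interrupt();
  }
}
</pre>
</li>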
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1822">HDFS-1822</a>.
Blocker bug reported by sureshms and fixed by sureshms (name-node)<br>
<b>Editlog opcodes overlap between 20 security and later releases</b><br>
<blockquote>The same opcodes are used for different operations between 0.20.security, 0.22 and 0.23. This results in failures to load edit logs on later releases, especially during upgrades.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1773">HDFS-1773</a>.
Minor improvement reported by tanping and fixed by tanping (name-node)<br>
<b>Remove a datanode from cluster if include list is not empty and this datanode is removed from both include and exclude lists</b><br>
<blockquote>Our service engineering team, who operate the clusters on a daily basis, find it confusing that after a data node is decommissioned there is no way to make the cluster forget about it, and it always remains in the dead node list.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1767">HDFS-1767</a>.
Major sub-task reported by mattf and fixed by mattf (data-node)<br>
<b>Namenode should ignore non-initial block reports from datanodes when in safemode during startup</b><br>
<blockquote>Consider a large cluster that takes 40 minutes to start up. The datanodes compete to register and send their Initial Block Reports (IBRs) as fast as they can after startup (subject to a small sub-two-minute random delay, which isn&apos;t relevant to this discussion). <br><br>As each datanode succeeds in sending its IBR, it schedules the starting time for its regular cycle of reports, every hour (or other configured value of dfs.blockreport.intervalMsec). In order to spread the reports evenly across the block report interval, each datanode picks a random fraction of that interval, for the starting point of its regular report cycle. For example, if a particular datanode ends up randomly selecting 18 minutes after the hour, then that datanode will send a Block Report at 18 minutes after the hour every hour as long as it remains up. Other datanodes will start their cycles at other randomly selected times. This code is in DataNode.blockReport() and DataNode.scheduleBlockReport().<br><br>The &quot;second Block Report&quot; (2BR), is the start of these hourly reports. The problem is that some of these 2BRs get scheduled sooner rather than later, and actually occur within the startup period. For example, if the cluster takes 40 minutes (2/3 of an hour) to start up, then out of the datanodes that succeed in sending their IBRs during the first 10 minutes, between 1/2 and 2/3 of them will send their 2BR before the 40-minute startup time has completed!<br><br>2BRs sent within the startup time actually compete with the remaining IBRs, and thereby slow down the overall startup process. This can be seen in the following data, which shows the startup process for a 3700-node cluster that took about 17 minutes to finish startup:<br><br>{noformat}<br> time starts sum regs sum IBR sum 2nd_BR sum total_BRs/min<br>0 1299799498 3042 3042 1969 1969 151 151 0 151<br>1 1299799558 665 3707 1470 3439 248 399 0 248<br>2 1299799618 3707 224 3663 270 669 0 270<br>3 1299799678 3707 14 3677 261 930 3 3 264<br>4 1299799738 3707 23 3700 288 1218 1 4 289<br>5 1299799798 3707 7 3707 258 1476 3 7 261<br>6 1299799858 3707 3707 317 1793 4 11 321<br>7 1299799918 3707 3707 292 2085 6 17 298<br>8 1299799978 3707 3707 292 2377 8 25 300<br>9 1299800038 3707 3707 272 2649 25 272<br>10 1299800098 3707 3707 280 2929 15 40 295<br>11 1299800158 3707 3707 223 3152 14 54 237<br>12 1299800218 3707 3707 143 3295 54 143<br>13 1299800278 3707 3707 141 3436 20 74 161<br>14 1299800338 3707 3707 195 3631 78 152 273<br>15 1299800398 3707 3707 51 3682 209 361 260<br>16 1299800458 3707 3707 25 3707 369 730 394<br>17 1299800518 3707 3707 3707 166 896 166<br>18 1299800578 3707 3707 3707 72 968 72<br>19 1299800638 3707 3707 3707 67 1035 67<br>20 1299800698 3707 3707 3707 75 1110 75<br>21 1299800758 3707 3707 3707 71 1181 71<br>22 1299800818 3707 3707 3707 67 1248 67<br>23 1299800878 3707 3707 3707 62 1310 62<br>24 1299800938 3707 3707 3707 56 1366 56<br>25 1299800998 3707 3707 3707 60 1426 60<br>{noformat}<br><br>This data was harvested from the startup logs of all the datanodes, and correlated into one-minute buckets. Each row of the table represents the progress during one elapsed minute of clock time. It seems that every cluster startup is different, but this one showed the effect fairly well.<br><br>The &quot;starts&quot; column shows that all the nodes started up within the first 2 minutes, and the &quot;regs&quot; column shows that all succeeded in registering by minute 6. 
The IBR column shows a sustained rate of Initial Block Report processing of 250-300/minute for the first 10 minutes.<br><br>The question is why, during minutes 11 through 16, the rate of IBR processing slowed down. Why didn&apos;t the startup just finish? In the &quot;2nd_BR&quot; column, we see the rate of 2BRs ramping up as more datanodes complete their IBRs. As the rate increases, they become more effective at competing with the IBRs, and slow down the IBR processing even more. After the IBRs finally finish in minute 16, the rate of 2BRs settles down to a steady ~60-70/minute.<br><br>In order to decrease competition for locks and other resources, to speed up IBR processing during startup, we propose to delay 2BRs until later into the cycle.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1758">HDFS-1758</a>.
Minor bug reported by tanping and fixed by tanping (tools)<br>
<b>Web UI JSP pages thread safety issue</b><br>
<blockquote>The set of JSP pages that the web UI uses is not thread safe. We have observed problems when requesting the Live/Dead/Decommissioning pages from the web UI: the incorrect page is displayed. To be more specific, requesting the dead node list page sometimes returns the live node page, and requesting the decommissioning page sometimes returns the dead page.<br><br>The root cause of this problem is that a JSP page is not thread safe by default. When multiple requests come in, each request is assigned to a different thread, and multiple threads access the same instance of the servlet class generated from a JSP page, so a class variable is shared by multiple threads. The JSP code in the 20 branch, for example dfsnodelist.jsp, has<br>{code}<br>&lt;!%<br> int rowNum = 0;<br> int colNum = 0;<br> String sorterField = null;<br> String sorterOrder = null;<br> String whatNodes = &quot;LIVE&quot;;<br> ...<br>%&gt;<br>{code}<br><br>declared as class variables. (This set of variables is declared within &lt;%! code %&gt; directives, which makes them class members.) Multiple threads share the same set of class member variables, and one request steps on another&apos;s toes. <br><br>However, in the JSP code refactor HADOOP-5857, all of these class member variables were moved to become function-local variables, so this bug does not appear in Apache trunk. Hence, we have proposed a simple fix for this bug on the 20 branch alone, more specifically branch-0.20-security.<br><br>The simple fix is to add the JSP ThreadSafe=&quot;false&quot; directive to the related JSP pages, dfshealth.jsp and dfsnodelist.jsp, to make them thread safe, i.e. only one request is processed at a time. <br><br>We did evaluate the thread safety issue for the other JSP pages on trunk, and noticed a potential problem when retrieving some statistics from the namenode. For example, we make the call to <br>{code}<br>NamenodeJspHelper.getInodeLimitText(fsn);<br>{code}<br>in dfshealth.jsp, which eventually is <br><br>{code}<br> static String getInodeLimitText(FSNamesystem fsn) {<br> long inodes = fsn.dir.totalInodes();<br> long blocks = fsn.getBlocksTotal();<br> long maxobjects = fsn.getMaxObjects();<br> ....<br>{code}<br><br>Some of these function calls are already guarded by the read/write lock, e.g. dir.totalInodes, but others are not. As a result, the web UI results are not 100% thread safe. But after evaluating the pros and cons of adding a giant lock into the JSP pages, we decided not to introduce FSNamesystem read/write locks into the JSPs.<br><br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1750">HDFS-1750</a>.
Major bug reported by szetszwo and fixed by szetszwo <br>
<b>fs -ls hftp://file not working</b><br>
<blockquote>{noformat}<br>hadoop dfs -touchz /tmp/file1 # create file. OK<br>hadoop dfs -ls /tmp/file1 # OK<br>hadoop dfs -ls hftp://namenode:50070/tmp/file1 # FAILED: not seeing the file<br>{noformat}</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1692">HDFS-1692</a>.
Major bug reported by bharathm and fixed by bharathm (data-node)<br>
<b>In secure mode, Datanode process doesn&apos;t exit when disks fail.</b><br>
<blockquote>In secure mode, when more disks fail than the number of volumes tolerated, the datanode process doesn&apos;t exit properly; it just hangs even though the shutdown method is called.<br></blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1592">HDFS-1592</a>.
Major bug reported by bharathm and fixed by bharathm <br>
<b>Datanode startup doesn&apos;t honor volumes.tolerated </b><br>
<blockquote>Datanode startup doesn&apos;t honor volumes.tolerated in the Hadoop 0.20 version.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1541">HDFS-1541</a>.
Major sub-task reported by hairong and fixed by hairong (name-node)<br>
<b>Not marking datanodes dead when namenode in safemode</b><br>
<blockquote>In a big cluster, when the namenode starts up, it takes a long time to process block reports from all datanodes. Because heartbeat processing gets delayed, some datanodes are erroneously marked as dead; they then have to register again, wasting time.<br><br>Startup would be faster if the dead-node check were disabled while the namenode is in safemode.</blockquote></li>
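<blockquote>A sketch of the guard this change describes (names are illustrative, not the actual namenode code):
<pre>
// Illustrative only: skip dead-node marking while still in safemode,
// since delayed heartbeat processing would mark live nodes dead.
class HeartbeatMonitorSketch {
  interface Namesystem {
    boolean isInSafeMode();
    void markDeadDatanodes();
  }

  void heartbeatCheck(Namesystem fsn) {
    if (fsn.isInSafeMode()) {
      return; // block reports are still being processed; heartbeats lag
    }
    fsn.markDeadDatanodes();
  }
}
</pre></blockquote>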
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1445">HDFS-1445</a>.
Major sub-task reported by mattf and fixed by mattf (data-node)<br>
<b>Batch the calls in DataStorage to FileUtil.createHardLink(), so we call it once per directory instead of once per file</b><br>
<blockquote>Batch hardlinking during &quot;upgrade&quot; snapshots, cutting time from approximately 8 minutes per volume to approximately 8 seconds. Validated on both Linux and Windows. Depends on prior integration with the patch for HADOOP-7133.</blockquote></li>
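<blockquote>A generic sketch of the batching idea (not the HDFS patch itself): instead of forking one link command per file, build a single command per directory.
<pre>
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Generic illustration of per-directory batching; GNU "ln SRC... DIR"
// hard-links every listed source into DIR with one process.
public class BatchHardLinkSketch {
  static void linkAll(File srcDir, File dstDir)
      throws IOException, InterruptedException {
    File[] files = srcDir.listFiles();
    if (files == null) throw new IOException(srcDir + " is not a directory");
    List&lt;String&gt; cmd = new ArrayList&lt;String&gt;();
    cmd.add("ln");                            // one process per directory
    for (File f : files) {
      if (f.isFile()) cmd.add(f.getAbsolutePath());
    }
    cmd.add(dstDir.getAbsolutePath());
    int rc = new ProcessBuilder(cmd).inheritIO().start().waitFor();
    if (rc != 0) throw new IOException("ln failed with exit code " + rc);
  }
}
</pre></blockquote>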
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1377">HDFS-1377</a>.
Blocker bug reported by eli and fixed by eli (name-node)<br>
<b>Quota bug for partial blocks allows quotas to be violated </b><br>
<blockquote>There&apos;s a bug in the quota code that causes quotas not to be respected when a file is not an exact multiple of the block size. Here&apos;s an example:<br><br>{code}<br>$ hadoop fs -mkdir /test<br>$ hadoop dfsadmin -setSpaceQuota 384M /test<br>$ ls dir/ | wc -l # dir contains 101 files<br>101<br>$ du -ms dir # each is 3mb<br>304 dir<br>$ hadoop fs -put dir /test<br>$ hadoop fs -count -q /test<br> none inf 402653184 -550502400 2 101 317718528 hdfs://haus01.sf.cloudera.com:10020/test<br>$ hadoop fs -stat &quot;%o %r&quot; /test/dir/f30<br>134217728 3 # three 128mb blocks<br>{code}<br><br>INodeDirectoryWithQuota caches the number of bytes consumed by its children in {{diskspace}}. The quota adjustment code has a bug that causes {{diskspace}} to be updated incorrectly when a file is not an exact multiple of the block size (the value ends up being negative). <br><br>This causes the quota checking code to think that the files in the directory consume less space than they actually do, so verifyQuota does not throw a QuotaExceededException even when the directory is over quota. However, the bug isn&apos;t visible to users because {{fs count -q}} reports the numbers generated by INode#getContentSummary, which adds up the sizes of the blocks rather than using the cached INodeDirectoryWithQuota#diskspace value.<br><br>In FSDirectory#addBlock the disk space consumed is set conservatively to the full block size * the number of replicas:<br><br>{code}<br>updateCount(inodes, inodes.length-1, 0,<br> fileNode.getPreferredBlockSize()*fileNode.getReplication(), true);<br>{code}<br><br>In FSNameSystem#addStoredBlock we adjust for this conservative estimate by subtracting out the difference between the conservative estimate and the number of bytes actually stored:<br><br>{code}<br>//Updated space consumed if required.<br>INodeFile file = (storedBlock != null) ? storedBlock.getINode() : null;<br>long diff = (file == null) ? 0 :<br> (file.getPreferredBlockSize() - storedBlock.getNumBytes());<br><br>if (diff &gt; 0 &amp;&amp; file.isUnderConstruction() &amp;&amp;<br> cursize &lt; storedBlock.getNumBytes()) {<br>...<br> dir.updateSpaceConsumed(path, 0, -diff*file.getReplication());<br>{code}<br><br>We do the same in FSDirectory#replaceNode when completing the file, but at a file granularity (I believe the intent here is to correct for cases where there&apos;s a failure replicating blocks and recovery). Since oldnode is under construction, INodeFile#diskspaceConsumed will use the preferred block size (vs. Block#getNumBytes used by newnode), so we will again subtract out the difference between the full block size and the number of bytes actually stored:<br><br>{code}<br>long dsOld = oldnode.diskspaceConsumed();<br>...<br>//check if disk space needs to be updated.<br>long dsNew = 0;<br>if (updateDiskspace &amp;&amp; (dsNew = newnode.diskspaceConsumed()) != dsOld) {<br> try {<br> updateSpaceConsumed(path, 0, dsNew-dsOld);<br>...<br>{code}<br><br>So in the above example we started with diskspace at 384mb (3 * 128mb) and then subtracted 375mb (to reflect that only 9mb raw was actually used) twice, so for each file the diskspace for the directory is -366mb (384mb minus 2 * 375mb),
which is why the quota goes negative and yet we can still write more files.<br><br>So a directory with lots of single-block files (the final partial block, when a file has multiple blocks, ends up subtracting from the diskspace used) ends up having a quota that&apos;s way off.<br><br>I think the fix is, in FSDirectory#replaceNode, to not have the diskspaceConsumed calculations differ when the old and new INode have the same blocks. I&apos;ll work on a patch that also adds a quota test for blocks that are not multiples of the block size and warns in INodeDirectory#computeContentSummary if the computed size does not reflect the cached value.</blockquote></li>
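<blockquote>The double subtraction can be checked with the numbers from the example above (128mb blocks, 3mb files, replication 3); a minimal arithmetic sketch:
<pre>
// Arithmetic from the example above; values in MB for readability.
public class QuotaBugArithmetic {
  public static void main(String[] args) {
    long blockSize = 128, fileSize = 3, replication = 3;

    // FSDirectory#addBlock conservatively charges the full block:
    long charged = blockSize * replication;            // 384 MB

    // Correction applied once in FSNameSystem#addStoredBlock ...
    long diff = (blockSize - fileSize) * replication;  // 375 MB
    long afterFirst = charged - diff;                  //   9 MB (correct)

    // ... and again in FSDirectory#replaceNode on file completion:
    long afterSecond = afterFirst - diff;              // -366 MB (the bug)

    System.out.println(afterFirst + " MB vs " + afterSecond + " MB");
  }
}
</pre></blockquote>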
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1258">HDFS-1258</a>.
Blocker bug reported by atm and fixed by atm (name-node)<br>
<b>Clearing namespace quota on &quot;/&quot; corrupts FS image</b><br>
<blockquote>The HDFS root directory starts out with a default namespace quota of Integer.MAX_VALUE. If you clear this quota (using &quot;hadoop dfsadmin -clrQuota /&quot;), the fsimage gets corrupted immediately. Subsequent 2NN rolls will fail, and the NN will not come back up from a restart.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1189">HDFS-1189</a>.
Major bug reported by xiaokang and fixed by johnvijoe (name-node)<br>
<b>Quota counts missed between clear quota and set quota</b><br>
<blockquote>HDFS quota counts will be missed between a clear quota operation and a set quota operation.<br><br>When setting a quota for a dir, the INodeDirectory is replaced by an INodeDirectoryWithQuota and dir.isQuotaSet() becomes true. When an INodeDirectoryWithQuota is newly created, quota counting is performed. However, when clearing the quota, the quota conf is set to -1 and dir.isQuotaSet() becomes false, while the INodeDirectoryWithQuota is NOT replaced back with an INodeDirectory.<br><br>FSDirectory.updateCount only updates the quota count for inodes whose isQuotaSet() is true. So after clearing the quota for a dir, its quota counts are no longer updated, which is reasonable. But when re-setting a quota on this dir, quota counting is not performed again, so some counts will be missed.</blockquote></li>
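<blockquote>A minimal sketch of the lifecycle described above (illustrative classes, not the HDFS sources):
<pre>
// Illustrative only: clearing the quota leaves the quota-carrying node in
// place with isQuotaSet() false, so updates are skipped until the next
// setQuota, which does not recount.
class DirWithQuotaSketch {
  long nsQuota = -1;  // -1 means "no quota set"
  long nsCount;       // cached namespace count

  boolean isQuotaSet() { return nsQuota &gt;= 0; }

  void setQuota(long q) {
    nsQuota = q;      // bug: stale nsCount is reused, not recomputed
  }

  void clearQuota() {
    nsQuota = -1;     // node is NOT replaced back by a plain directory
  }

  void updateCount(long delta) {
    if (isQuotaSet()) {
      nsCount += delta;  // skipped between clearQuota and the next setQuota
    }
  }
}
</pre></blockquote>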
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7475">HADOOP-7475</a>.
Blocker bug reported by eyang and fixed by eyang <br>
<b>hadoop-setup-single-node.sh is broken</b><br>
<blockquote>When running hadoop-setup-single-node.sh, the system can not find the templates configuration directory:<br><br>{noformat}<br>cat: /usr/libexec/../templates/conf/core-site.xml: No such file or directory<br>cat: /usr/libexec/../templates/conf/hdfs-site.xml: No such file or directory<br>cat: /usr/libexec/../templates/conf/mapred-site.xml: No such file or directory<br>cat: /usr/libexec/../templates/conf/hadoop-env.sh: No such file or directory<br>chown: cannot access `hadoop-env.sh&apos;: No such file or directory<br>chmod: cannot access `hadoop-env.sh&apos;: No such file or directory<br>cp: cannot stat `*.xml&apos;: No such file or directory<br>cp: cannot stat `hadoop-env.sh&apos;: No such file or directory<br>{noformat}</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7398">HADOOP-7398</a>.
Major new feature reported by owen.omalley and fixed by owen.omalley <br>
<b>create a mechanism to suppress the HADOOP_HOME deprecated warning</b><br>
<blockquote>Create a new mechanism to suppress the warning about HADOOP_HOME deprecation.<br><br>I&apos;ll create a HADOOP_HOME_WARN_SUPPRESS environment variable that suppresses the warning.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7373">HADOOP-7373</a>.
Major bug reported by owen.omalley and fixed by owen.omalley <br>
<b>Tarball deployment doesn&apos;t work with {start,stop}-{dfs,mapred}</b><br>
<blockquote>The hadoop-config.sh overrides the variable &quot;bin&quot;, which makes the scripts use libexec for hadoop-daemon(s).</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7364">HADOOP-7364</a>.
Major bug reported by tgraves and fixed by tgraves (test)<br>
<b>TestMiniMRDFSCaching fails if test.build.dir is set to something other than build/test</b><br>
<blockquote>TestMiniMRDFSCaching fails if test.build.dir is set to something other than build/test. </blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7356">HADOOP-7356</a>.
Blocker bug reported by eyang and fixed by eyang <br>
<b>RPM packages broke bin/hadoop script for hadoop 0.20.205</b><br>
<blockquote>hadoop-config.sh has been moved to libexec for the binary package, but developers prefer to have hadoop-config.sh in bin. The Hadoop shell scripts should be modified to support both scenarios.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7330">HADOOP-7330</a>.
Major bug reported by vicaya and fixed by vicaya (metrics)<br>
<b>The metrics source mbean implementation should return the attribute value instead of the object</b><br>
<blockquote>The MetricsSourceAdapter#getAttribute in 0.20.203 is returning the attribute object instead of the value.</blockquote></li>
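<blockquote>The shape of the fix can be sketched as follows (illustrative code, not the 0.20.203 source): an MBean&apos;s getAttribute must return the attribute&apos;s value, not the javax.management.Attribute wrapper itself.
<pre>
import java.util.Map;
import javax.management.Attribute;

// Illustrative only: unwrap the cached Attribute before returning it.
class MetricsSourceAdapterSketch {
  private final Map&lt;String, Attribute&gt; attrCache;

  MetricsSourceAdapterSketch(Map&lt;String, Attribute&gt; attrCache) {
    this.attrCache = attrCache;
  }

  public Object getAttribute(String name) {
    Attribute attr = attrCache.get(name);
    // buggy version: return attr;
    return attr == null ? null : attr.getValue();
  }
}
</pre></blockquote>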
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7324">HADOOP-7324</a>.
Blocker bug reported by vicaya and fixed by priyomustafi (metrics)<br>
<b>Ganglia plugins for metrics v2</b><br>
<blockquote>Although all metrics in metrics v2 are exposed via the standard JMX mechanisms, most users use Ganglia to collect metrics.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7277">HADOOP-7277</a>.
Minor improvement reported by naisbitt and fixed by naisbitt (build)<br>
<b>Add Eclipse launch tasks for the 0.20-security branch</b><br>
<blockquote>This is to add the eclipse launchers from HADOOP-5911 to the 0.20 security branch.<br><br>Eclipse has a notion of &quot;run configuration&quot;, which encapsulates what&apos;s needed to run or debug an application. I use this quite a bit to start various Hadoop daemons in debug mode, with breakpoints set, to inspect state and what not.<br><br>This is simply configuration, so no tests are provided. After running &quot;ant eclipse&quot; and refreshing your project, you should see entries in the Run Configurations and Debug Configurations for launching the various hadoop daemons from within eclipse. There&apos;s a template for testing a specific test, and also templates to run all the tests, the job tracker, and a task tracker. It&apos;s likely that some parameters need to be further tweaked to have the same behavior as &quot;ant test&quot;, but for most tests, this works.<br><br>This also requires a small change to build.xml for the eclipse classpath.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7274">HADOOP-7274</a>.
Minor bug reported by jeagles and fixed by jeagles (util)<br>
<b>CLONE - IOUtils.readFully and IOUtils.skipFully have typo in exception creation&apos;s message</b><br>
<blockquote>Same fix as for HADOOP-7057 for the Hadoop security branch<br><br>{noformat}<br> throw new IOException( &quot;Premeture EOF from inputStream&quot;);<br>{noformat}</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7248">HADOOP-7248</a>.
Minor improvement reported by cos and fixed by tgraves (build)<br>
<b>Have a way to automatically update Eclipse .classpath file when new libs are added to the classpath through Ivy for 0.20-* based sources</b><br>
<blockquote>Backport HADOOP-6407 into 0.20 based source trees</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7232">HADOOP-7232</a>.
Blocker bug reported by owen.omalley and fixed by owen.omalley (documentation)<br>
<b>Fix javadoc warnings</b><br>
<blockquote>The javadoc is currently generating 31 warnings.</blockquote></li>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7144">HADOOP-7144</a>.
Major new feature reported by vicaya and fixed by revans2 <br>
<b>Expose JMX with something like JMXProxyServlet </b><br>
<blockquote>Much of the Hadoop metrics and status info is available via JMX, especially since 0.20.100 and 0.22+ (HDFS-1318, HADOOP-6728, etc.). For operations staff not familiar with JMX setup, especially JMX with SSL and firewall tunnelling, the usage can be daunting. Using a JMXProxyServlet (a la Tomcat) to translate JMX attributes into JSON output would make a lot of non-Java admins happy.<br><br>We could probably use Tomcat&apos;s JMXProxyServlet code directly if it already output some standard format (JSON or XML, etc.). The code is simple enough to port over and can probably be integrated with the common HttpServer as one of the default servlets (maybe /jmx) for the pluggable security.</blockquote></li>
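<blockquote>A hedged sketch of the servlet idea described above (error handling and JSON escaping elided; this is not the committed code):
<pre>
import java.io.IOException;
import java.io.PrintWriter;
import java.lang.management.ManagementFactory;
import javax.management.MBeanAttributeInfo;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustrative only: walk the platform MBeanServer and print each bean's
// readable attributes as one JSON document.
public class JmxJsonServletSketch extends HttpServlet {
  @Override
  protected void doGet(HttpServletRequest req, HttpServletResponse resp)
      throws IOException {
    MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
    resp.setContentType("application/json");
    PrintWriter out = resp.getWriter();
    out.print("{\"beans\":[");
    boolean first = true;
    try {
      for (ObjectName name : mbs.queryNames(null, null)) {
        out.print(first ? "{" : ",{");
        first = false;
        out.print("\"name\":\"" + name + "\"");
        for (MBeanAttributeInfo ai : mbs.getMBeanInfo(name).getAttributes()) {
          try {
            Object v = mbs.getAttribute(name, ai.getName());
            out.print(",\"" + ai.getName() + "\":\"" + v + "\"");
          } catch (Exception e) {
            // some attributes are unreadable; skip them
          }
        }
        out.print("}");
      }
    } catch (Exception e) {
      throw new IOException(e);
    }
    out.print("]}");
  }
}
</pre></blockquote>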
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6255">HADOOP-6255</a>.
Major new feature reported by owen.omalley and fixed by eyang <br>
<b>Create an rpm integration project</b><br>
<blockquote>Added RPM/DEB packages to build system.</blockquote></li>
</ul>
<h2>Changes Since Hadoop 0.20.2</h2>
<ul>
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7190">HADOOP-7190</a>. Add metrics v1 back for backwards compatibility. (omalley)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2360">MAPREDUCE-2360</a>. Remove stripping of scheme, authority from submit dir in
support of viewfs. (cdouglas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2359">MAPREDUCE-2359</a> Use correct file system to access distributed cache objects.
(Krishna Ramachandran)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2361">MAPREDUCE-2361</a>. "Fix Distributed Cache is not adding files to class paths
correctly" - Drop the host/scheme/fragment from URI (cdouglas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2362">MAPREDUCE-2362</a>. Fix unit-test failures: TestBadRecords (NPE due to
rearranged MapTask code) and TestTaskTrackerMemoryManager
(need hostname in output-string pattern). (Greg Roelofs, Krishna
Ramachandran)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1729">HDFS-1729</a>. Add statistics logging for better visibility into
startup time costs. (Matt Foley)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2363">MAPREDUCE-2363</a>. When a queue is built without any access rights we
explain the problem. (Richard King)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1563">MAPREDUCE-1563</a>. TaskDiagnosticInfo may be missed sometime. (Krishna
Ramachandran)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2364">MAPREDUCE-2364</a>. Don't hold the rjob lock while localizing resources. (ddas
via omalley)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1598">HDFS-1598</a>. Directory listing on hftp:// does not show
.*.crc files. (szetszwo)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2365">MAPREDUCE-2365</a>. New counters for FileInputFormat (BYTES_READ) and
FileOutputFormat (BYTES_WRITTEN).
New counter MAP_OUTPUT_MATERIALIZED_BYTES for compressed MapOutputSize.
(Siddharth Seth)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7040">HADOOP-7040</a>. Change DiskErrorException to IOException (boryas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7104">HADOOP-7104</a>. Remove unnecessary DNS reverse lookups from RPC layer
(kzhang)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2366">MAPREDUCE-2366</a>. Fix a problem where the task browser UI can't retrieve the
stdxxx printouts of streaming jobs that abend in the unix code, in
the common case where the containing job doesn't reuse JVMs.
(Richard King)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6977">HADOOP-6977</a>. Herriot daemon clients should vend statistics (cos)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6971">HADOOP-6971</a>. Clover build doesn't generate per-test coverage (cos)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6879">HADOOP-6879</a>. Provide SSH based (Jsch) remote execution API for system
tests. (cos)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2355">MAPREDUCE-2355</a>. Add a configuration knob
mapreduce.tasktracker.outofband.heartbeat.damper that limits out of band
heartbeats (acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2356">MAPREDUCE-2356</a>. Fix a race-condition that corrupted a task's state on the
JobTracker. (Luke Lu)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2357">MAPREDUCE-2357</a>. Always propagate IOExceptions that are thrown by
non-FileInputFormat. (Luke Lu)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7163">HADOOP-7163</a>. RPC handles SocketTimeOutException during SASL negotiation.
(ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2358">MAPREDUCE-2358</a>. MapReduce assumes the default FileSystem is HDFS.
(Krishna Ramachandran)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1904">MAPREDUCE-1904</a>. Reducing locking contention in TaskTracker's
MapOutputServlet LocalDirAllocator. (Rajesh Balamohan via acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1626">HDFS-1626</a>. Make BLOCK_INVALIDATE_LIMIT configurable. (szetszwo)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1584">HDFS-1584</a>. Adds a check for whether relogin is needed to
getDelegationToken in HftpFileSystem. (Kan Zhang via ddas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7115">HADOOP-7115</a>. Reduces the number of calls to getpwuid_r and
getpwgid_r, by implementing a cache in NativeIO. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6882">HADOOP-6882</a>. An XSS security exploit in jetty-6.1.14. jetty upgraded to
6.1.26. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2278">MAPREDUCE-2278</a>. Fixes a memory leak in the TaskTracker. (cdouglas)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1353 redux">HDFS-1353 redux</a>. Modulate original 1353 to not bump RPC version.
(jhoman)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2082">MAPREDUCE-2082</a> Race condition in writing the jobtoken password file when
launching pipes jobs (jitendra and ddas)
<a href="https://issues.apache.org/jira/browse/HADOOP-6978">HADOOP-6978</a>. Fixes task log servlet vulnerabilities via symlinks.
(Todd Lipcon and Devaraj Das)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2178">MAPREDUCE-2178</a>. Write task initialization to avoid race
conditions leading to privilege escalation and resource leakage by
performing more actions as the user. (Owen O'Malley, Devaraj Das,
Chris Douglas via cdouglas)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1364">HDFS-1364</a>. HFTP client should support relogin from keytab
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6907">HADOOP-6907</a>. Make RPC client to use per-proxy configuration.
(Kan Zhang via ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2055">MAPREDUCE-2055</a>. Fix JobTracker to decouple job retirement from copy of
job-history file to HDFS and enhance RetiredJobInfo to carry aggregated
job-counters to prevent a disk roundtrip on job-completion to fetch
counters for the JobClient. (Krishna Ramachandran via acmurthy)
<a href="https://issues.apache.org/jira/browse/HDFS-1353">HDFS-1353</a>. Remove most of getBlockLocation optimization (jghoman)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2023">MAPREDUCE-2023</a>. TestDFSIO read test may not read specified bytes. (htang)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1340">HDFS-1340</a>. A null delegation token is appended to the url if security is
disabled when browsing filesystem.(boryas)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1352">HDFS-1352</a>. Fix jsvc.location. (jghoman)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6860">HADOOP-6860</a>. 'compile-fault-inject' should never be called directly. (cos)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2005">MAPREDUCE-2005</a>. TestDelegationTokenRenewal fails (boryas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2000">MAPREDUCE-2000</a>. Rumen is not able to extract counters for Job history logs
from Hadoop 0.20. (htang)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1961">MAPREDUCE-1961</a>. ConcurrentModificationException when shutting down Gridmix.
(htang)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6899">HADOOP-6899</a>. RawLocalFileSystem set working directory does
not work for relative names. (suresh)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-495">HDFS-495</a>. New clients should be able to take over files lease if the old
client died. (shv)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6728">HADOOP-6728</a>. Re-design and overhaul of the Metrics framework. (Luke Lu via
acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1966">MAPREDUCE-1966</a>. Change blacklisting of tasktrackers on task failures to be
a simple graylist to pinpoint bad tasktrackers. (Greg Roelofs via
acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6864">HADOOP-6864</a>. Add ability to get netgroups (as returned by getent
netgroup command) using native code (JNI) instead of forking. (Erik Steffl)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1318">HDFS-1318</a>. HDFS Namenode and Datanode WebUI information needs to be
accessible programmatically for scripts. (Tanping Wang via suresh)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1315">HDFS-1315</a>. Add fsck event to audit log and remove other audit log events
corresponding to FSCK listStatus and open calls. (suresh)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1941">MAPREDUCE-1941</a>. Provides access to JobHistory file (raw) with job user/acl
permission. (Srikanth Sundarrajan via ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-291.">MAPREDUCE-291.</a> Optionally a separate daemon should serve JobHistory.
(Srikanth Sundarrajan via ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1936">MAPREDUCE-1936</a>. Make Gridmix3 more customizable (sync changes from trunk).
(htang)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5981">HADOOP-5981</a>. Fix variable substitution during parsing of child environment
variables. (Krishna Ramachandran via acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-339.">MAPREDUCE-339.</a> Greedily schedule failed tasks to cause early job failure.
(cdouglas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1872">MAPREDUCE-1872</a>. Hardened CapacityScheduler to have comprehensive, coherent
limits on tasks/jobs for jobs/users/queues. Also, added the ability to
refresh queue definitions without the need to restart the JobTracker.
(acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1161">HDFS-1161</a>. Make DN minimum valid volumes configurable. (shv)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-457">HDFS-457</a>. Reintroduce volume failure tolerance for DataNodes. (shv)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1307 Add start time, end time and total time taken for FSCK
to FSCK report">HDFS-1307 Add start time, end time and total time taken for FSCK
to FSCK report</a>. (suresh)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1207">MAPREDUCE-1207</a>. Sanitize user environment of map/reduce tasks and allow
admins to set environment and java options. (Krishna Ramachandran via
acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1298 - Add support in HDFS for new statistics added in FileSystem
to track the file system operations (suresh)
<li> HDFS-1301">HDFS-1298 - Add support in HDFS for new statistics added in FileSystem
to track the file system operations (suresh)
<li> HDFS-1301</a>. TestHDFSProxy need to use server side conf for ProxyUser
stuff.(boryas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6859">HADOOP-6859</a> - Introduce additional statistics to FileSystem to track
file system operations (suresh)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6818">HADOOP-6818</a>. Provides a JNI implementation of Unix Group resolution. The
config hadoop.security.group.mapping should be set to
org.apache.hadoop.security.JniBasedUnixGroupsMapping to enable this
implementation. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1938">MAPREDUCE-1938</a>. Introduces a configuration for putting user classes before
the system classes during job submission and in task launches. Two things
need to be done in order to use this feature -
(1) mapreduce.user.classpath.first : this should be set to true in the
jobconf, and, (2) HADOOP_USER_CLASSPATH_FIRST : this is relevant for job
submissions done using bin/hadoop shell script. HADOOP_USER_CLASSPATH_FIRST
should be defined in the environment with some non-empty value
(like "true"), and then bin/hadoop should be executed. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6669">HADOOP-6669</a>. Respect compression configuration when creating DefaultCodec
compressors. (Koji Noguchi via cdouglas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6855">HADOOP-6855</a>. Add support for netgroups, as returned by command
getent netgroup. (Erik Steffl)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-599">HDFS-599</a>. Allow NameNode to have a seprate port for service requests from
client requests. (Dmytro Molkov via hairong)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-132">HDFS-132</a>. Fix namenode to not report files deleted metrics for deletions
done while replaying edits during startup. (shv)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1521">MAPREDUCE-1521</a>. Protection against incorrectly configured reduces
(mahadev)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1936">MAPREDUCE-1936</a>. Make Gridmix3 more customizable. (htang)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-517.">MAPREDUCE-517.</a> Enhance the CapacityScheduler to assign multiple tasks
per-heartbeat. (acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-323.">MAPREDUCE-323.</a> Re-factor layout of JobHistory files on HDFS to improve
operability. (Dick King via acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1921">MAPREDUCE-1921</a>. Ensure exceptions during reading of input data in map
tasks are augmented by information about actual input file which caused
the exception. (Krishna Ramachandran via acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1118">MAPREDUCE-1118</a>. Enhance the JobTracker web-ui to ensure tabular columns
are sortable, also added a /scheduler servlet to CapacityScheduler for
enhanced UI for queue information. (Krishna Ramachandran via acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5913">HADOOP-5913</a>. Add support for starting/stopping queues. (cdouglas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6835">HADOOP-6835</a>. Add decode support for concatenated gzip files. (Greg Roelofs)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1158">HDFS-1158</a>. Revert <a href="https://issues.apache.org/jira/browse/HDFS-457">HDFS-457</a>. (shv)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1699">MAPREDUCE-1699</a>. Ensure JobHistory isn't disabled for any reason. (Krishna
Ramachandran via acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1682">MAPREDUCE-1682</a>. Fix speculative execution to ensure tasks are not
scheduled after job failure. (acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1914">MAPREDUCE-1914</a>. Ensure unique sub-directories for artifacts in the
DistributedCache are cleaned up. (Dick King via acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6713">HADOOP-6713</a>. Multiple RPC Reader Threads (Bharathm)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1250">HDFS-1250</a>. Namenode should reject block reports and block received
requests from dead datanodes (suresh)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1863">MAPREDUCE-1863</a>. [Rumen] Null failedMapAttemptCDFs in job traces generated
by Rumen. (htang)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1309">MAPREDUCE-1309</a>. Rumen refactory. (htang)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1114">HDFS-1114</a>. Implement LightWeightGSet for BlocksMap in order to reduce
NameNode memory footprint. (szetszwo)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-572.">MAPREDUCE-572.</a> Fixes DistributedCache.checkURIs to throw error if link is
missing for uri in cache archives. (amareshwari)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-787.">MAPREDUCE-787.</a> Fix JobSubmitter to honor user given symlink in the path.
(amareshwari)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6815">HADOOP-6815</a>. refreshSuperUserGroupsConfiguration should use
server side configuration for the refresh. (boryas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1868">MAPREDUCE-1868</a>. Add a read and connection timeout to JobClient while
pulling tasklogs. (Krishna Ramachandran via acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1119">HDFS-1119</a>. Introduce a GSet interface to BlocksMap. (szetszwo)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1778">MAPREDUCE-1778</a>. Ensure failure to setup CompletedJobStatusStore is not
silently ignored by the JobTracker. (Krishna Ramachandran via acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1538">MAPREDUCE-1538</a>. Add a limit on the number of artifacts in the
DistributedCache to ensure we cleanup aggressively. (Dick King via
acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1850">MAPREDUCE-1850</a>. Add information about the host from which a job is
submitted. (Krishna Ramachandran via acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1110">HDFS-1110</a>. Reuses objects for commonly used file names in namenode to
reduce the heap usage. (suresh)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6810">HADOOP-6810</a>. Extract a subset of tests for smoke (DOA) validation. (cos)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6642">HADOOP-6642</a>. Remove debug stmt left from original patch. (cdouglas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6808">HADOOP-6808</a>. Add comments on how to setup File/Ganglia Context for
kerberos metrics (Erik Steffl)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1061">HDFS-1061</a>. INodeFile memory optimization. (bharathm)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1109">HDFS-1109</a>. HFTP supports filenames that contains the character "+".
(Dmytro Molkov via dhruba, backported by szetszwo)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1085">HDFS-1085</a>. Check file length and bytes read when reading a file through
hftp in order to detect failure. (szetszwo)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1311">HDFS-1311</a>. Running tests with 'testcase' cause triple execution of the
same test case (cos)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1150">HDFS-1150</a>.FIX. Verify datanodes' identities to clients in secure clusters.
Update to patch to improve handling of jsvc source in build.xml (jghoman)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6752">HADOOP-6752</a>. Remote cluster control functionality needs JavaDocs
improvement. (Balaji Rajagopalan via cos)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1288">MAPREDUCE-1288</a>. Fixes TrackerDistributedCacheManager to take into account
the owner of the localized file in the mapping from cache URIs to
CacheStatus objects. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1682">MAPREDUCE-1682</a>. Fix speculative execution to ensure tasks are not
scheduled after job failure. (acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1914">MAPREDUCE-1914</a>. Ensure unique sub-directories for artifacts in the
DistributedCache are cleaned up. (Dick King via acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1538">MAPREDUCE-1538</a>. Add a limit on the number of artifacts in the
DistributedCache to ensure we cleanup aggressively. (Dick King via
acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1900">MAPREDUCE-1900</a>. Fixes a FS leak that i missed in the earlier patch.
(ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1900">MAPREDUCE-1900</a>. Makes JobTracker/TaskTracker close filesystems, created
on behalf of users, when they are no longer needed. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6832">HADOOP-6832</a>. Add a static user plugin for web auth for external users.
(omalley)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1007">HDFS-1007</a>. Fixes a bug in SecurityUtil.buildDTServiceName to do
with handling of null hostname. (omalley)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1007">HDFS-1007</a>. makes long running servers using hftp work. Also has some
refactoring in the MR code to do with handling of delegation tokens.
(omalley & ddas)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1178">HDFS-1178</a>. The NameNode servlets should not use RPC to connect to the
NameNode. (omalley)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1807">MAPREDUCE-1807</a>. Re-factor TestQueueManager. (Richard King via acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1150">HDFS-1150</a>. Fixes the earlier patch to do logging in the right directory
and also adds facility for monitoring processes (via -Dprocname in the
command line). (Jakob Homan via ddas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6781">HADOOP-6781</a>. security audit log shouldn't have exception in it. (boryas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6776">HADOOP-6776</a>. Fixes the javadoc in UGI.createProxyUser. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1150">HDFS-1150</a>. building jsvc from source tar. source tar is also checked in.
(jitendra)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1150">HDFS-1150</a>. Bugfix in the hadoop shell script. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1153">HDFS-1153</a>. The navigation to /dfsnodelist.jsp with invalid input
parameters produces NPE and HTTP 500 error (rphulari)
<a href="https://issues.apache.org/jira/browse/MAPREDUCE-1664">MAPREDUCE-1664</a>. Bugfix to enable queue administrators of a queue to
view job details of jobs submitted to that queue even though they
are not part of acl-view-job.
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1150">HDFS-1150</a>. Bugfix to add more knobs to secure datanode starter.
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1157">HDFS-1157</a>. Modifications introduced by <a href="https://issues.apache.org/jira/browse/HDFS-1150 are breaking aspect's
bindings (cos)
<li> HDFS-1130">HDFS-1150 are breaking aspect's
bindings (cos)
<li> HDFS-1130</a>. Adds a configuration dfs.cluster.administrators for
controlling access to the default servlets in hdfs. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6706">HADOOP-6706</a>.FIX. Relogin behavior for RPC clients could be improved
(boryas)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1150">HDFS-1150</a>. Verify datanodes' identities to clients in secure clusters.
(jghoman)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1442">MAPREDUCE-1442</a>. Fixed regex in job-history related to parsing Counter
values. (Luke Lu via acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6760">HADOOP-6760</a>. WebServer shouldn't increase port number in case of negative
port setting caused by Jetty's race. (cos)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1146">HDFS-1146</a>. Javadoc for getDelegationTokenSecretManager in FSNamesystem.
(jitendra)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6706">HADOOP-6706</a>. Fix on top of the earlier patch. Closes the connection
on a SASL connection failure, and retries again with a new
connection. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1716">MAPREDUCE-1716</a>. Fix on top of earlier patch for logs truncation a.k.a
<a href="https://issues.apache.org/jira/browse/MAPREDUCE-1100">MAPREDUCE-1100</a>. Addresses log truncation issues when binary data is
written to log files and adds a header to a truncated log file to
inform users of the truncation.
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1383">HDFS-1383</a>. Improve the error messages when using hftp://.
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1744">MAPREDUCE-1744</a>. Fixed DistributedCache apis to take a user-supplied
FileSystem to allow for better proxy behaviour for Oozie. (Richard King)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1733">MAPREDUCE-1733</a>. Authentication between pipes processes and java
counterparts. (jitendra)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1664">MAPREDUCE-1664</a>. Bugfix on top of the previous patch. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1136">HDFS-1136</a>. FileChecksumServlets.RedirectServlet doesn't carry forward
the delegation token (boryas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6756">HADOOP-6756</a>. Change value of FS_DEFAULT_NAME_KEY from fs.defaultFS
to fs.default.name which is a correct name for 0.20 (steffl)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6756">HADOOP-6756</a>. Document (javadoc comments) and cleanup configuration
keys in CommonConfigurationKeys.java (steffl)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1759">MAPREDUCE-1759</a>. Exception message for unauthorized user doing killJob,
killTask, setJobPriority needs to be improved. (gravi via vinodkv)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6715">HADOOP-6715</a>. AccessControlList.toString() returns empty string when
we set acl to "*". (gravi via vinodkv)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6757">HADOOP-6757</a>. NullPointerException for hadoop clients launched from
streaming tasks. (amarrk via vinodkv)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6631">HADOOP-6631</a>. FileUtil.fullyDelete() should continue to delete other files
despite failure at any level. (vinodkv)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1317">MAPREDUCE-1317</a>. NPE in setHostName in Rumen. (rksingh)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1754">MAPREDUCE-1754</a>. Replace mapred.persmissions.supergroup with an acl :
mapreduce.cluster.administrators and <a href="https://issues.apache.org/jira/browse/HADOOP-6748">HADOOP-6748</a>.: Remove
hadoop.cluster.administrators. Contributed by Amareshwari Sriramadasu.
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6701">HADOOP-6701</a>. Incorrect exit codes for "dfs -chown", "dfs -chgrp"
(rphulari)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6640">HADOOP-6640</a>. FileSystem.get() does RPC retires within a static
synchronized block. (hairong)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1006">HDFS-1006</a>. Removes unnecessary logins from the previous patch. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6745">HADOOP-6745</a>. adding some java doc to Server.RpcMetrics, UGI (boryas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1707">MAPREDUCE-1707</a>. TaskRunner can get NPE in getting ugi from TaskTracker.
(vinodkv)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1104">HDFS-1104</a>. Fsck triggers full GC on NameNode. (hairong)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6332">HADOOP-6332</a>. Large-scale Automated Test Framework (sharad, Sreekanth
Ramakrishnan, et al. via cos)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6526">HADOOP-6526</a>. Additional fix for test context on top of existing one. (cos)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6710">HADOOP-6710</a>. Symbolic umask for file creation is not conformant with posix.
(suresh)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6693">HADOOP-6693</a>. Added metrics to track kerberos login success and failure.
(suresh)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1711">MAPREDUCE-1711</a>. Gridmix should provide an option to submit jobs to the same
queues as specified in the trace. (rksing via htang)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1687">MAPREDUCE-1687</a>. Stress submission policy does not always stress the
cluster. (htang)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1641">MAPREDUCE-1641</a>. Bug-fix to ensure command line options such as
-files/-archives are checked for duplicate artifacts in the
DistributedCache. (Amareshwari Sreeramadasu via acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1641">MAPREDUCE-1641</a>. Fix DistributedCache to ensure same files cannot be put in
both the archives and files sections. (Richard King via acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6670">HADOOP-6670</a>. Fixes a testcase issue introduced by the earlier commit
of the <a href="https://issues.apache.org/jira/browse/HADOOP-6670">HADOOP-6670</a> patch. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1718">MAPREDUCE-1718</a>. Fixes a problem to do with correctly constructing
service name for the delegation token lookup in HftpFileSystem
(borya via ddas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6674">HADOOP-6674</a>. Fixes the earlier patch to handle pings correctly (ddas).
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1664">MAPREDUCE-1664</a>. Job Acls affect when Queue Acls are set.
(Ravi Gummadi via vinodkv)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6718">HADOOP-6718</a>. Fixes a problem to do with clients not closing RPC
connections on a SASL failure. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1397">MAPREDUCE-1397</a>. NullPointerException observed during task failures.
(Amareshwari Sriramadasu via vinodkv)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6670">HADOOP-6670</a>. Use the UserGroupInformation's Subject as the criteria for
equals and hashCode. (omalley)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6716">HADOOP-6716</a>. System won't start in non-secure mode when kerb5.conf
(edu.mit.kerberos on Mac) is not present. (boryas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1607">MAPREDUCE-1607</a>. Task controller may not set permissions for a
task cleanup attempt's log directory. (Amareshwari Sreeramadasu via
vinodkv)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1533">MAPREDUCE-1533</a>. JobTracker performance enhancements. (Amar Kamat via
vinodkv)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1701">MAPREDUCE-1701</a>. AccessControlException while renewing a delegation token
is not correctly handled in the JobTracker. (boryas)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-481">HDFS-481</a>. Incremental patch to fix broken unit test in contrib/hdfsproxy
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6706">HADOOP-6706</a>. Fixes a bug in the earlier version of the same patch (ddas)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1096">HDFS-1096</a>. allow dfsadmin/mradmin refresh of superuser proxy group
mappings. (boryas)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1012">HDFS-1012</a>. Support for cluster specific path entries in ldap for hdfsproxy
(Srikanth Sundarrajan via Nicholas)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1011">HDFS-1011</a>. Improve Logging in HDFSProxy to include cluster name associated
with the request (Srikanth Sundarrajan via Nicholas)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1010">HDFS-1010</a>. Retrieve group information from UnixUserGroupInformation
instead of LdapEntry (Srikanth Sundarrajan via Nicholas)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-481">HDFS-481</a>. Bug fix - hdfsproxy: Stack overflow + Race conditions
(Srikanth Sundarrajan via Nicholas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1657">MAPREDUCE-1657</a>. After task logs directory is deleted, tasklog servlet
displays wrong error message about job ACLs. (Ravi Gummadi via vinodkv)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1692">MAPREDUCE-1692</a>. Remove TestStreamedMerge from the streaming tests.
(Amareshwari Sriramadasu and Sreekanth Ramakrishnan via vinodkv)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1081">HDFS-1081</a>. Performance regression in
DistributedFileSystem::getFileBlockLocations in secure systems (jhoman)
<a href="https://issues.apache.org/jira/browse/MAPREDUCE-1656">MAPREDUCE-1656</a>. JobStory should provide queue info. (htang)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1317">MAPREDUCE-1317</a>. Reducing memory consumption of rumen objects. (htang)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1317">MAPREDUCE-1317</a>. Reverting the patch since it caused build failures. (htang)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1683">MAPREDUCE-1683</a>. Fixed jobtracker web-ui to correctly display heap-usage.
(acmurthy)
<a href="https://issues.apache.org/jira/browse/HADOOP-6706">HADOOP-6706</a>. Fixes exception handling for saslConnect. The ideal
solution is to the Refreshable interface but as Owen noted in
<a href="https://issues.apache.org/jira/browse/HADOOP-6656">HADOOP-6656</a>, it doesn't seem to work as expected. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1617">MAPREDUCE-1617</a>. TestBadRecords failed once in our test runs. (Amar
Kamat via vinodkv).
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-587.">MAPREDUCE-587.</a> Stream test TestStreamingExitStatus fails with Out of
Memory. (Amar Kamat via vinodkv).
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1096">HDFS-1096</a>. Reverting the patch since it caused build failures. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1317">MAPREDUCE-1317</a>. Reducing memory consumption of rumen objects. (htang)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1680">MAPREDUCE-1680</a>. Add a metric to track number of heartbeats processed by the
JobTracker. (Richard King via acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1683">MAPREDUCE-1683</a>. Removes JNI calls to get jvm current/max heap usage in
ClusterStatus by default. (acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6687">HADOOP-6687</a>. user object in the subject in UGI should be reused in case
of a relogin. (jitendra)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5647">HADOOP-5647</a>. TestJobHistory fails if /tmp/_logs is not writable to.
Testcase should not depend on /tmp. (Ravi Gummadi via vinodkv)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-181.">MAPREDUCE-181.</a> Bug fix for Secure job submission. (Ravi Gummadi via
vinodkv)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1635">MAPREDUCE-1635</a>. ResourceEstimator does not work after <a href="https://issues.apache.org/jira/browse/MAPREDUCE-842.">MAPREDUCE-842.</a>
(Amareshwari Sriramadasu via vinodkv)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1526">MAPREDUCE-1526</a>. Cache the job related information while submitting the
job. (rksingh)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6674">HADOOP-6674</a>. Turn off SASL checksums for RPCs. (jitendra via omalley)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5958">HADOOP-5958</a>. Replace fork of DF with library call. (cdouglas via omalley)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-999">HDFS-999</a>. Secondary namenode should login using kerberos if security
is configured. Bugfix to original patch. (jhoman)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1594">MAPREDUCE-1594</a>. Support for SleepJobs in Gridmix (rksingh)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1007">HDFS-1007</a>. Fix. ServiceName for delegation token for Hftp has hftp
port and not RPC port.
<a href="https://issues.apache.org/jira/browse/MAPREDUCE-1376">MAPREDUCE-1376</a>. Support for varied user submissions in Gridmix (rksingh)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1080">HDFS-1080</a>. SecondaryNameNode image transfer should use the defined
http address rather than local ip address (jhoman)
<a href="https://issues.apache.org/jira/browse/HADOOP-6661">HADOOP-6661</a>. User document for UserGroupInformation.doAs for secure
impersonation. (jitendra)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1624">MAPREDUCE-1624</a>. Documents the job credentials and associated details
to do with delegation tokens (ddas)
<a href="https://issues.apache.org/jira/browse/HDFS-1036">HDFS-1036</a>. Documentation for fetchdt for forrest (boryas)
<a href="https://issues.apache.org/jira/browse/HDFS-1039">HDFS-1039</a>. New patch on top of previous patch. Gets namenode address
from conf. (jitendra)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6656">HADOOP-6656</a>. Renew Kerberos TGT when 80% of the renew lifetime has been
used up. (omalley)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6653">HADOOP-6653</a>. Protect against NPE in setupSaslConnection when real user is
null. (omalley)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6649">HADOOP-6649</a>. An error in the previous committed patch. (jitendra)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6652">HADOOP-6652</a>. ShellBasedUnixGroupsMapping shouldn't have a cache.
(ddas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6649">HADOOP-6649</a>. login object in UGI should be inside the subject
(jitendra)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6637">HADOOP-6637</a>. Benchmark overhead of RPC session establishment
(shv via jitendra)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6648">HADOOP-6648</a>. Credentials must ignore null tokens that can be generated
when using HFTP to talk to insecure clusters. (omalley)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6632">HADOOP-6632</a>. Fix on JobTracker to reuse filesystem handles if possible.
(ddas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6647">HADOOP-6647</a>. balancer fails with "is not authorized for protocol
interface NamenodeProtocol" in secure environment (boryas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1612">MAPREDUCE-1612</a>. job conf file is not accessible from job history
web page. (Ravi Gummadi via vinodkv)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1611">MAPREDUCE-1611</a>. Refresh nodes and refresh queues doesnt work with
service authorization enabled. (Amar Kamat via vinodkv)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6644">HADOOP-6644</a>. util.Shell getGROUPS_FOR_USER_COMMAND method
name - should use common naming convention (boryas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1609">MAPREDUCE-1609</a>. Fixes a problem with localization of job log
directories when tasktracker is re-initialized that can result
in failed tasks. (Amareshwari Sriramadasu via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1610">MAPREDUCE-1610</a>. Update forrest documentation for directory
structure of localized files. (Ravi Gummadi via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1532">MAPREDUCE-1532</a>. Fixes a javadoc and an exception message in JobInProgress
when the authenticated user is different from the user in conf. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1417">MAPREDUCE-1417</a>. Update forrest documentation for private
and public distributed cache files. (Ravi Gummadi via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6634">HADOOP-6634</a>. AccessControlList uses full-principal names to verify acls
causing queue-acls to fail (vinodkv)
<a href="https://issues.apache.org/jira/browse/HADOOP-6642">HADOOP-6642</a>. Fix javac, javadoc, findbugs warnings. (chrisdo via acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1044">HDFS-1044</a>. Cannot submit mapreduce job from secure client to
unsecure server. (boryas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6638">HADOOP-6638</a>. try to relogin in a case of failed RPC connection
(expired tgt) only in case the subject is loginUser or
proxyUgi.realUser. (boryas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6632">HADOOP-6632</a>. Support for using different Kerberos keys for different
instances of Hadoop services. (jitendra)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6526">HADOOP-6526</a>. Need mapping from long principal names to local OS
user names. (jitendra)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1604">MAPREDUCE-1604</a>. Update Forrest documentation for job authorization
ACLs. (Amareshwari Sriramadasu via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1045">HDFS-1045</a>. In secure clusters, re-login is necessary for https
clients before opening connections (jhoman)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6603">HADOOP-6603</a>. Addition to original patch to be explicit
about new method not being for general use. (jhoman)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1543">MAPREDUCE-1543</a>. Add audit log messages for job and queue
access control checks. (Amar Kamat via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1606">MAPREDUCE-1606</a>. Fixed occassinal timeout in TestJobACL. (Ravi Gummadi via
acmurthy)
<li><a href="https://issues.apache.org/jira/browse/HADOOP-6633">HADOOP-6633</a>. normalize property names for JT/NN kerberos principal
names in configuration. (boryas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6613">HADOOP-6613</a>. Changes the RPC server so that version is checked first
on an incoming connection. (Kan Zhang via ddas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5592">HADOOP-5592</a>. Fix typo in Streaming doc in reference to GzipCodec.
(Corinne Chandel via tomwhite)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-813.">MAPREDUCE-813.</a> Updates Streaming and M/R tutorial documents.
(Corinne Chandel via ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-927.">MAPREDUCE-927.</a> Cleanup of task-logs should happen in TaskTracker instead
of the Child. (Amareshwari Sriramadasu via vinodkv)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1039">HDFS-1039</a>. Service should be set in the token in JspHelper.getUGI.
(jitendra)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1599">MAPREDUCE-1599</a>. MRBench reuses jobConf and credentials there in.
(jitendra)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1522">MAPREDUCE-1522</a>. FileInputFormat may use the default FileSystem for the
input path. (Tsz Wo (Nicholas), SZE via cdouglas)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1036">HDFS-1036</a>. In DelegationTokenFetch pass Configuration object so
getDefaultUri will work correctly.
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1038">HDFS-1038</a>. In nn_browsedfscontent.jsp fetch delegation token only if
security is enabled. (jitendra)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1036">HDFS-1036</a>. in DelegationTokenFetch dfs.getURI returns no port (boryas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6598">HADOOP-6598</a>. Verbose logging from the Group class (one more case)
(boryas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6627">HADOOP-6627</a>. Bad Connection to FS" message in FSShell should print
message from the exception (boryas)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1033">HDFS-1033</a>. In secure clusters, NN and SNN should verify that the remote
principal during image and edits transfer (jhoman)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1005">HDFS-1005</a>. Fixes a bug to do with calling the cross-realm API in Fsck
client. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1422">MAPREDUCE-1422</a>. Fix cleanup of localized job directory to work if files
with non-deletable permissions are created within it.
(Amar Kamat via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1007">HDFS-1007</a>. Fixes bugs to do with 20S cluster talking to 20 over
hftp (borya)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1566">MAPREDUCE-1566</a>. Fixes bugs in the earlier patch. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-992">HDFS-992</a>. A bug in backport for <a href="https://issues.apache.org/jira/browse/HDFS-992">HDFS-992</a>. (jitendra)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6598">HADOOP-6598</a>. Remove verbose logging from the Groups class. (borya)
<a href="https://issues.apache.org/jira/browse/HADOOP-6620">HADOOP-6620</a>. NPE if renewer is passed as null in getDelegationToken.
(jitendra)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1023">HDFS-1023</a>. Second Update to original patch to fix username (jhoman)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1435">MAPREDUCE-1435</a>. Add test cases to already committed patch for this
jira, synchronizing changes with trunk. (yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6612">HADOOP-6612</a>. Protocols RefreshUserToGroupMappingsProtocol and
RefreshAuthorizationPolicyProtocol authorization settings thru
KerberosInfo (boryas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1566">MAPREDUCE-1566</a>. Bugfix for tests on top of the earlier patch. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1566">MAPREDUCE-1566</a>. Mechanism to import tokens and secrets from a file in to
the submitted job. (omalley)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6603">HADOOP-6603</a>. Provide workaround for issue with Kerberos not
resolving cross-realm principal. (kan via jhoman)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1023">HDFS-1023</a>. Update to original patch to fix username (jhoman)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-814">HDFS-814</a>. Add an api to get the visible length of a
DFSDataInputStream. (hairong)
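<br>A minimal sketch of the new call, assuming the method added here is
DFSDataInputStream#getVisibleLength() (check the API of your exact build):
<pre>
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSClient.DFSDataInputStream;

public class VisibleLengthExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // /tmp/open-file is a hypothetical path to a file still being written.
    FSDataInputStream in = fs.open(new Path("/tmp/open-file"));
    if (in instanceof DFSDataInputStream) {
      // Number of bytes this client may currently read.
      long visible = ((DFSDataInputStream) in).getVisibleLength();
      System.out.println("visible length = " + visible);
    }
    in.close();
  }
}
</pre>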
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1023">HDFS-1023</a>. Allow http server to start as regular user if https
principal is not defined. (jhoman)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1022">HDFS-1022</a>. Merge all three test specs files (common, hdfs, mapred)
into one. (steffl)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-101">HDFS-101</a>. DFS write pipeline: DFSClient sometimes does not detect
second datanode failure. (hairong)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1015">HDFS-1015</a>. Intermittent failure in TestSecurityTokenEditLog. (jitendra)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1550">MAPREDUCE-1550</a>. A bugfix on top of what was committed earlier (ddas).
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1155">MAPREDUCE-1155</a>. DISABLING THE TestStreamingExitStatus temporarily. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1020">HDFS-1020</a>. Changes the check for renewer from short name to long name
in the cancel/renew delegation token methods. (jitendra via ddas)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1019">HDFS-1019</a>. Fixes values of delegation token parameters in
hdfs-default.xml. (jitendra via ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1430">MAPREDUCE-1430</a>. Fixes a backport issue with the earlier patch. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1559">MAPREDUCE-1559</a>. Fixes a problem in DelegationTokenRenewal class to
do with using the right credentials when talking to the NameNode.(ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1550">MAPREDUCE-1550</a>. Fixes a problem to do with creating a filesystem using
the user's UGI in the JobHistory browsing. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6609">HADOOP-6609</a>. Fix UTF8 to use a thread local DataOutputBuffer instead of
a static that was causing a deadlock in RPC. (omalley)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6584">HADOOP-6584</a>. Fix javadoc warnings introduced by original <a href="https://issues.apache.org/jira/browse/HADOOP-6584">HADOOP-6584</a>
patch (jhoman)
<a href="https://issues.apache.org/jira/browse/HDFS-1017">HDFS-1017</a>. browsedfs jsp should call JspHelper.getUGI rather than using
createRemoteUser(). (jhoman)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-899.">MAPREDUCE-899.</a> Modified LinuxTaskController to check that task-controller
has right permissions and ownership before performing any actions.
(Amareshwari Sriramadasu via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-204">HDFS-204</a>. Revive number of files listed metrics. (hairong)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6569">HADOOP-6569</a>. FsShell#cat should avoid calling uneccessary getFileStatus
before opening a file to read. (hairong)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1014">HDFS-1014</a>. Error in reading delegation tokens from edit logs. (jitendra)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-458">HDFS-458</a>. Add under-10-min tests from 0.22 to 0.20.1xx, only the tests
that already exist in 0.20.1xx (steffl)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1155">MAPREDUCE-1155</a>. Just pulls out the TestStreamingExitStatus part of the
patch from jira (that went to 0.22). (ddas)
<a href="https://issues.apache.org/jira/browse/HADOOP-6600">HADOOP-6600</a>. Fix for branch backport only. Comparing of user should use
equals. (boryas).
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1006">HDFS-1006</a>. Fixes NameNode and SecondaryNameNode to use kerberizedSSL for
the http communication. (Jakob Homan via ddas)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1007">HDFS-1007</a>. Fixes a bug on top of the earlier patch. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1005">HDFS-1005</a>. Fsck security. Makes it work over kerberized SSL (boryas and
jhoman)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1007">HDFS-1007</a>. Makes HFTP and Distcp use kerberized SSL. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1455">MAPREDUCE-1455</a>. Fixes a testcase in the earlier patch.
(Ravi Gummadi via ddas)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-992">HDFS-992</a>. Refactors block access token implementation to conform to the
generic Token interface. (Kan Zhang via ddas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6584">HADOOP-6584</a>. Adds KrbSSL connector for jetty. (Jakob Homan via ddas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6589">HADOOP-6589</a>. Add a framework for better error messages when rpc connections
fail to authenticate. (Kan Zhang via omalley)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6600">HADOOP-6600</a>,<a href="https://issues.apache.org/jira/browse/HDFS-1003,<a href="https://issues">HDFS-1003,<a href="https://issues</a>.apache.org/jira/browse/MAPREDUCE-1539">MAPREDUCE-1539</a>. mechanism for authorization check
for inter-server protocols(boryas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6580">HADOOP-6580</a>,<a href="https://issues.apache.org/jira/browse/HDFS-993,<a href="https://issues">HDFS-993,<a href="https://issues</a>.apache.org/jira/browse/MAPREDUCE-1516">MAPREDUCE-1516</a>. UGI should contain authentication
method.
<li> Namenode and JT should issue a delegation token only for kerberos
authenticated clients. (jitendra)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-984,<a href="https://issues">HDFS-984,<a href="https://issues</a>.apache.org/jira/browse/HADOOP-6573">HADOOP-6573</a>,<a href="https://issues.apache.org/jira/browse/MAPREDUCE-1537">MAPREDUCE-1537</a>. Delegation Tokens should be persisted
in Namenode, and corresponding changes in common and mr. (jitendra)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-994">HDFS-994</a>. Provide methods for obtaining delegation token from Namenode for
hftp and other uses. Incorporates <a href="https://issues.apache.org/jira/browse/HADOOP-6594">HADOOP-6594</a>: Update hdfs script to
provide fetchdt tool. (jitendra)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6586">HADOOP-6586</a>. Log authentication and authorization failures and successes
(boryas)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-991">HDFS-991</a>. Allow use of delegation tokens to authenticate to the
HDFS servlets. (omalley)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-1849">HADOOP-1849</a>. Add undocumented configuration parameter for per handler
call queue size in IPC Server. (shv)
<a href="https://issues.apache.org/jira/browse/HADOOP-6599">HADOOP-6599</a>. Split existing RpcMetrics with summary in RpcMetrics and
details information in RpcDetailedMetrics. (suresh)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-985">HDFS-985</a>. HDFS should issue multiple RPCs for listing a large directory.
(hairong)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-1000">HDFS-1000</a>. Updates libhdfs to use the new UGI. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1532">MAPREDUCE-1532</a>. Ensures all filesystem operations at the client is done
as the job submitter. Also, changes the renewal to maintain list of tokens
to renew. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6596">HADOOP-6596</a>. Add a version field to the seialization of the
AbstractDelegationTokenIdentifier. (omalley)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5561">HADOOP-5561</a>. Add javadoc.maxmemory to build.xml to allow larger memory.
(jhoman via omalley)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6579">HADOOP-6579</a>. Add a mechanism for encoding and decoding Tokens in to
url-safe strings. (omalley)
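<br>A short sketch of the round trip, using Token#encodeToUrlString() and
Token#decodeFromUrlString() (raw Token types for brevity):
<pre>
import org.apache.hadoop.security.token.Token;

public class TokenUrlExample {
  public static void main(String[] args) throws Exception {
    Token token = new Token();
    // Encode the token into a string safe to embed in a URL parameter.
    String urlSafe = token.encodeToUrlString();
    // Decode it back on the receiving side.
    Token copy = new Token();
    copy.decodeFromUrlString(urlSafe);
  }
}
</pre>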
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1354">MAPREDUCE-1354</a>. Make incremental changes in jobtracker for
improving scalability (acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-999">HDFS-999</a>.Secondary namenode should login using kerberos if security
is configured(boryas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1466">MAPREDUCE-1466</a>. Added a private configuration variable
mapreduce.input.num.files, to store number of input files
being processed by M/R job. (Arun Murthy via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1403">MAPREDUCE-1403</a>. Save file-sizes of each of the artifacts in
DistributedCache in the JobConf (Arun Murthy via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6543">HADOOP-6543</a>. Fixes a compilation problem in the original commit. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1520">MAPREDUCE-1520</a>. Moves a call to setWorkingDirectory in Child to within
a doAs block. (Amareshwari Sriramadasu via ddas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6543">HADOOP-6543</a>. Allows secure clients to talk to unsecure clusters.
(Kan Zhang via ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1505">MAPREDUCE-1505</a>. Delays construction of the job client until it is really
required. (Arun C Murthy via ddas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6549">HADOOP-6549</a>. TestDoAsEffectiveUser should use ip address of the host
for superuser ip check. (jitendra)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-464">HDFS-464</a>. Fix memory leaks in libhdfs. (Christian Kunz via suresh)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-946">HDFS-946</a>. NameNode should not return full path name when lisitng a
diretory or getting the status of a file. (hairong)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1398">MAPREDUCE-1398</a>. Fix TaskLauncher to stop waiting for slots on a TIP
that is killed / failed. (Amareshwari Sriramadasu via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1476">MAPREDUCE-1476</a>. Fix the M/R framework to not call commit for special
tasks like job setup/cleanup and task cleanup.
(Amareshwari Sriramadasu via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6467">HADOOP-6467</a>. Performance improvement for liststatus on directories in
hadoop archives. (mahadev)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6558">HADOOP-6558</a>. archive does not work with distcp -update. (nicholas via
mahadev)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6583">HADOOP-6583</a>. Captures authentication and authorization metrics. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1316">MAPREDUCE-1316</a>. Fixes a memory leak of TaskInProgress instances in
the jobtracker. (Amar Kamat via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-670.">MAPREDUCE-670.</a> Creates ant target for 10 mins patch test build.
(Jothi Padmanabhan via gkesavan)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1430">MAPREDUCE-1430</a>. JobTracker should be able to renew delegation tokens
for the jobs (boryas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6551">HADOOP-6551</a>, <a href="https://issues.apache.org/jira/browse/HDFS-986, <a href="https://issues">HDFS-986, <a href="https://issues</a>.apache.org/jira/browse/MAPREDUCE-1503">MAPREDUCE-1503</a>. Change API for tokens to throw
exceptions instead of returning booleans. (omalley)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6545">HADOOP-6545</a>. Changes the Key for the FileSystem to be UGI. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6572">HADOOP-6572</a>. Makes sure that SASL encryption and push to responder queue
for the RPC response happens atomically. (Kan Zhang via ddas)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-965">HDFS-965</a>. Split the HDFS TestDelegationToken into two tests, of which
one proxy users and the other normal users. (jitendra via omalley)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6560">HADOOP-6560</a>. HarFileSystem throws NPE for har://hdfs-/foo (nicholas via
mahadev)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-686.">MAPREDUCE-686.</a> Move TestSpeculativeExecution.Fake* into a separate class
so that it can be used by other tests. (Jothi Padmanabhan via sharad)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-181.">MAPREDUCE-181.</a> Fixes an issue in the use of the right config. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1026">MAPREDUCE-1026</a>. Fixes a bug in the backport. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6559">HADOOP-6559</a>. Makes the RPC client automatically re-login when the SASL
connection setup fails. This is applicable to only keytab based logins.
(ddas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-2141">HADOOP-2141</a>. Backport changes made in the original JIRA to aid
fast unit tests in Map/Reduce. (Amar Kamat via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6382">HADOOP-6382</a>. Import the mavenizable pom file structure and adjust
the build targets and bin scripts. (gkesavan via ltucker)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1425">MAPREDUCE-1425</a>. archive throws OutOfMemoryError (mahadev)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1399">MAPREDUCE-1399</a>. The archive command shows a null error message. (nicholas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6552">HADOOP-6552</a>. Puts renewTGT=true and useTicketCache=true for the keytab
kerberos options. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1433">MAPREDUCE-1433</a>. Adds delegation token for MapReduce (ddas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-4359">HADOOP-4359</a>. Fixes a bug in the earlier backport. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6547">HADOOP-6547</a>, <a href="https://issues.apache.org/jira/browse/HDFS-949, <a href="https://issues">HDFS-949, <a href="https://issues</a>.apache.org/jira/browse/MAPREDUCE-1470">MAPREDUCE-1470</a>. Move Delegation token into Common
so that we can use it for MapReduce also. It is a combined patch for
common, hdfs and mr. (jitendra)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6510">HADOOP-6510</a>,<a href="https://issues.apache.org/jira/browse/HDFS-935,<a href="https://issues">HDFS-935,<a href="https://issues</a>.apache.org/jira/browse/MAPREDUCE-1464">MAPREDUCE-1464</a>. Support for doAs to allow
authenticated superuser to impersonate proxy users. It is a combined
patch with compatible fixes in HDFS and MR. (jitendra)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1435">MAPREDUCE-1435</a>. Fixes the way symlinks are handled when cleaning up
work directory files. (Ravi Gummadi via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-6419">MAPREDUCE-6419</a>. Fixes a bug in the backported patch. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1457">MAPREDUCE-1457</a>. Fixes JobTracker to get the FileSystem object within
getStagingAreaDir within a privileged block. Fixes Child.java to use the
appropriate UGIs while getting the TaskUmbilicalProtocol proxy and while
executing the task. Contributed by Jakob Homan. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1440">MAPREDUCE-1440</a>. Replace the long user name in MapReduce with the local
name. (ddas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6419">HADOOP-6419</a>. Adds SASL based authentication to RPC. Also includes the
<a href="https://issues.apache.org/jira/browse/MAPREDUCE-1335">MAPREDUCE-1335</a> and <a href="https://issues.apache.org/jira/browse/HDFS-933 patches">HDFS-933 patches</a>. Contributed by Kan Zhang.
(ddas)
<a href="https://issues.apache.org/jira/browse/HADOOP-6538">HADOOP-6538</a>. Sets hadoop.security.authentication to simple by default.
(ddas)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-938">HDFS-938</a>. Replace calls to UGI.getUserName() with
UGI.getShortUserName(). (boryas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6544">HADOOP-6544</a>. fix ivy settings to include JSON jackson.codehause.org
libs for .20 (boryas)
<a href="https://issues.apache.org/jira/browse/HDFS-907">HDFS-907</a>. Add tests for getBlockLocations and totalLoad metrics. (rphulari)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6204">HADOOP-6204</a>. Implementing aspects development and fault injeciton
framework for Hadoop (cos)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1432">MAPREDUCE-1432</a>. Adds hooks in the jobtracker and tasktracker
for loading the tokens in the user's ugi. This is required for
the copying of files from the hdfs. (Devaraj Das via boryas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1383">MAPREDUCE-1383</a>. Automates fetching of delegation tokens in File*Formats
Distributed Cache and Distcp. Also, provides a config
mapreduce.job.hdfs-servers that the jobs can populate with a comma
separated list of namenodes. The job client automatically fetches
delegation tokens from those namenodes.
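<br>A minimal sketch of populating the config named above; the host names
are placeholders:
<pre>
import org.apache.hadoop.mapred.JobConf;

public class HdfsServersExample {
  public static void main(String[] args) {
    JobConf job = new JobConf();
    // Comma-separated list of namenodes the job reads from; the job
    // client fetches delegation tokens from each at submission time.
    job.set("mapreduce.job.hdfs-servers",
            "hdfs://nn1.example.com:8020,hdfs://nn2.example.com:8020");
  }
}
</pre>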
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6337">HADOOP-6337</a>. Update FilterInitializer class to be more visible
and take a conf for further development. (jhoman)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6520">HADOOP-6520</a>. UGI should load tokens from the environment. (jitendra)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6517">HADOOP-6517</a>, <a href="https://issues.apache.org/jira/browse/HADOOP-6518">HADOOP-6518</a>. Ability to add/get tokens from
UserGroupInformation & Kerberos login in UGI should honor KRB5CCNAME
(jitendra)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6299">HADOOP-6299</a>. Reimplement the UserGroupInformation to use the OS
specific and Kerberos JAAS login. (jhoman, ddas, oom)
<a href="https://issues.apache.org/jira/browse/HADOOP-6524">HADOOP-6524</a>. Contrib tests are failing Clover'ed build. (cos)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-842.">MAPREDUCE-842.</a> Fixing a bug in the earlier version of the patch
related to improper localization of the job token file.
(Ravi Gummadi via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-919">HDFS-919</a>. Create test to validate the BlocksVerified metric (Gary Murry
via cos)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1186">MAPREDUCE-1186</a>. Modified code in distributed cache to set
permissions only on required set of localized paths.
(Amareshwari Sriramadasu via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-899">HDFS-899</a>. Delegation Token Implementation. (Jitendra Nath Pandey)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-896.">MAPREDUCE-896.</a> Enhance tasktracker to cleanup files that might have
been created by user tasks with non-writable permissions.
(Ravi Gummadi via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5879">HADOOP-5879</a>. Read compression level and strategy from Configuration for
gzip compression. (He Yongqiang via cdouglas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6161">HADOOP-6161</a>. Add get/setEnum methods to Configuration. (cdouglas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6382">HADOOP-6382</a> Mavenize the build.xml targets and update the bin scripts
in preparation for publishing POM files (giri kesavan via ltucker)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-737">HDFS-737</a>. Add full path name of the file to the block information and
summary of total number of files, blocks, live and deadnodes to
metasave output. (Jitendra Nath Pandey via suresh)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6577">HADOOP-6577</a>. Add hidden configuration option "ipc.server.max.response.size"
to change the default 1 MB, the maximum size when large IPC handler
response buffer is reset. (suresh)
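<br>A hedged sketch of raising the threshold programmatically:
<pre>
import org.apache.hadoop.conf.Configuration;

public class IpcResponseSizeExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Default is 1 MB; response buffers that grow past this are
    // discarded and reallocated after the response is sent.
    conf.setInt("ipc.server.max.response.size", 4 * 1024 * 1024);
  }
}
</pre>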
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6521">HADOOP-6521</a>. Fix backward compatiblity issue with umask when applications
use deprecated param dfs.umask in configuration or use
FsPermission.setUMask(). (suresh)
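<br>A brief sketch of the programmatic API kept working by this fix:
<pre>
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.permission.FsPermission;

public class SetUMaskExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Store a 022 umask in the configuration via the supported setter.
    FsPermission.setUMask(conf, new FsPermission((short) 0022));
  }
}
</pre>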
<li> <a href="https://issues.apache.org/jira/browse/HDFS-737">HDFS-737</a>. Add full path name of the file to the block information and
summary of total number of files, blocks, live and deadnodes to
metasave output. (Jitendra Nath Pandey via suresh)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6521">HADOOP-6521</a>. Fix backward compatiblity issue with umask when applications
use deprecated param dfs.umask in configuration or use
FsPermission.setUMask(). (suresh)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-433.">MAPREDUCE-433.</a> Use more reliable counters in TestReduceFetch.
(Christopher Douglas via ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-744.">MAPREDUCE-744.</a> Introduces the notion of a public distributed cache.
(ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1140">MAPREDUCE-1140</a>. Fix DistributedCache to not decrement reference counts
for unreferenced files in error conditions.
(Amareshwari Sriramadasu via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1284">MAPREDUCE-1284</a>. Fix fts_open() call in task-controller that was failing
LinuxTaskController unit tests. (Ravi Gummadi via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1098">MAPREDUCE-1098</a>. Fixed the distributed-cache to not do i/o while
holding a global lock.
(Amareshwari Sriramadasu via acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1338">MAPREDUCE-1338</a>. Introduces the notion of token cache using which
tokens and secrets can be sent by the Job client to the JobTracker.
(Boris Shkolnik)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6495">HADOOP-6495</a>. Identifier should be serialized after the password is created
In Token constructor. (Jitendra Nath Pandey)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6506">HADOOP-6506</a>. Failing tests prevent the rest of test targets from
execution. (cos)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5457">HADOOP-5457</a>. Fix to continue to run builds even if contrib test fails.
(gkesavan)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-856.">MAPREDUCE-856.</a> Setup secure permissions for distributed cache files.
(Vinod Kumar Vavilapalli via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-871.">MAPREDUCE-871.</a> Fix ownership of Job/Task local files to have correct
group ownership according to the egid of the tasktracker.
(Vinod Kumar Vavilapalli via yhemanth)
<a href="https://issues.apache.org/jira/browse/MAPREDUCE-476.">MAPREDUCE-476.</a> Extend DistributedCache to work locally (LocalJobRunner).
(Philip Zeyliger via tomwhite)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-711.">MAPREDUCE-711.</a> Removed Distributed Cache from Common, to move it under
Map/Reduce. (Vinod Kumar Vavilapalli via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-478.">MAPREDUCE-478.</a> Allow map and reduce jvm parameters, environment
variables and ulimit to be set separately. (acmurthy)
<a href="https://issues.apache.org/jira/browse/MAPREDUCE-842.">MAPREDUCE-842.</a> Setup secure permissions for localized job files,
intermediate outputs and log files on tasktrackers.
(Vinod Kumar Vavilapalli via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-408.">MAPREDUCE-408.</a> Fixes an assertion problem in TestKillSubProcesses.
(Ravi Gummadi via ddas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-4041">HADOOP-4041</a>. IsolationRunner does not work as documented.
(Philip Zeyliger via tomwhite)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-181.">MAPREDUCE-181.</a> Changes the job submission process to be secure.
(Devaraj Das)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5737">HADOOP-5737</a>. Fixes a problem in the way the JobTracker used to talk to
other daemons like the NameNode to get the job's files. Also adds APIs
in the JobTracker to get the FileSystem objects as per the JobTracker's
configuration. (Amar Kamat via ddas)
<a href="https://issues.apache.org/jira/browse/HADOOP-5771">HADOOP-5771</a>. Implements unit tests for LinuxTaskController.
(Sreekanth Ramakrishnan and Vinod Kumar Vavilapalli via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-4656">HADOOP-4656</a>, <a href="https://issues.apache.org/jira/browse/HDFS-685, <a href="https://issues">HDFS-685, <a href="https://issues</a>.apache.org/jira/browse/MAPREDUCE-1083">MAPREDUCE-1083</a>. Use the user-to-groups mapping
service in the NameNode and JobTracker. Combined patch for these 3 jiras
otherwise tests fail. (Jitendra Nath Pandey)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1250">MAPREDUCE-1250</a>. Refactor job token to use a common token interface.
(Jitendra Nath Pandey)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1026">MAPREDUCE-1026</a>. Shuffle should be secure. (Jitendra Nath Pandey)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-4268">HADOOP-4268</a>. Permission checking in fsck. (Jitendra Nath Pandey)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6415">HADOOP-6415</a>. Adding a common token interface for both job token and
delegation token. (Jitendra Nath Pandey)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6367">HADOOP-6367</a>, <a href="https://issues.apache.org/jira/browse/HDFS-764">HDFS-764</a>. Moving Access Token implementation from Common to
HDFS. These two jiras must be committed together otherwise build will
fail. (Jitendra Nath Pandey)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-409">HDFS-409</a>. Add more access token tests
(Jitendra Nath Pandey)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6132">HADOOP-6132</a>. RPC client opens an extra connection for VersionedProtocol.
(Jitendra Nath Pandey)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-445">HDFS-445</a>. pread() fails when cached block locations are no longer valid.
(Jitendra Nath Pandey)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-195">HDFS-195</a>. Need to handle access token expiration when re-establishing the
pipeline for dfs write. (Jitendra Nath Pandey)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6176">HADOOP-6176</a>. Adding a couple private methods to AccessTokenHandler
for testing purposes. (Jitendra Nath Pandey)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5824">HADOOP-5824</a>. remove OP_READ_METADATA functionality from Datanode.
(Jitendra Nath Pandey)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-4359">HADOOP-4359</a>. Access Token: Support for data access authorization
checking on DataNodes. (Jitendra Nath Pandey)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1372">MAPREDUCE-1372</a>. Fixed a ConcurrentModificationException in jobtracker.
(Arun C Murthy via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1316">MAPREDUCE-1316</a>. Fix jobs' retirement from the JobTracker to prevent memory
leaks via stale references. (Amar Kamat via acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1342">MAPREDUCE-1342</a>. Fixed deadlock in global blacklisting of tasktrackers.
(Amareshwari Sriramadasu via acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6460">HADOOP-6460</a>. Reinitializes buffers used for serializing responses in ipc
server on exceeding maximum response size to free up Java heap. (suresh)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1100">MAPREDUCE-1100</a>. Truncate user logs to prevent TaskTrackers' disks from
filling up. (Vinod Kumar Vavilapalli via acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1143">MAPREDUCE-1143</a>. Fix running task counters to be updated correctly
when speculative attempts are running for a TIP.
(Rahul Kumar Singh via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6151">HADOOP-6151</a>, 6281, 6285, 6441. Add HTML quoting of the parameters to all
of the servlets to prevent XSS attacks. (omalley)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-896.">MAPREDUCE-896.</a> Fix bug in earlier implementation to prevent
spurious logging in tasktracker logs for absent file paths.
(Ravi Gummadi via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-676.">MAPREDUCE-676.</a> Fix Hadoop Vaidya to ensure it works for map-only jobs.
(Suhas Gogate via acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5582">HADOOP-5582</a>. Fix Hadoop Vaidya to use new Counters in
org.apache.hadoop.mapreduce package. (Suhas Gogate via acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-595">HDFS-595</a>. umask settings in configuration may now use octal or
symbolic instead of decimal. Update HDFS tests as such. (jghoman)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1068">MAPREDUCE-1068</a>. Added a verbose error message when user specifies an
incorrect -file parameter. (Amareshwari Sriramadasu via acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1171">MAPREDUCE-1171</a>. Allow the read-error notification in shuffle to be
configurable. (Amareshwari Sriramadasu via acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-353.">MAPREDUCE-353.</a> Allow shuffle read and connection timeouts to be
configurable. (Amareshwari Sriramadasu via acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-781">HDFS-781</a>. Namenode metrics PendingDeletionBlocks is not decremented.
(suresh)
<a href="https://issues.apache.org/jira/browse/MAPREDUCE-1185">MAPREDUCE-1185</a>. Redirect running job url to history url if job is already
retired. (Amareshwari Sriramadasu and Sharad Agarwal via sharad)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-754.">MAPREDUCE-754.</a> Fix NPE in expiry thread when a TT is lost. (Amar Kamat
via sharad)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-896.">MAPREDUCE-896.</a> Modify permissions for local files on tasktracker before
deletion so they can be deleted cleanly. (Ravi Gummadi via yhemanth)
<a href="https://issues.apache.org/jira/browse/HADOOP-5771">HADOOP-5771</a>. Implements unit tests for LinuxTaskController.
(Sreekanth Ramakrishnan and Vinod Kumar Vavilapalli via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1124">MAPREDUCE-1124</a>. Import Gridmix3 and Rumen. (cdouglas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1063">MAPREDUCE-1063</a>. Document gridmix benchmark. (cdouglas)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-758">HDFS-758</a>. Changes to report status of decommissioining on the namenode web
UI. (jitendra)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6234">HADOOP-6234</a>. Add new option dfs.umaskmode to set umask in configuration
to use octal or symbolic instead of decimal. (Jakob Homan via suresh)
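<br>A small sketch of the two accepted forms; the octal "022" corresponds to
the symbolic form shown on the second line:
<pre>
import org.apache.hadoop.conf.Configuration;

public class UmaskModeExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("dfs.umaskmode", "022");               // octal form
    conf.set("dfs.umaskmode", "u=rwx,g=r-x,o=r-x"); // equivalent symbolic form
  }
}
</pre>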
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1147">MAPREDUCE-1147</a>. Add map output counters to new API. (Amar Kamat via
cdouglas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1182">MAPREDUCE-1182</a>. Fix overflow in reduce causing allocations to exceed the
configured threshold. (cdouglas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-4933">HADOOP-4933</a>. Fixes a ConcurrentModificationException problem that shows up
when the history viewer is accessed concurrently.
(Amar Kamat via ddas)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1140">MAPREDUCE-1140</a>. Fix DistributedCache to not decrement reference counts for
unreferenced files in error conditions.
(Amareshwari Sriramadasu via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6203">HADOOP-6203</a>. FsShell rm/rmr error message indicates exceeding Trash quota
and suggests using -skpTrash, when moving to trash fails.
(Boris Shkolnik via suresh)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5675">HADOOP-5675</a>. Do not launch a job if DistCp has no work to do. (Tsz Wo
(Nicholas), SZE via cdouglas)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-457">HDFS-457</a>. Better handling of volume failure in Data Node storage,
This fix is a port from hdfs-0.22 to common-0.20 by Boris Shkolnik.
Contributed by Erik Steffl
<li> <a href="https://issues.apache.org/jira/browse/HDFS-625">HDFS-625</a>. Fix NullPointerException thrown from ListPathServlet.
Contributed by Suresh Srinivas.
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6343">HADOOP-6343</a>. Log unexpected throwable object caught in RPC.
Contributed by Jitendra Nath Pandey
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1186">MAPREDUCE-1186</a>. Fixed DistributedCache to do a recursive chmod on just the
per-cache directory, not all of mapred.local.dir.
(Amareshwari Sriramadasu via acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1231">MAPREDUCE-1231</a>. Add an option to distcp to ignore checksums when used with
the upgrade option.
(Jothi Padmanabhan via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1219">MAPREDUCE-1219</a>. Fixed JobTracker to not collect per-job metrics, thus
easing load on it. (Amareshwari Sriramadasu via acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-761">HDFS-761</a>. Fix failure to process rename operation from edits log due to
quota verification. (suresh)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1196">MAPREDUCE-1196</a>. Fix FileOutputCommitter to use the deprecated cleanupJob
api correctly. (acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6344">HADOOP-6344</a>. rm and rmr immediately delete files rather than sending
to trash, despite trash being enabled, if a user is over-quota. (jhoman)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1160">MAPREDUCE-1160</a>. Reduce verbosity of log lines in some Map/Reduce classes
to avoid filling up jobtracker logs on a busy cluster.
(Ravi Gummadi and Hong Tang via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-587">HDFS-587</a>. Add ability to run HDFS with MR test on non-default queue,
also updated junit dependency from junit-3.8.1 to junit-4.5 (to make
it possible to use Configured and Tool to process command line to
be able to specify a queue). Contributed by Erik Steffl.
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1158">MAPREDUCE-1158</a>. Fix JT running maps and running reduces metrics.
(sharad)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-947.">MAPREDUCE-947.</a> Fix bug in earlier implementation that was
causing unit tests to fail.
(Ravi Gummadi via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1062">MAPREDUCE-1062</a>. Fix MRReliabilityTest to work with retired jobs
(Contributed by Sreekanth Ramakrishnan)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1090">MAPREDUCE-1090</a>. Modified log statement in TaskMemoryManagerThread to
include task attempt id. (yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1098">MAPREDUCE-1098</a>. Fixed the distributed-cache to not do i/o while
holding a global lock. (Amareshwari Sriramadasu via acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1048">MAPREDUCE-1048</a>. Add occupied/reserved slot usage summary on
jobtracker UI. (Amareshwari Sriramadasu via sharad)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1103">MAPREDUCE-1103</a>. Added more metrics to Jobtracker. (sharad)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-947.">MAPREDUCE-947.</a> Added commitJob and abortJob apis to OutputCommitter.
Enhanced FileOutputCommitter to create a _SUCCESS file for successful
jobs. (Amar Kamat & Jothi Padmanabhan via acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1105">MAPREDUCE-1105</a>. Remove max limit configuration in capacity scheduler in
favor of max capacity percentage thus allowing the limit to go over
queue capacity. (Rahul Kumar Singh via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1086">MAPREDUCE-1086</a>. Setup Hadoop logging environment for tasks to point to
task related parameters. (Ravi Gummadi via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-739.">MAPREDUCE-739.</a> Allow relative paths to be created inside archives.
(mahadev)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6097">HADOOP-6097</a>. Multiple bugs w/ Hadoop archives (mahadev)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6231">HADOOP-6231</a>. Allow caching of filesystem instances to be disabled on a
per-instance basis (ben slusky via mahadev)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-826.">MAPREDUCE-826.</a> harchive doesn't use ToolRunner / harchive returns 0 even
if the job fails with exception (koji via mahadev)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-686">HDFS-686</a>. NullPointerException is thrown while merging edit log and
image. (hairong)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-709">HDFS-709</a>. Fix TestDFSShell failure due to rename bug introduced by
<a href="https://issues.apache.org/jira/browse/HDFS-677">HDFS-677</a>. (suresh)
<li> <a href="https://issues.apache.org/jira/browse/HDFS-677">HDFS-677</a>. Rename failure when both source and destination quota exceeds
results in deletion of source. (suresh)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6284">HADOOP-6284</a>. Add a new parameter, HADOOP_JAVA_PLATFORM_OPTS, to
hadoop-config.sh so that it allows setting java command options for
JAVA_PLATFORM. (Koji Noguchi via szetszwo)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-732.">MAPREDUCE-732.</a> Removed spurious log statements in the node
blacklisting logic. (Sreekanth Ramakrishnan via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-144.">MAPREDUCE-144.</a> Includes dump of the process tree in task diagnostics when
a task is killed due to exceeding memory limits.
(Vinod Kumar Vavilapalli via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-979.">MAPREDUCE-979.</a> Fixed JobConf APIs related to memory parameters to
return values of new configuration variables when deprecated
variables are disabled. (Sreekanth Ramakrishnan via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-277.">MAPREDUCE-277.</a> Makes job history counters available on the job history
viewers. (Jothi Padmanabhan via ddas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5625">HADOOP-5625</a>. Add operation duration to clienttrace. (Lei Xu
via cdouglas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5222">HADOOP-5222</a>. Add offset to datanode clienttrace. (Lei Xu via cdouglas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6218">HADOOP-6218</a>. Adds a feature where TFile can be split by Record
Sequence number. Contributed by Hong Tang and Raghu Angadi.
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1088">MAPREDUCE-1088</a>. Changed permissions on JobHistory files on local disk to
0744. Contributed by Arun C. Murthy.
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6304">HADOOP-6304</a>. Use java.io.File.set{Readable|Writable|Executable} where
possible in RawLocalFileSystem. Contributed by Arun C. Murthy.
<a href="https://issues.apache.org/jira/browse/MAPREDUCE-270.">MAPREDUCE-270.</a> Fix the tasktracker to optionally send an out-of-band
heartbeat on task-completion for better job-latency. Contributed by
Arun C. Murthy
Configuration changes:
add mapreduce.tasktracker.outofband.heartbeat
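<br>A minimal sketch of enabling the new flag:
<pre>
import org.apache.hadoop.conf.Configuration;

public class OobHeartbeatExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Send a heartbeat immediately on task completion instead of
    // waiting for the next regular heartbeat interval.
    conf.setBoolean("mapreduce.tasktracker.outofband.heartbeat", true);
  }
}
</pre>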
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1030">MAPREDUCE-1030</a>. Fix capacity-scheduler to assign a map and a reduce task
per-heartbeat. Contributed by Rahuk K Singh.
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1028">MAPREDUCE-1028</a>. Fixed number of slots occupied by cleanup tasks to one
irrespective of slot size for the job. Contributed by Ravi Gummadi.
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-964.">MAPREDUCE-964.</a> Fixed start and finish times of TaskStatus to be
consistent, thereby fixing inconsistencies in metering tasks.
Contributed by Sreekanth Ramakrishnan.
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5976">HADOOP-5976</a>. Add a new command, classpath, to the hadoop
script. Contributed by Owen O'Malley and Gary Murry
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5784">HADOOP-5784</a>. Makes the number of heartbeats that should arrive
a second at the JobTracker configurable. Contributed by
Amareshwari Sriramadasu.
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-945.">MAPREDUCE-945.</a> Modifies MRBench and TestMapRed to use
ToolRunner so that options such as queue name can be
passed via command line. Contributed by Sreekanth Ramakrishnan.
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5420">HADOOP-5420</a>. Correct bug in earlier implementation
by Arun C. Murthy.
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5363">HADOOP-5363</a> Add support for proxying connections to multiple
clusters with different versions to hdfsproxy. Contributed
by Zhiyong Zhang
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5780">HADOOP-5780</a>. Improve per block message prited by -metaSave
in HDFS. (Raghu Angadi)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6227">HADOOP-6227</a>. Fix Configuration to allow final parameters to be set
to null and prevent them from being overridden. Contributed by
Amareshwari Sriramadasu.
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-430 ">MAPREDUCE-430 </a> Added patch supplied by Amar Kamat to allow roll forward
on branch to includ externally committed patch.
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-768.">MAPREDUCE-768.</a> Provide an option to dump jobtracker configuration in
JSON format to standard output. Contributed by V.V.Chaitanya
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-834 ">MAPREDUCE-834 </a>Correct an issue created by merging this issue with
patch attached to external Jira.
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6184">HADOOP-6184</a> Provide an API to dump Configuration in a JSON format.
Contributed by V.V.Chaitanya Krishna.
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-745 ">MAPREDUCE-745 </a> Patch added for this issue to allow branch-0.20 to
merge cleanly.
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-478 ">MAPREDUCE-478 </a>Allow map and reduce jvm parameters, environment
variables and ulimit to be set separately.
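<br>A hedged sketch: the per-task-type keys below are the names commonly
associated with this change on branch-0.20 (an assumption; check the
mapred-default.xml of your build):
<pre>
import org.apache.hadoop.mapred.JobConf;

public class PerTaskJvmOptsExample {
  public static void main(String[] args) {
    JobConf job = new JobConf();
    // Give reducers a larger heap than mappers.
    job.set("mapred.map.child.java.opts", "-Xmx512m");
    job.set("mapred.reduce.child.java.opts", "-Xmx1024m");
  }
}
</pre>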
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-682 ">MAPREDUCE-682 </a>Removes reservations on tasktrackers which are blacklisted.
Contributed by Sreekanth Ramakrishnan.
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5420">HADOOP-5420</a>. Support killing of process groups in LinuxTaskController
binary.
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5488">HADOOP-5488</a> Removes the pidfile management for the Task JVM from the
framework and instead passes the PID back and forth between the
TaskTracker and the Task processes. Contributed by Ravi Gummadi.
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-467 ">MAPREDUCE-467 </a>Provide ability to collect statistics about total tasks and
succeeded tasks in different time windows.
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-817.">MAPREDUCE-817.</a> Add a cache for retired jobs with minimal job
info and provide a way to access history file url
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-814.">MAPREDUCE-814.</a> Provide a way to configure completed job history
files to be on HDFS.
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-838 ">MAPREDUCE-838 </a>Fixes a problem in the way commit of task outputs
happens. The bug was that even if commit failed, the task would be
declared as successful. Contributed by Amareshwari Sriramadasu.
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-809 ">MAPREDUCE-809 </a>Fix job-summary logs to correctly record final status of
FAILED and KILLED jobs.
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-740 ">MAPREDUCE-740 </a>Log a job-summary at the end of a job, while
allowing it to be configured to use a custom appender if desired.
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-771 ">MAPREDUCE-771 </a>Fixes a bug which delays normal jobs in favor of
high-ram jobs.
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5420">HADOOP-5420</a> Support setsid based kill in LinuxTaskController.
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-733 ">MAPREDUCE-733 </a>Fixes a bug that when a task tracker is killed ,
it throws exception. Instead it should catch it and process it and
allow the rest of the flow to go through
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-734 ">MAPREDUCE-734 </a>Fixes a bug which prevented hi ram jobs from being
removed from the scheduler queue.
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-693 ">MAPREDUCE-693 </a> Fixes a bug that when a job is submitted and the
JT is restarted (before job files have been written) and the job
is killed after recovery, the conf files fail to be moved to the
"done" subdirectory.
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-722 ">MAPREDUCE-722 </a>Fixes a bug where more slots are getting reserved
for HiRAM job tasks than required.
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-683 ">MAPREDUCE-683 </a>TestJobTrackerRestart failed because of stale
filemanager cache (which was created once per jvm). This patch makes
sure that the filemanager is inited upon every JobHistory.init()
and hence upon every restart. Note that this wont happen in production
as upon a restart the new jobtracker will start in a new jvm and
hence a new cache will be created.
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-709 ">MAPREDUCE-709 </a>Fixes a bug where node health check script does
not display the correct message on timeout.
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-708 ">MAPREDUCE-708 </a>Fixes a bug where node health check script does
not refresh the "reason for blacklisting".
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-522 ">MAPREDUCE-522 </a>Rewrote TestQueueCapacities to make it simpler
and avoid timeout errors.
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-532 ">MAPREDUCE-532 </a>Provided ability in the capacity scheduler to
limit the number of slots that can be concurrently used per queue
at any given time.
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-211 ">MAPREDUCE-211 </a>Provides ability to run a health check script on
the tasktracker nodes and blacklist nodes if they are unhealthy.
Contributed by Sreekanth Ramakrishnan.
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-516 ">MAPREDUCE-516 </a>Remove .orig file included by mistake.
<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-416 ">MAPREDUCE-416 </a>Moves the history file to a "done" folder whenever
a job completes.
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5980">HADOOP-5980</a> Previously, task spawned off by LinuxTaskController
didn't get LD_LIBRARY_PATH in their environment. The tasks will now
get same LD_LIBRARY_PATH value as when spawned off by
DefaultTaskController.
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5981">HADOOP-5981</a> This issue completes the feature mentioned in
<a href="https://issues.apache.org/jira/browse/HADOOP-2838">HADOOP-2838</a>. <a href="https://issues.apache.org/jira/browse/HADOOP-2838">HADOOP-2838</a> provided a way to set env variables in
child process. This issue provides a way to inherit tt's env variables
and append or reset it. So now X=$X:y will inherit X (if there) and
append y to it.
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5419">HADOOP-5419</a> This issue is to provide an improvement on the
existing M/R framework to let users know which queues they have
access to, and for what operations. One use case for this would
that currently there is no easy way to know if the user has access
to submit jobs to a queue, until it fails with an access control
exception.
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5420">HADOOP-5420</a> Support setsid based kill in LinuxTaskController.
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5643">HADOOP-5643</a> Added the functionality to refresh jobtrackers node
list via command line (bin/hadoop mradmin -refreshNodes). The command
should be run as the jobtracker owner (jobtracker process owner)
or from a super group (mapred.permissions.supergroup).
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-2838">HADOOP-2838</a> Now the users can set environment variables using
mapred.child.env. They can do the following X=Y : set X to Y X=$X:Y
: Append Y to X (which should be taken from the tasktracker)
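<br>A short sketch of both forms in one setting; the variable names and
paths are placeholders:
<pre>
import org.apache.hadoop.mapred.JobConf;

public class ChildEnvExample {
  public static void main(String[] args) {
    JobConf job = new JobConf();
    // DEBUG is set to a literal value; LD_LIBRARY_PATH inherits the
    // tasktracker's value and appends /opt/native to it.
    job.set("mapred.child.env",
            "DEBUG=1,LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/native");
  }
}
</pre>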
<a href="https://issues.apache.org/jira/browse/HADOOP-5818">HADOOP-5818</a>. Revert the renaming from FSNamesystem.checkSuperuserPrivilege
to checkAccess by <a href="https://issues.apache.org/jira/browse/HADOOP-5643">HADOOP-5643</a>. (Amar Kamat via szetszwo)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5801">HADOOP-5801</a>. Fixes the problem: If the hosts file is changed across restart
then it should be refreshed upon recovery so that the excluded hosts are
lost and the maps are re-executed. (Amar Kamat via ddas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5643">HADOOP-5643</a>. <a href="https://issues.apache.org/jira/browse/HADOOP-5643">HADOOP-5643</a>. Adds a way to decommission TaskTrackers
while the JobTracker is running. (Amar Kamat via ddas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5419">HADOOP-5419</a>. Provide a facility to query the Queue ACLs for the
current user. (Rahul Kumar Singh via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5733">HADOOP-5733</a>. Add map/reduce slot capacity and blacklisted capacity to
JobTracker metrics. (Sreekanth Ramakrishnan via cdouglas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5738">HADOOP-5738</a>. Split "waiting_tasks" JobTracker metric into waiting maps and
waiting reduces. (Sreekanth Ramakrishnan via cdouglas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-4842">HADOOP-4842</a>. Streaming now allows specifiying a command for the combiner.
(Amareshwari Sriramadasu via ddas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-4490">HADOOP-4490</a>. Provide ability to run tasks as job owners.
(Sreekanth Ramakrishnan via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5442">HADOOP-5442</a>. Paginate jobhistory display and added some search
capabilities. (Amar Kamat via acmurthy)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-3327">HADOOP-3327</a>. Improves handling of READ_TIMEOUT during map output copying.
(Amareshwari Sriramadasu via ddas)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-5113">HADOOP-5113</a>. Fixed logcondense to remove files for usernames
beginning with characters specified in the -l option.
(Peeyush Bishnoi via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-2898">HADOOP-2898</a>. Provide an option to specify a port range for
Hadoop services provisioned by HOD.
(Peeyush Bishnoi via yhemanth)
<li> <a href="https://issues.apache.org/jira/browse/HADOOP-4930">HADOOP-4930</a>. Implement a Linux native executable that can be used to
launch tasks as users. (Sreekanth Ramakrishnan via yhemanth)
</ul>
</body>
</html>