Release notes for hadoop-2.1.0-beta.

git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/branches/branch-2.1.0-beta@1496796 13f79535-47bb-0310-9956-ffa450edef68
diff --git a/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html b/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html
index 0c0c2e7..e12cd83 100644
--- a/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html
+++ b/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html
@@ -1,4 +1,3835 @@
 <META http-equiv="Content-Type" content="text/html; charset=UTF-8">
+<title>Hadoop  2.1.0-beta Release Notes</title>
+<STYLE type="text/css">
+	H1 {font-family: sans-serif}
+	H2 {font-family: sans-serif; margin-left: 7mm}
+	TABLE {margin-left: 7mm}
+</STYLE>
+</head>
+<body>
+<h1>Hadoop  2.1.0-beta Release Notes</h1>
+These release notes include new developer and user-facing incompatibilities, features, and major improvements. 
+<a name="changes"/>
+<h2>Changes since Hadoop 2.0.5-alpha</h2>
+<ul>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-874">YARN-874</a>.
+     Blocker bug reported by Vinod Kumar Vavilapalli and fixed by Vinod Kumar Vavilapalli <br>
+     <b>Tracking YARN/MR test failures after HADOOP-9421 and YARN-827</b><br>
+     <blockquote>HADOOP-9421 and YARN-827 broke some YARN/MR tests. Tracking those failures here.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-869">YARN-869</a>.
+     Blocker bug reported by Vinod Kumar Vavilapalli and fixed by Vinod Kumar Vavilapalli <br>
+     <b>ResourceManagerAdministrationProtocol should neither be public(yet) nor in yarn.api</b><br>
+     <blockquote>This is an admin-only API, and we don't yet know whether people can or should write new tools against it. I am going to move it to yarn.server.api and make it @Private.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-861">YARN-861</a>.
+     Critical bug reported by Devaraj K and fixed by Vinod Kumar Vavilapalli (nodemanager)<br>
+     <b>TestContainerManager is failing</b><br>
+     <blockquote>https://builds.apache.org/job/Hadoop-Yarn-trunk/246/

+

+{code:xml}

+Running org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManager

+Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 19.249 sec &lt;&lt;&lt; FAILURE!

+testContainerManagerInitialization(org.apache.hadoop.yarn.server.nodemanager.containermanager.TestContainerManager)  Time elapsed: 286 sec  &lt;&lt;&lt; FAILURE!

+junit.framework.ComparisonFailure: expected:&lt;[asf009.sp2.ygridcore.ne]t&gt; but was:&lt;[localhos]t&gt;

+	at junit.framework.Assert.assertEquals(Assert.java:85)

+

+{code}</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-854">YARN-854</a>.
+     Blocker bug reported by Ramya Sunil and fixed by Omkar Vinit Joshi <br>
+     <b>App submission fails on secure deploy</b><br>
+     <blockquote>App submission on secure cluster fails with the following exception:

+

+{noformat}

+INFO mapreduce.Job: Job jobID failed with state FAILED due to: Application applicationID failed 2 times due to AM Container for appattemptID exited with  exitCode: -1000 due to: App initialization failed (255) with output: main : command provided 0

+main : user is qa_user

+javax.security.sasl.SaslException: DIGEST-MD5: digest response format violation. Mismatched response. [Caused by org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): DIGEST-MD5: digest response format violation. Mismatched response.]

+	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

+	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)

+	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)

+	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)

+	at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)

+	at org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:104)

+	at org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:65)

+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:235)

+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:169)

+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.main(ContainerLocalizer.java:348)

+Caused by: org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): DIGEST-MD5: digest response format violation. Mismatched response.

+	at org.apache.hadoop.ipc.Client.call(Client.java:1298)

+	at org.apache.hadoop.ipc.Client.call(Client.java:1250)

+	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:204)

+	at $Proxy7.heartbeat(Unknown Source)

+	at org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:62)

+	... 3 more

+

+.Failing this attempt.. Failing the application.

+

+{noformat}</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-852">YARN-852</a>.
+     Minor bug reported by Chuan Liu and fixed by Chuan Liu <br>
+     <b>TestAggregatedLogFormat.testContainerLogsFileAccess fails on Windows</b><br>
+     <blockquote>The YARN unit test case fails on Windows when comparing the expected message with the log message in the file. The expected message constructed in the test case has two problems: 1) it uses Path.separator to concatenate path strings. Path.separator is always a forward slash, which does not match the backslash used in the log message. 2) On Windows, the default file owner is the Administrators group if the file is created by an Administrators user. The test expects the owner to be the current user.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-851">YARN-851</a>.
+     Major bug reported by Omkar Vinit Joshi and fixed by Omkar Vinit Joshi <br>
+     <b>Share NMTokens using NMTokenCache (API-based) instead of the memory-based approach used currently.</b><br>
+     <blockquote>This is a follow-up to YARN-694, changing the way NMTokens are shared.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-850">YARN-850</a>.
+     Major sub-task reported by Jian He and fixed by Jian He <br>
+     <b>Rename getClusterAvailableResources to getAvailableResources in AMRMClients</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-848">YARN-848</a>.
+     Major bug reported by Hitesh Shah and fixed by Hitesh Shah <br>
+     <b>Nodemanager does not register with RM using the fully qualified hostname</b><br>
+     <blockquote>If the hostname is misconfigured to not be fully qualified (i.e. hostname returns foo and hostname -f returns foo.bar.xyz), the NM ends up registering with the RM using only "foo". This can create problems if DNS cannot resolve the hostname properly.

+

+Furthermore, HDFS uses fully qualified hostnames which can end up affecting locality matches when allocating containers based on block locations. </blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-846">YARN-846</a>.
+     Major sub-task reported by Jian He and fixed by Jian He <br>
+     <b>Move pb Impl from yarn-api to yarn-common</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-841">YARN-841</a>.
+     Major sub-task reported by Siddharth Seth and fixed by Vinod Kumar Vavilapalli <br>
+     <b>Annotate and document AuxService APIs</b><br>
+     <blockquote>For users writing their own AuxServices, these APIs should be annotated and need better documentation. Also, the classes may need to move out of the NodeManager.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-840">YARN-840</a>.
+     Major sub-task reported by Jian He and fixed by Jian He <br>
+     <b>Move ProtoUtils to  yarn.api.records.pb.impl</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-839">YARN-839</a>.
+     Minor bug reported by Chuan Liu and fixed by Chuan Liu <br>
+     <b>TestContainerLaunch.testContainerEnvVariables fails on Windows</b><br>
+     <blockquote>The unit test case fails on Windows because the job id or container id was not printed out as part of the container script. Later, the test tries to read the pid from the output of the file, and fails.

+

+Exception in trunk:

+{noformat}

+Running org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch

+Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 9.903 sec &lt;&lt;&lt; FAILURE!

+testContainerEnvVariables(org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch)  Time elapsed: 1307 sec  &lt;&lt;&lt; ERROR!

+java.lang.NullPointerException

+        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch.testContainerEnvVariables(TestContainerLaunch.java:278)

+        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

+        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)

+        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)

+        at java.lang.reflect.Method.invoke(Method.java:597)

+        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)

+        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)

+        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)

+        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)

+        at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:62)

+{noformat}</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-837">YARN-837</a>.
+     Major sub-task reported by Zhijie Shen and fixed by Zhijie Shen <br>
+     <b>ClusterInfo.java doesn't seem to belong to org.apache.hadoop.yarn</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-834">YARN-834</a>.
+     Blocker sub-task reported by Arun C Murthy and fixed by Zhijie Shen <br>
+     <b>Review/fix annotations for yarn-client module and clearly differentiate *Async apis</b><br>
+     <blockquote>Review/fix annotations for yarn-client module</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-833">YARN-833</a>.
+     Major bug reported by Zhijie Shen and fixed by Zhijie Shen <br>
+     <b>Move Graph and VisualizeStateMachine into yarn.state package</b><br>
+     <blockquote>Graph and VisualizeStateMachine are only used by state machine, they should belong to state package.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-831">YARN-831</a>.
+     Blocker sub-task reported by Jian He and fixed by Jian He <br>
+     <b>Remove resource min from GetNewApplicationResponse</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-829">YARN-829</a>.
+     Major bug reported by Zhijie Shen and fixed by Zhijie Shen <br>
+     <b>Rename RMTokenSelector to be RMDelegationTokenSelector</b><br>
+     <blockquote>This makes its name consistent with that of RMDelegationTokenIdentifier.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-828">YARN-828</a>.
+     Major bug reported by Zhijie Shen and fixed by Zhijie Shen <br>
+     <b>Remove YarnVersionAnnotation</b><br>
+     <blockquote>YarnVersionAnnotation is not used at all, and the version information can be accessed through YarnVersionInfo instead.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-827">YARN-827</a>.
+     Critical sub-task reported by Bikas Saha and fixed by Jian He <br>
+     <b>Need to make Resource arithmetic methods accessible</b><br>
+     <blockquote>org.apache.hadoop.yarn.server.resourcemanager.resource contains classes such as Resources and the Calculators that help compare and add resources. Without these, users will be forced to replicate the logic, potentially incorrectly.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-826">YARN-826</a>.
+     Major sub-task reported by Zhijie Shen and fixed by Zhijie Shen <br>
+     <b>Move Clock/SystemClock to util package</b><br>
+     <blockquote>Clock/SystemClock should belong to util.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-825">YARN-825</a>.
+     Blocker sub-task reported by Vinod Kumar Vavilapalli and fixed by Vinod Kumar Vavilapalli <br>
+     <b>Fix yarn-common javadoc annotations</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-824">YARN-824</a>.
+     Major sub-task reported by Jian He and fixed by Jian He <br>
+     <b>Add  static factory to yarn client lib interface and change it to abstract class</b><br>
+     <blockquote>Do this for AMRMClient, NMClient, and YarnClient, and annotate their implementations as @Private.

+The purpose is to avoid exposing the implementations.</blockquote></li>
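+<p>A minimal sketch of the static-factory pattern described above; the class and method names here are illustrative, not the actual yarn-client API:</p>
+<pre>
+// Sketch only: an abstract client whose sole entry point is a static factory,
+// so callers never reference the @Private implementation class directly.
+public abstract class AMRMClientSketch {
+
+  public static AMRMClientSketch createAMRMClient() {
+    return new AMRMClientImplSketch();   // hypothetical implementation class
+  }
+
+  public abstract void registerApplicationMaster(String host, int port, String trackingUrl);
+
+  // Package-private implementation; user code only ever sees the abstract type.
+  static class AMRMClientImplSketch extends AMRMClientSketch {
+    @Override
+    public void registerApplicationMaster(String host, int port, String trackingUrl) {
+      // ... talk to the ResourceManager over the AMRM protocol ...
+    }
+  }
+}
+</pre>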
+<li> <a href="https://issues.apache.org/jira/browse/YARN-823">YARN-823</a>.
+     Major sub-task reported by Jian He and fixed by Jian He <br>
+     <b>Move RMAdmin from yarn.client to yarn.client.cli and rename as RMAdminCLI</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-822">YARN-822</a>.
+     Major sub-task reported by Omkar Vinit Joshi and fixed by Omkar Vinit Joshi <br>
+     <b>Rename ApplicationToken to AMRMToken</b><br>
+     <blockquote>API change. At present this token is used on the scheduler API, AMRMProtocol. The current name is a little confusing, as it suggests the token might be used by the application to talk to the complete YARN system (RM/NM), but that is not the case after YARN-694. The NM will have a specific NMToken, so it is better to name this one AMRMToken.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-821">YARN-821</a>.
+     Major sub-task reported by Jian He and fixed by Jian He <br>
+     <b>Rename FinishApplicationMasterRequest.setFinishApplicationStatus to setFinalApplicationStatus to be consistent with getter</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-812">YARN-812</a>.
+     Major bug reported by Ramya Sunil and fixed by Siddharth Seth <br>
+     <b>Enabling app summary logs causes 'FileNotFound' errors</b><br>
+     <blockquote>RM app summary logs have been enabled as per the default config:

+

+{noformat}

+#

+# Yarn ResourceManager Application Summary Log 

+#

+# Set the ResourceManager summary log filename

+yarn.server.resourcemanager.appsummary.log.file=rm-appsummary.log

+# Set the ResourceManager summary log level and appender

+yarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY

+

+# Appender for ResourceManager Application Summary Log

+# Requires the following properties to be set

+#    - hadoop.log.dir (Hadoop Log directory)

+#    - yarn.server.resourcemanager.appsummary.log.file (resource manager app summary log filename)

+#    - yarn.server.resourcemanager.appsummary.logger (resource manager app summary log level and appender)

+

+log4j.logger.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=${yarn.server.resourcemanager.appsummary.logger}

+log4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=false

+log4j.appender.RMSUMMARY=org.apache.log4j.RollingFileAppender

+log4j.appender.RMSUMMARY.File=${hadoop.log.dir}/${yarn.server.resourcemanager.appsummary.log.file}

+log4j.appender.RMSUMMARY.MaxFileSize=256MB

+log4j.appender.RMSUMMARY.MaxBackupIndex=20

+log4j.appender.RMSUMMARY.layout=org.apache.log4j.PatternLayout

+log4j.appender.RMSUMMARY.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n

+{noformat}

+

+This, however, throws errors when running commands as a non-superuser:

+{noformat}

+-bash-4.1$ hadoop dfs -ls /

+DEPRECATED: Use of this script to execute hdfs command is deprecated.

+Instead use the hdfs command for it.

+

+log4j:ERROR setFile(null,true) call failed.

+java.io.FileNotFoundException: /var/log/hadoop/hadoopqa/rm-appsummary.log (No such file or directory)

+        at java.io.FileOutputStream.openAppend(Native Method)

+        at java.io.FileOutputStream.&lt;init&gt;(FileOutputStream.java:192)

+        at java.io.FileOutputStream.&lt;init&gt;(FileOutputStream.java:116)

+        at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)

+        at org.apache.log4j.RollingFileAppender.setFile(RollingFileAppender.java:207)

+        at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)

+        at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)

+        at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)

+        at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)

+        at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)

+        at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)

+        at org.apache.log4j.PropertyConfigurator.parseCatsAndRenderers(PropertyConfigurator.java:672)

+        at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:516)

+        at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)

+        at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)

+        at org.apache.log4j.LogManager.&lt;clinit&gt;(LogManager.java:127)

+        at org.apache.log4j.Logger.getLogger(Logger.java:104)

+        at org.apache.commons.logging.impl.Log4JLogger.getLogger(Log4JLogger.java:289)

+        at org.apache.commons.logging.impl.Log4JLogger.&lt;init&gt;(Log4JLogger.java:109)

+        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

+        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)

+        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)

+        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)

+        at org.apache.commons.logging.impl.LogFactoryImpl.createLogFromClass(LogFactoryImpl.java:1116)

+        at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:858)

+        at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:604)

+        at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:336)

+        at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:310)

+        at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:685)

+        at org.apache.hadoop.fs.FsShell.&lt;clinit&gt;(FsShell.java:41)

+Found 1 items

+drwxr-xr-x   - hadoop   hadoop            0 2013-06-12 21:28 /user

+{noformat}</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-806">YARN-806</a>.
+     Major sub-task reported by Jian He and fixed by Jian He <br>
+     <b>Move ContainerExitStatus from yarn.api to yarn.api.records</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-805">YARN-805</a>.
+     Blocker sub-task reported by Jian He and fixed by Jian He <br>
+     <b>Fix yarn-api javadoc annotations</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-803">YARN-803</a>.
+     Major improvement reported by Alejandro Abdelnur and fixed by Alejandro Abdelnur (resourcemanager , scheduler)<br>
+     <b>factor out scheduler config validation from the ResourceManager to each scheduler implementation</b><br>
+     <blockquote>Per discussion in YARN-789, we should factor the scheduler config validations out of the ResourceManager class.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-799">YARN-799</a>.
+     Major bug reported by Chris Riccomini and fixed by Chris Riccomini (nodemanager)<br>
+     <b>CgroupsLCEResourcesHandler tries to write to cgroup.procs</b><br>
+     <blockquote>The implementation of

+

+bq. ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/util/CgroupsLCEResourcesHandler.java

+

+Tells the container-executor to write PIDs to cgroup.procs:

+

+{code}

+  public String getResourcesOption(ContainerId containerId) {

+    String containerName = containerId.toString();

+    StringBuilder sb = new StringBuilder("cgroups=");

+

+    if (isCpuWeightEnabled()) {

+      sb.append(pathForCgroup(CONTROLLER_CPU, containerName) + "/cgroup.procs");

+      sb.append(",");

+    }

+

+    if (sb.charAt(sb.length() - 1) == ',') {

+      sb.deleteCharAt(sb.length() - 1);

+    } 

+    return sb.toString();

+  }

+{code}

+

+Apparently, this file has not always been writeable:

+

+https://patchwork.kernel.org/patch/116146/

+http://lkml.indiana.edu/hypermail/linux/kernel/1004.1/00536.html

+https://lists.linux-foundation.org/pipermail/containers/2009-July/019679.html

+

+The RHEL version of the Linux kernel that I'm using has a CGroup module that has a non-writeable cgroup.procs file.

+

+{quote}

+$ uname -a

+Linux criccomi-ld 2.6.32-131.4.1.el6.x86_64 #1 SMP Fri Jun 10 10:54:26 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux

+{quote}

+

+As a result, when the container-executor tries to run, it fails with this error message:

+

+bq.    fprintf(LOGFILE, "Failed to write pid %s (%d) to file %s - %s\n",

+

+This is because the executor is given a resource by the CgroupsLCEResourcesHandler that includes cgroup.procs, which is non-writeable:

+

+{quote}

+$ pwd 

+/cgroup/cpu/hadoop-yarn/container_1370986842149_0001_01_000001

+$ ls -l

+total 0

+-r--r--r-- 1 criccomi eng 0 Jun 11 14:43 cgroup.procs

+-rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 cpu.rt_period_us

+-rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 cpu.rt_runtime_us

+-rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 cpu.shares

+-rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 notify_on_release

+-rw-r--r-- 1 criccomi eng 0 Jun 11 14:43 tasks

+{quote}

+

+I patched CgroupsLCEResourcesHandler to use /tasks instead of /cgroup.procs, and this appears to have fixed the problem.

+

+I can think of several potential resolutions to this ticket:

+

+1. Ignore the problem, and make people patch YARN when they hit this issue.

+2. Write to /tasks instead of /cgroup.procs for everyone

+3. Check permissions on /cgroup.procs prior to writing to it, and fall back to /tasks (sketched below).

+4. Add a config to yarn-site that lets admins specify which file to write to.

+

+Thoughts?</blockquote></li>
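+<p>A rough sketch of option 3 from the list above (check whether cgroup.procs is writable and fall back to tasks); the helper class is hypothetical:</p>
+<pre>
+import java.io.File;
+
+// Prefer cgroup.procs when the kernel allows writing to it; otherwise fall back
+// to the older tasks file, which has always been writable.
+public final class CgroupTaskFileChooser {
+
+  private CgroupTaskFileChooser() {}
+
+  /** Returns the file the container-executor should write the container PID to. */
+  public static String chooseTaskFile(String cgroupPath) {
+    File procs = new File(cgroupPath, "cgroup.procs");
+    if (procs.canWrite()) {        // false if the file is missing or read-only
+      return procs.getAbsolutePath();
+    }
+    // Older kernels (e.g. the RHEL 2.6.32 kernel shown above) expose cgroup.procs read-only.
+    return new File(cgroupPath, "tasks").getAbsolutePath();
+  }
+}
+</pre>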
+<li> <a href="https://issues.apache.org/jira/browse/YARN-795">YARN-795</a>.
+     Major bug reported by Wei Yan and fixed by Wei Yan (scheduler)<br>
+     <b>Fair scheduler queue metrics should subtract allocated vCores from available vCores</b><br>
+     <blockquote>The fair scheduler's queue metrics don't subtract allocated vCores from available vCores, causing the available vCores returned to be incorrect.

+This is happening because {code}QueueMetrics.getAllocateResources(){code} doesn't return the allocated vCores.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-792">YARN-792</a>.
+     Major sub-task reported by Jian He and fixed by Jian He <br>
+     <b>Move NodeHealthStatus from yarn.api.record to yarn.server.api.record</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-789">YARN-789</a>.
+     Major improvement reported by Alejandro Abdelnur and fixed by Alejandro Abdelnur (scheduler)<br>
+     <b>Enable zero capabilities resource requests in fair scheduler</b><br>
+     <blockquote>Per discussion in YARN-689, reposting updated use case:

+

+1. I have a set of services co-existing with a Yarn cluster.

+

+2. These services run out of band from Yarn. They are not started as yarn containers and they don't use Yarn containers for processing.

+

+3. These services use, dynamically, different amounts of CPU and memory based on their load. They manage their CPU and memory requirements independently. In other words, depending on their load, they may require more CPU but not memory or vice-versa.

+By using YARN as the RM for these services I'm able to share and utilize the resources of the cluster appropriately and in a dynamic way. YARN keeps tabs on all the resources.

+

+These services run an AM that reserves resources on their behalf. When this AM gets the requested resources, the services bump up their CPU/memory utilization out of band from YARN. If the YARN allocations are released/preempted, the services back off on their resource utilization. By doing this, YARN and these services correctly share the cluster resources, with the YARN RM being the only one that does the overall resource bookkeeping.

+

+The services' AM, so as not to break the lifecycle of containers, starts containers in the corresponding NMs. These container processes basically sleep forever (i.e. sleep 10000d). They use almost no CPU or memory (less than 1MB). Thus it is reasonable to assume their required CPU and memory utilization is nil (more on hard enforcement later). Because of this almost-nil utilization of CPU and memory, it should be possible to specify zero as one of the dimensions (CPU or memory) when making a request.

+

+The current limitation is that the increment is also the minimum. 

+

+If we set the memory increment to 1MB, then when doing a pure CPU request we would have to specify 1MB of memory. That would work. However, it would allow arbitrary memory requests without the desired normalization (increments of 256, 512, etc.).

+

+If we set the CPU increment to 1 CPU, then when doing a pure memory request we would have to specify 1 CPU. CPU amounts are much smaller than memory amounts, and because we don't have fractional CPUs, all my pure memory requests would waste 1 CPU, reducing the overall utilization of the cluster.

+

+Finally, on hard enforcement. 

+

+* For CPU, hard enforcement can be done via a cgroup CPU controller. By using an absolute minimum of a few CPU shares (i.e. 10) in the LinuxContainerExecutor we ensure there are enough CPU cycles to run the sleep process. This absolute minimum would only kick in if zero is allowed; otherwise it will never kick in, as the shares for 1 CPU are 1024.

+

+* For memory, hard enforcement is currently done by ProcfsBasedProcessTree.java; using an absolute minimum of 1 or 2 MB would take care of zero memory resources. Again, this absolute minimum would only kick in if zero is allowed; otherwise it will never kick in, as the memory increment is several MB if not 1GB.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-787">YARN-787</a>.
+     Blocker sub-task reported by Alejandro Abdelnur and fixed by Alejandro Abdelnur (api)<br>
+     <b>Remove resource min from Yarn client API</b><br>
+     <blockquote>Per discussions in YARN-689 and YARN-769, we should remove the minimum from the API, as this is a scheduler-internal concern.

+</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-782">YARN-782</a>.
+     Critical improvement reported by Sandy Ryza and fixed by Sandy Ryza (nodemanager)<br>
+     <b>vcores-pcores ratio functions differently from vmem-pmem ratio in a misleading way</b><br>
+     <blockquote>The vcores-pcores ratio functions differently from the vmem-pmem ratio in the sense that the vcores-pcores ratio has an impact on allocations and the vmem-pmem ratio does not.

+

+If I double my vmem-pmem ratio, the only change that occurs is that my containers, after being scheduled, are less likely to be killed for using too much virtual memory.  But if I double my vcore-pcore ratio, my nodes will appear to the ResourceManager to contain double the amount of CPU space, which will affect scheduling decisions.

+

+The lack of consistency will exacerbate the already difficult problem of resource configuration.

+</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-781">YARN-781</a>.
+     Major sub-task reported by Devaraj Das and fixed by Jian He <br>
+     <b>Expose LOGDIR that containers should use for logging</b><br>
+     <blockquote>The LOGDIR is known. We should expose this to the container's environment.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-777">YARN-777</a>.
+     Major sub-task reported by Jian He and fixed by Jian He <br>
+     <b>Remove unreferenced objects from proto</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-773">YARN-773</a>.
+     Major sub-task reported by Jian He and fixed by Jian He <br>
+     <b>Move YarnRuntimeException from package api.yarn to api.yarn.exceptions</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-767">YARN-767</a>.
+     Major bug reported by Jian He and fixed by Jian He <br>
+     <b>Initialize Application status metrics when QueueMetrics is initialized</b><br>
+     <blockquote>Applications: ResourceManager.QueueMetrics.AppsSubmitted, ResourceManager.QueueMetrics.AppsRunning, ResourceManager.QueueMetrics.AppsPending, ResourceManager.QueueMetrics.AppsCompleted, ResourceManager.QueueMetrics.AppsKilled, ResourceManager.QueueMetrics.AppsFailed

+For now these metrics are created only when they are needed; we want them to be visible as soon as QueueMetrics is initialized.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-764">YARN-764</a>.
+     Major bug reported by nemon lou and fixed by nemon lou (resourcemanager)<br>
+     <b>blank Used Resources on Capacity Scheduler page </b><br>
+     <blockquote>Even when there are jobs running, Used Resources is empty on the Capacity Scheduler page for leaf queues. (I use Google Chrome on Windows 7.)

+After changing Resource.java's toString method to replace "&lt;&gt;" with "{}", this bug is fixed.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-761">YARN-761</a>.
+     Major bug reported by Vinod Kumar Vavilapalli and fixed by Zhijie Shen <br>
+     <b>TestNMClientAsync fails sometimes</b><br>
+     <blockquote>See https://builds.apache.org/job/PreCommit-YARN-Build/1101//testReport/.

+

+It passed on my machine though.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-760">YARN-760</a>.
+     Major bug reported by Sandy Ryza and fixed by Niranjan Singh (nodemanager)<br>
+     <b>NodeManager throws AvroRuntimeException on failed start</b><br>
+     <blockquote>NodeManager wraps exceptions that occur in its start method in AvroRuntimeExceptions, even though it doesn't use Avro anywhere else.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-759">YARN-759</a>.
+     Major sub-task reported by Bikas Saha and fixed by Bikas Saha <br>
+     <b>Create Command enum in AllocateResponse</b><br>
+     <blockquote>Use command enums for shutdown/resync instead of booleans.</blockquote></li>
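+<p>A small sketch of the enum-based approach described above; the enum and handler are illustrative only:</p>
+<pre>
+// Replace the separate reboot/resync booleans with a single command enum.
+public class AllocateCommandSketch {
+
+  enum Command { NONE, RESYNC, SHUTDOWN }
+
+  static void handle(Command command) {
+    switch (command) {
+      case RESYNC:
+        // re-register with the ResourceManager and resend outstanding requests
+        break;
+      case SHUTDOWN:
+        // stop this ApplicationMaster
+        break;
+      default:
+        // normal allocate response, nothing to do
+    }
+  }
+}
+</pre>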
+<li> <a href="https://issues.apache.org/jira/browse/YARN-757">YARN-757</a>.
+     Blocker bug reported by Bikas Saha and fixed by Bikas Saha <br>
+     <b>TestRMRestart failing/stuck on trunk</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-756">YARN-756</a>.
+     Major sub-task reported by Jian He and fixed by Jian He <br>
+     <b>Move PreemptionContainer/PremptionContract/PreemptionMessage/StrictPreemptionContract/PreemptionResourceRequest to api.records</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-755">YARN-755</a>.
+     Major sub-task reported by Bikas Saha and fixed by Bikas Saha <br>
+     <b>Rename AllocateResponse.reboot to AllocateResponse.resync</b><br>
+     <blockquote>For work-preserving RM restart, the AMs will resync instead of rebooting. Rebooting is an action that currently satisfies the resync requirement. Changing the name now so that it continues to make sense in the real resync case.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-753">YARN-753</a>.
+     Major sub-task reported by Jian He and fixed by Jian He <br>
+     <b>Add individual factory method for api protocol records</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-752">YARN-752</a>.
+     Major improvement reported by Sandy Ryza and fixed by Sandy Ryza (api , applications)<br>
+     <b>In AMRMClient, automatically add corresponding rack requests for requested nodes</b><br>
+     <blockquote>A ContainerRequest that includes node-level requests must also include matching rack-level requests for the racks that those nodes are on.  When a node is present without its rack, it makes sense for the client to automatically add the node's rack.</blockquote></li>
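+<p>A minimal sketch of the behaviour described above, deriving the rack for each requested node so that the matching rack-level request can be added automatically; the resolver interface is a hypothetical stand-in for YARN's topology resolution:</p>
+<pre>
+// For every node named in a request, also request the node's rack so the
+// node-level ask remains satisfiable elsewhere on the same rack.
+public final class RackRequestHelper {
+
+  /** Hypothetical host-to-rack resolver (e.g. backed by the cluster topology script). */
+  public interface HostToRack {
+    String rackOf(String host);
+  }
+
+  private RackRequestHelper() {}
+
+  public static String[] racksFor(String[] nodes, HostToRack resolver) {
+    if (nodes == null) {
+      return new String[0];
+    }
+    String[] racks = new String[nodes.length];
+    int i = 0;
+    for (String node : nodes) {
+      racks[i++] = resolver.rackOf(node);   // duplicates can be removed by the caller
+    }
+    return racks;
+  }
+}
+</pre>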
+<li> <a href="https://issues.apache.org/jira/browse/YARN-750">YARN-750</a>.
+     Major sub-task reported by Arun C Murthy and fixed by Arun C Murthy <br>
+     <b>Allow for black-listing resources in YARN API and Impl in CS</b><br>
+     <blockquote>YARN-392 and YARN-398 enhance scheduler api to allow for white-lists of resources.

+

+This jira is a companion to allow for black-listing (in CS).</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-749">YARN-749</a>.
+     Major sub-task reported by Arun C Murthy and fixed by Arun C Murthy <br>
+     <b>Rename ResourceRequest (get,set)HostName to (get,set)ResourceName</b><br>
+     <blockquote>We should rename ResourceRequest (get,set)HostName to (get,set)ResourceName since the name can be host, rack or *.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-748">YARN-748</a>.
+     Major sub-task reported by Jian He and fixed by Jian He <br>
+     <b>Move BuilderUtils from yarn-common to yarn-server-common</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-746">YARN-746</a>.
+     Major sub-task reported by Steve Loughran and fixed by Steve Loughran <br>
+     <b>rename Service.register() and Service.unregister() to registerServiceListener() &amp; unregisterServiceListener() respectively</b><br>
+     <blockquote>make it clear what you are registering on a {{Service}} by naming the methods {{registerServiceListener()}} &amp; {{unregisterServiceListener()}} respectively.

+

+This only affects a couple of production classes; {{Service.register()}} is also used in some of the lifecycle tests of YARN-530. There are no tests of {{Service.unregister()}}, which is something that could be corrected.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-742">YARN-742</a>.
+     Major bug reported by Kihwal Lee and fixed by Jason Lowe (nodemanager)<br>
+     <b>Log aggregation causes a lot of redundant setPermission calls</b><br>
+     <blockquote>In one of our clusters, namenode RPC is spending 45% of its time on serving setPermission calls. Further investigation has revealed that most calls are redundantly made on /mapred/logs/&lt;user&gt;/logs. Also mkdirs calls are made before this.

+</blockquote></li>
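+<p>A small sketch of one way to avoid the redundant calls described above: only issue setPermission when the directory's mode actually differs from what is wanted.</p>
+<pre>
+import java.io.IOException;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.permission.FsPermission;
+
+// Skip the setPermission RPC when the directory already has the desired mode.
+public final class PermissionCheckSketch {
+
+  private PermissionCheckSketch() {}
+
+  public static void ensurePermission(FileSystem fs, Path dir, FsPermission desired)
+      throws IOException {
+    FsPermission current = fs.getFileStatus(dir).getPermission();
+    if (!current.equals(desired)) {
+      fs.setPermission(dir, desired);
+    }
+  }
+}
+</pre>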
+<li> <a href="https://issues.apache.org/jira/browse/YARN-739">YARN-739</a>.
+     Major sub-task reported by Siddharth Seth and fixed by Omkar Vinit Joshi <br>
+     <b>NM startContainer should validate the NodeId</b><br>
+     <blockquote>The NM validates certain fields from the ContainerToken on a startContainer call. It should also validate the NodeId (which needs to be added to the ContainerToken).</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-737">YARN-737</a>.
+     Major sub-task reported by Jian He and fixed by Jian He <br>
+     <b>Some Exceptions no longer need to be wrapped by YarnException and can be directly thrown out after YARN-142 </b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-735">YARN-735</a>.
+     Major sub-task reported by Jian He and fixed by Jian He <br>
+     <b>Make ApplicationAttemptID, ContainerID, NodeID immutable</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-733">YARN-733</a>.
+     Major bug reported by Zhijie Shen and fixed by Zhijie Shen <br>
+     <b>TestNMClient fails occasionally</b><br>
+     <blockquote>The problem happens at:

+{code}

+        // getContainerStatus can be called after stopContainer

+        try {

+          ContainerStatus status = nmClient.getContainerStatus(

+              container.getId(), container.getNodeId(),

+              container.getContainerToken());

+          assertEquals(container.getId(), status.getContainerId());

+          assertEquals(ContainerState.RUNNING, status.getState());

+          assertTrue("" + i, status.getDiagnostics().contains(

+              "Container killed by the ApplicationMaster."));

+          assertEquals(-1000, status.getExitStatus());

+        } catch (YarnRemoteException e) {

+          fail("Exception is not expected");

+        }

+{code}

+

+NMClientImpl#stopContainer returns, but the container hasn't necessarily been stopped yet. ContainerManagerImpl implements stopContainer in an async style, so the container's status is in transition. NMClientImpl#getContainerStatus called immediately after stopContainer will get either the RUNNING status or the COMPLETE one.

+

+There is a similar problem with NMClientImpl#startContainer.

+</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-731">YARN-731</a>.
+     Major sub-task reported by Siddharth Seth and fixed by Zhijie Shen <br>
+     <b>RPCUtil.unwrapAndThrowException should unwrap remote RuntimeExceptions</b><br>
+     <blockquote>Will be required for YARN-662. Also, remote NPEs show up incorrectly for some unit tests.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-726">YARN-726</a>.
+     Critical bug reported by Siddharth Seth and fixed by Mayank Bansal <br>
+     <b>Queue, FinishTime fields broken on RM UI</b><br>
+     <blockquote>The queue shows up as "Invalid Date"

+Finish Time shows up as a Long value.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-724">YARN-724</a>.
+     Major sub-task reported by Jian He and fixed by Jian He <br>
+     <b>Move ProtoBase from api.records to api.records.impl.pb</b><br>
+     <blockquote>Simply move ProtoBase to records.impl.pb</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-720">YARN-720</a>.
+     Major sub-task reported by Siddharth Seth and fixed by Zhijie Shen <br>
+     <b>container-log4j.properties should not refer to mapreduce properties</b><br>
+     <blockquote>This refers to yarn.app.mapreduce.container.log.dir and yarn.app.mapreduce.container.log.filesize. These should either be moved into the MR codebase, or the parameters should be renamed.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-719">YARN-719</a>.
+     Major sub-task reported by Vinod Kumar Vavilapalli and fixed by Vinod Kumar Vavilapalli <br>
+     <b>Move RMIdentifier from Container to ContainerTokenIdentifier</b><br>
+     <blockquote>This needs to be done for YARN-684 to happen.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-717">YARN-717</a>.
+     Major sub-task reported by Jian He and fixed by Jian He <br>
+     <b>Copy BuilderUtil methods into token-related records</b><br>
+     <blockquote>This is separated from YARN-711, as after changing yarn.api.token from an interface to an abstract class, e.g. ClientTokenPBImpl would have to extend two classes (both TokenPBImpl and the ClientToken abstract class), which is not allowed in Java.

+

+We may remove the ClientToken/ContainerToken/DelegationToken interfaces and just use the common Token interface.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-716">YARN-716</a>.
+     Major task reported by Siddharth Seth and fixed by Siddharth Seth <br>
+     <b>Make ApplicationID immutable</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-715">YARN-715</a>.
+     Major bug reported by Siddharth Seth and fixed by Vinod Kumar Vavilapalli <br>
+     <b>TestDistributedShell and TestUnmanagedAMLauncher are failing</b><br>
+     <blockquote>Tests are timing out. Looks like this is related to YARN-617.

+{code}

+2013-05-21 17:40:23,693 ERROR [IPC Server handler 0 on 54024] containermanager.ContainerManagerImpl (ContainerManagerImpl.java:authorizeRequest(412)) - Unauthorized request to start container.

+Expected containerId: user Found: container_1369183214008_0001_01_000001

+2013-05-21 17:40:23,694 ERROR [IPC Server handler 0 on 54024] security.UserGroupInformation (UserGroupInformation.java:doAs(1492)) - PriviledgedActionException as:user (auth:SIMPLE) cause:org.apache.hado

+Expected containerId: user Found: container_1369183214008_0001_01_000001

+2013-05-21 17:40:23,695 INFO  [IPC Server handler 0 on 54024] ipc.Server (Server.java:run(1864)) - IPC Server handler 0 on 54024, call org.apache.hadoop.yarn.api.ContainerManagerPB.startContainer from 10.

+Expected containerId: user Found: container_1369183214008_0001_01_000001

+org.apache.hadoop.yarn.exceptions.YarnRemoteException: Unauthorized request to start container.

+Expected containerId: user Found: container_1369183214008_0001_01_000001

+  at org.apache.hadoop.yarn.ipc.RPCUtil.getRemoteException(RPCUtil.java:43)

+  at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.authorizeRequest(ContainerManagerImpl.java:413)

+  at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.startContainer(ContainerManagerImpl.java:440)

+  at org.apache.hadoop.yarn.api.impl.pb.service.ContainerManagerPBServiceImpl.startContainer(ContainerManagerPBServiceImpl.java:72)

+  at org.apache.hadoop.yarn.proto.ContainerManager$ContainerManagerService$2.callBlockingMethod(ContainerManager.java:83)

+  at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)

+{code}</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-714">YARN-714</a>.
+     Major sub-task reported by Omkar Vinit Joshi and fixed by Omkar Vinit Joshi <br>
+     <b>AMRM protocol changes for sending NMToken list</b><br>
+     <blockquote>An NMToken will be sent to the AM on the allocate call if:

+1) the AM doesn't already have an NMToken for the underlying NM, or

+2) the key rolled over on the RM and the AM gets a new container on the same NM.

+On the allocate call the RM will send a consolidated list of all required NMTokens.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-711">YARN-711</a>.
+     Major sub-task reported by Vinod Kumar Vavilapalli and fixed by Jian He <br>
+     <b>Copy BuilderUtil methods into individual records</b><br>
+     <blockquote>BuilderUtils is one giant utility class that has all the factory methods needed for creating records, and it is painful for users to figure out how to create records. We are better off having the factories in each record; that way users can easily create records.

+

+As a first step, we should just copy all the factory methods into individual classes, deprecate BuilderUtils and then slowly move all code off BuilderUtils.</blockquote></li>
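+<p>A minimal sketch of the per-record factory style described above, using an illustrative record type rather than an actual YARN record:</p>
+<pre>
+// Each record carries its own newInstance(...) factory instead of requiring
+// callers to go through a central BuilderUtils class.
+public abstract class ResourceSketch {
+
+  public static ResourceSketch newInstance(int memoryMB, int vCores) {
+    ResourceSketch r = new SimpleResourceSketch();   // hypothetical concrete record
+    r.setMemory(memoryMB);
+    r.setVirtualCores(vCores);
+    return r;
+  }
+
+  public abstract int getMemory();
+  public abstract void setMemory(int memoryMB);
+  public abstract int getVirtualCores();
+  public abstract void setVirtualCores(int vCores);
+
+  static class SimpleResourceSketch extends ResourceSketch {
+    private int memory;
+    private int vCores;
+    public int getMemory() { return memory; }
+    public void setMemory(int m) { memory = m; }
+    public int getVirtualCores() { return vCores; }
+    public void setVirtualCores(int v) { vCores = v; }
+  }
+}
+</pre>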
+<li> <a href="https://issues.apache.org/jira/browse/YARN-708">YARN-708</a>.
+     Major task reported by Siddharth Seth and fixed by Siddharth Seth <br>
+     <b>Move RecordFactory classes to hadoop-yarn-api, miscellaneous fixes to the interfaces</b><br>
+     <blockquote>This is required for additional changes in YARN-528. 

+Some of the interfaces could use some cleanup as well - they shouldn't be declaring YarnException (Runtime) in their signature.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-706">YARN-706</a>.
+     Major bug reported by Zhijie Shen and fixed by Zhijie Shen <br>
+     <b>Race Condition in TestFSDownload</b><br>
+     <blockquote>See the test failure in YARN-695

+

+https://builds.apache.org/job/PreCommit-YARN-Build/957//testReport/org.apache.hadoop.yarn.util/TestFSDownload/testDownloadPatternJar/</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-700">YARN-700</a>.
+     Major bug reported by Ivan Mitic and fixed by Ivan Mitic <br>
+     <b>TestInfoBlock fails on Windows because of line ending mismatch</b><br>
+     <blockquote>Exception:

+{noformat}

+Running org.apache.hadoop.yarn.webapp.view.TestInfoBlock

+Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.962 sec &lt;&lt;&lt; FAILURE!

+testMultilineInfoBlock(org.apache.hadoop.yarn.webapp.view.TestInfoBlock)  Time elapsed: 873 sec  &lt;&lt;&lt; FAILURE!

+java.lang.AssertionError: 

+	at org.junit.Assert.fail(Assert.java:91)

+	at org.junit.Assert.assertTrue(Assert.java:43)

+	at org.junit.Assert.assertTrue(Assert.java:54)

+	at org.apache.hadoop.yarn.webapp.view.TestInfoBlock.testMultilineInfoBlock(TestInfoBlock.java:79)

+	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

+	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)

+	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)

+	at java.lang.reflect.Method.invoke(Method.java:597)

+	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)

+	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)

+	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)

+	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)

+	at org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)

+{noformat}</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-695">YARN-695</a>.
+     Major sub-task reported by Zhijie Shen and fixed by Zhijie Shen <br>
+     <b>masterContainer and status are in ApplicationReportProto but not in ApplicationReport</b><br>
+     <blockquote>If masterContainer and status are no longer part of ApplicationReport, they should be removed from proto as well.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-694">YARN-694</a>.
+     Major bug reported by Omkar Vinit Joshi and fixed by Omkar Vinit Joshi <br>
+     <b>Start using NMTokens to authenticate all communication with NM</b><br>
+     <blockquote>The AM uses the NMToken to authenticate all AM-NM communication.

+The NM will validate the NMToken in the following manner:

+* If the NMToken is using the current or previous master key then the NMToken is valid. In this case the NM will update its cache with this key for the corresponding appId.

+* If the NMToken is using the master key present in the NM's cache for the AM's appId then it will be validated against that key.

+* If the NMToken is invalid then the NM will reject the AM's calls.

+

+Modifications for the ContainerToken:

+* At present RPC validates AM-NM communication based on the ContainerToken. It will be replaced with the NMToken. Also, from now on the AM will use one NMToken per NM (replacing the earlier behavior of one ContainerToken per container per NM).

+* In a secure environment, startContainer currently takes the ContainerToken from the UGI (YARN-617); after this change it will take it from the payload (Container).

+* ContainerToken will exist and it will only be used to validate the AM's container start request.</blockquote></li>
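+<p>A condensed sketch of the NM-side validation rules listed above; the key-cache interface is a placeholder for whatever structure the NM actually keeps:</p>
+<pre>
+// Accept tokens signed with the current or previous master key (remembering the
+// key for the application), otherwise only accept the key already cached for
+// that application; anything else is rejected.
+public final class NMTokenCheckSketch {
+
+  /** Hypothetical per-application key cache kept by the NodeManager. */
+  public interface AppKeyCache {
+    String get(String appId);
+    void put(String appId, String keyId);
+  }
+
+  private NMTokenCheckSketch() {}
+
+  public static boolean isValid(String tokenKeyId, String currentKey,
+      String previousKey, String appId, AppKeyCache cache) {
+    if (tokenKeyId.equals(currentKey) || tokenKeyId.equals(previousKey)) {
+      cache.put(appId, tokenKeyId);               // remember the key this app is using
+      return true;
+    }
+    return tokenKeyId.equals(cache.get(appId));   // must match the cached key, if any
+  }
+}
+</pre>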
+<li> <a href="https://issues.apache.org/jira/browse/YARN-693">YARN-693</a>.
+     Major bug reported by Omkar Vinit Joshi and fixed by Omkar Vinit Joshi <br>
+     <b>Sending NMToken to AM on allocate call</b><br>
+     <blockquote>This is part of YARN-613.

+As per the updated design, the AM will receive a per-NM NMToken in the following scenarios:

+* The AM is receiving its first container on the underlying NM.

+* The AM is receiving a container on the underlying NM after either the NM or the RM rebooted.

+** After an RM reboot, as the RM doesn't remember (persist) the information about keys issued per AM per NM, it will reissue tokens when the AM gets a new container on the underlying NM. However, the NM will still retain the older token until it receives a new one, to support long-running jobs (in a work-preserving environment).

+** After an NM reboot, the RM will delete the token information corresponding to that NM for all AMs.

+* The AM is receiving a container on the underlying NM after the NMToken master key has been rolled over on the RM side.

+In all of these cases, if the AM receives a new NMToken then it is supposed to store it for future NM communication until it receives a newer one.

+

+AMRMClient should expose these NMTokens to the client.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-692">YARN-692</a>.
+     Major bug reported by Omkar Vinit Joshi and fixed by Omkar Vinit Joshi <br>
+     <b>Creating NMToken master key on RM and sharing it with NM as a part of RM-NM heartbeat.</b><br>
+     <blockquote>This is related to YARN-613. Here we will be implementing NMToken generation on the RM side and sharing it with the NM during the RM-NM heartbeat. As a part of this JIRA the master key will only be made available to the NM, but there will be no validation done until AM-NM communication is fixed.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-690">YARN-690</a>.
+     Blocker bug reported by Daryn Sharp and fixed by Daryn Sharp (resourcemanager)<br>
+     <b>RM exits on token cancel/renew problems</b><br>
+     <blockquote>The DelegationTokenRenewer thread is critical to the RM.  When a non-IOException occurs, the thread calls System.exit to prevent the RM from running w/o the thread.  It should be exiting only on non-RuntimeExceptions.

+

+The problem is especially bad in 23 because the yarn protobuf layer converts IOExceptions into UndeclaredThrowableExceptions (RuntimeException) which causes the renewer to abort the process.  An UnknownHostException takes down the RM...</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-686">YARN-686</a>.
+     Major sub-task reported by Sandy Ryza and fixed by Sandy Ryza (api)<br>
+     <b>Flatten NodeReport</b><br>
+     <blockquote>The NodeReport returned by getClusterNodes or given to AMs in heartbeat responses includes both a NodeState (enum) and a NodeHealthStatus (object). As UNHEALTHY is already a NodeState, a separate NodeHealthStatus doesn't seem necessary. I propose eliminating NodeHealthStatus#getIsNodeHealthy and moving its two other methods, getHealthReport and getLastHealthReportTime, into NodeReport.</blockquote></li>
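+<p>A short sketch of what reading health information from the flattened NodeReport would look like, using the method names proposed above:</p>
+<pre>
+import java.util.Date;
+import org.apache.hadoop.yarn.api.records.NodeReport;
+import org.apache.hadoop.yarn.api.records.NodeState;
+
+// With the flattened report there is no separate NodeHealthStatus object to unwrap.
+public final class NodeHealthPrinter {
+
+  private NodeHealthPrinter() {}
+
+  public static void print(NodeReport report) {
+    if (report.getNodeState() == NodeState.UNHEALTHY) {
+      System.out.println("Node " + report.getNodeId() + " unhealthy: "
+          + report.getHealthReport()
+          + " (last report: " + new Date(report.getLastHealthReportTime()) + ")");
+    }
+  }
+}
+</pre>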
+<li> <a href="https://issues.apache.org/jira/browse/YARN-684">YARN-684</a>.
+     Major sub-task reported by Vinod Kumar Vavilapalli and fixed by Vinod Kumar Vavilapalli <br>
+     <b>ContainerManager.startContainer needs to only have ContainerTokenIdentifier instead of the whole Container</b><br>
+     <blockquote>The NM only needs the token, the whole Container is unnecessary.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-663">YARN-663</a>.
+     Major sub-task reported by Xuan Gong and fixed by Xuan Gong <br>
+     <b>Change ResourceTracker API and LocalizationProtocol API to throw YarnRemoteException and IOException</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-660">YARN-660</a>.
+     Major sub-task reported by Bikas Saha and fixed by Bikas Saha <br>
+     <b>Improve AMRMClient with matching requests</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-655">YARN-655</a>.
+     Major bug reported by Sandy Ryza and fixed by Sandy Ryza (scheduler)<br>
+     <b>Fair scheduler metrics should subtract allocated memory from available memory</b><br>
+     <blockquote>In the scheduler web UI, cluster metrics reports that the "Memory Total" goes up when an application is allocated resources.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-651">YARN-651</a>.
+     Major sub-task reported by Xuan Gong and fixed by Xuan Gong <br>
+     <b>Change ContainerManagerPBClientImpl and RMAdminProtocolPBClientImpl to throw IOException and YarnRemoteException</b><br>
+     <blockquote>YARN-632 and YARN-633 change the RMAdmin and ContainerManager APIs to throw YarnRemoteException and IOException. RMAdminProtocolPBClientImpl and ContainerManagerPBClientImpl should make the same changes.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-648">YARN-648</a>.
+     Major bug reported by Karthik Kambatla and fixed by Karthik Kambatla (scheduler)<br>
+     <b>FS: Add documentation for pluggable policy</b><br>
+     <blockquote>YARN-469 and YARN-482 make the scheduling policy in FS pluggable. Need to add documentation on how to use this.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-646">YARN-646</a>.
+     Major bug reported by Dapeng Sun and fixed by Dapeng Sun (documentation)<br>
+     <b>Some issues in Fair Scheduler's document</b><br>
+     <blockquote>Issues are found in the doc page for Fair Scheduler http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html:

+1. In the section &#8220;Configuration&#8221;, it contains two properties named &#8220;yarn.scheduler.fair.minimum-allocation-mb&#8221;; the second one should be &#8220;yarn.scheduler.fair.maximum-allocation-mb&#8221;.

+2. In the section &#8220;Allocation file format&#8221;, the document says &#8220;The format contains three types of elements&#8221;, but it then lists four types of elements.

+</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-645">YARN-645</a>.
+     Major bug reported by Jian He and fixed by Jian He <br>
+     <b>Move RMDelegationTokenSecretManager from yarn-server-common to yarn-server-resourcemanager</b><br>
+     <blockquote>RMDelegationTokenSecretManager is specific to the ResourceManager and should not belong to server-common.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-642">YARN-642</a>.
+     Major bug reported by Sandy Ryza and fixed by Sandy Ryza (api , resourcemanager)<br>
+     <b>Fix up /nodes REST API to have 1 param and be consistent with the Java API</b><br>
+     <blockquote>The code behind the /nodes RM REST API is unnecessarily muddled, logs the same misspelled INFO message repeatedly, and does not return unhealthy nodes, even when asked.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-639">YARN-639</a>.
+     Major bug reported by Zhijie Shen and fixed by Zhijie Shen (applications/distributed-shell)<br>
+     <b>Make AM of Distributed Shell Use NMClient</b><br>
+     <blockquote>YARN-422 adds NMClient. AM of Distributed Shell should use it instead of using ContainerManager directly.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-638">YARN-638</a>.
+     Major sub-task reported by Jian He and fixed by Jian He (resourcemanager)<br>
+     <b>Restore RMDelegationTokens after RM Restart</b><br>
+     <blockquote>This was missed in YARN-581. After an RM restart, RMDelegationTokens need to be added both to the DelegationTokenRenewer (addressed in YARN-581) and to the delegationTokenSecretManager.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-637">YARN-637</a>.
+     Major bug reported by Karthik Kambatla and fixed by Karthik Kambatla (scheduler)<br>
+     <b>FS: maxAssign is not honored</b><br>
+     <blockquote>maxAssign limits the number of containers that can be assigned in a single heartbeat. Currently, FS doesn't keep track of number of assigned containers to check this.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-635">YARN-635</a>.
+     Major sub-task reported by Xuan Gong and fixed by Siddharth Seth <br>
+     <b>Rename YarnRemoteException to YarnException</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-634">YARN-634</a>.
+     Major sub-task reported by Siddharth Seth and fixed by Siddharth Seth <br>
+     <b>Make YarnRemoteException not backed by PB and introduce a SerializedException</b><br>
+     <blockquote>LocalizationProtocol sends an exception over the wire. This currently uses YarnRemoteException. Post YARN-627, this needs to be changed and a new serialized exception is required.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-633">YARN-633</a>.
+     Major sub-task reported by Xuan Gong and fixed by Xuan Gong <br>
+     <b>Change RMAdminProtocol api to throw IOException and YarnRemoteException</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-632">YARN-632</a>.
+     Major sub-task reported by Xuan Gong and fixed by Xuan Gong <br>
+     <b>Change ContainerManager api to throw IOException and YarnRemoteException</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-631">YARN-631</a>.
+     Major sub-task reported by Xuan Gong and fixed by Xuan Gong <br>
+     <b>Change ClientRMProtocol api to throw IOException and YarnRemoteException</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-630">YARN-630</a>.
+     Major sub-task reported by Xuan Gong and fixed by Xuan Gong <br>
+     <b>Change AMRMProtocol api to throw IOException and YarnRemoteException</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-629">YARN-629</a>.
+     Major sub-task reported by Xuan Gong and fixed by Xuan Gong <br>
+     <b>Make YarnRemoteException not be rooted at IOException</b><br>
+     <blockquote>After HADOOP-9343, it should be possible for YarnException to not be rooted at IOException</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-628">YARN-628</a>.
+     Major sub-task reported by Siddharth Seth and fixed by Siddharth Seth <br>
+     <b>Fix YarnException unwrapping</b><br>
+     <blockquote>Unwrapping of YarnRemoteExceptions (currently in YarnRemoteExceptionPBImpl, RPCUtil post YARN-625) is broken, and often ends up throwing UndeclaredThrowableException. This needs to be fixed.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-625">YARN-625</a>.
+     Major sub-task reported by Siddharth Seth and fixed by Siddharth Seth <br>
+     <b>Move unwrapAndThrowException from YarnRemoteExceptionPBImpl to RPCUtil</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-618">YARN-618</a>.
+     Major bug reported by Jian He and fixed by Jian He <br>
+     <b>Modify RM_INVALID_IDENTIFIER to a -ve number</b><br>
+     <blockquote>RM_INVALID_IDENTIFIER set to 0 doesn't sound right, as many tests set it to 0. Probably a negative number is what we want.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-617">YARN-617</a>.
+     Minor sub-task reported by Vinod Kumar Vavilapalli and fixed by Omkar Vinit Joshi <br>
+     <b>In unsecure mode, AM can fake resource requirements</b><br>
+     <blockquote>Without security, it is impossible to completely avoid AMs faking resources. We can at the least make it as difficult as possible by using the same container tokens and the RM-NM shared key mechanism over the unauthenticated RM-NM channel.

+

+At the minimum, this will avoid accidental bugs in AMs in unsecure mode.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-615">YARN-615</a>.
+     Major sub-task reported by Vinod Kumar Vavilapalli and fixed by Vinod Kumar Vavilapalli <br>
+     <b>ContainerLaunchContext.containerTokens should simply be called tokens</b><br>
+     <blockquote>ContainerToken is the name of the specific token that AMs use to launch containers on NMs, so we should rename CLC.containerTokens to be simply tokens.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-613">YARN-613</a>.
+     Major sub-task reported by Bikas Saha and fixed by Omkar Vinit Joshi <br>
+     <b>Create NM proxy per NM instead of per container</b><br>
+     <blockquote>Currently a new NM proxy has to be created per container since the secure authentication is using a containertoken from the container.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-610">YARN-610</a>.
+     Blocker sub-task reported by Siddharth Seth and fixed by Omkar Vinit Joshi <br>
+     <b>ClientToken (ClientToAMToken) should not be set in the environment</b><br>
+     <blockquote>Similar to YARN-579, this can be set via ContainerTokens</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-605">YARN-605</a>.
+     Major bug reported by Hitesh Shah and fixed by Hitesh Shah <br>
+     <b>Failing unit test in TestNMWebServices when using git for source control </b><br>
+     <blockquote>Failed tests:   testNode(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789

+  testNodeSlash(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789

+  testNodeDefault(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789

+  testNodeInfo(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789

+  testNodeInfoSlash(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789

+  testNodeInfoDefault(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789

+  testSingleNodesXML(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-600">YARN-600</a>.
+     Major improvement reported by Sandy Ryza and fixed by Sandy Ryza (resourcemanager , scheduler)<br>
+     <b>Hook up cgroups CPU settings to the number of virtual cores allocated</b><br>
+     <blockquote>YARN-3 introduced CPU isolation and monitoring through cgroups.  YARN-2 introduced CPU scheduling in the capacity scheduler, and YARN-326 will introduce it in the fair scheduler.  The number of virtual cores allocated to a container should be used to weight the number of cgroups CPU shares given to it.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-599">YARN-599</a>.
+     Major bug reported by Zhijie Shen and fixed by Zhijie Shen <br>
+     <b>Refactoring submitApplication in ClientRMService and RMAppManager</b><br>
+     <blockquote>Currently, ClientRMService#submitApplication calls RMAppManager#handle, and consequently calls RMAppManager#submitApplication directly, though the code looks like it is scheduling an APP_SUBMIT event.

+

+In addition, the validation code before creating an RMApp instance is not well organized. Ideally, the dynamic validation, which depends on the RM's configuration, should be put in RMAppManager#submitApplication. RMAppManager#submitApplication is called by ClientRMService#submitApplication and RMAppManager#recover. Since the configuration may be changed after the RM restarts, the validation needs to be done again even in recovery mode. Therefore, resource request validation, which is based on min/max resource limits, should be moved from ClientRMService#submitApplication to RMAppManager#submitApplication. On the other hand, the static validation, which is independent of the RM's configuration, should be put in ClientRMService#submitApplication, because it only needs to be done once, during the first submission.

+

+Furthermore, the try-catch flow in RMAppManager#submitApplication has a flaw: RMAppManager#submitApplication is not synchronized. If two application submissions with the same application ID enter the function, and one progresses to the completion of RMApp instantiation while the other progresses to the completion of putting the RMApp instance into rmContext, the slower submission will cause an exception due to the duplicate application ID. However, with the current code flow, the exception will cause the RMApp instance already in rmContext (belonging to the faster submission) to be rejected.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-598">YARN-598</a>.
+     Major improvement reported by Sandy Ryza and fixed by Sandy Ryza (resourcemanager , scheduler)<br>
+     <b>Add virtual cores to queue metrics</b><br>
+     <blockquote>QueueMetrics includes allocatedMB, availableMB, pendingMB, reservedMB.  It should have equivalents for CPU.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-597">YARN-597</a>.
+     Major bug reported by Ivan Mitic and fixed by Ivan Mitic <br>
+     <b>TestFSDownload fails on Windows because of dependencies on tar/gzip/jar tools</b><br>
+     <blockquote>{{testDownloadArchive}}, {{testDownloadPatternJar}} and {{testDownloadArchiveZip}} fail with a similar Shell ExitCodeException:

+

+{code}

+testDownloadArchiveZip(org.apache.hadoop.yarn.util.TestFSDownload)  Time elapsed: 480 sec  &lt;&lt;&lt; ERROR!

+org.apache.hadoop.util.Shell$ExitCodeException: bash: line 0: cd: /D:/svn/t/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/TestFSDownload: No such file or directory

+gzip: 1: No such file or directory

+

+	at org.apache.hadoop.util.Shell.runCommand(Shell.java:377)

+	at org.apache.hadoop.util.Shell.run(Shell.java:292)

+	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:497)

+	at org.apache.hadoop.yarn.util.TestFSDownload.createZipFile(TestFSDownload.java:225)

+	at org.apache.hadoop.yarn.util.TestFSDownload.testDownloadArchiveZip(TestFSDownload.java:503)

+{code}</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-595">YARN-595</a>.
+     Major sub-task reported by Sandy Ryza and fixed by Sandy Ryza (scheduler)<br>
+     <b>Refactor fair scheduler to use common Resources</b><br>
+     <blockquote>resourcemanager.fair and resourcemanager.resources have two copies of basically the same code for operations on Resource objects</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-594">YARN-594</a>.
+     Major bug reported by Jian He and fixed by Jian He <br>
+     <b>Update test and add comments in YARN-534</b><br>
+     <blockquote>This jira is simply to add some comments in the patch YARN-534 and update the test case</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-593">YARN-593</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (nodemanager)<br>
+     <b>container launch on Windows does not correctly populate classpath with new process's environment variables and localized resources</b><br>
+     <blockquote>On Windows, we must bundle the classpath of a launched container in an intermediate jar with a manifest.  Currently, this logic incorrectly uses the nodemanager process's environment variables for substitution.  Instead, it needs to use the new environment for the launched process.  Also, the bundled classpath is missing some localized resources for directories, due to a quirk in the way {{File#toURI}} decides whether or not to append a trailing '/'.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-591">YARN-591</a>.
+     Major sub-task reported by Vinod Kumar Vavilapalli and fixed by Vinod Kumar Vavilapalli <br>
+     <b>RM recovery related records do not belong to the API</b><br>
+     <blockquote>We need to move ApplicationStateData and ApplicationAttemptStateData into the resourcemanager module. They are not part of the public API.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-590">YARN-590</a>.
+     Major improvement reported by Vinod Kumar Vavilapalli and fixed by Mayank Bansal <br>
+     <b>Add an optional message to RegisterNodeManagerResponse as to why NM is being asked to resync or shutdown</b><br>
+     <blockquote>We should log such a message in the NM itself. This helps in debugging issues on the NM directly, instead of distributed debugging between RM and NM, when such an action is received from the RM.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-586">YARN-586</a>.
+     Trivial bug reported by Zhijie Shen and fixed by Zhijie Shen <br>
+     <b>Typo in ApplicationSubmissionContext#setApplicationId</b><br>
+     <blockquote>The parameter should be applicationId instead of appplicationId</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-585">YARN-585</a>.
+     Major bug reported by Zhijie Shen and fixed by Zhijie Shen <br>
+     <b>TestFairScheduler#testNotAllowSubmitApplication is broken due to YARN-514</b><br>
+     <blockquote>TestFairScheduler#testNotAllowSubmitApplication is broken due to YARN-514. See the discussions in YARN-514.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-583">YARN-583</a>.
+     Major sub-task reported by Omkar Vinit Joshi and fixed by Omkar Vinit Joshi <br>
+     <b>Application cache files should be localized under local-dir/usercache/userid/appcache/appid/filecache</b><br>
+     <blockquote>Currently, application cache files are getting localized under local-dir/usercache/userid/appcache/appid/. However, they should be localized under the filecache sub-directory.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-582">YARN-582</a>.
+     Major sub-task reported by Bikas Saha and fixed by Jian He (resourcemanager)<br>
+     <b>Restore appToken and clientToken for app attempt after RM restart</b><br>
+     <blockquote>These need to be saved and restored on a per app attempt basis. This is required only when work preserving restart is implemented for secure clusters. In non-preserving restart app attempts are killed and so this does not matter.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-581">YARN-581</a>.
+     Major sub-task reported by Bikas Saha and fixed by Jian He (resourcemanager)<br>
+     <b>Test and verify that app delegation tokens are added to tokenRenewer after RM restart</b><br>
+     <blockquote>The code already saves the delegation tokens in AppSubmissionContext. Upon restart the AppSubmissionContext is used to submit the application again and so restores the delegation tokens. This jira tracks testing and verifying this functionality in a secure setup.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-579">YARN-579</a>.
+     Major sub-task reported by Vinod Kumar Vavilapalli and fixed by Vinod Kumar Vavilapalli <br>
+     <b>Make ApplicationToken part of Container's token list to help RM-restart</b><br>
+     <blockquote>Container is already persisted for helping RM restart. Instead of explicitly setting ApplicationToken in AM's env, if we change it to be in Container, we can avoid env and can also help restart.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-578">YARN-578</a>.
+     Major sub-task reported by Vinod Kumar Vavilapalli and fixed by Omkar Vinit Joshi (nodemanager)<br>
+     <b>NodeManager should use SecureIOUtils for serving and aggregating logs</b><br>
+     <blockquote>Log servlets for serving logs and the ShuffleService for serving intermediate outputs both should use SecureIOUtils for avoiding symlink attacks.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-577">YARN-577</a>.
+     Major sub-task reported by Hitesh Shah and fixed by Hitesh Shah <br>
+     <b>ApplicationReport does not provide progress value of application</b><br>
+     <blockquote>An application sends its progress % to the RM via AllocateRequest. This should be able to be retrieved by a client via the ApplicationReport.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-576">YARN-576</a>.
+     Major bug reported by Hitesh Shah and fixed by Kenji Kikushima <br>
+     <b>RM should not allow registrations from NMs that do not satisfy minimum scheduler allocations</b><br>
+     <blockquote>If the minimum resource allocation configured for the RM scheduler is 1 GB, the RM should drop all NMs that register with a total capacity of less than 1 GB. </blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-571">YARN-571</a>.
+     Major sub-task reported by Hitesh Shah and fixed by Omkar Vinit Joshi <br>
+     <b>User should not be part of ContainerLaunchContext</b><br>
+     <blockquote>Today, a user is expected to set the user name in the CLC when either submitting an application or launching a container from the AM. This does not make sense, as the user can be (and has been) identified by the RM as part of the RPC layer.

+

+The solution would be to move the user information into either the Container object or directly into the ContainerToken, which can then be used by the NM to launch the container. This user information would be set into the container by the RM.

+

+</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-568">YARN-568</a>.
+     Major improvement reported by Carlo Curino and fixed by Carlo Curino (scheduler)<br>
+     <b>FairScheduler: support for work-preserving preemption </b><br>
+     <blockquote>In the attached patch, we modified the FairScheduler to substitute its preemption-by-killing with a work-preserving version of preemption (followed by killing if the AMs do not respond quickly enough). This should allow preemption checking to run more often but kill less often (proper tuning to be investigated).  Depends on YARN-567 and YARN-45; related to YARN-569.

+</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-567">YARN-567</a>.
+     Major sub-task reported by Carlo Curino and fixed by Carlo Curino (resourcemanager)<br>
+     <b>RM changes to support preemption for FairScheduler and CapacityScheduler</b><br>
+     <blockquote>A common tradeoff in scheduling jobs is between keeping the cluster busy and enforcing capacity/fairness properties. The FairScheduler and CapacityScheduler take opposite stances on how to achieve this.

+

+The FairScheduler leverages task-killing to quickly reclaim resources from currently running jobs and redistribute them among new jobs, thus keeping the cluster busy but wasting useful work. The CapacityScheduler is typically tuned

+to limit the portion of the cluster used by each queue so that the likelihood of violating capacity is low, thus never wasting work, but risking keeping the cluster underutilized or having jobs wait to obtain their rightful capacity.

+

+By introducing the notion of work-preserving preemption we can remove this tradeoff.  This requires a protocol for preemption (YARN-45), and ApplicationMasters that can respond to preemption efficiently (e.g., by saving their intermediate state; this will be posted for MapReduce in a separate JIRA soon), together with a scheduler that can issue preemption requests (discussed in the separate JIRAs YARN-568 and YARN-569).

+

+The changes we track with this JIRA are common to FairScheduler and CapacityScheduler, and are mostly propagation of preemption decisions through the ApplicationMastersService.

+</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-563">YARN-563</a>.
+     Major sub-task reported by Thomas Weise and fixed by Mayank Bansal <br>
+     <b>Add application type to ApplicationReport </b><br>
+     <blockquote>This field is needed to distinguish different types of applications (app master implementations). For example, we may run applications of type XYZ in a cluster alongside MR and would like to filter applications by type.

+</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-562">YARN-562</a>.
+     Major sub-task reported by Jian He and fixed by Jian He (resourcemanager)<br>
+     <b>NM should reject containers allocated by previous RM</b><br>
+     <blockquote>It's possible that after RM shutdown, before the AM goes down, the AM still calls startContainer on the NM with containers allocated by the previous RM. When the RM comes back, the NM doesn't know whether this container launch request comes from the previous RM or the current RM. We should reject containers allocated by the previous RM.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-561">YARN-561</a>.
+     Major sub-task reported by Hitesh Shah and fixed by Xuan Gong <br>
+     <b>Nodemanager should set some key information into the environment of every container that it launches.</b><br>
+     <blockquote>Information such as containerId, nodemanager hostname, nodemanager port is not set in the environment when any container is launched. 

+

+For an AM, the RM does all of this for it but for a container launched by an application, all of the above need to be set by the ApplicationMaster. 

+

+At the minimum, container id would be a useful piece of information. If the container wishes to talk to its local NM, the nodemanager related information would also come in handy. </blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-557">YARN-557</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (applications)<br>
+     <b>TestUnmanagedAMLauncher fails on Windows</b><br>
+     <blockquote>{{TestUnmanagedAMLauncher}} fails on Windows due to attempting to run a Unix-specific command in distributed shell and use of a Unix-specific environment variable to determine username for the {{ContainerLaunchContext}}.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-553">YARN-553</a>.
+     Minor sub-task reported by Harsh J and fixed by Karthik Kambatla (client)<br>
+     <b>Have YarnClient generate a directly usable ApplicationSubmissionContext</b><br>
+     <blockquote>Right now, we're doing multiple steps to create a relevant ApplicationSubmissionContext for a pre-received GetNewApplicationResponse.

+

+{code}

+    GetNewApplicationResponse newApp = yarnClient.getNewApplication();

+    ApplicationId appId = newApp.getApplicationId();

+

+    ApplicationSubmissionContext appContext = Records.newRecord(ApplicationSubmissionContext.class);

+

+    appContext.setApplicationId(appId);

+{code}

+

+A simplified way may be to have the GetNewApplicationResponse itself provide a helper method that builds a usable ApplicationSubmissionContext for us. Something like:

+

+{code}

+GetNewApplicationResponse newApp = yarnClient.getNewApplication();

+ApplicationSubmissionContext appContext = newApp.generateApplicationSubmissionContext();

+{code}

+

+[The above method can also take an arg for the container launch spec, or perhaps pre-load defaults like min-resource, etc. in the returned object, aside from just associating the application ID automatically.]</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-549">YARN-549</a>.
+     Major sub-task reported by Zhijie Shen and fixed by Zhijie Shen <br>
+     <b>YarnClient.submitApplication should wait for application to be accepted by the RM</b><br>
+     <blockquote>Currently, when submitting an application, storeApplication will be called for recovery. However, it is a blocking API, and is likely to block concurrent application submissions. Therefore, it is good to make application submission asynchronous, and postpone storeApplication. YarnClient needs to change to wait for the whole operation to complete so that clients can be notified after the application is really submitted. YarnClient needs to wait for application to reach SUBMITTED state or beyond.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-548">YARN-548</a>.
+     Major sub-task reported by Vadim Bondarev and fixed by Vadim Bondarev <br>
+     <b>Add tests for YarnUncaughtExceptionHandler</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-547">YARN-547</a>.
+     Major sub-task reported by Omkar Vinit Joshi and fixed by Omkar Vinit Joshi <br>
+     <b>Race condition in Public / Private Localizer may result into resource getting downloaded again</b><br>
+     <blockquote>Public Localizer :

+At present when multiple containers try to request a localized resource 

+* If the resource is not present then first it is created and Resource Localization starts ( LocalizedResource is in DOWNLOADING state)

+* Now if in this state multiple ResourceRequestEvents arrive then ResourceLocalizationEvents are sent for all of them.

+

+Most of the time this does not result in a duplicate resource download, but there is a race condition. Inside ResourceLocalization (for public download), all the requests are added to a local attempts map. When a new request comes in, it is first checked against this map before a new download starts for the same resource. While the current download is in progress, the request will be in the map, so if the same resource request comes in it will be rejected (i.e. the resource is already being downloaded). However, when the current download completes, the request is removed from this local map. If a LocalizerRequestEvent for the same resource comes in after this removal, then, as it is not present in the local map, the resource will be downloaded again.

+

+PrivateLocalizer :

+Here a different but similar race condition is present.

+* Here, inside the findNextResource method call, each LocalizerRunner tries to grab a lock on the LocalizedResource. If the lock is not acquired then it will keep trying until the resource state changes to LOCALIZED. This lock will be released by the LocalizerRunner when the download completes.

+* Now if another ContainerLocalizer tries to grab the lock on a resource before the LocalizedResource state changes to LOCALIZED, then the resource will be downloaded again.

+

+In both places, the root cause is that all the threads try to acquire the lock on the resource, but the current state of the LocalizedResource is not taken into consideration.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-542">YARN-542</a>.
+     Major bug reported by Vinod Kumar Vavilapalli and fixed by Zhijie Shen <br>
+     <b>Change the default global AM max-attempts value to be not one</b><br>
+     <blockquote>Today, the global AM max-attempts is set to 1, which is a bad choice. AM max-attempts accounts for both AM-level failures and container crashes due to localization issues, lost nodes, etc. To account for AM crashes due to problems that are not caused by user code, mainly lost nodes, we want to give AMs some retries.

+

+I propose we change it to at least two. We can change it to 4 to match other retry configs.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-539">YARN-539</a>.
+     Major sub-task reported by Omkar Vinit Joshi and fixed by Omkar Vinit Joshi <br>
+     <b>LocalizedResources are leaked in memory in case resource localization fails</b><br>
+     <blockquote>If resource localization fails then the resource remains in memory and is

+1) either cleaned up the next time cache cleanup runs and there is a space crunch (if sufficient space is available in the cache, it will remain in memory), or

+2) reused if LocalizationRequest comes again for the same resource.

+

+I think when resource localization fails, that event should be sent to the LocalResourceTracker, which will then remove it from its cache.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-538">YARN-538</a>.
+     Major improvement reported by Sandy Ryza and fixed by Sandy Ryza <br>
+     <b>RM address DNS lookup can cause unnecessary slowness on every JHS page load </b><br>
+     <blockquote>When I run the job history server locally, every page load takes in the 10s of seconds.  I profiled the process and discovered that all the extra time was spent inside YarnConfiguration#getRMWebAppURL, trying to resolve 0.0.0.0 to a hostname.  When I changed my yarn.resourcemanager.address to localhost, the page load times decreased drastically.

+

+There's no reason that we need to perform this resolution on every page load.

+</blockquote></li>
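+     <blockquote>A minimal sketch of the caching idea (illustrative only; the class, field, and property handling here are assumptions, not the committed fix):
+{code}
+import org.apache.hadoop.conf.Configuration;
+
+// Memoize the resolved RM web-app address so the (possibly slow) resolution
+// happens once instead of on every JHS page load.
+public class RMWebAppUrlCacheSketch {
+  private static volatile String cachedUrl;   // resolved once, reused afterwards
+
+  public static String getRMWebAppUrl(Configuration conf) {
+    String url = cachedUrl;
+    if (url == null) {
+      // Assumed lookup; the real logic lives in YarnConfiguration#getRMWebAppURL.
+      url = conf.get("yarn.resourcemanager.webapp.address", "0.0.0.0:8088");
+      cachedUrl = url;
+    }
+    return url;
+  }
+}
+{code}
+</blockquote>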
+<li> <a href="https://issues.apache.org/jira/browse/YARN-536">YARN-536</a>.
+     Major sub-task reported by Xuan Gong and fixed by Xuan Gong <br>
+     <b>Remove ContainerStatus, ContainerState from Container api interface as they will not be called by the container object</b><br>
+     <blockquote>Remove ContainerState and ContainerStatus from the Container interface. They will not be called by the Container object.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-534">YARN-534</a>.
+     Major sub-task reported by Jian He and fixed by Jian He (resourcemanager)<br>
+     <b>AM max attempts is not checked when RM restart and try to recover attempts</b><br>
+     <blockquote>Currently, AM max attempts is only checked when the current attempt fails, to decide whether to create a new attempt. If the RM restarts before the max attempt fails, it will not clean the state store; when the RM comes back, it will retry the attempt again.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-532">YARN-532</a>.
+     Major bug reported by Siddharth Seth and fixed by Siddharth Seth <br>
+     <b>RMAdminProtocolPBClientImpl should implement Closeable</b><br>
+     <blockquote>Required for RPC.stopProxy to work. Already done in most of the other protocols. (MAPREDUCE-5117 addressing the one other protocol missing this)</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-530">YARN-530</a>.
+     Major sub-task reported by Steve Loughran and fixed by Steve Loughran <br>
+     <b>Define Service model strictly, implement AbstractService for robust subclassing, migrate yarn-common services</b><br>
+     <blockquote># Extend the YARN {{Service}} interface as discussed in YARN-117

+# Implement the changes in {{AbstractService}} and {{FilterService}}.

+# Migrate all services in yarn-common to the more robust service model, test.

+

+</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-525">YARN-525</a>.
+     Major improvement reported by Thomas Graves and fixed by Thomas Graves (capacityscheduler)<br>
+     <b>make CS node-locality-delay refreshable</b><br>
+     <blockquote>The config yarn.scheduler.capacity.node-locality-delay doesn't change when you change the value in capacity-scheduler.xml and then run yarn rmadmin -refreshQueues.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-518">YARN-518</a>.
+     Major improvement reported by Dapeng Sun and fixed by Sandy Ryza (documentation)<br>
+     <b>Fair Scheduler's document link could be added to the hadoop 2.x main doc page</b><br>
+     <blockquote>Currently the doc page for Fair Scheduler looks good and it&#8217;s here, http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html.

+It would be better to add the document link to the YARN section in the Hadoop 2.x main doc page, so that users can easily find the doc and experimentally try the Fair Scheduler, just as they can the Capacity Scheduler.

+</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-515">YARN-515</a>.
+     Blocker bug reported by Robert Joseph Evans and fixed by Robert Joseph Evans <br>
+     <b>Node Manager not getting the master key</b><br>
+     <blockquote>On the latest version of branch-2, I see the following on a secure cluster.

+

+{noformat}

+2013-03-28 19:21:06,243 [main] INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Security enabled - updating secret keys now

+2013-03-28 19:21:06,243 [main] INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Registered with ResourceManager as RM:PORT with total resource of &lt;memory:12288, vCores:16&gt;

+2013-03-28 19:21:06,244 [main] INFO org.apache.hadoop.yarn.service.AbstractService: Service:org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl is started.

+2013-03-28 19:21:06,245 [main] INFO org.apache.hadoop.yarn.service.AbstractService: Service:org.apache.hadoop.yarn.server.nodemanager.NodeManager is started.

+2013-03-28 19:21:07,257 [Node Status Updater] ERROR org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Caught exception in status-updater

+java.lang.NullPointerException

+        at org.apache.hadoop.yarn.server.security.BaseContainerTokenSecretManager.getCurrentKey(BaseContainerTokenSecretManager.java:121)

+        at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl$1.run(NodeStatusUpdaterImpl.java:407)

+{noformat}

+

+The Null pointer exception just keeps repeating and all of the nodes end up being lost.  It looks like it never gets the secret key when it registers.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-514">YARN-514</a>.
+     Major sub-task reported by Bikas Saha and fixed by Zhijie Shen (resourcemanager)<br>
+     <b>Delayed store operations should not result in RM unavailability for app submission</b><br>
+     <blockquote>Currently, app submission is the only store operation performed synchronously because the app must be stored before the request returns with success. This makes the RM susceptible to blocking all client threads on slow store operations, resulting in RM being perceived as unavailable by clients.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-512">YARN-512</a>.
+     Minor bug reported by Jason Lowe and fixed by Maysam Yabandeh (nodemanager)<br>
+     <b>Log aggregation root directory check is more expensive than it needs to be</b><br>
+     <blockquote>The log aggregation root directory check first does an {{exists}} call followed by a {{getFileStatus}} call.  That effectively stats the file twice.  It should just use {{getFileStatus}} and catch {{FileNotFoundException}} to handle the non-existent case.

+

+In addition we may consider caching the presence of the directory rather than checking it each time a node aggregates logs for an application.</blockquote></li>
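+     <blockquote>A minimal sketch of the single-stat check described above (class and method names are assumptions, not the actual patch):
+{code}
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+
+// One getFileStatus() call replaces the exists() + getFileStatus() pair,
+// with FileNotFoundException covering the non-existent case.
+public class LogAggregationRootDirCheckSketch {
+  public static boolean rootLogDirExists(FileSystem fs, Path remoteRootLogDir) throws IOException {
+    try {
+      FileStatus status = fs.getFileStatus(remoteRootLogDir);   // single stat of the directory
+      return status.isDirectory();
+    } catch (FileNotFoundException e) {
+      return false;                                             // directory does not exist
+    }
+  }
+}
+{code}
+</blockquote>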
+<li> <a href="https://issues.apache.org/jira/browse/YARN-507">YARN-507</a>.
+     Minor bug reported by Karthik Kambatla and fixed by Karthik Kambatla (scheduler)<br>
+     <b>Add interface visibility and stability annotations to FS interfaces/classes</b><br>
+     <blockquote>Many of FS classes/interfaces are missing annotations on visibility and stability.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-506">YARN-506</a>.
+     Major bug reported by Ivan Mitic and fixed by Ivan Mitic <br>
+     <b>Move to common utils FileUtil#setReadable/Writable/Executable and FileUtil#canRead/Write/Execute</b><br>
+     <blockquote>Move to common utils described in HADOOP-9413 that work well cross-platform.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-500">YARN-500</a>.
+     Major bug reported by Nishan Shetty and fixed by Kenji Kikushima (resourcemanager)<br>
+     <b>ResourceManager webapp is using next port if configured port is already in use</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-496">YARN-496</a>.
+     Minor bug reported by Sandy Ryza and fixed by Sandy Ryza (scheduler)<br>
+     <b>Fair scheduler configs are refreshed inconsistently in reinitialize</b><br>
+     <blockquote>When FairScheduler#reinitialize is called, some of the scheduler-wide configs are refreshed and others aren't.  They should all be refreshed.

+

+Ones that are refreshed: userAsDefaultQueue, nodeLocalityThreshold, rackLocalityThreshold, preemptionEnabled

+

+Ones that aren't: minimumAllocation, maximumAllocation, assignMultiple, maxAssign</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-495">YARN-495</a>.
+     Major bug reported by Jian He and fixed by Jian He <br>
+     <b>Change NM behavior of reboot to resync</b><br>
+     <blockquote>When a reboot command is sent from the RM, the node manager doesn't clean up the containers while it's stopping.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-493">YARN-493</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (nodemanager)<br>
+     <b>NodeManager job control logic flaws on Windows</b><br>
+     <blockquote>Both product and test code contain some platform-specific assumptions, such as availability of bash for executing a command in a container and signals to check existence of a process and terminate it.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-491">YARN-491</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (nodemanager)<br>
+     <b>TestContainerLogsPage fails on Windows</b><br>
+     <blockquote>{{TestContainerLogsPage}} contains some code for initializing a log directory that doesn't work correctly on Windows.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-490">YARN-490</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (applications/distributed-shell)<br>
+     <b>TestDistributedShell fails on Windows</b><br>
+     <blockquote>There are a few platform-specific assumption in distributed shell (both main code and test code) that prevent it from working correctly on Windows.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-488">YARN-488</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (nodemanager)<br>
+     <b>TestContainerManagerSecurity fails on Windows</b><br>
+     <blockquote>These tests are failing to launch containers correctly when running on Windows.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-487">YARN-487</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (nodemanager)<br>
+     <b>TestDiskFailures fails on Windows due to path mishandling</b><br>
+     <blockquote>{{TestDiskFailures#testDirFailuresOnStartup}} fails due to insertion of an extra leading '/' on the path within {{LocalDirsHandlerService}} when running on Windows.  The test assertions also fail to account for the fact that {{Path}} normalizes '\' to '/'.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-486">YARN-486</a>.
+     Major sub-task reported by Bikas Saha and fixed by Xuan Gong <br>
+     <b>Change startContainer NM API to accept Container as a parameter and make ContainerLaunchContext user land</b><br>
+     <blockquote>Currently, id, resource request, etc. need to be copied over from Container to ContainerLaunchContext. This can be brittle. Also it leads to duplication of information (such as Resource from CLC and Resource from Container, and Container.tokens). Sending Container directly to startContainer solves these problems. It also makes CLC clean by only having stuff in it that is set by the client/AM.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-485">YARN-485</a>.
+     Major bug reported by Karthik Kambatla and fixed by Karthik Kambatla <br>
+     <b>TestProcfsProcessTree#testProcessTree() doesn't wait long enough for the process to die</b><br>
+     <blockquote>TestProcfsProcessTree#testProcessTree fails occasionally with the following stack trace

+

+{noformat}

+Stack Trace:

+junit.framework.AssertionFailedError: expected:&lt;false&gt; but was:&lt;true&gt;

+&#160; &#160; &#160; &#160; at org.apache.hadoop.util.TestProcfsBasedProcessTree.testProcessTree(TestProcfsBasedProcessTree.java)

+{noformat}

+

+kill -9 is executed asynchronously; the signal is delivered when the process comes out of the kernel (sys call). Checking whether the process died immediately afterwards can fail at times.</blockquote></li>
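+     <blockquote>A minimal sketch of the kind of wait the test could use instead of asserting immediately (names and timeout are assumptions):
+{code}
+import java.io.File;
+
+// kill -9 is delivered asynchronously, so poll the procfs entry until it
+// disappears (or a timeout expires) before asserting the process is gone.
+public class WaitForProcessExitSketch {
+  public static boolean waitForExit(String pid, long timeoutMs) throws InterruptedException {
+    File procDir = new File("/proc", pid);             // exists only while the process is alive
+    long deadline = System.currentTimeMillis() + timeoutMs;
+    while (System.currentTimeMillis() < deadline) {
+      if (!procDir.exists()) {
+        return true;                                   // signal delivered, process exited
+      }
+      Thread.sleep(100);                               // give the kernel time to deliver the signal
+    }
+    return false;                                      // still alive after the timeout
+  }
+}
+{code}
+</blockquote>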
+<li> <a href="https://issues.apache.org/jira/browse/YARN-482">YARN-482</a>.
+     Major sub-task reported by Karthik Kambatla and fixed by Karthik Kambatla (scheduler)<br>
+     <b>FS: Extend SchedulingMode to intermediate queues</b><br>
+     <blockquote>FS allows setting {{SchedulingMode}} for leaf queues. Extending this to non-leaf queues allows using different kinds of fairness: e.g., root can have three child queues - fair-mem, drf-cpu-mem, drf-cpu-disk-mem taking different number of resources into account. In turn, this allows users to decide on the scheduling latency vs sophistication of the scheduling mode.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-481">YARN-481</a>.
+     Major bug reported by Chris Riccomini and fixed by Chris Riccomini (client)<br>
+     <b>Add AM Host and RPC Port to ApplicationCLI Status Output</b><br>
+     <blockquote>Hey Guys,

+

+I noticed that the ApplicationCLI is just randomly not printing some of the values in the ApplicationReport. I've added the getHost and getRpcPort. These are useful for me, since I want to make an RPC call to the AM (not the tracker call).

+

+Thanks!

+Chris</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-479">YARN-479</a>.
+     Major bug reported by Hitesh Shah and fixed by Jian He <br>
+     <b>NM retry behavior for connection to RM should be similar for lost heartbeats</b><br>
+     <blockquote>Regardless of connection loss at the start or at an intermediate point, NM's retry behavior to the RM should follow the same flow. </blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-476">YARN-476</a>.
+     Minor bug reported by Jason Lowe and fixed by Sandy Ryza <br>
+     <b>ProcfsBasedProcessTree info message confuses users</b><br>
+     <blockquote>ProcfsBasedProcessTree has a habit of emitting not-so-helpful messages such as the following:

+

+{noformat}

+2013-03-13 12:41:51,957 INFO [communication thread] org.apache.hadoop.yarn.util.ProcfsBasedProcessTree: The process 28747 may have finished in the interim.

+2013-03-13 12:41:51,958 INFO [communication thread] org.apache.hadoop.yarn.util.ProcfsBasedProcessTree: The process 28978 may have finished in the interim.

+2013-03-13 12:41:51,958 INFO [communication thread] org.apache.hadoop.yarn.util.ProcfsBasedProcessTree: The process 28979 may have finished in the interim.

+{noformat}

+

+As described in MAPREDUCE-4570, this is something that naturally occurs in the process of monitoring processes via procfs.  It's uninteresting at best and can confuse users who think it's a reason their job isn't running as expected when it appears in their logs.

+

+We should either make this DEBUG or remove it entirely.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-475">YARN-475</a>.
+     Major sub-task reported by Hitesh Shah and fixed by Hitesh Shah <br>
+     <b>Remove ApplicationConstants.AM_APP_ATTEMPT_ID_ENV as it is no longer set in an AM's environment</b><br>
+     <blockquote>AMs are expected to use ApplicationConstants.AM_CONTAINER_ID_ENV and derive the application attempt id from the container id. </blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-474">YARN-474</a>.
+     Major bug reported by Hitesh Shah and fixed by Zhijie Shen (capacityscheduler)<br>
+     <b>CapacityScheduler does not activate applications when maximum-am-resource-percent configuration is refreshed</b><br>
+     <blockquote>Submit 3 applications to a cluster where capacity scheduler limits allow only 1 running application. Modify capacity scheduler config to increase value of yarn.scheduler.capacity.maximum-am-resource-percent and invoke refresh queues. 

+

+The 2 applications not yet in running state do not get launched even though limits are increased.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-469">YARN-469</a>.
+     Major sub-task reported by Karthik Kambatla and fixed by Karthik Kambatla (scheduler)<br>
+     <b>Make scheduling mode in FS pluggable</b><br>
+     <blockquote>Currently, scheduling mode in FS is limited to Fair and FIFO. The code typically has an if condition at multiple places to determine the correct course of action.

+

+Making the scheduling mode pluggable helps in simplifying this process, particularly as we add new modes (DRF in this case).</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-468">YARN-468</a>.
+     Major sub-task reported by Aleksey Gorshkov and fixed by Aleksey Gorshkov <br>
+     <b>coverage fix for org.apache.hadoop.yarn.server.webproxy.amfilter </b><br>
+     <blockquote>coverage fix org.apache.hadoop.yarn.server.webproxy.amfilter

+

+patch YARN-468-trunk.patch for trunk, branch-2, branch-0.23</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-467">YARN-467</a>.
+     Major sub-task reported by Omkar Vinit Joshi and fixed by Omkar Vinit Joshi (nodemanager)<br>
+     <b>Jobs fail during resource localization when public distributed-cache hits unix directory limits</b><br>
+     <blockquote>If we have multiple jobs which use the distributed cache with small files, the directory limit is reached before the cache size limit, and it fails to create any new directories in the file cache (PUBLIC). The jobs start failing with the below exception.

+

+java.io.IOException: mkdir of /tmp/nm-local-dir/filecache/3901886847734194975 failed

+	at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:909)

+	at org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:143)

+	at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:189)

+	at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:706)

+	at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:703)

+	at org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2325)

+	at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:703)

+	at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:147)

+	at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:49)

+	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)

+	at java.util.concurrent.FutureTask.run(FutureTask.java:138)

+	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)

+	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)

+	at java.util.concurrent.FutureTask.run(FutureTask.java:138)

+	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

+	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

+	at java.lang.Thread.run(Thread.java:662)

+

+We need a mechanism wherein we can create a directory hierarchy and limit the number of files per directory.</blockquote></li>
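+     <blockquote>A minimal sketch of one way to spread cache entries over a directory hierarchy (illustrative only; the per-directory cap and layout are assumptions, not the committed scheme):
+{code}
+// Map a monotonically increasing local-cache id to a nested relative path so
+// that no single directory accumulates more than CAP entries.
+public class CacheDirHierarchySketch {
+  private static final int CAP = 8192;                 // assumed per-directory limit
+
+  public static String relativePathFor(long cacheId) {
+    StringBuilder path = new StringBuilder();
+    long bucket = cacheId / CAP;                       // which bucket this entry falls into
+    while (bucket > 0) {
+      path.insert(0, (bucket % CAP) + "/");            // build the hierarchy from the bottom up
+      bucket /= CAP;
+    }
+    return path.append(cacheId).toString();            // e.g. 3 -> "3", 20000 -> "2/20000"
+  }
+}
+{code}
+</blockquote>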
+<li> <a href="https://issues.apache.org/jira/browse/YARN-460">YARN-460</a>.
+     Blocker bug reported by Thomas Graves and fixed by Thomas Graves (capacityscheduler)<br>
+     <b>CS user left in list of active users for the queue even when application finished</b><br>
+     <blockquote>We have seen a user get left in the queue's list of active users even though the application was removed. This can cause everyone else in the queue to get fewer resources if using the minimum user limit percent config.

+

+</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-458">YARN-458</a>.
+     Major bug reported by Sandy Ryza and fixed by Sandy Ryza (nodemanager , resourcemanager)<br>
+     <b>YARN daemon addresses must be placed in many different configs</b><br>
+     <blockquote>The YARN resourcemanager's address is included in four different configs: yarn.resourcemanager.scheduler.address, yarn.resourcemanager.resource-tracker.address, yarn.resourcemanager.address, and yarn.resourcemanager.admin.address

+

+A new user trying to configure a cluster needs to know the names of all these four configs.

+

+The same issue exists for nodemanagers.

+

+It would be much easier if they could simply specify yarn.resourcemanager.hostname and yarn.nodemanager.hostname and have the default ports for the other addresses kick in.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-450">YARN-450</a>.
+     Major sub-task reported by Bikas Saha and fixed by Zhijie Shen <br>
+     <b>Define value for * in the scheduling protocol</b><br>
+     <blockquote>The ResourceRequest has a string field to specify node/rack locations. For the cross-rack/cluster-wide location (i.e. when there is no locality constraint) the "*" string is used everywhere. However, it's not defined anywhere, and each piece of code either defines a local constant or uses the string literal. Defining "*" in the protocol and removing the other local references from the code base will be good.</blockquote></li>
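+     <blockquote>A minimal sketch of what defining the wildcard in one place could look like (hypothetical name and placement):
+{code}
+// A single shared constant for the "no locality constraint" resource name,
+// replacing scattered "*" string literals across the code base.
+public interface ResourceNamesSketch {
+  String ANY = "*";   // cross-rack / cluster-wide, i.e. no locality constraint
+}
+{code}
+</blockquote>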
+<li> <a href="https://issues.apache.org/jira/browse/YARN-448">YARN-448</a>.
+     Major bug reported by Kihwal Lee and fixed by Kihwal Lee (nodemanager)<br>
+     <b>Remove unnecessary hflush from log aggregation</b><br>
+     <blockquote>AggregatedLogFormat#writeVersion() calls hflush() after writing the version. Calling hflush does not seem to be necessary. It can add a lot of load to hdfs in a big busy cluster.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-447">YARN-447</a>.
+     Minor improvement reported by nemon lou and fixed by nemon lou (scheduler)<br>
+     <b>applicationComparator improvement for CS</b><br>
+     <blockquote>Now the compare code is :

+return a1.getApplicationId().getId() - a2.getApplicationId().getId();

+

+Will be replaced with :

+return a1.getApplicationId().compareTo(a2.getApplicationId());

+

+This will bring some benefits:

+1. Leave the applicationId comparison logic to the ApplicationId class;

+2. In future HA mode, the cluster timestamp may change; the ApplicationId class already takes care of this condition.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-444">YARN-444</a>.
+     Major sub-task reported by Sandy Ryza and fixed by Sandy Ryza (api , applications/distributed-shell)<br>
+     <b>Move special container exit codes from YarnConfiguration to API</b><br>
+     <blockquote>YarnConfiguration currently contains the special container exit codes INVALID_CONTAINER_EXIT_STATUS = -1000, ABORTED_CONTAINER_EXIT_STATUS = -100, and DISKS_FAILED = -101.

+

+These are not really related to configuration, and YarnConfiguration should not become a place to put miscellaneous constants.

+

+Per discussion on YARN-417, appmaster writers need to be able to provide special handling for them, so it might make sense to move these to their own user-facing class.</blockquote></li>
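+     <blockquote>A minimal sketch of a user-facing holder for these codes (the class name and placement are assumptions; the values are the ones listed above):
+{code}
+// Special container exit codes gathered in one user-facing place instead of
+// living in YarnConfiguration.
+public final class ContainerExitCodesSketch {
+  public static final int INVALID_CONTAINER_EXIT_STATUS = -1000;  // container never reported a real exit status
+  public static final int ABORTED_CONTAINER_EXIT_STATUS = -100;   // container was released/aborted by the framework
+  public static final int DISKS_FAILED = -101;                    // container could not run because local disks failed
+
+  private ContainerExitCodesSketch() {}                           // constants only
+}
+{code}
+</blockquote>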
+<li> <a href="https://issues.apache.org/jira/browse/YARN-441">YARN-441</a>.
+     Major sub-task reported by Siddharth Seth and fixed by Xuan Gong <br>
+     <b>Clean up unused collection methods in various APIs</b><br>
+     <blockquote>There's a bunch of unused methods like getAskCount() and getAsk(index) in AllocateRequest, and other interfaces. These should be removed.

+

+In YARN, they were found in the following (MR will have its own set):

+AllocateRequest

+StartContainerResponse</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-440">YARN-440</a>.
+     Major sub-task reported by Siddharth Seth and fixed by Xuan Gong <br>
+     <b>Flatten RegisterNodeManagerResponse</b><br>
+     <blockquote>RegisterNodeManagerResponse has another wrapper RegistrationResponse under it, which can be removed.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-439">YARN-439</a>.
+     Major sub-task reported by Siddharth Seth and fixed by Xuan Gong <br>
+     <b>Flatten NodeHeartbeatResponse</b><br>
+     <blockquote>NodeheartbeatResponse has another wrapper HeartbeatResponse under it, which can be removed.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-426">YARN-426</a>.
+     Critical bug reported by Jason Lowe and fixed by Jason Lowe (nodemanager)<br>
+     <b>Failure to download a public resource on a node prevents further downloads of the resource from that node</b><br>
+     <blockquote>If the NM encounters an error while downloading a public resource, it fails to empty the list of request events corresponding to the resource request in {{attempts}}.  If the same public resource is subsequently requested on that node, {{PublicLocalizer.addResource}} will skip the download since it will mistakenly believe a download of that resource is already in progress.  At that point any container that requests the public resource will just hang in the {{LOCALIZING}} state.</blockquote></li>
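+     <blockquote>A minimal sketch of the cleanup shape on a failed public download (names and structure are assumptions, not the actual NodeManager code):
+{code}
+import java.util.List;
+import java.util.Map;
+
+// On a failed public download, remove the resource's pending request events so a
+// later request is not skipped as "already in progress" and can trigger a fresh download.
+public class PublicLocalizerFailureSketch {
+  public static <R, E> void onDownloadFailed(Map<R, List<E>> attempts, R failedResource) {
+    attempts.remove(failedResource);   // forget the in-flight attempt entirely
+    // Subsequent requests for failedResource will start a new download instead of
+    // leaving containers stuck in the LOCALIZING state.
+  }
+}
+{code}
+</blockquote>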
+<li> <a href="https://issues.apache.org/jira/browse/YARN-422">YARN-422</a>.
+     Major sub-task reported by Bikas Saha and fixed by Zhijie Shen <br>
+     <b>Add NM client library</b><br>
+     <blockquote>Create a simple wrapper over the ContainerManager protocol to hide the details of the protocol implementation.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-417">YARN-417</a>.
+     Major sub-task reported by Sandy Ryza and fixed by Sandy Ryza (api , applications)<br>
+     <b>Create AMRMClient wrapper that provides asynchronous callbacks</b><br>
+     <blockquote>Writing AMs would be easier for some if they did not have to handle heartbeating to the RM on their own.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-412">YARN-412</a>.
+     Minor bug reported by Roger Hoover and fixed by Roger Hoover (scheduler)<br>
+     <b>FifoScheduler incorrectly checking for node locality</b><br>
+     <blockquote>In the FifoScheduler, the assignNodeLocalContainers method is checking if the data is local to a node by searching for the nodeAddress of the node in the set of outstanding requests for the app.  This seems to be incorrect as it should be checking hostname instead.  The offending line of code is 455:

+

+application.getResourceRequest(priority, node.getRMNode().getNodeAddress());

+

+Requests are formatted by hostname (e.g. host1.foo.com) whereas node addresses are a concatenation of hostname and command port (e.g. host1.foo.com:1234)

+

+In the CapacityScheduler, it's done using hostname.  See LeafQueue.assignNodeLocalContainers, line 1129

+

+application.getResourceRequest(priority, node.getHostName());

+

+Note that this bug does not affect the actual scheduling decisions made by the FifoScheduler because even though it incorrectly determines that a request is not local to the node, it will still schedule the request immediately because it's rack-local.  However, this bug may be adversely affecting the reporting of job status by underreporting the number of tasks that were node-local.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-410">YARN-410</a>.
+     Major bug reported by Vinod Kumar Vavilapalli and fixed by Omkar Vinit Joshi <br>
+     <b>New lines in diagnostics for a failed app on the per-application page make it hard to read</b><br>
+     <blockquote>We need to fix the following issues on YARN web-UI:

+ - Remove the "Note" column from the application list. When a failure happens, this "Note" spoils the table layout.

+ - When the Application is still not running, the Tracking UI should be titled "UNASSIGNED"; for some reason it is titled "ApplicationMaster" but (correctly) links to "#".

+ - The per-application page has all the RM-related information like version, start-time etc. This must be an accidental change introduced by one of the patches.

+ - The diagnostics for a failed app on the per-application page don't retain new lines and are wrapped around, which makes them hard to read.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-406">YARN-406</a>.
+     Minor improvement reported by Hitesh Shah and fixed by Hitesh Shah <br>
+     <b>TestRackResolver fails when local network resolves "host1" to a valid host</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-400">YARN-400</a>.
+     Critical bug reported by Jason Lowe and fixed by Jason Lowe (resourcemanager)<br>
+     <b>RM can return null application resource usage report leading to NPE in client</b><br>
+     <blockquote>RMAppImpl.createAndGetApplicationReport can return a report with a null resource usage report if full access to the app is allowed but the application has no current attempt.  This leads to NPEs in client code that assumes an app report will always have at least an empty resource usage report.</blockquote></li>
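+<li> A minimal sketch of the defensive fix for YARN-400 above, with simplified stand-in types: always hand back a non-null, empty usage report when the application has no current attempt yet.<br>
+     <blockquote>{code:java}
+public class AppReportSketch {
+  static final class ResourceUsageReport {
+    final int usedContainers;
+    final long usedMemoryMB;
+    ResourceUsageReport(int containers, long memoryMB) {
+      usedContainers = containers;
+      usedMemoryMB = memoryMB;
+    }
+  }
+
+  private static final ResourceUsageReport EMPTY = new ResourceUsageReport(0, 0);
+
+  /** Never returns null, so clients that assume a report exists cannot NPE. */
+  static ResourceUsageReport usageReportFor(ResourceUsageReport fromCurrentAttempt) {
+    return fromCurrentAttempt != null ? fromCurrentAttempt : EMPTY;
+  }
+}
+{code}</blockquote></li>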
+<li> <a href="https://issues.apache.org/jira/browse/YARN-398">YARN-398</a>.
+     Major sub-task reported by Arun C Murthy and fixed by Arun C Murthy <br>
+     <b>Enhance CS to allow for white-list of resources</b><br>
+     <blockquote>Allow white-list and black-list of resources in scheduler api.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-396">YARN-396</a>.
+     Major sub-task reported by Bikas Saha and fixed by Zhijie Shen <br>
+     <b>Rationalize AllocateResponse in RM scheduler API</b><br>
+     <blockquote>AllocateResponse contains an AMResponse and a cluster node count; AMResponse in turn holds the rest of the data. Unless there is a good reason for this object structure, there should be either AMResponse or AllocateResponse, not both.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-392">YARN-392</a>.
+     Major sub-task reported by Bikas Saha and fixed by Sandy Ryza (resourcemanager)<br>
+     <b>Make it possible to specify hard locality constraints in resource requests</b><br>
+     <blockquote>Currently it's not possible to specify scheduling requests for specific nodes and nowhere else. The RM automatically relaxes locality to rack and * and assigns non-specified machines to the app. (A sketch of the intended flag follows this entry.)</blockquote></li>
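+<li> The idea in YARN-392 above, sketched with a made-up request type rather than the actual ResourceRequest API: a per-request flag that, when false for a host-level ask, forbids the scheduler from relaxing the request to rack or ANY.<br>
+     <blockquote>{code:java}
+public class HardLocalitySketch {
+  static final class Request {
+    final String resourceName;    // hostname, rack name, or "*"
+    final int containers;
+    final boolean relaxLocality;  // false: only resourceName itself is acceptable
+    Request(String resourceName, int containers, boolean relaxLocality) {
+      this.resourceName = resourceName;
+      this.containers = containers;
+      this.relaxLocality = relaxLocality;
+    }
+  }
+
+  public static void main(String[] args) {
+    // Ask for two containers strictly on host1.foo.com, with no rack/ANY fallback.
+    Request hard = new Request("host1.foo.com", 2, false);
+    System.out.println(hard.resourceName + " relaxLocality=" + hard.relaxLocality);
+  }
+}
+{code}</blockquote></li>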
+<li> <a href="https://issues.apache.org/jira/browse/YARN-391">YARN-391</a>.
+     Trivial improvement reported by Steve Loughran and fixed by Steve Loughran (nodemanager)<br>
+     <b>detabify LCEResourcesHandler classes</b><br>
+     <blockquote>The LCEResourcesHandler classes from YARN-3 have some tab characters that have snuck into the source tree. Fix this before that code starts getting branched off and it's too late.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-390">YARN-390</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (client)<br>
+     <b>ApplicationCLI and NodeCLI use hard-coded platform-specific line separator, which causes test failures on Windows</b><br>
+     <blockquote>{{ApplicationCLI}}, {{NodeCLI}}, and the corresponding test {{TestYarnCLI}} all use a hard-coded '\n' as the line separator.  This causes test failures on Windows.

+</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-387">YARN-387</a>.
+     Blocker sub-task reported by Vinod Kumar Vavilapalli and fixed by Vinod Kumar Vavilapalli <br>
+     <b>Fix inconsistent protocol naming</b><br>
+     <blockquote>We now have different and inconsistent naming schemes for various protocols. Such naming has been hard to explain to users, mainly in direct interactions at talks, presentations, and user group meetings.

+

+We should fix these before we go beta. </blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-385">YARN-385</a>.
+     Major improvement reported by Sandy Ryza and fixed by Sandy Ryza (api)<br>
+     <b>ResourceRequestPBImpl's toString() is missing location and # containers</b><br>
+     <blockquote>ResourceRequestPBImpl's toString method includes priority and resource capability, but omits location and number of containers.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-383">YARN-383</a>.
+     Minor bug reported by Hitesh Shah and fixed by Hitesh Shah <br>
+     <b>AMRMClientImpl should handle null rmClient in stop()</b><br>
+     <blockquote>2013-02-06 09:31:33,813 INFO  [Thread-2] service.CompositeService (CompositeService.java:stop(101)) - Error stopping org.apache.hadoop.yarn.client.AMRMClientImpl

+org.apache.hadoop.HadoopIllegalArgumentException: Cannot close proxy since it is null

+        at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:605)

+        at org.apache.hadoop.yarn.client.AMRMClientImpl.stop(AMRMClientImpl.java:150)

+        at org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:99)

+        at org.apache.hadoop.yarn.service.CompositeService.stop(CompositeService.java:89)

+</blockquote></li>
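+<li> The defensive pattern suggested by YARN-383 above, as a tiny hypothetical sketch: only close the proxy if it was ever created, and clear the field so a second stop() is a no-op.<br>
+     <blockquote>{code:java}
+public class GuardedStopSketch {
+  private Object rmClient;   // stand-in for the RPC proxy field
+
+  public synchronized void stop() {
+    if (rmClient != null) {
+      // The real code path would call RPC.stopProxy(rmClient) here.
+      rmClient = null;
+    }
+  }
+}
+{code}</blockquote></li>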
+<li> <a href="https://issues.apache.org/jira/browse/YARN-382">YARN-382</a>.
+     Major improvement reported by Thomas Graves and fixed by Zhijie Shen (scheduler)<br>
+     <b>SchedulerUtils improve way normalizeRequest sets the resource capabilities</b><br>
+     <blockquote>In YARN-370, we changed it from setting the capability to directly setting memory and cores:

+

+-    ask.setCapability(normalized);

++    ask.getCapability().setMemory(normalized.getMemory());

++    ask.getCapability().setVirtualCores(normalized.getVirtualCores());

+

+We did this because it is directly setting the values in the original resource object passed in when the AM gets allocated and without it the AM doesn't get the resource normalized correctly in the submission context. See YARN-370 for more details.

+

+I think we should find a better way of doing this long term: first, so we don't have to keep adding things there when new resource types are added; second, because it's a bit confusing as to what it's doing and prone to someone accidentally breaking it again in the future. Something closer to what Arun suggested in YARN-370 would be better, but we need to make sure all the places work and get some more testing on it before putting it in. (A hedged sketch of one possible shape follows this entry.)</blockquote></li>
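+<li> One hedged sketch of the longer-term shape discussed in YARN-382 above, using a simplified stand-in Resource type rather than the YARN API: normalize into a fresh object, then copy every field back into the caller's object in a single helper, so adding a new resource type means touching only that helper.<br>
+     <blockquote>{code:java}
+public class NormalizeSketch {
+  static final class Resource {
+    int memoryMB;
+    int vcores;
+  }
+
+  static Resource normalize(Resource ask, int minMemoryMB, int minVcores) {
+    Resource n = new Resource();
+    // Round memory up to the nearest multiple of the minimum allocation.
+    n.memoryMB = Math.max(minMemoryMB,
+        ((ask.memoryMB + minMemoryMB - 1) / minMemoryMB) * minMemoryMB);
+    n.vcores = Math.max(minVcores, ask.vcores);
+    return n;
+  }
+
+  /** The single place that mutates the original request object in-place. */
+  static void copyInto(Resource target, Resource normalized) {
+    target.memoryMB = normalized.memoryMB;
+    target.vcores = normalized.vcores;
+  }
+}
+{code}</blockquote></li>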
+<li> <a href="https://issues.apache.org/jira/browse/YARN-381">YARN-381</a>.
+     Minor improvement reported by Eli Collins and fixed by Sandy Ryza (documentation)<br>
+     <b>Improve FS docs</b><br>
+     <blockquote>The MR2 FS docs could use some improvements.

+

+Configuration:

+- sizebasedweight - what is the "size" here? Total memory usage?

+

+Pool properties:

+- minResources - what does min amount of aggregate memory mean given that this is not a reservation?

+- maxResources - is this a hard limit?

+- weight: How is this ratio configured? E.g., is the base 1 and are all weights relative to that?

+- schedulingMode - what is the default? Is fifo pure fifo, eg waits until all tasks for the job are finished before launching the next job?

+

+There's no mention of ACLs, even though they're supported. See the CS docs for comparison.

+

+Also there are a couple of typos worth fixing while we're at it, e.g. "finish. apps to run".

+

+Worth keeping in mind that some of these will need to be updated to reflect that resource calculators are now pluggable.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-380">YARN-380</a>.
+     Major bug reported by Thomas Graves and fixed by Omkar Vinit Joshi (client)<br>
+     <b>yarn node -status prints Last-Last-Health-Update</b><br>
+     <blockquote>I assume the Last-Last-Health-Update is a typo and it should just be Last-Health-Update.

+

+

+$ yarn node -status foo.com:8041

+Node Report : 

+        Node-Id : foo.com:8041

+        Rack : /10.10.10.0

+        Node-State : RUNNING

+        Node-Http-Address : foo.com:8042

+        Health-Status(isNodeHealthy) : true

+        Last-Last-Health-Update : 1360118400219

+        Health-Report : 

+        Containers : 0

+        Memory-Used : 0M

+        Memory-Capacity : 24576</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-378">YARN-378</a>.
+     Major sub-task reported by xieguiming and fixed by Zhijie Shen (client , resourcemanager)<br>
+     <b>ApplicationMaster retry times should be set by Client</b><br>
+     <blockquote>We should support different clients or users having different ApplicationMaster retry counts. That is to say, "yarn.resourcemanager.am.max-retries" should be settable by the client. (A small policy sketch follows this entry.)</blockquote></li>
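+<li> A small sketch of the policy implied by YARN-378 above (method and parameter names are illustrative): the client asks for its own AM retry count and the RM caps it at the cluster-wide maximum.<br>
+     <blockquote>{code:java}
+public class AmRetrySketch {
+  static int effectiveMaxAttempts(int requestedByClient, int clusterMax) {
+    if (requestedByClient &lt;= 0) {
+      return clusterMax;                            // client did not ask: use the RM default
+    }
+    return Math.min(requestedByClient, clusterMax); // never exceed the RM-configured limit
+  }
+
+  public static void main(String[] args) {
+    System.out.println(effectiveMaxAttempts(5, 2)); // prints 2
+  }
+}
+{code}</blockquote></li>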
+<li> <a href="https://issues.apache.org/jira/browse/YARN-377">YARN-377</a>.
+     Minor bug reported by Tsz Wo (Nicholas), SZE and fixed by Chris Nauroth <br>
+     <b>Fix TestContainersMonitor for HADOOP-9252</b><br>
+     <blockquote>HADOOP-9252 slightly changed the format of some StringUtils outputs.  It caused TestContainersMonitor to fail.

+

+Also, some methods were deprecated by HADOOP-9252.  The use of them should be replaced with the new methods.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-376">YARN-376</a>.
+     Blocker bug reported by Jason Lowe and fixed by Jason Lowe (resourcemanager)<br>
+     <b>Apps that have completed can appear as RUNNING on the NM UI</b><br>
+     <blockquote>On a busy cluster we've noticed a growing number of applications appear as RUNNING on nodemanager web pages even though the applications have long since finished.  Looking at the NM logs, it appears the RM never told the nodemanager that the application had finished.  This is also reflected in a jstack of the NM process, since many more log aggregation threads are running than one would expect from the number of actively running applications.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-365">YARN-365</a>.
+     Major sub-task reported by Siddharth Seth and fixed by Xuan Gong (resourcemanager , scheduler)<br>
+     <b>Each NM heartbeat should not generate an event for the Scheduler</b><br>
+     <blockquote>Follow up from YARN-275

+https://issues.apache.org/jira/secure/attachment/12567075/Prototype.txt</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-363">YARN-363</a>.
+     Major bug reported by Jason Lowe and fixed by Kenji Kikushima <br>
+     <b>yarn proxyserver fails to find webapps/proxy directory on startup</b><br>
+     <blockquote>Starting up the proxy server fails with this error:

+

+{noformat}

+2013-01-29 17:37:41,357 FATAL webproxy.WebAppProxy (WebAppProxy.java:start(99)) - Could not start proxy web server

+java.io.FileNotFoundException: webapps/proxy not found in CLASSPATH

+	at org.apache.hadoop.http.HttpServer.getWebAppsPath(HttpServer.java:533)

+	at org.apache.hadoop.http.HttpServer.&lt;init&gt;(HttpServer.java:225)

+	at org.apache.hadoop.http.HttpServer.&lt;init&gt;(HttpServer.java:164)

+	at org.apache.hadoop.yarn.server.webproxy.WebAppProxy.start(WebAppProxy.java:90)

+	at org.apache.hadoop.yarn.service.CompositeService.start(CompositeService.java:68)

+	at org.apache.hadoop.yarn.server.webproxy.WebAppProxyServer.main(WebAppProxyServer.java:94)

+{noformat}

+</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-362">YARN-362</a>.
+     Minor bug reported by Jason Lowe and fixed by Ravi Prakash <br>
+     <b>Unexpected extra results when using webUI table search</b><br>
+     <blockquote>When using the search box on the web UI to search for a specific task number (e.g.: "0831"), sometimes unexpected extra results are shown.  Using the web browser's built-in search-within-page does not show any hits, so these look like completely spurious results.

+

+It looks like the raw timestamp value for time columns, which is not shown in the table, is also being searched with the search box.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-345">YARN-345</a>.
+     Critical bug reported by Devaraj K and fixed by Robert Parker (nodemanager)<br>
+     <b>Many InvalidStateTransitonException errors for ApplicationImpl in Node Manager</b><br>
+     <blockquote>{code:xml}

+org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: FINISH_APPLICATION at FINISHED

+	at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:301)

+	at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)

+	at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)

+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:398)

+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:58)

+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher.handle(ContainerManagerImpl.java:520)

+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher.handle(ContainerManagerImpl.java:512)

+	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:126)

+	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:75)

+	at java.lang.Thread.run(Thread.java:662)

+{code}

+{code:xml}

+2013-01-17 04:03:46,726 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Can't handle this event at current state

+org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: FINISH_APPLICATION at APPLICATION_RESOURCES_CLEANINGUP

+	at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:301)

+	at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)

+	at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)

+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:398)

+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:58)

+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher.handle(ContainerManagerImpl.java:520)

+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher.handle(ContainerManagerImpl.java:512)

+	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:126)

+	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:75)

+	at java.lang.Thread.run(Thread.java:662)

+{code}

+{code:xml}

+2013-01-17 00:01:11,006 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Can't handle this event at current state

+org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: FINISH_APPLICATION at FINISHING_CONTAINERS_WAIT

+	at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:301)

+	at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)

+	at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)

+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:398)

+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:58)

+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher.handle(ContainerManagerImpl.java:520)

+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher.handle(ContainerManagerImpl.java:512)

+	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:126)

+	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:75)

+	at java.lang.Thread.run(Thread.java:662)

+{code}

+{code:xml}

+

+2013-01-17 10:56:36,975 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1358385982671_1304_01_000001 transitioned from NEW to DONE

+2013-01-17 10:56:36,975 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Can't handle this event at current state

+org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: APPLICATION_CONTAINER_FINISHED at FINISHED

+	at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:301)

+	at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)

+	at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)

+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:398)

+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:58)

+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher.handle(ContainerManagerImpl.java:520)

+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher.handle(ContainerManagerImpl.java:512)

+	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:126)

+	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:75)

+	at java.lang.Thread.run(Thread.java:662)

+2013-01-17 10:56:36,975 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1358385982671_1304 transitioned from FINISHED to null

+{code}

+{code:xml}

+

+2013-01-17 10:56:36,026 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Can't handle this event at current state

+org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: INIT_CONTAINER at FINISHED

+	at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:301)

+	at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)

+	at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)

+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:398)

+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.handle(ApplicationImpl.java:58)

+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher.handle(ContainerManagerImpl.java:520)

+	at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher.handle(ContainerManagerImpl.java:512)

+	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:126)

+	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:75)

+	at java.lang.Thread.run(Thread.java:662)

+2013-01-17 10:56:36,026 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1358385982671_1304 transitioned from FINISHED to null

+{code}

+</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-326">YARN-326</a>.
+     Major new feature reported by Sandy Ryza and fixed by Sandy Ryza (scheduler)<br>
+     <b>Add multi-resource scheduling to the fair scheduler</b><br>
+     <blockquote>With YARN-2 in, the capacity scheduler has the ability to schedule based on multiple resources, using dominant resource fairness.  The fair scheduler should be able to do multiple resource scheduling as well, also using dominant resource fairness.

+

+More details to come on how the corner cases with fair scheduler configs such as min and max resources will be handled.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-319">YARN-319</a>.
+     Major bug reported by shenhong and fixed by shenhong (resourcemanager , scheduler)<br>
+     <b>Submit a job to a queue that not allowed in fairScheduler, client will hold forever.</b><br>
+     <blockquote>When the RM uses the FairScheduler and a client submits a job to a queue that does not allow that user to submit jobs, the client will hang forever.

+</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-309">YARN-309</a>.
+     Major sub-task reported by Xuan Gong and fixed by Xuan Gong (resourcemanager)<br>
+     <b>Make RM provide heartbeat interval to NM</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-297">YARN-297</a>.
+     Major improvement reported by Arun C Murthy and fixed by Xuan Gong <br>
+     <b>Improve hashCode implementations for PB records</b><br>
+     <blockquote>As [~hsn] pointed out in YARN-2, we use very small primes in all our hashCode implementations.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-289">YARN-289</a>.
+     Major bug reported by Sandy Ryza and fixed by Sandy Ryza <br>
+     <b>Fair scheduler allows reservations that won't fit on node</b><br>
+     <blockquote>An application requests a container with 1024 MB.  It then requests a container with 2048 MB.  A node shows up with 1024 MB available.  Even if the application is the only one running, neither request will be scheduled on it.</blockquote></li>
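+<li> The missing guard described in YARN-289 above, as a hedged one-method sketch: never reserve space on a node for a request that could not fit on that node even if the node were completely empty.<br>
+     <blockquote>{code:java}
+public class ReservationFitSketch {
+  static boolean mayReserve(long requestMB, long nodeAvailableMB, long nodeTotalMB) {
+    if (requestMB &gt; nodeTotalMB) {
+      return false;                        // e.g. a 2048 MB ask on a 1024 MB node: skip it
+    }
+    return requestMB &gt; nodeAvailableMB;    // otherwise reserve only when it cannot run right now
+  }
+}
+{code}</blockquote></li>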
+<li> <a href="https://issues.apache.org/jira/browse/YARN-269">YARN-269</a>.
+     Major bug reported by Thomas Graves and fixed by Jason Lowe (resourcemanager)<br>
+     <b>Resource Manager not logging the health_check_script result when taking it out</b><br>
+     <blockquote>The ResourceManager is not logging the health_check_script result when taking a node out. This was added to the jobtracker in 1.x with MAPREDUCE-2451; we should do the same thing for the RM.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-249">YARN-249</a>.
+     Major improvement reported by Ravi Prakash and fixed by Ravi Prakash (capacityscheduler)<br>
+     <b>Capacity Scheduler web page should show list of active users per queue like it used to (in 1.x)</b><br>
+     <blockquote>On the jobtracker, the web ui showed the active users for each queue and how much resources each of those users were using. That currently isn't being displayed on the RM capacity scheduler web ui.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-237">YARN-237</a>.
+     Major improvement reported by Ravi Prakash and fixed by Jian He (resourcemanager)<br>
+     <b>Refreshing the RM page forgets how many rows I had in my Datatables</b><br>
+     <blockquote>If I choose 100 rows and then refresh the page, DataTables goes back to showing me 20 rows.

+This user preference should be stored in a cookie.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-236">YARN-236</a>.
+     Major bug reported by Jason Lowe and fixed by Jason Lowe (resourcemanager)<br>
+     <b>RM should point tracking URL to RM web page when app fails to start</b><br>
+     <blockquote>Similar to YARN-165, the RM should redirect the tracking URL to the specific app page on the RM web UI when the application fails to start.  For example, if the AM completely fails to start due to bad AM config or bad job config like invalid queuename, then the user gets the unhelpful "The requested application exited before setting a tracking URL".

+

+Usually the diagnostic string on the RM app page has something useful, so we might as well point there.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-227">YARN-227</a>.
+     Major bug reported by Jason Lowe and fixed by Jason Lowe (resourcemanager)<br>
+     <b>Application expiration difficult to debug for end-users</b><br>
+     <blockquote>When an AM attempt expires the AMLivelinessMonitor in the RM will kill the job and mark it as failed.  However there are no diagnostic messages set for the application indicating that the application failed because of expiration.  Even if the AM logs are examined, it's often not obvious that the application was externally killed.  The only evidence of what happened to the application is currently in the RM logs, and those are often not accessible by users.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-209">YARN-209</a>.
+     Major bug reported by Bikas Saha and fixed by Zhijie Shen (capacityscheduler)<br>
+     <b>Capacity scheduler doesn't trigger app-activation after adding nodes</b><br>
+     <blockquote>Say application A is submitted but at that time it does not meet the bar for activation because of resource limit settings for applications. After that if more hardware is added to the system and the application becomes valid it still remains in pending state, likely forever.

+This might be rare to hit in real life because enough NMs heartbeat to the RM before applications can get submitted. But a change in settings or heartbeat interval might make it easier to reproduce. In RM restart scenarios, this will likely hit more often if it's implemented by re-playing events and re-submitting applications to the scheduler before the RPC to NMs is activated.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-200">YARN-200</a>.
+     Major sub-task reported by Robert Joseph Evans and fixed by Ravi Prakash <br>
+     <b>yarn log does not output all needed information, and is in a binary format</b><br>
+     <blockquote>yarn logs does not output attemptid, nodename, or container-id.  Missing these makes it very difficult to look through the logs for failed containers and tie them back to actual tasks and task attempts.

+

+Also the output currently includes several binary characters.  This is OK for being machine readable, but difficult for being human readable, or even for using standard tool like grep.

+

+The help message could also be more useful to users.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-198">YARN-198</a>.
+     Minor improvement reported by Ramgopal N and fixed by Jian He (nodemanager)<br>
+     <b>If we are navigating to Nodemanager UI from Resourcemanager,then there is not link to navigate back to Resource manager</b><br>
+     <blockquote>When navigating to the NodeManager by clicking on a node link in the RM, there is no link provided on the NM to navigate back to the RM.

+ It would be good to provide such a link.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-196">YARN-196</a>.
+     Major bug reported by Ramgopal N and fixed by Xuan Gong (nodemanager)<br>
+     <b>Nodemanager should be more robust in handling connection failure  to ResourceManager when a cluster is started</b><br>
+     <blockquote>If the NM is started before the RM, the NM shuts down with the following error:

+{code}

+ERROR org.apache.hadoop.yarn.service.CompositeService: Error starting services org.apache.hadoop.yarn.server.nodemanager.NodeManager

+org.apache.avro.AvroRuntimeException: java.lang.reflect.UndeclaredThrowableException

+	at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.start(NodeStatusUpdaterImpl.java:149)

+	at org.apache.hadoop.yarn.service.CompositeService.start(CompositeService.java:68)

+	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.start(NodeManager.java:167)

+	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:242)

+Caused by: java.lang.reflect.UndeclaredThrowableException

+	at org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:66)

+	at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:182)

+	at org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.start(NodeStatusUpdaterImpl.java:145)

+	... 3 more

+Caused by: com.google.protobuf.ServiceException: java.net.ConnectException: Call From HOST-10-18-52-230/10.18.52.230 to HOST-10-18-52-250:8025 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

+	at org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:131)

+	at $Proxy23.registerNodeManager(Unknown Source)

+	at org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:59)

+	... 5 more

+Caused by: java.net.ConnectException: Call From HOST-10-18-52-230/10.18.52.230 to HOST-10-18-52-250:8025 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

+	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:857)

+	at org.apache.hadoop.ipc.Client.call(Client.java:1141)

+	at org.apache.hadoop.ipc.Client.call(Client.java:1100)

+	at org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:128)

+	... 7 more

+Caused by: java.net.ConnectException: Connection refused

+	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)

+	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)

+	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)

+	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:659)

+	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:469)

+	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:563)

+	at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:211)

+	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1247)

+	at org.apache.hadoop.ipc.Client.call(Client.java:1117)

+	... 9 more

+2012-01-16 15:04:13,336 WARN org.apache.hadoop.yarn.event.AsyncDispatcher: AsyncDispatcher thread interrupted

+java.lang.InterruptedException

+	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:1899)

+	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1934)

+	at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)

+	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:76)

+	at java.lang.Thread.run(Thread.java:619)

+2012-01-16 15:04:13,337 INFO org.apache.hadoop.yarn.service.AbstractService: Service:Dispatcher is stopped.

+2012-01-16 15:04:13,392 INFO org.mortbay.log: Stopped SelectChannelConnector@0.0.0.0:9999

+2012-01-16 15:04:13,493 INFO org.apache.hadoop.yarn.service.AbstractService: Service:org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer is stopped.

+2012-01-16 15:04:13,493 INFO org.apache.hadoop.ipc.Server: Stopping server on 24290

+2012-01-16 15:04:13,494 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 24290

+2012-01-16 15:04:13,495 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder

+2012-01-16 15:04:13,496 INFO org.apache.hadoop.yarn.service.AbstractService: Service:org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler is stopped.

+2012-01-16 15:04:13,496 WARN org.apache.hadoop.yarn.event.AsyncDispatcher: AsyncDispatcher thread interrupted

+java.lang.InterruptedException

+	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:1899)

+	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1934)

+	at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)

+	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:76)

+	at java.lang.Thread.run(Thread.java:619)

+{code}</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-193">YARN-193</a>.
+     Major bug reported by Hitesh Shah and fixed by Zhijie Shen (resourcemanager)<br>
+     <b>Scheduler.normalizeRequest does not account for allocation requests that exceed maximumAllocation limits </b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-142">YARN-142</a>.
+     Blocker task reported by Siddharth Seth and fixed by  <br>
+     <b>[Umbrella] Cleanup YARN APIs w.r.t exceptions</b><br>
+     <blockquote>Ref: MAPREDUCE-4067

+

+All YARN APIs currently throw YarnRemoteException.

+1) This cannot be extended in its current form.

+2) The RPC layer can throw IOExceptions. These end up showing up as UndeclaredThrowableExceptions.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-125">YARN-125</a>.
+     Minor sub-task reported by Steve Loughran and fixed by Steve Loughran <br>
+     <b>Make Yarn Client service shutdown operations robust</b><br>
+     <blockquote>Make the yarn client services more robust against being shut down while not started, or shutdown more than once, by null-checking fields before closing them, setting to null afterwards to prevent double-invocation. This is a subset of MAPREDUCE-3502</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-124">YARN-124</a>.
+     Minor sub-task reported by Steve Loughran and fixed by Steve Loughran <br>
+     <b>Make Yarn Node Manager services robust against shutdown</b><br>
+     <blockquote>Add the nodemanager bits of MAPREDUCE-3502 to shut down the NodeManager services. This is done by checking that fields are non-null before shutting down/closing etc., and setting the fields to null afterwards, to be resilient against re-entrancy.

+

+No tests other than manual review.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-123">YARN-123</a>.
+     Minor sub-task reported by Steve Loughran and fixed by Steve Loughran <br>
+     <b>Make yarn Resource Manager services robust against shutdown</b><br>
+     <blockquote>Split MAPREDUCE-3502 patches to make the RM code more resilient to being stopped more than once, or before started.

+

+This depends on MAPREDUCE-4014.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-117">YARN-117</a>.
+     Major improvement reported by Steve Loughran and fixed by Steve Loughran <br>
+     <b>Enhance YARN service model</b><br>
+     <blockquote>Having played with the YARN service model, there are some issues

+that I've identified based on past work and initial use.

+

+This JIRA issue is an overall one to cover the issues, with solutions pushed out to separate JIRAs.

+

+h2. state model prevents stopped state being entered if you could not successfully start the service.

+

+In the current lifecycle you cannot stop a service unless it was successfully started, but

+* {{init()}} may acquire resources that need to be explicitly released

+* if the {{start()}} operation fails partway through, the {{stop()}} operation may be needed to release resources.

+

+*Fix:* make {{stop()}} a valid state transition from all states and require the implementations to be able to stop safely without requiring all fields to be non null.

+

+Before anyone points out that the {{stop()}} operations assume that all fields are valid; and if called before a {{start()}} they will NPE; MAPREDUCE-3431 shows that this problem arises today, MAPREDUCE-3502 is a fix for this. It is independent of the rest of the issues in this doc but it will aid making {{stop()}} execute from all states other than "stopped".

+

+MAPREDUCE-3502 is too big a patch and needs to be broken down for easier review and take up; this can be done with issues linked to this one.

+h2. AbstractService doesn't prevent duplicate state change requests.

+

+The {{ensureState()}} checks to verify whether or not a state transition is allowed from the current state are performed in the base {{AbstractService}} class -yet subclasses tend to call this *after* their own {{init()}}, {{start()}} &amp; {{stop()}} operations. This means that these operations can be performed out of order, and even if the outcome of the call is an exception, all actions performed by the subclasses will have taken place. MAPREDUCE-3877 demonstrates this.

+

+This is a tricky one to address. In HADOOP-3128 I used a base class instead of an interface and made the {{init()}}, {{start()}} &amp; {{stop()}} methods {{final}}. These methods would do the checks, and then invoke protected inner methods, {{innerStart()}}, {{innerStop()}}, etc. It should be possible to retrofit the same behaviour to everything that extends {{AbstractService}} - something that must be done before the class is considered stable (because once the lifecycle methods are declared final, all subclasses that are out of the source tree will need fixing by the respective developers).

+

+h2. AbstractService state change doesn't defend against race conditions.

+

+There's no concurrency locks on the state transitions. Whatever fix for wrong state calls is added should correct this to prevent re-entrancy, such as {{stop()}} being called from two threads.

+

+h2.  Static methods to choreograph of lifecycle operations

+

+Helper methods to move things through lifecycles. init-&gt;start is common, stop-if-service!=null another. Some static methods can execute these, and even call {{stop()}} if {{init()}} raises an exception. These could go into a class {{ServiceOps}} in the same package. These can be used by those services that wrap other services, and help manage more robust shutdowns.

+

+h2. state transition failures are something that registered service listeners may wish to be informed of.

+

+When a state transition fails a {{RuntimeException}} can be thrown -and the service listeners are not informed as the notification point isn't reached. They may wish to know this, especially for management and diagnostics.

+

+*Fix:* extend {{ServiceStateChangeListener}} with a callback such as {{stateChangeFailed(Service service, Service.State targeted-state, RuntimeException e)}} that is invoked from the (final) state change methods in the {{AbstractService}} class (once they delegate to their inner {{innerStart()}}, {{innerStop()}} methods); make it a no-op on the existing implementations of the interface.

+

+h2. Service listener failures not handled

+

+Is this an error or not? Log-and-ignore may not be what is desired.

+

+*Proposed:* during {{stop()}} any exception by a listener is caught and discarded, to increase the likelihood of a better shutdown, but do not add try-catch clauses to the other state changes.

+

+h2. Support static listeners for all AbstractServices

+

+Add support to {{AbstractService}} that allow callers to register listeners for all instances. The existing listener interface could be used. This allows management tools to hook into the events.

+

+The static listeners would be invoked for all state changes except creation (base class shouldn't be handing out references to itself at this point).

+

+These static events could all be async, pushed through a shared {{ConcurrentLinkedQueue}}; failures logged at warn and the rest of the listeners invoked.

+

+h2. Add some example listeners for management/diagnostics

+* event to commons log for humans.

+* events for machines hooked up to the JSON logger.

+* for testing: something that can be told to fail.

+

+h2.  Services should support signal interruptibility

+

+The services would benefit from a way of shutting them down on a kill signal; this can be done via a runtime hook. It should not be automatic though, as composite services will get into a very complex state during shutdown. Better to provide a hook that lets you register/unregister services to terminate, and have the relevant {{main()}} entry points tell their root services to register themselves. (A compact lifecycle sketch in this spirit follows this entry.)</blockquote></li>
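+<li> A compact sketch in the spirit of the lifecycle changes proposed in YARN-117 above, with hypothetical names rather than the actual AbstractService code: final public lifecycle methods that enforce state transitions, tolerate duplicate and out-of-order calls, delegate to protected inner methods, and swallow listener failures so a bad listener cannot break a state change.<br>
+     <blockquote>{code:java}
+public abstract class SketchService {
+  public enum State { NOTINITED, INITED, STARTED, STOPPED }
+
+  public interface Listener {
+    void stateChanged(SketchService service, State newState);
+  }
+
+  private volatile State state = State.NOTINITED;
+  private volatile Listener listener;   // a single listener keeps the sketch short
+
+  public final synchronized void init() {
+    if (state != State.NOTINITED) {
+      return;                            // duplicate init is a no-op
+    }
+    innerInit();
+    enterState(State.INITED);
+  }
+
+  public final synchronized void start() {
+    if (state == State.STARTED) {
+      return;                            // duplicate start is a no-op
+    }
+    innerStart();                        // subclass work happens here
+    enterState(State.STARTED);
+  }
+
+  /** Legal from any state; implementations must tolerate half-initialized fields. */
+  public final synchronized void stop() {
+    if (state == State.STOPPED) {
+      return;                            // re-entrant stop is safe
+    }
+    try {
+      innerStop();
+    } finally {
+      enterState(State.STOPPED);
+    }
+  }
+
+  public void setListener(Listener l) {
+    listener = l;
+  }
+
+  protected abstract void innerInit();
+  protected abstract void innerStart();
+  protected abstract void innerStop();
+
+  private void enterState(State next) {
+    state = next;
+    Listener l = listener;
+    if (l != null) {
+      try {
+        l.stateChanged(this, next);
+      } catch (RuntimeException ignored) {
+        // A misbehaving listener should not break the state change itself.
+      }
+    }
+  }
+}
+{code}</blockquote></li>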
+<li> <a href="https://issues.apache.org/jira/browse/YARN-112">YARN-112</a>.
+     Major sub-task reported by Jason Lowe and fixed by Omkar Vinit Joshi (nodemanager)<br>
+     <b>Race in localization can cause containers to fail</b><br>
+     <blockquote>On one of our 0.23 clusters, I saw a case of two containers, corresponding to two map tasks of a MR job, that were launched almost simultaneously on the same node.  It appears they both tried to localize job.jar and job.xml at the same time.  One of the containers failed when it couldn't rename the temporary job.jar directory to its final name because the target directory wasn't empty.  Shortly afterwards the second container failed because job.xml could not be found, presumably because the first container removed it when it cleaned up.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-109">YARN-109</a>.
+     Major bug reported by Jason Lowe and fixed by Mayank Bansal (nodemanager)<br>
+     <b>.tmp file is not deleted for localized archives</b><br>
+     <blockquote>When archives are localized they are initially created as a .tmp file and unpacked from that file.  However the .tmp file is not deleted afterwards.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-101">YARN-101</a>.
+     Minor bug reported by xieguiming and fixed by Xuan Gong (nodemanager)<br>
+     <b>If  the heartbeat message loss, the nodestatus info of complete container will loss too.</b><br>
+     <blockquote>see the red color:

+

+org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.java

+

+ protected void startStatusUpdater() {

+

+    new Thread("Node Status Updater") {

+      @Override

+      @SuppressWarnings("unchecked")

+      public void run() {

+        int lastHeartBeatID = 0;

+        while (!isStopped) {

+          // Send heartbeat

+          try {

+            synchronized (heartbeatMonitor) {

+              heartbeatMonitor.wait(heartBeatInterval);

+            }

+        {color:red} 

+            // Before we send the heartbeat, we get the NodeStatus,

+            // whose method removes completed containers.

+            NodeStatus nodeStatus = getNodeStatus();

+         {color}

+            nodeStatus.setResponseId(lastHeartBeatID);

+            

+            NodeHeartbeatRequest request = recordFactory

+                .newRecordInstance(NodeHeartbeatRequest.class);

+            request.setNodeStatus(nodeStatus);   

+            {color:red} 

+

+           // But if the nodeHeartbeat fails, we've already removed the containers away to know about it. We aren't handling a nodeHeartbeat failure case here.

+            HeartbeatResponse response =

+              resourceTracker.nodeHeartbeat(request).getHeartbeatResponse();

+           {color} 

+

+            if (response.getNodeAction() == NodeAction.SHUTDOWN) {

+              LOG

+                  .info("Recieved SHUTDOWN signal from Resourcemanager as part of heartbeat," +

+                  		" hence shutting down.");

+              NodeStatusUpdaterImpl.this.stop();

+              break;

+            }

+            if (response.getNodeAction() == NodeAction.REBOOT) {

+              LOG.info("Node is out of sync with ResourceManager,"

+                  + " hence rebooting.");

+              NodeStatusUpdaterImpl.this.reboot();

+              break;

+            }

+

+            lastHeartBeatID = response.getResponseId();

+            List&lt;ContainerId&gt; containersToCleanup = response

+                .getContainersToCleanupList();

+            if (containersToCleanup.size() != 0) {

+              dispatcher.getEventHandler().handle(

+                  new CMgrCompletedContainersEvent(containersToCleanup));

+            }

+            List&lt;ApplicationId&gt; appsToCleanup =

+                response.getApplicationsToCleanupList();

+            //Only start tracking for keepAlive on FINISH_APP

+            trackAppsForKeepAlive(appsToCleanup);

+            if (appsToCleanup.size() != 0) {

+              dispatcher.getEventHandler().handle(

+                  new CMgrCompletedAppsEvent(appsToCleanup));

+            }

+          } catch (Throwable e) {

+            // TODO Better error handling. Thread can die with the rest of the

+            // NM still running.

+            LOG.error("Caught exception in status-updater", e);

+          }

+        }

+      }

+    }.start();

+  }

+

+

+

+  private NodeStatus getNodeStatus() {

+

+    NodeStatus nodeStatus = recordFactory.newRecordInstance(NodeStatus.class);

+    nodeStatus.setNodeId(this.nodeId);

+

+    int numActiveContainers = 0;

+    List&lt;ContainerStatus&gt; containersStatuses = new ArrayList&lt;ContainerStatus&gt;();

+    for (Iterator&lt;Entry&lt;ContainerId, Container&gt;&gt; i =

+        this.context.getContainers().entrySet().iterator(); i.hasNext();) {

+      Entry&lt;ContainerId, Container&gt; e = i.next();

+      ContainerId containerId = e.getKey();

+      Container container = e.getValue();

+

+      // Clone the container to send it to the RM

+      org.apache.hadoop.yarn.api.records.ContainerStatus containerStatus = 

+          container.cloneAndGetContainerStatus();

+      containersStatuses.add(containerStatus);

+      ++numActiveContainers;

+      LOG.info("Sending out status for container: " + containerStatus);

+      {color:red} 

+

+      // Here is the part that removes the completed containers.

+      if (containerStatus.getState() == ContainerState.COMPLETE) {

+        // Remove

+        i.remove();

+      {color} 

+

+        LOG.info("Removed completed container " + containerId);

+      }

+    }

+    nodeStatus.setContainersStatuses(containersStatuses);

+

+    LOG.debug(this.nodeId + " sending out status for "

+        + numActiveContainers + " containers");

+

+    NodeHealthStatus nodeHealthStatus = this.context.getNodeHealthStatus();

+    nodeHealthStatus.setHealthReport(healthChecker.getHealthReport());

+    nodeHealthStatus.setIsNodeHealthy(healthChecker.isHealthy());

+    nodeHealthStatus.setLastHealthReportTime(

+        healthChecker.getLastHealthReportTime());

+    if (LOG.isDebugEnabled()) {

+      LOG.debug("Node's health-status : " + nodeHealthStatus.getIsNodeHealthy()

+                + ", " + nodeHealthStatus.getHealthReport());

+    }

+    nodeStatus.setNodeHealthStatus(nodeHealthStatus);

+

+    List&lt;ApplicationId&gt; keepAliveAppIds = createKeepAliveApplicationList();

+    nodeStatus.setKeepAliveApplications(keepAliveAppIds);

+    

+    return nodeStatus;

+  }

+</blockquote></li>
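+<li> A hedged sketch of one possible fix shape for YARN-101 above, with simplified stand-in types rather than the real NodeStatusUpdaterImpl: keep completed-container statuses in a pending list and only drop them after the heartbeat that carried them succeeds, so a lost heartbeat cannot lose the completion report.<br>
+     <blockquote>{code:java}
+import java.util.ArrayList;
+import java.util.List;
+
+public class HeartbeatRetrySketch {
+  interface Rm {
+    void heartbeat(List&lt;String&gt; completedContainers) throws Exception;
+  }
+
+  private final List&lt;String&gt; pendingCompleted = new ArrayList&lt;String&gt;();
+
+  void reportCompleted(String containerId) {
+    pendingCompleted.add(containerId);
+  }
+
+  void heartbeatOnce(Rm rm) {
+    List&lt;String&gt; toSend = new ArrayList&lt;String&gt;(pendingCompleted);
+    try {
+      rm.heartbeat(toSend);
+      pendingCompleted.removeAll(toSend);   // acknowledged: safe to forget now
+    } catch (Exception e) {
+      // Heartbeat failed: keep the completed statuses and resend them next time.
+    }
+  }
+}
+{code}</blockquote></li>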
+<li> <a href="https://issues.apache.org/jira/browse/YARN-99">YARN-99</a>.
+     Major sub-task reported by Devaraj K and fixed by Omkar Vinit Joshi (nodemanager)<br>
+     <b>Jobs fail during resource localization when private distributed-cache hits unix directory limits</b><br>
+     <blockquote>If we have multiple jobs which use the distributed cache with many small files, the directory limit is reached before the cache size limit, and the NM fails to create any more directories in the file cache. The jobs start failing with the below exception.

+

+

+{code:xml}

+java.io.IOException: mkdir of /tmp/nm-local-dir/usercache/root/filecache/1701886847734194975 failed

+	at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:909)

+	at org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:143)

+	at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:189)

+	at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:706)

+	at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:703)

+	at org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2325)

+	at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:703)

+	at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:147)

+	at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:49)

+	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)

+	at java.util.concurrent.FutureTask.run(FutureTask.java:138)

+	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)

+	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)

+	at java.util.concurrent.FutureTask.run(FutureTask.java:138)

+	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

+	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

+	at java.lang.Thread.run(Thread.java:662)

+{code}

+

+We should have a mechanism to clean the cache files when the number of directories crosses a specified limit, just as we do for cache size.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-71">YARN-71</a>.
+     Critical bug reported by Vinod Kumar Vavilapalli and fixed by Xuan Gong (nodemanager)<br>
+     <b>Ensure/confirm that the NodeManager cleans up local-dirs on restart</b><br>
+     <blockquote>We have to make sure that NodeManagers clean up their local files on restart.

+

+It may already be working like that in which case we should have tests validating this.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-45">YARN-45</a>.
+     Major sub-task reported by Chris Douglas and fixed by Carlo Curino (resourcemanager)<br>
+     <b>Scheduler feedback to AM to release containers</b><br>
+     <blockquote>The ResourceManager strikes a balance between cluster utilization and strict enforcement of resource invariants in the cluster. Individual allocations of containers must be reclaimed- or reserved- to restore the global invariants when cluster load shifts. In some cases, the ApplicationMaster can respond to fluctuations in resource availability without losing the work already completed by that task (MAPREDUCE-4584). Supplying it with this information would be helpful for overall cluster utilization [1]. To this end, we want to establish a protocol for the RM to ask the AM to release containers.

+

+[1] http://research.yahoo.com/files/yl-2012-003.pdf</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-24">YARN-24</a>.
+     Major bug reported by Jason Lowe and fixed by Sandy Ryza (nodemanager)<br>
+     <b>Nodemanager fails to start if log aggregation enabled and namenode unavailable</b><br>
+     <blockquote>If log aggregation is enabled and the namenode is currently unavailable, the nodemanager fails to startup.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5334">MAPREDUCE-5334</a>.
+     Blocker bug reported by Vinod Kumar Vavilapalli and fixed by Vinod Kumar Vavilapalli <br>
+     <b>TestContainerLauncherImpl is failing</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5326">MAPREDUCE-5326</a>.
+     Blocker bug reported by Arun C Murthy and fixed by Zhijie Shen <br>
+     <b>Add version to shuffle header</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5319">MAPREDUCE-5319</a>.
+     Major bug reported by yeshavora and fixed by Xuan Gong <br>
+     <b>Job.xml file does not has 'user.name' property for Hadoop2</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5315">MAPREDUCE-5315</a>.
+     Critical bug reported by Mithun Radhakrishnan and fixed by Mithun Radhakrishnan (distcp)<br>
+     <b>DistCp reports success even on failure.</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5312">MAPREDUCE-5312</a>.
+     Major bug reported by Alejandro Abdelnur and fixed by Sandy Ryza <br>
+     <b>TestRMNMInfo is failing</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5310">MAPREDUCE-5310</a>.
+     Major bug reported by Alejandro Abdelnur and fixed by Alejandro Abdelnur (applicationmaster)<br>
+     <b>MRAM should not normalize allocation request capabilities</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5308">MAPREDUCE-5308</a>.
+     Major bug reported by Nathan Roberts and fixed by Nathan Roberts <br>
+     <b>Shuffling to memory can get out-of-sync when fetching multiple compressed map outputs</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5304">MAPREDUCE-5304</a>.
+     Blocker sub-task reported by Alejandro Abdelnur and fixed by Karthik Kambatla <br>
+     <b>mapreduce.Job killTask/failTask/getTaskCompletionEvents methods have incompatible signature changes</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5303">MAPREDUCE-5303</a>.
+     Major bug reported by Jian He and fixed by Jian He <br>
+     <b>Changes on MR after moving ProtoBase to package impl.pb on YARN-724</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5301">MAPREDUCE-5301</a>.
+     Major bug reported by Siddharth Seth and fixed by  <br>
+     <b>Update MR code to work with YARN-635 changes</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5300">MAPREDUCE-5300</a>.
+     Major sub-task reported by Zhijie Shen and fixed by Zhijie Shen <br>
+     <b>Two function signature changes in filecache.DistributedCache</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5299">MAPREDUCE-5299</a>.
+     Major sub-task reported by Zhijie Shen and fixed by Zhijie Shen <br>
+     <b>Mapred API: void setTaskID(TaskAttemptID) is missing in TaskCompletionEvent </b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5298">MAPREDUCE-5298</a>.
+     Major new feature reported by Steve Loughran and fixed by Steve Loughran (applicationmaster)<br>
+     <b>Move MapReduce services to YARN-117 stricter lifecycle</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5297">MAPREDUCE-5297</a>.
+     Major bug reported by Jian He and fixed by Jian He <br>
+     <b>Update MR App  since BuilderUtils is moved to yarn-server-common after YARN-748</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5296">MAPREDUCE-5296</a>.
+     Major sub-task reported by Zhijie Shen and fixed by Zhijie Shen <br>
+     <b>Mapred API: Function signature change in JobControl</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5291">MAPREDUCE-5291</a>.
+     Major bug reported by Siddharth Seth and fixed by Zhijie Shen <br>
+     <b>Change MR App to use update property names in container-log4j.properties</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5289">MAPREDUCE-5289</a>.
+     Major bug reported by Vinod Kumar Vavilapalli and fixed by Jian He <br>
+     <b>Update MR App to use Token directly after YARN-717</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5286">MAPREDUCE-5286</a>.
+     Major task reported by Siddharth Seth and fixed by Vinod Kumar Vavilapalli <br>
+     <b>startContainer call should use the ContainerToken instead of Container [YARN-684]</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5285">MAPREDUCE-5285</a>.
+     Major bug reported by Jian He and fixed by  <br>
+     <b>Update MR App to use immutable ApplicationAttemptID, ContainerID, NodeID after YARN-735</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5283">MAPREDUCE-5283</a>.
+     Major improvement reported by Sandy Ryza and fixed by Sandy Ryza (applicationmaster , test)<br>
+     <b>Over 10 different tests have near identical implementations of AppContext</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5282">MAPREDUCE-5282</a>.
+     Major bug reported by Vinod Kumar Vavilapalli and fixed by Siddharth Seth <br>
+     <b>Update MR App to use immutable ApplicationID after YARN-716</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5280">MAPREDUCE-5280</a>.
+     Major sub-task reported by Zhijie Shen and fixed by Mayank Bansal <br>
+     <b>Mapreduce API: ClusterMetrics incompatibility issues with MR1</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5275">MAPREDUCE-5275</a>.
+     Major sub-task reported by Zhijie Shen and fixed by Mayank Bansal <br>
+     <b>Mapreduce API: TokenCache incompatibility issues with MR1</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5274">MAPREDUCE-5274</a>.
+     Major sub-task reported by Zhijie Shen and fixed by Mayank Bansal <br>
+     <b>Mapreduce API: String toHex(byte[]) is removed from SecureShuffleUtils</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5273">MAPREDUCE-5273</a>.
+     Major sub-task reported by Zhijie Shen and fixed by Mayank Bansal <br>
+     <b>Protected variables are removed from CombineFileRecordReader in both mapred and mapreduce</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5270">MAPREDUCE-5270</a>.
+     Major bug reported by Jian He and fixed by Jian He <br>
+     <b>Migrate from using BuilderUtil factory methods to individual record factory method on MapReduce side</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5268">MAPREDUCE-5268</a>.
+     Major improvement reported by Jason Lowe and fixed by Karthik Kambatla (jobhistoryserver)<br>
+     <b>Improve history server startup performance</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5263">MAPREDUCE-5263</a>.
+     Major sub-task reported by Zhijie Shen and fixed by Zhijie Shen <br>
+     <b>filecache.DistributedCache incompatibility issues with MR1</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5259">MAPREDUCE-5259</a>.
+     Major bug reported by Ivan Mitic and fixed by Ivan Mitic (test)<br>
+     <b>TestTaskLog fails on Windows because of path separators mismatch</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5257">MAPREDUCE-5257</a>.
+     Major bug reported by Jason Lowe and fixed by Omkar Vinit Joshi (mr-am , mrv2)<br>
+     <b>TestContainerLauncherImpl fails</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5246">MAPREDUCE-5246</a>.
+     Major improvement reported by Mayank Bansal and fixed by Mayank Bansal <br>
+     <b>Adding application type to submission context</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5245">MAPREDUCE-5245</a>.
+     Major sub-task reported by Zhijie Shen and fixed by Zhijie Shen <br>
+     <b>A number of public static variables are removed from JobConf</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5244">MAPREDUCE-5244</a>.
+     Major sub-task reported by Zhijie Shen and fixed by Zhijie Shen <br>
+     <b>Two functions changed their visibility in JobStatus</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5240">MAPREDUCE-5240</a>.
+     Blocker bug reported by Roman Shaposhnik and fixed by Vinod Kumar Vavilapalli (mrv2)<br>
+     <b>inside of FileOutputCommitter the initialized Credentials cache appears to be empty</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5239">MAPREDUCE-5239</a>.
+     Major bug reported by Vinod Kumar Vavilapalli and fixed by Siddharth Seth <br>
+     <b>Update MR App to reflect YarnRemoteException changes after YARN-634</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5237">MAPREDUCE-5237</a>.
+     Major sub-task reported by Zhijie Shen and fixed by Zhijie Shen <br>
+     <b>ClusterStatus incompatibility issues with MR1</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5235">MAPREDUCE-5235</a>.
+     Major sub-task reported by Zhijie Shen and fixed by Mayank Bansal <br>
+     <b>mapred.Counters incompatibility issues with MR1</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5234">MAPREDUCE-5234</a>.
+     Major sub-task reported by Zhijie Shen and fixed by Mayank Bansal <br>
+     <b>Signature changes for getTaskId of TaskReport in mapred</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5233">MAPREDUCE-5233</a>.
+     Major sub-task reported by Zhijie Shen and fixed by Mayank Bansal <br>
+     <b>Functions are changed or removed from Job in jobcontrol</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5231">MAPREDUCE-5231</a>.
+     Major sub-task reported by Zhijie Shen and fixed by Zhijie Shen <br>
+     <b>Constructor of DBInputFormat.DBRecordReader in mapred is changed</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5230">MAPREDUCE-5230</a>.
+     Major sub-task reported by Zhijie Shen and fixed by Mayank Bansal <br>
+     <b>createFileSplit is removed from NLineInputFormat of mapred</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5229">MAPREDUCE-5229</a>.
+     Major sub-task reported by Zhijie Shen and fixed by Zhijie Shen <br>
+     <b>TEMP_DIR_NAME is removed from FileOutputCommitter of mapreduce</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5228">MAPREDUCE-5228</a>.
+     Major sub-task reported by Zhijie Shen and fixed by Mayank Bansal <br>
+     <b>Enum Counter is removed from FileInputFormat and FileOutputFormat of both mapred and mapreduce</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5226">MAPREDUCE-5226</a>.
+     Major bug reported by Xuan Gong and fixed by Xuan Gong <br>
+     <b>Handle exception related changes in YARN's AMRMProtocol api after YARN-630</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5222">MAPREDUCE-5222</a>.
+     Major sub-task reported by Karthik Kambatla and fixed by Karthik Kambatla <br>
+     <b>Fix JobClient incompatibilities with MR1</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5220">MAPREDUCE-5220</a>.
+     Major sub-task reported by Sandy Ryza and fixed by Zhijie Shen (client)<br>
+     <b>Mapred API: TaskCompletionEvent incompatibility issues with MR1</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5212">MAPREDUCE-5212</a>.
+     Major bug reported by Xuan Gong and fixed by Xuan Gong <br>
+     <b>Handle exception related changes in YARN's ClientRMProtocol api after YARN-631</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5209">MAPREDUCE-5209</a>.
+     Minor bug reported by Radim Kolar and fixed by Tsuyoshi OZAWA (mrv2)<br>
+     <b>ShuffleScheduler log message incorrect</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5208">MAPREDUCE-5208</a>.
+     Major bug reported by Omkar Vinit Joshi and fixed by Omkar Vinit Joshi <br>
+     <b>SpillRecord and ShuffleHandler should use SecureIOUtils for reading index file and map output</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5205">MAPREDUCE-5205</a>.
+     Blocker bug reported by Vinod Kumar Vavilapalli and fixed by Vinod Kumar Vavilapalli <br>
+     <b>Apps fail in secure cluster setup</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5204">MAPREDUCE-5204</a>.
+     Major bug reported by Xuan Gong and fixed by Xuan Gong <br>
+     <b>Handle YarnRemoteException separately from IOException in MR api </b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5199">MAPREDUCE-5199</a>.
+     Blocker sub-task reported by Vinod Kumar Vavilapalli and fixed by Daryn Sharp (security)<br>
+     <b>AppTokens file can/should be removed</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5194">MAPREDUCE-5194</a>.
+     Minor task reported by Chris Douglas and fixed by Chris Douglas (task)<br>
+     <b>Heed interrupts during Fetcher shutdown</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5193">MAPREDUCE-5193</a>.
+     Major bug reported by Aaron T. Myers and fixed by Andrew Wang (test)<br>
+     <b>A few MR tests use block sizes which are smaller than the default minimum block size</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5192">MAPREDUCE-5192</a>.
+     Minor task reported by Chris Douglas and fixed by Chris Douglas (task)<br>
+     <b>Separate TCE resolution from fetch</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5191">MAPREDUCE-5191</a>.
+     Major bug reported by Ivan Mitic and fixed by Ivan Mitic <br>
+     <b>TestQueue#testQueue fails with timeout on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5184">MAPREDUCE-5184</a>.
+     Major sub-task reported by Arun C Murthy and fixed by Zhijie Shen (documentation)<br>
+     <b>Document MR Binary Compatibility vis-a-vis hadoop-1 and hadoop-2</b><br>
+     <blockquote>Document MR Binary Compatibility vis-a-vis hadoop-1 and hadoop-2 for end-users.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5181">MAPREDUCE-5181</a>.
+     Major bug reported by Siddharth Seth and fixed by Vinod Kumar Vavilapalli (applicationmaster)<br>
+     <b>RMCommunicator should not use AMToken from the env</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5179">MAPREDUCE-5179</a>.
+     Major bug reported by Hitesh Shah and fixed by Hitesh Shah <br>
+     <b>Change TestHSWebServices to do string equal check on hadoop build version similar to YARN-605</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5178">MAPREDUCE-5178</a>.
+     Major bug reported by Hitesh Shah and fixed by Hitesh Shah <br>
+     <b>Fix use of BuilderUtils#newApplicationReport as a result of YARN-577.</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5177">MAPREDUCE-5177</a>.
+     Major bug reported by Ivan Mitic and fixed by Ivan Mitic <br>
+     <b>Move to common utils FileUtil#setReadable/Writable/Executable and FileUtil#canRead/Write/Execute</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5176">MAPREDUCE-5176</a>.
+     Major improvement reported by Carlo Curino and fixed by Carlo Curino (mrv2)<br>
+     <b>Preemptable annotations (to support preemption in MR)</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5175">MAPREDUCE-5175</a>.
+     Major bug reported by Vinod Kumar Vavilapalli and fixed by Xuan Gong <br>
+     <b>Update MR App to not set envs that will be set by NMs anyways after YARN-561</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5171">MAPREDUCE-5171</a>.
+     Major improvement reported by Sandy Ryza and fixed by Sandy Ryza (applicationmaster)<br>
+     <b>Expose blacklisted nodes from the MR AM REST API </b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5167">MAPREDUCE-5167</a>.
+     Major bug reported by Vinod Kumar Vavilapalli and fixed by Jian He <br>
+     <b>Update MR App after YARN-562</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5166">MAPREDUCE-5166</a>.
+     Blocker bug reported by Gunther Hagleitner and fixed by Sandy Ryza <br>
+     <b>ConcurrentModificationException in LocalJobRunner</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5163">MAPREDUCE-5163</a>.
+     Major bug reported by Vinod Kumar Vavilapalli and fixed by Xuan Gong <br>
+     <b>Update MR App after YARN-441</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5159">MAPREDUCE-5159</a>.
+     Major sub-task reported by Zhijie Shen and fixed by Zhijie Shen <br>
+     <b>Aggregatewordcount and aggregatewordhist in hadoop-1 examples are not binary compatible with hadoop-2 mapred.lib.aggregate</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5157">MAPREDUCE-5157</a>.
+     Major sub-task reported by Zhijie Shen and fixed by Zhijie Shen <br>
+     <b>Sort in hadoop-1 examples is not binary compatible with hadoop-2 mapred.lib</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5156">MAPREDUCE-5156</a>.
+     Blocker sub-task reported by Zhijie Shen and fixed by Zhijie Shen <br>
+     <b>Hadoop-examples-1.x.x.jar cannot run on Yarn</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5152">MAPREDUCE-5152</a>.
+     Major bug reported by Vinod Kumar Vavilapalli and fixed by Vinod Kumar Vavilapalli <br>
+     <b>MR App is not using Container from RM</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5151">MAPREDUCE-5151</a>.
+     Major bug reported by Vinod Kumar Vavilapalli and fixed by Sandy Ryza <br>
+     <b>Update MR App after YARN-444</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5147">MAPREDUCE-5147</a>.
+     Major bug reported by Robert Parker and fixed by Robert Parker (mrv2)<br>
+     <b>Maven build should create hadoop-mapreduce-client-app-VERSION.jar directly</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5146">MAPREDUCE-5146</a>.
+     Minor bug reported by Sangjin Lee and fixed by Sangjin Lee (task)<br>
+     <b>application classloader may be used too early to load classes</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5145">MAPREDUCE-5145</a>.
+     Major bug reported by Zhijie Shen and fixed by Zhijie Shen <br>
+     <b>Change default max-attempts to be more than one for MR jobs as well</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5140">MAPREDUCE-5140</a>.
+     Major bug reported by Zhijie Shen and fixed by Zhijie Shen <br>
+     <b>MR part of YARN-514</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5139">MAPREDUCE-5139</a>.
+     Major bug reported by Vinod Kumar Vavilapalli and fixed by Xuan Gong <br>
+     <b>Update MR App after YARN-486</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5138">MAPREDUCE-5138</a>.
+     Major bug reported by Vinod Kumar Vavilapalli and fixed by Omkar Vinit Joshi <br>
+     <b>Fix LocalDistributedCacheManager after YARN-112</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5137">MAPREDUCE-5137</a>.
+     Major bug reported by Thomas Graves and fixed by Thomas Graves (applicationmaster)<br>
+     <b>AM web UI: clicking on Map Task results in 500 error</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5136">MAPREDUCE-5136</a>.
+     Major bug reported by Amir Sanjar and fixed by Amir Sanjar <br>
+     <b>TestJobImpl-&gt;testJobNoTasks fails with IBM JAVA</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5129">MAPREDUCE-5129</a>.
+     Minor new feature reported by Billie Rinaldi and fixed by Billie Rinaldi <br>
+     <b>Add tag info to JH files</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5128">MAPREDUCE-5128</a>.
+     Major improvement reported by Sandy Ryza and fixed by Sandy Ryza (documentation , jobhistoryserver)<br>
+     <b>mapred-default.xml is missing a bunch of history server configs</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5113">MAPREDUCE-5113</a>.
+     Major bug reported by Sandy Ryza and fixed by Sandy Ryza <br>
+     <b>Streaming input/output types are ignored with java mapper/reducer</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5098">MAPREDUCE-5098</a>.
+     Major bug reported by Karthik Kambatla and fixed by Karthik Kambatla (contrib/gridmix)<br>
+     <b>Fix findbugs warnings in gridmix</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5086">MAPREDUCE-5086</a>.
+     Major bug reported by Jian He and fixed by Jian He <br>
+     <b>MR app master deletes staging dir when sent a reboot command from the RM</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5079">MAPREDUCE-5079</a>.
+     Critical improvement reported by Jason Lowe and fixed by Jason Lowe (mr-am)<br>
+     <b>Recovery should restore task state from job history info directly</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5078">MAPREDUCE-5078</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (client)<br>
+     <b>TestMRAppMaster fails on Windows due to mismatched path separators</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5077">MAPREDUCE-5077</a>.
+     Minor bug reported by Karthik Kambatla and fixed by Karthik Kambatla (mrv2)<br>
+     <b>Cleanup: mapreduce.util.ResourceCalculatorPlugin and related code should be removed</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5075">MAPREDUCE-5075</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (distcp)<br>
+     <b>DistCp leaks input file handles</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5069">MAPREDUCE-5069</a>.
+     Minor improvement reported by Sangjin Lee and fixed by  (mrv1 , mrv2)<br>
+     <b>add concrete common implementations of CombineFileInputFormat</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5066">MAPREDUCE-5066</a>.
+     Major bug reported by Ivan Mitic and fixed by Ivan Mitic <br>
+     <b>JobTracker should set a timeout when calling into job.end.notification.url</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5065">MAPREDUCE-5065</a>.
+     Major bug reported by Mithun Radhakrishnan and fixed by Mithun Radhakrishnan (distcp)<br>
+     <b>DistCp should skip checksum comparisons if block-sizes are different on source/target.</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5062">MAPREDUCE-5062</a>.
+     Major bug reported by Vinod Kumar Vavilapalli and fixed by Zhijie Shen <br>
+     <b>MR AM should read max-retries information from the RM</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5060">MAPREDUCE-5060</a>.
+     Critical bug reported by Robert Joseph Evans and fixed by Robert Joseph Evans <br>
+     <b>Fetch failures that time out only count against the first map task</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5059">MAPREDUCE-5059</a>.
+     Major bug reported by Jason Lowe and fixed by Omkar Vinit Joshi (jobhistoryserver , webapps)<br>
+     <b>Job overview shows average merge time larger than for any reduce attempt</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5043">MAPREDUCE-5043</a>.
+     Blocker bug reported by Jason Lowe and fixed by Jason Lowe (mr-am)<br>
+     <b>Fetch failure processing can cause AM event queue to backup and eventually OOM</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5042">MAPREDUCE-5042</a>.
+     Blocker bug reported by Jason Lowe and fixed by Jason Lowe (mr-am , security)<br>
+     <b>Reducer unable to fetch for a map task that was recovered</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5033">MAPREDUCE-5033</a>.
+     Minor improvement reported by Andrew Wang and fixed by Andrew Wang <br>
+     <b>mapred shell script should respect usage flags (--help -help -h)</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5027">MAPREDUCE-5027</a>.
+     Major bug reported by Jason Lowe and fixed by Robert Parker <br>
+     <b>Shuffle does not limit number of outstanding connections</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5015">MAPREDUCE-5015</a>.
+     Major test reported by Aleksey Gorshkov and fixed by Aleksey Gorshkov <br>
+     <b>Coverage fix for org.apache.hadoop.mapreduce.tools.CLI</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5013">MAPREDUCE-5013</a>.
+     Major bug reported by Sandy Ryza and fixed by Sandy Ryza (client)<br>
+     <b>mapred.JobStatus compatibility: MR2 missing constructors from MR1</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5009">MAPREDUCE-5009</a>.
+     Critical bug reported by Robert Parker and fixed by Robert Parker (mrv1)<br>
+     <b>Killing the Task Attempt slated for commit does not clear the value from the Task commitAttempt member</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5008">MAPREDUCE-5008</a>.
+     Major bug reported by Sandy Ryza and fixed by Sandy Ryza <br>
+     <b>Merger progress miscounts with respect to EOF_MARKER</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5007">MAPREDUCE-5007</a>.
+     Major test reported by Aleksey Gorshkov and fixed by Aleksey Gorshkov <br>
+     <b>fix coverage org.apache.hadoop.mapreduce.v2.hs</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5000">MAPREDUCE-5000</a>.
+     Critical bug reported by Jason Lowe and fixed by Jason Lowe (mr-am)<br>
+     <b>TaskImpl.getCounters() can return the counters for the wrong task attempt when task is speculating</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4994">MAPREDUCE-4994</a>.
+     Major bug reported by Sandy Ryza and fixed by Sandy Ryza (client)<br>
+     <b>-jt generic command line option does not work</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4992">MAPREDUCE-4992</a>.
+     Critical bug reported by Robert Parker and fixed by Robert Parker (mr-am)<br>
+     <b>AM hangs in RecoveryService when recovering tasks with speculative attempts</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4991">MAPREDUCE-4991</a>.
+     Major test reported by Aleksey Gorshkov and fixed by Aleksey Gorshkov <br>
+     <b>coverage for gridmix</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4990">MAPREDUCE-4990</a>.
+     Trivial improvement reported by Karthik Kambatla and fixed by Karthik Kambatla <br>
+     <b>Construct debug strings conditionally in ShuffleHandler.Shuffle#sendMapOutput()</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4989">MAPREDUCE-4989</a>.
+     Major improvement reported by Ravi Prakash and fixed by Ravi Prakash (jobhistoryserver , mr-am)<br>
+     <b>JSONify DataTables input data for Attempts page</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4987">MAPREDUCE-4987</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (distributed-cache , nodemanager)<br>
+     <b>TestMRJobs#testDistributedCache fails on Windows due to classpath problems and unexpected behavior of symlinks</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4985">MAPREDUCE-4985</a>.
+     Trivial bug reported by Plamen Jeliazkov and fixed by Plamen Jeliazkov <br>
+     <b>TestDFSIO supports compression but usage doesn't reflect it</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4981">MAPREDUCE-4981</a>.
+     Minor bug reported by Plamen Jeliazkov and fixed by Plamen Jeliazkov <br>
+     <b>WordMean, WordMedian, WordStandardDeviation missing from ExamplesDriver</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4974">MAPREDUCE-4974</a>.
+     Major improvement reported by Arun A K and fixed by Gelesh (mrv1 , mrv2 , performance)<br>
+     <b>Optimising the LineRecordReader initialize() method</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4972">MAPREDUCE-4972</a>.
+     Major test reported by Aleksey Gorshkov and fixed by Aleksey Gorshkov <br>
+     <b>Coverage fixing for org.apache.hadoop.mapreduce.jobhistory </b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4951">MAPREDUCE-4951</a>.
+     Major bug reported by Sandy Ryza and fixed by Sandy Ryza (applicationmaster , mr-am , mrv2)<br>
+     <b>Container preemption interpreted as task failure</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4942">MAPREDUCE-4942</a>.
+     Major sub-task reported by Robert Kanter and fixed by Robert Kanter (mrv2)<br>
+     <b>mapreduce.Job has a bunch of methods that throw InterruptedException so it's incompatible with MR1</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4932">MAPREDUCE-4932</a>.
+     Major bug reported by Robert Kanter and fixed by Robert Kanter (mrv2)<br>
+     <b>mapreduce.job#getTaskCompletionEvents incompatible with Hadoop 1</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4927">MAPREDUCE-4927</a>.
+     Major bug reported by Jason Lowe and fixed by Ashwin Shankar (jobhistoryserver)<br>
+     <b>Historyserver 500 error due to NPE when accessing specific counters page for failed job</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4898">MAPREDUCE-4898</a>.
+     Major bug reported by Robert Kanter and fixed by Robert Kanter (mrv2)<br>
+     <b>FileOutputFormat.checkOutputSpecs and FileOutputFormat.setOutputPath incompatible with MR1</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4896">MAPREDUCE-4896</a>.
+     Major bug reported by Sandy Ryza and fixed by Sandy Ryza (client , scheduler)<br>
+     <b>"mapred queue -info" spits out ugly exception when queue does not exist</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4892">MAPREDUCE-4892</a>.
+     Major bug reported by Bikas Saha and fixed by Bikas Saha <br>
+     <b>CombineFileInputFormat node input split can be skewed on small clusters</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4885">MAPREDUCE-4885</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (contrib/streaming , test)<br>
+     <b>Streaming tests have multiple failures on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4875">MAPREDUCE-4875</a>.
+     Major test reported by Aleksey Gorshkov and fixed by Aleksey Gorshkov (test)<br>
+     <b>coverage fixing for org.apache.hadoop.mapred</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4871">MAPREDUCE-4871</a>.
+     Major bug reported by Jason Lowe and fixed by Jason Lowe (mrv2)<br>
+     <b>AM uses mapreduce.jobtracker.split.metainfo.maxsize but mapred-default has mapreduce.job.split.metainfo.maxsize</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4846">MAPREDUCE-4846</a>.
+     Major improvement reported by Sandy Ryza and fixed by Sandy Ryza (client)<br>
+     <b>Some JobQueueInfo methods are public in MR1 but protected in MR2</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4794">MAPREDUCE-4794</a>.
+     Major bug reported by Jason Lowe and fixed by Jason Lowe (applicationmaster)<br>
+     <b>DefaultSpeculator generates error messages on normal shutdown</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4737">MAPREDUCE-4737</a>.
+     Major bug reported by Daniel Dai and fixed by Arun C Murthy <br>
+     <b>Hadoop does not close output file / does not call Mapper.cleanup if exception in map</b><br>
+     <blockquote>Ensures that the mapreduce APIs are semantically consistent with the mapred API w.r.t. Mapper.cleanup and Reducer.cleanup: cleanup is now called even if there is an error. The old mapred API already ensures that Mapper.close and Reducer.close are invoked during error handling. Note that this is an incompatible change; however, end-users can override Mapper.run and Reducer.run to get the old (inconsistent) behaviour, as sketched below.</blockquote></li>
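+<blockquote>As a minimal sketch of the workaround mentioned above (the class name LegacyCleanupMapper and its use as a shared base class are illustrative, not part of the fix), overriding Mapper.run() without a try/finally restores the old control flow in which cleanup() is skipped when map() throws:
+<pre>
+import java.io.IOException;
+import org.apache.hadoop.mapreduce.Mapper;
+
+// Hypothetical base mapper reproducing the pre-MAPREDUCE-4737 behaviour:
+// cleanup() runs only if every map() call completed without throwing.
+public class LegacyCleanupMapper&lt;KEYIN, VALUEIN, KEYOUT, VALUEOUT&gt;
+    extends Mapper&lt;KEYIN, VALUEIN, KEYOUT, VALUEOUT&gt; {
+  @Override
+  public void run(Context context) throws IOException, InterruptedException {
+    setup(context);
+    while (context.nextKeyValue()) {
+      map(context.getCurrentKey(), context.getCurrentValue(), context);
+    }
+    cleanup(context);  // not reached if map() or nextKeyValue() throws
+  }
+}
+</pre>
+A Reducer.run() override would follow the same pattern, looping over context.nextKey() and calling reduce() before a trailing cleanup().</blockquote>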
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4716">MAPREDUCE-4716</a>.
+     Major bug reported by Thomas Graves and fixed by Thomas Graves (jobhistoryserver)<br>
+     <b>TestHsWebServicesJobsQuery.testJobsQueryStateInvalid fails with jdk7</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4693">MAPREDUCE-4693</a>.
+     Major bug reported by Jason Lowe and fixed by Xuan Gong (jobhistoryserver , mrv2)<br>
+     <b>Historyserver should provide counters for failed tasks</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4671">MAPREDUCE-4671</a>.
+     Major bug reported by Bikas Saha and fixed by Bikas Saha <br>
+     <b>AM does not tell the RM about container requests that are no longer needed</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4571">MAPREDUCE-4571</a>.
+     Major bug reported by Thomas Graves and fixed by Thomas Graves (webapps)<br>
+     <b>TestHsWebServicesJobs fails on jdk7</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4356">MAPREDUCE-4356</a>.
+     Major bug reported by Ravi Gummadi and fixed by Ravi Gummadi (tools/rumen)<br>
+     <b>Provide access to ParsedTask.obtainTaskAttempts()</b><br>
+     <blockquote>Made the method ParsedTask.obtainTaskAttempts() public.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4149">MAPREDUCE-4149</a>.
+     Major bug reported by Ravi Gummadi and fixed by Ravi Gummadi (tools/rumen)<br>
+     <b>Rumen fails to parse certain counter strings</b><br>
+     <blockquote>Fixes Rumen to parse counter strings containing the special characters "{" and "}".</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4100">MAPREDUCE-4100</a>.
+     Minor bug reported by Karam Singh and fixed by Amar Kamat (contrib/gridmix)<br>
+     <b>Sometimes gridmix emulates data much larger than the actual counter for map-only jobs</b><br>
+     <blockquote>Fixes a bug in the compression emulation feature for map-only jobs.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4087">MAPREDUCE-4087</a>.
+     Major bug reported by Ravi Gummadi and fixed by Ravi Gummadi <br>
+     <b>[Gridmix] GenerateDistCacheData job of Gridmix can become slow in some cases</b><br>
+     <blockquote>Fixes the issue of GenerateDistCacheData job slowness.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4083">MAPREDUCE-4083</a>.
+     Major bug reported by Karam Singh and fixed by Amar Kamat (contrib/gridmix)<br>
+     <b>GridMix emulated job tasks.resource-usage emulator for CPU usage throws NPE when Trace contains cumulativeCpuUsage value of 0 at attempt level</b><br>
+     <blockquote>Fixes an NPE in CPU emulation in Gridmix.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4067">MAPREDUCE-4067</a>.
+     Critical bug reported by Jitendra Nath Pandey and fixed by Xuan Gong <br>
+     <b>Replace YarnRemoteException with IOException in MRv2 APIs</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4019">MAPREDUCE-4019</a>.
+     Minor bug reported by B Anil Kumar and fixed by Ashwin Shankar (client)<br>
+     <b>-list-attempt-ids is not working</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3953">MAPREDUCE-3953</a>.
+     Major bug reported by Ravi Gummadi and fixed by Ravi Gummadi <br>
+     <b>Gridmix throws NPE and does not simulate a job if the trace contains null taskStatus for a task</b><br>
+     <blockquote>Fixes an NPE and makes Gridmix simulate succeeded-jobs-with-failed-tasks. All tasks of such simulated jobs (including the failed ones of the original job) will succeed.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3872">MAPREDUCE-3872</a>.
+     Major bug reported by Patrick Hunt and fixed by Robert Kanter (client , mrv2)<br>
+     <b>event handling races in ContainerLauncherImpl and TestContainerLauncher</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3829">MAPREDUCE-3829</a>.
+     Major bug reported by Ravi Gummadi and fixed by Ravi Gummadi (contrib/gridmix)<br>
+     <b>[Gridmix] Gridmix should give better error message when input-data directory already exists and -generate option is given</b><br>
+     <blockquote>Makes Gridmix emit the correct error message when the input data directory already exists and the -generate option is used. Makes Gridmix exit with proper exit codes when it fails during argument processing or startup/setup.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3787">MAPREDUCE-3787</a>.
+     Major improvement reported by Amar Kamat and fixed by Amar Kamat (contrib/gridmix)<br>
+     <b>[Gridmix] Improve STRESS mode</b><br>
+     <blockquote>JobMonitor can now deploy multiple threads for faster job-status polling. Use 'gridmix.job-monitor.thread-count' to set the number of threads (see the configuration sketch below). Stress mode now relies on updates from the job monitor instead of polling for job status. Failures in job submission now get reported to the statistics module and are ultimately reported to the user via the summary.</blockquote></li>
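+<blockquote>A minimal sketch of setting the polling-thread count programmatically (the value 4 and the wrapper class are purely illustrative; the property can equally be supplied through any other job-configuration mechanism, such as -D on the command line):
+<pre>
+import org.apache.hadoop.conf.Configuration;
+
+public class GridmixMonitorThreadsExample {
+  public static void main(String[] args) {
+    Configuration conf = new Configuration();
+    // Number of threads the Gridmix JobMonitor uses for job-status polling.
+    conf.setInt("gridmix.job-monitor.thread-count", 4);
+    // This Configuration would then be passed to the Gridmix tool invocation,
+    // e.g. via org.apache.hadoop.util.ToolRunner.
+  }
+}
+</pre></blockquote>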
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3757">MAPREDUCE-3757</a>.
+     Major bug reported by Ravi Gummadi and fixed by Ravi Gummadi (tools/rumen)<br>
+     <b>Rumen Folder is not adjusting the shuffleFinished and sortFinished times of reduce task attempts</b><br>
+     <blockquote>Fixed the sortFinishTime and shuffleFinishTime adjustments in Rumen Folder.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3685">MAPREDUCE-3685</a>.
+     Critical bug reported by anty.rao and fixed by anty (mrv2)<br>
+     <b>There are some bugs in implementation of MergeManager</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3533">MAPREDUCE-3533</a>.
+     Minor improvement reported by Steve Loughran and fixed by  (mrv2)<br>
+     <b>have the service interface extend Closeable and use close() as its shutdown operation</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3502">MAPREDUCE-3502</a>.
+     Major task reported by Steve Loughran and fixed by Steve Loughran (mrv2)<br>
+     <b>Review all Service.stop() operations and make sure that they work before a service is started</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3008">MAPREDUCE-3008</a>.
+     Major sub-task reported by Amar Kamat and fixed by Amar Kamat (contrib/gridmix)<br>
+     <b>[Gridmix] Improve cumulative CPU usage emulation for short running tasks</b><br>
+     <blockquote>Improves cumulative CPU emulation for short running tasks.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2722">MAPREDUCE-2722</a>.
+     Major bug reported by Ravi Gummadi and fixed by Ravi Gummadi (contrib/gridmix)<br>
+     <b>Gridmix simulated job's map's hdfsBytesRead counter is wrong when compressed input is used</b><br>
+     <blockquote>Makes Gridmix use the uncompressed input data size while simulating map tasks in the case where compressed input data was used in the original job.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4917">HDFS-4917</a>.
+     Major bug reported by Fengdong Yu and fixed by Fengdong Yu (datanode , namenode)<br>
+     <b>Start-dfs.sh cannot pass the parameters correctly</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4914">HDFS-4914</a>.
+     Minor improvement reported by Tsz Wo (Nicholas), SZE and fixed by Tsz Wo (Nicholas), SZE (hdfs-client)<br>
+     <b>When possible, Use DFSClient.Conf instead of Configuration </b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4910">HDFS-4910</a>.
+     Major bug reported by Chuan Liu and fixed by Chuan Liu <br>
+     <b>TestPermission failed in branch-2</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4906">HDFS-4906</a>.
+     Major bug reported by Aaron T. Myers and fixed by Aaron T. Myers (hdfs-client)<br>
+     <b>HDFS Output streams should not accept writes after being closed</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4902">HDFS-4902</a>.
+     Major bug reported by Binglin Chang and fixed by Binglin Chang (snapshots)<br>
+     <b>DFSClient.getSnapshotDiffReport should use string path rather than o.a.h.fs.Path</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4883">HDFS-4883</a>.
+     Major bug reported by Konstantin Shvachko and fixed by Tao Luo (namenode)<br>
+     <b>complete() should verify fileId</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4880">HDFS-4880</a>.
+     Major bug reported by Arpit Agarwal and fixed by Suresh Srinivas (namenode)<br>
+     <b>Diagnostic logging while loading name/edits files</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4878">HDFS-4878</a>.
+     Major bug reported by Tao Luo and fixed by Tao Luo (namenode)<br>
+     <b>On Remove Block, Block is not Removed from neededReplications queue</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4877">HDFS-4877</a>.
+     Blocker bug reported by Jing Zhao and fixed by Jing Zhao (snapshots)<br>
+     <b>Snapshot: fix the scenario where a directory is renamed under its prior descendant</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4876">HDFS-4876</a>.
+     Minor sub-task reported by Tsz Wo (Nicholas), SZE and fixed by Tsz Wo (Nicholas), SZE (snapshots)<br>
+     <b>The javadoc of FileWithSnapshot is incorrect</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4875">HDFS-4875</a>.
+     Minor sub-task reported by Tsz Wo (Nicholas), SZE and fixed by Arpit Agarwal (snapshots , test)<br>
+     <b>Add a test for testing snapshot file length</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4873">HDFS-4873</a>.
+     Major bug reported by Hari Mankude and fixed by Jing Zhao (snapshots)<br>
+     <b>callGetBlockLocations returns incorrect number of blocks for snapshotted files</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4867">HDFS-4867</a>.
+     Major bug reported by Kihwal Lee and fixed by Plamen Jeliazkov (namenode)<br>
+     <b>metaSave NPEs when there are invalid blocks in repl queue.</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4866">HDFS-4866</a>.
+     Blocker bug reported by Ralph Castain and fixed by Arpit Agarwal (namenode)<br>
+     <b>Protocol buffer support cannot compile under C</b><br>
+     <blockquote>The Protocol Buffers definition of the inter-namenode protocol required a change for compatibility with compiled C clients.  This is a backwards-incompatible change.  A namenode prior to this change will not be able to communicate with a namenode after this change.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4865">HDFS-4865</a>.
+     Major bug reported by Wei Yan and fixed by Wei Yan <br>
+     <b>Remove sub resource warning from httpfs log at startup time</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4863">HDFS-4863</a>.
+     Major bug reported by Jing Zhao and fixed by Jing Zhao (snapshots)<br>
+     <b>The root directory should be added to the snapshottable directory list while loading fsimage </b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4862">HDFS-4862</a>.
+     Major bug reported by Ravi Prakash and fixed by Ravi Prakash <br>
+     <b>SafeModeInfo.isManual() returns true when resources are low even if it wasn't entered into manually</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4857">HDFS-4857</a>.
+     Major bug reported by Jing Zhao and fixed by Jing Zhao (snapshots)<br>
+     <b>Snapshot.Root and AbstractINodeDiff#snapshotINode should not be put into INodeMap when loading FSImage</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4850">HDFS-4850</a>.
+     Major bug reported by Stephen Chu and fixed by Jing Zhao (tools)<br>
+     <b>fix OfflineImageViewer to work on fsimages with empty files or snapshots</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4848">HDFS-4848</a>.
+     Minor improvement reported by Stephen Chu and fixed by Jing Zhao (snapshots)<br>
+     <b>copyFromLocal and renaming a file to ".snapshot" should output that ".snapshot" is a reserved name</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4846">HDFS-4846</a>.
+     Minor bug reported by Stephen Chu and fixed by Jing Zhao (snapshots)<br>
+     <b>Clean up snapshot CLI commands output stacktrace for invalid arguments</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4845">HDFS-4845</a>.
+     Critical bug reported by Kihwal Lee and fixed by Arpit Agarwal (namenode)<br>
+     <b>FSEditLogLoader gets NPE while accessing INodeMap in TestEditLogRace</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4842">HDFS-4842</a>.
+     Major sub-task reported by Jing Zhao and fixed by Jing Zhao (snapshots)<br>
+     <b>Snapshot: identify the correct prior snapshot when deleting a snapshot under a renamed subtree</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4840">HDFS-4840</a>.
+     Major bug reported by Kihwal Lee and fixed by Kihwal Lee (namenode)<br>
+     <b>ReplicationMonitor gets NPE during shutdown</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4832">HDFS-4832</a>.
+     Critical bug reported by Ravi Prakash and fixed by Ravi Prakash <br>
+     <b>Namenode doesn't change the number of missing blocks in safemode when DNs rejoin or leave</b><br>
+     <blockquote>This change makes the NameNode keep its internal replication queues and DataNode state updated while in manual safe mode. This allows metrics and the UI to present up-to-date information in safe mode. The behavior during start-up safe mode is unchanged.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4830">HDFS-4830</a>.
+     Minor bug reported by Aaron T. Myers and fixed by Aaron T. Myers <br>
+     <b>Typo in config settings for AvailableSpaceVolumeChoosingPolicy in hdfs-default.xml</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4827">HDFS-4827</a>.
+     Major bug reported by Devaraj Das and fixed by Devaraj Das <br>
+     <b>Slight update to the implementation of API for handling favored nodes in DFSClient</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4826">HDFS-4826</a>.
+     Minor bug reported by Chris Nauroth and fixed by Chris Nauroth (test)<br>
+     <b>TestNestedSnapshots times out due to repeated slow edit log flushes when running on virtualized disk</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4825">HDFS-4825</a>.
+     Major bug reported by Andrew Wang and fixed by Andrew Wang (webhdfs)<br>
+     <b>webhdfs / httpfs tests broken because of min block size change</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4824">HDFS-4824</a>.
+     Major bug reported by Henry Robinson and fixed by Colin Patrick McCabe (hdfs-client)<br>
+     <b>FileInputStreamCache.close leaves dangling reference to FileInputStreamCache.cacheCleaner</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4819">HDFS-4819</a>.
+     Minor sub-task reported by Tsz Wo (Nicholas), SZE and fixed by Tsz Wo (Nicholas), SZE (documentation)<br>
+     <b>Update Snapshot doc for HDFS-4758</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4818">HDFS-4818</a>.
+     Minor bug reported by Chris Nauroth and fixed by Chris Nauroth (namenode , test)<br>
+     <b>several HDFS tests that attempt to make directories unusable do not work correctly on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4815">HDFS-4815</a>.
+     Major bug reported by Tian Hong Wang and fixed by Tian Hong Wang (datanode , test)<br>
+     <b>TestRBWBlockInvalidation#testBlockInvalidationWhenRBWReplicaMissedInDN: Double call to countReplicas() to fetch corruptReplicas and liveReplicas is not needed</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4813">HDFS-4813</a>.
+     Minor bug reported by Tsz Wo (Nicholas), SZE and fixed by Jing Zhao (namenode)<br>
+     <b>BlocksMap may throw NullPointerException during shutdown</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4810">HDFS-4810</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (test)<br>
+     <b>several HDFS HA tests have timeouts that are too short</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4807">HDFS-4807</a>.
+     Major bug reported by Kihwal Lee and fixed by Cristina L. Abad <br>
+     <b>DFSOutputStream.createSocketForPipeline() should not include timeout extension on connect</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4805">HDFS-4805</a>.
+     Critical bug reported by Daryn Sharp and fixed by Daryn Sharp (webhdfs)<br>
+     <b>Webhdfs client is fragile to token renewal errors</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4804">HDFS-4804</a>.
+     Minor improvement reported by Stephen Chu and fixed by Stephen Chu <br>
+     <b>WARN when users set the block balanced preference percent below 0.5 or above 1.0</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4799">HDFS-4799</a>.
+     Blocker bug reported by Todd Lipcon and fixed by Todd Lipcon (namenode)<br>
+     <b>Corrupt replica can be prematurely removed from corruptReplicas map</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4787">HDFS-4787</a>.
+     Major improvement reported by Tian Hong Wang and fixed by Tian Hong Wang <br>
+     <b>Create a new HdfsConfiguration before each TestDFSClientRetries testcase</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4785">HDFS-4785</a>.
+     Major sub-task reported by Suresh Srinivas and fixed by Suresh Srinivas (namenode)<br>
+     <b>Concat operation does not remove concatenated files from InodeMap</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4784">HDFS-4784</a>.
+     Major sub-task reported by Brandon Li and fixed by Brandon Li (namenode)<br>
+     <b>NPE in FSDirectory.resolvePath()</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4783">HDFS-4783</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (test)<br>
+     <b>TestDelegationTokensWithHA#testHAUtilClonesDelegationTokens fails on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4780">HDFS-4780</a>.
+     Minor bug reported by Kihwal Lee and fixed by Robert Parker (namenode)<br>
+     <b>Use the correct relogin method for services</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4778">HDFS-4778</a>.
+     Major bug reported by Devaraj Das and fixed by Devaraj Das (namenode)<br>
+     <b>Invoke getPipeline in the chooseTarget implementation that has favoredNodes</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4772">HDFS-4772</a>.
+     Minor improvement reported by Brandon Li and fixed by Brandon Li (namenode)<br>
+     <b>Add number of children in HdfsFileStatus</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4768">HDFS-4768</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (datanode)<br>
+     <b>File handle leak in datanode when a block pool is removed</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4765">HDFS-4765</a>.
+     Major bug reported by Andrew Wang and fixed by Andrew Wang (namenode)<br>
+     <b>Permission check of symlink deletion incorrectly throws UnresolvedLinkException</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4751">HDFS-4751</a>.
+     Minor bug reported by Andrew Wang and fixed by Andrew Wang (test)<br>
+     <b>TestLeaseRenewer#testThreadName flakes</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4748">HDFS-4748</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (qjm , test)<br>
+     <b>MiniJournalCluster#restartJournalNode leaks resources, which causes sporadic test failures</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4745">HDFS-4745</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (test)<br>
+     <b>TestDataTransferKeepalive#testSlowReader has race condition that causes sporadic failure</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4743">HDFS-4743</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (test)<br>
+     <b>TestNNStorageRetentionManager fails on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4741">HDFS-4741</a>.
+     Major bug reported by Arpit Agarwal and fixed by Arpit Agarwal (test)<br>
+     <b>TestStorageRestore#testStorageRestoreFailure fails on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4740">HDFS-4740</a>.
+     Major bug reported by Arpit Agarwal and fixed by Arpit Agarwal (test)<br>
+     <b>Fixes for a few test failures on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4739">HDFS-4739</a>.
+     Major bug reported by Aaron T. Myers and fixed by Aaron T. Myers (namenode)<br>
+     <b>NN can miscalculate the number of extra edit log segments to retain</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4737">HDFS-4737</a>.
+     Major bug reported by Sean Mackrory and fixed by Sean Mackrory <br>
+     <b>JVM path embedded in fuse binaries</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4734">HDFS-4734</a>.
+     Major bug reported by Arpit Agarwal and fixed by Arpit Agarwal <br>
+     <b>HDFS Tests that use ShellCommandFencer are broken on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4733">HDFS-4733</a>.
+     Major bug reported by Alejandro Abdelnur and fixed by Alejandro Abdelnur <br>
+     <b>Make HttpFS username pattern configurable</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4732">HDFS-4732</a>.
+     Minor bug reported by Chris Nauroth and fixed by Chris Nauroth (test)<br>
+     <b>TestDFSUpgradeFromImage fails on Windows due to failure to unpack old image tarball that contains hard links</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4725">HDFS-4725</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (namenode , test , tools)<br>
+     <b>fix HDFS file handle leaks</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4722">HDFS-4722</a>.
+     Major bug reported by Ivan Mitic and fixed by Ivan Mitic (test)<br>
+     <b>TestGetConf#testFederation times out on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4721">HDFS-4721</a>.
+     Major improvement reported by Varun Sharma and fixed by Varun Sharma (namenode)<br>
+     <b>Speed up lease/block recovery when DN fails and a block goes into recovery</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4714">HDFS-4714</a>.
+     Major bug reported by Kihwal Lee and fixed by  (namenode)<br>
+     <b>Log short messages in Namenode RPC server for exceptions meant for clients</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4705">HDFS-4705</a>.
+     Minor bug reported by Ivan Mitic and fixed by Ivan Mitic <br>
+     <b>Address HDFS test failures on Windows because of invalid dfs.namenode.name.dir</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4699">HDFS-4699</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (test)<br>
+     <b>TestPipelinesFailover#testPipelineRecoveryStress fails sporadically</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4698">HDFS-4698</a>.
+     Minor improvement reported by Colin Patrick McCabe and fixed by Colin Patrick McCabe (hdfs-client)<br>
+     <b>provide client-side metrics for remote reads, local reads, and short-circuit reads</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4695">HDFS-4695</a>.
+     Major bug reported by Ivan Mitic and fixed by Ivan Mitic (test)<br>
+     <b>TestEditLog leaks open file handles between tests</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4693">HDFS-4693</a>.
+     Minor bug reported by Arpit Agarwal and fixed by Arpit Agarwal (test)<br>
+     <b>Some test cases in TestCheckpoint do not clean up after themselves</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4679">HDFS-4679</a>.
+     Major improvement reported by Suresh Srinivas and fixed by Suresh Srinivas (namenode)<br>
+     <b>Namenode operation checks should be done in a consistent manner</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4677">HDFS-4677</a>.
+     Major bug reported by Ivan Mitic and fixed by Ivan Mitic <br>
+     <b>Editlog should support synchronous writes</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4676">HDFS-4676</a>.
+     Minor bug reported by Suresh Srinivas and fixed by Suresh Srinivas (test)<br>
+     <b>TestHDFSFileSystemContract should set MiniDFSCluster variable to null to free up memory</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4674">HDFS-4674</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (test)<br>
+     <b>TestBPOfferService fails on Windows due to failure parsing datanode data directory as URI</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4669">HDFS-4669</a>.
+     Major bug reported by Tian Hong Wang and fixed by Tian Hong Wang (test)<br>
+     <b>TestBlockPoolManager fails using IBM java</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4659">HDFS-4659</a>.
+     Major bug reported by Brandon Li and fixed by Brandon Li (namenode)<br>
+     <b>Support setting execution bit for regular files</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4658">HDFS-4658</a>.
+     Trivial bug reported by Aaron T. Myers and fixed by Aaron T. Myers (ha , namenode)<br>
+     <b>Standby NN will log that it has received a block report "after becoming active"</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4655">HDFS-4655</a>.
+     Minor bug reported by Aaron T. Myers and fixed by Aaron T. Myers (datanode)<br>
+     <b>DNA_FINALIZE is logged as being an unknown command by the DN when received from the standby NN</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4646">HDFS-4646</a>.
+     Minor bug reported by Jagane Sundar and fixed by  (namenode)<br>
+     <b>createNNProxyWithClientProtocol ignores configured timeout value</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4643">HDFS-4643</a>.
+     Trivial bug reported by Todd Lipcon and fixed by Todd Lipcon (qjm , test)<br>
+     <b>Fix flakiness in TestQuorumJournalManager</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4639">HDFS-4639</a>.
+     Major bug reported by Konstantin Shvachko and fixed by Plamen Jeliazkov (namenode)<br>
+     <b>startFileInternal() should not increment generation stamp</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4635">HDFS-4635</a>.
+     Major improvement reported by Suresh Srinivas and fixed by Suresh Srinivas (namenode)<br>
+     <b>Move BlockManager#computeCapacity to LightWeightGSet</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4625">HDFS-4625</a>.
+     Minor bug reported by Arpit Agarwal and fixed by Ivan Mitic (test)<br>
+     <b>Make TestNNWithQJM#testNewNamenodeTakesOverWriter work on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4621">HDFS-4621</a>.
+     Minor bug reported by Todd Lipcon and fixed by Todd Lipcon (ha , qjm)<br>
+     <b>additional logging to help diagnose slow QJM logSync</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4620">HDFS-4620</a>.
+     Major bug reported by Sandy Ryza and fixed by Sandy Ryza (documentation)<br>
+     <b>Documentation for dfs.namenode.rpc-address specifies wrong format</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4618">HDFS-4618</a>.
+     Major bug reported by Todd Lipcon and fixed by Todd Lipcon (namenode)<br>
+     <b>default for checkpoint txn interval is too low</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4615">HDFS-4615</a>.
+     Major bug reported by Arpit Agarwal and fixed by Arpit Agarwal (test)<br>
+     <b>Fix TestDFSShell failures on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4614">HDFS-4614</a>.
+     Trivial bug reported by Aaron T. Myers and fixed by Aaron T. Myers (namenode)<br>
+     <b>FSNamesystem#getContentSummary should use getPermissionChecker helper method</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4610">HDFS-4610</a>.
+     Major bug reported by Ivan Mitic and fixed by Ivan Mitic <br>
+     <b>Move to using common utils FileUtil#setReadable/Writable/Executable and FileUtil#canRead/Write/Execute</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4609">HDFS-4609</a>.
+     Minor bug reported by Ivan Mitic and fixed by Ivan Mitic (test)<br>
+     <b>TestAuditLogs should release log handles between tests</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4607">HDFS-4607</a>.
+     Minor bug reported by Ivan Mitic and fixed by Ivan Mitic (test)<br>
+     <b>TestGetConf#testGetSpecificKey fails on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4604">HDFS-4604</a>.
+     Major bug reported by Ivan Mitic and fixed by Ivan Mitic <br>
+     <b>TestJournalNode fails on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4603">HDFS-4603</a>.
+     Major bug reported by Ivan Mitic and fixed by Ivan Mitic <br>
+     <b>TestMiniDFSCluster fails on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4598">HDFS-4598</a>.
+     Minor bug reported by Tsz Wo (Nicholas), SZE and fixed by Tsz Wo (Nicholas), SZE (webhdfs)<br>
+     <b>WebHDFS concat: the default value of sources in the code does not match the doc</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4596">HDFS-4596</a>.
+     Major bug reported by Andrew Wang and fixed by Andrew Wang (namenode)<br>
+     <b>Shutting down namenode during checkpointing can lead to md5sum error</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4595">HDFS-4595</a>.
+     Major bug reported by Suresh Srinivas and fixed by Suresh Srinivas (hdfs-client)<br>
+     <b>When short circuit read fails, DFSClient does not fall back to regular reads</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4593">HDFS-4593</a>.
+     Major bug reported by Arpit Agarwal and fixed by Arpit Agarwal <br>
+     <b>TestSaveNamespace fails on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4592">HDFS-4592</a>.
+     Minor bug reported by Aaron T. Myers and fixed by Aaron T. Myers (namenode)<br>
+     <b>Default values for access time precision are out of sync between hdfs-default.xml and the code</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4591">HDFS-4591</a>.
+     Major bug reported by Aaron T. Myers and fixed by Aaron T. Myers (ha , namenode)<br>
+     <b>HA clients can fail to fail over while Standby NN is performing long checkpoint</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4586">HDFS-4586</a>.
+     Major bug reported by Ivan Mitic and fixed by Ivan Mitic <br>
+     <b>TestDataDirs.testGetDataDirsFromURIs fails when all directories in dfs.datanode.data.dir are invalid</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4583">HDFS-4583</a>.
+     Major bug reported by Ivan Mitic and fixed by Ivan Mitic <br>
+     <b>TestNodeCount fails </b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4582">HDFS-4582</a>.
+     Major bug reported by Ivan Mitic and fixed by Ivan Mitic <br>
+     <b>TestHostsFiles fails on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4573">HDFS-4573</a>.
+     Major bug reported by Arpit Agarwal and fixed by Arpit Agarwal (test)<br>
+     <b>Fix TestINodeFile on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4572">HDFS-4572</a>.
+     Major bug reported by Arpit Agarwal and fixed by Arpit Agarwal (namenode , test)<br>
+     <b>Fix TestJournal failures on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4569">HDFS-4569</a>.
+     Trivial improvement reported by Andrew Wang and fixed by Andrew Wang <br>
+     <b>Small image transfer related cleanups.</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4565">HDFS-4565</a>.
+     Minor improvement reported by Arpit Gupta and fixed by Arpit Gupta (security)<br>
+     <b>use DFSUtil.getSpnegoKeytabKey() to get the spnego keytab key in secondary namenode and namenode http server</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4544">HDFS-4544</a>.
+     Major bug reported by Amareshwari Sriramadasu and fixed by Arpit Agarwal <br>
+     <b>Error in deleting blocks should not do check disk, for all types of errors</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4542">HDFS-4542</a>.
+     Blocker sub-task reported by Daryn Sharp and fixed by Daryn Sharp (webhdfs)<br>
+     <b>Webhdfs doesn't support secure proxy users</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4541">HDFS-4541</a>.
+     Major bug reported by Arpit Gupta and fixed by Arpit Gupta (datanode , security)<br>
+     <b>set hadoop.log.dir and hadoop.id.str when starting secure datanode so it writes the logs to the correct dir by default</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4540">HDFS-4540</a>.
+     Major bug reported by Arpit Gupta and fixed by Arpit Gupta (security)<br>
+     <b>namenode http server should use the web authentication keytab for spnego principal</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4533">HDFS-4533</a>.
+     Major bug reported by Fengdong Yu and fixed by Fengdong Yu (datanode , namenode)<br>
+     <b>start-dfs.sh ignored additional parameters besides -upgrade</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4532">HDFS-4532</a>.
+     Critical bug reported by Daryn Sharp and fixed by Daryn Sharp (namenode)<br>
+     <b>RPC call queue may fill due to current user lookup</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4525">HDFS-4525</a>.
+     Major sub-task reported by Uma Maheswara Rao G and fixed by SreeHari (namenode)<br>
+     <b>Provide an API for knowing whether a file is closed or not.</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4522">HDFS-4522</a>.
+     Minor bug reported by Colin Patrick McCabe and fixed by Colin Patrick McCabe <br>
+     <b>LightWeightGSet expects incrementing a volatile to be atomic</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4519">HDFS-4519</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (datanode , scripts)<br>
+     <b>Support override of jsvc binary and log file locations when launching secure datanode.</b><br>
+     <blockquote>With this improvement the following options are available in release 1.2.0 and later on 1.x release stream:

+1. jsvc location can be overridden by setting environment variable JSVC_HOME. Defaults to jsvc binary packaged within the Hadoop distro.

+2. jsvc log output is directed to the file defined by JSVC_OUTFILE. Defaults to $HADOOP_LOG_DIR/jsvc.out.

+3. jsvc error output is directed to the file defined by JSVC_ERRFILE. Defaults to $HADOOP_LOG_DIR/jsvc.err.

+

+With this improvement the following options are available in release 2.0.4 and later on 2.x release stream:

+1. jsvc log output is directed to the file defined by JSVC_OUTFILE. Defaults to $HADOOP_LOG_DIR/jsvc.out.

+2. jsvc error output is directed to the file defined by JSVC_ERRFILE. Defaults to $HADOOP_LOG_DIR/jsvc.err.

+

+For overriding the jsvc location on 2.x releases, here are the release notes from HDFS-2303:

+To run secure Datanodes users must install jsvc for their platform and set JSVC_HOME to point to the location of jsvc in their environment.

+</blockquote></li>
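+<blockquote>As an illustration of the overrides above (the paths and start command below are placeholders, not values shipped with the release), a secure datanode could be launched with:
+<pre>
+# Point the launcher at a locally installed jsvc and redirect its output (illustrative paths)
+export JSVC_HOME=/usr/local/jsvc
+export JSVC_OUTFILE=/var/log/hadoop/jsvc.out
+export JSVC_ERRFILE=/var/log/hadoop/jsvc.err
+# Started as root, with HADOOP_SECURE_DN_USER set for the secure datanode
+sbin/hadoop-daemon.sh start datanode
+</pre></blockquote>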
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4518">HDFS-4518</a>.
+     Major bug reported by Arpit Agarwal and fixed by Arpit Agarwal <br>
+     <b>Finer grained metrics for HDFS capacity</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4502">HDFS-4502</a>.
+     Blocker sub-task reported by Alejandro Abdelnur and fixed by Brandon Li (webhdfs)<br>
+     <b>WebHdfsFileSystem handling of fileId breaks compatibility</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4495">HDFS-4495</a>.
+     Major bug reported by Kihwal Lee and fixed by Kihwal Lee (hdfs-client)<br>
+     <b>Allow client-side lease renewal to be retried beyond soft-limit</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4484">HDFS-4484</a>.
+     Minor bug reported by Colin Patrick McCabe and fixed by Colin Patrick McCabe <br>
+     <b>libwebhdfs compilation broken with gcc 4.6.2</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4477">HDFS-4477</a>.
+     Critical bug reported by Kihwal Lee and fixed by Daryn Sharp (security)<br>
+     <b>Secondary namenode may retain old tokens</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4471">HDFS-4471</a>.
+     Major bug reported by Andrew Wang and fixed by Andrew Wang (namenode)<br>
+     <b>Namenode WebUI file browsing does not work with wildcard addresses configured</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4470">HDFS-4470</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth <br>
+     <b>several HDFS tests attempt file operations on invalid HDFS paths when running on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4461">HDFS-4461</a>.
+     Minor improvement reported by Colin Patrick McCabe and fixed by Colin Patrick McCabe <br>
+     <b>DirectoryScanner: volume path prefix takes up memory for every block that is scanned </b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4434">HDFS-4434</a>.
+     Major sub-task reported by Brandon Li and fixed by Suresh Srinivas (namenode)<br>
+     <b>Provide a mapping from INodeId to INode</b><br>
+     <blockquote>This change adds support for referencing files and directories based on fileID/inodeID using a path /.reserved/.inodes/&lt;inodeid&gt;. With this change, creating a file or directory named /.reserved is no longer allowed. Before upgrading to a release with this change, any existing /.reserved file or directory needs to be renamed.</blockquote></li>
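+<blockquote>For example, assuming a file whose inode id is 16389 (an illustrative value), it can be addressed through the reserved path just like a normal path:
+<pre>
+# Resolve the file by its inode id instead of its name (16389 is a made-up id)
+hdfs dfs -cat /.reserved/.inodes/16389
+</pre></blockquote>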
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4382">HDFS-4382</a>.
+     Major bug reported by Ted Yu and fixed by Ted Yu <br>
+     <b>Fix typo MAX_NOT_CHANGED_INTERATIONS</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4346">HDFS-4346</a>.
+     Minor sub-task reported by Tsz Wo (Nicholas), SZE and fixed by Tsz Wo (Nicholas), SZE (namenode)<br>
+     <b>Refactor INodeId and GenerationStamp</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4342">HDFS-4342</a>.
+     Major bug reported by Mark Yang and fixed by Arpit Agarwal (namenode)<br>
+     <b>Edits dir in dfs.namenode.edits.dir.required will be silently ignored if it is not in dfs.namenode.edits.dir</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4340">HDFS-4340</a>.
+     Major sub-task reported by Brandon Li and fixed by Brandon Li (hdfs-client , namenode)<br>
+     <b>Update addBlock() to include inode id as an additional argument</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4334">HDFS-4334</a>.
+     Major sub-task reported by Brandon Li and fixed by Brandon Li (namenode)<br>
+     <b>Add a unique id to each INode</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4305">HDFS-4305</a>.
+     Minor bug reported by Todd Lipcon and fixed by Andrew Wang (namenode)<br>
+     <b>Add a configurable limit on number of blocks per file, and min block size</b><br>
+     <blockquote>This change introduces a maximum number of blocks per file, by default one million, and a minimum block size, by default 1MB. These can optionally be changed via the configuration settings "dfs.namenode.fs-limits.max-blocks-per-file" and "dfs.namenode.fs-limits.min-block-size", respectively.</blockquote></li>
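+<blockquote>For example, the limits can be adjusted in the namenode's hdfs-site.xml; the values shown are simply the defaults stated above:
+<pre>
+&lt;!-- hdfs-site.xml: limits introduced by HDFS-4305 --&gt;
+&lt;property&gt;
+  &lt;name&gt;dfs.namenode.fs-limits.max-blocks-per-file&lt;/name&gt;
+  &lt;value&gt;1000000&lt;/value&gt;
+&lt;/property&gt;
+&lt;property&gt;
+  &lt;name&gt;dfs.namenode.fs-limits.min-block-size&lt;/name&gt;
+  &lt;value&gt;1048576&lt;/value&gt;
+&lt;/property&gt;
+</pre></blockquote>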
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4304">HDFS-4304</a>.
+     Major improvement reported by Todd Lipcon and fixed by Colin Patrick McCabe (namenode)<br>
+     <b>Make FSEditLogOp.MAX_OP_SIZE configurable</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4300">HDFS-4300</a>.
+     Critical bug reported by Todd Lipcon and fixed by Andrew Wang <br>
+     <b>TransferFsImage.downloadEditsToStorage should use a tmp file for destination</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4298">HDFS-4298</a>.
+     Major bug reported by Todd Lipcon and fixed by Aaron T. Myers (namenode)<br>
+     <b>StorageRetentionManager spews warnings when used with QJM</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4296">HDFS-4296</a>.
+     Major bug reported by Suresh Srinivas and fixed by Suresh Srinivas (namenode)<br>
+     <b>Add layout version for HDFS-4256 for release 1.2.0</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4287">HDFS-4287</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (webhdfs)<br>
+     <b>HTTPFS tests fail on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4261">HDFS-4261</a>.
+     Major bug reported by Tsz Wo (Nicholas), SZE and fixed by Junping Du (balancer)<br>
+     <b>TestBalancerWithNodeGroup times out</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4246">HDFS-4246</a>.
+     Minor improvement reported by Harsh J and fixed by Harsh J (hdfs-client)<br>
+     <b>The exclude node list should be more forgiving, for each output stream</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4240">HDFS-4240</a>.
+     Major bug reported by Junping Du and fixed by Junping Du (namenode)<br>
+     <b>In the nodegroup-aware case, avoid placing a replica on nodes whose nodegroup already holds a replica</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4235">HDFS-4235</a>.
+     Minor bug reported by Colin Patrick McCabe and fixed by Colin Patrick McCabe <br>
+     <b>when outputting XML, OfflineEditsViewer can't handle some edits containing non-ASCII strings</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4234">HDFS-4234</a>.
+     Minor improvement reported by Tsz Wo (Nicholas), SZE and fixed by Tsz Wo (Nicholas), SZE (balancer)<br>
+     <b>Use the generic code for choosing datanode in Balancer</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4222">HDFS-4222</a>.
+     Minor bug reported by Xiaobo Peng and fixed by Xiaobo Peng (namenode)<br>
+     <b>NN is unresponsive and loses heartbeats of DNs when Hadoop is configured to use LDAP and LDAP has issues</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4215">HDFS-4215</a>.
+     Major improvement reported by Tsz Wo (Nicholas), SZE and fixed by Tsz Wo (Nicholas), SZE (namenode)<br>
+     <b>Improvements on INode and image loading</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4209">HDFS-4209</a>.
+     Major bug reported by Tsz Wo (Nicholas), SZE and fixed by Tsz Wo (Nicholas), SZE (namenode)<br>
+     <b>Clean up the addNode/addChild/addChildNoQuotaCheck methods in FSDirectory</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4206">HDFS-4206</a>.
+     Major improvement reported by Tsz Wo (Nicholas), SZE and fixed by Tsz Wo (Nicholas), SZE (namenode)<br>
+     <b>Change the fields in INode and its subclasses to private</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4205">HDFS-4205</a>.
+     Major bug reported by Andy Isaacson and fixed by Jason Lowe (hdfs-client)<br>
+     <b>fsck fails with symlinks</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4152">HDFS-4152</a>.
+     Minor improvement reported by Tsz Wo (Nicholas), SZE and fixed by Jing Zhao (namenode)<br>
+     <b>Add a new class for the parameter in INode.collectSubtreeBlocksAndClear(..)</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4151">HDFS-4151</a>.
+     Minor improvement reported by Tsz Wo (Nicholas), SZE and fixed by Tsz Wo (Nicholas), SZE (namenode)<br>
+     <b>Passing INodesInPath instead of INode[] in FSDirectory</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4129">HDFS-4129</a>.
+     Minor test reported by Tsz Wo (Nicholas), SZE and fixed by Tsz Wo (Nicholas), SZE (namenode)<br>
+     <b>Add utility methods to dump NameNode in memory tree for testing</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4128">HDFS-4128</a>.
+     Major bug reported by Todd Lipcon and fixed by Kihwal Lee (namenode)<br>
+     <b>2NN gets stuck in inconsistent state if edit log replay fails in the middle</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4124">HDFS-4124</a>.
+     Minor new feature reported by Jing Zhao and fixed by Jing Zhao <br>
+     <b>Refactor INodeDirectory#getExistingPathINodes() to enable returning more than INode array</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4053">HDFS-4053</a>.
+     Major improvement reported by Eli Collins and fixed by Eli Collins <br>
+     <b>Increase the default block size</b><br>
+     <blockquote>The default block size prior to this change was 64MB. This jira changes the default block size to 128MB. To go back to the previous behavior, set the configuration parameter "dfs.blocksize" to 67108864 in hdfs-site.xml.</blockquote></li>
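+<blockquote>A minimal hdfs-site.xml sketch of the revert described above:
+<pre>
+&lt;!-- restore the pre-change 64MB default block size --&gt;
+&lt;property&gt;
+  &lt;name&gt;dfs.blocksize&lt;/name&gt;
+  &lt;value&gt;67108864&lt;/value&gt;
+&lt;/property&gt;
+</pre></blockquote>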
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4013">HDFS-4013</a>.
+     Trivial bug reported by Chao Shi and fixed by Chao Shi (hdfs-client)<br>
+     <b>TestHftpURLTimeouts throws NPE</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3940">HDFS-3940</a>.
+     Minor improvement reported by Eli Collins and fixed by Suresh Srinivas <br>
+     <b>Add Gset#clear method and clear the block map when namenode is shutdown</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3934">HDFS-3934</a>.
+     Minor bug reported by Andy Isaacson and fixed by Colin Patrick McCabe <br>
+     <b>duplicative dfs_hosts entries handled wrong</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3875">HDFS-3875</a>.
+     Critical bug reported by Todd Lipcon and fixed by Kihwal Lee (datanode , hdfs-client)<br>
+     <b>Issue handling checksum errors in write pipeline</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3817">HDFS-3817</a>.
+     Major improvement reported by Brandon Li and fixed by Brandon Li (namenode)<br>
+     <b>avoid printing stack information for SafeModeException</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3792">HDFS-3792</a>.
+     Trivial bug reported by Todd Lipcon and fixed by Todd Lipcon (build , namenode)<br>
+     <b>Fix two findbugs introduced by HDFS-3695</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3769">HDFS-3769</a>.
+     Critical sub-task reported by liaowenrui and fixed by  (ha)<br>
+     <b>standby namenode becoming active fails because starting a log segment fails on shared storage</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3601">HDFS-3601</a>.
+     Major new feature reported by Junping Du and fixed by Junping Du (namenode)<br>
+     <b>Implementation of ReplicaPlacementPolicyNodeGroup to support 4-layer network topology</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3498">HDFS-3498</a>.
+     Major improvement reported by Junping Du and fixed by Junping Du (namenode)<br>
+     <b>Make Replica Removal Policy pluggable and ReplicaPlacementPolicyDefault extensible for reusing code in subclass</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3495">HDFS-3495</a>.
+     Major new feature reported by Junping Du and fixed by Junping Du (balancer)<br>
+     <b>Update Balancer to support new NetworkTopology with NodeGroup</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3277">HDFS-3277</a>.
+     Major bug reported by Colin Patrick McCabe and fixed by Andrew Wang <br>
+     <b>fail over to loading a different FSImage if the first one we try to load is corrupt</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3180">HDFS-3180</a>.
+     Major bug reported by Daryn Sharp and fixed by Chris Nauroth (webhdfs)<br>
+     <b>Add socket timeouts to webhdfs</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3163">HDFS-3163</a>.
+     Trivial improvement reported by Brandon Li and fixed by Brandon Li (test)<br>
+     <b>TestHDFSCLI.testAll fails if the user name is not all lowercase</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3009">HDFS-3009</a>.
+     Trivial bug reported by Hari Mankude and fixed by Hari Mankude (hdfs-client)<br>
+     <b>DFSClient islocaladdress() can use similar routine in netutils</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2857">HDFS-2857</a>.
+     Major improvement reported by Suresh Srinivas and fixed by Suresh Srinivas (namenode)<br>
+     <b>Cleanup BlockInfo class</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2576">HDFS-2576</a>.
+     Major new feature reported by Pritam Damania and fixed by Devaraj Das (hdfs-client , namenode)<br>
+     <b>Namenode should have a favored nodes hint to enable clients to have control over block placement.</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2572">HDFS-2572</a>.
+     Trivial improvement reported by Harsh J and fixed by Harsh J (datanode)<br>
+     <b>Unnecessary double-check in DN#getHostName</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-1804">HDFS-1804</a>.
+     Minor new feature reported by Harsh J and fixed by Aaron T. Myers (datanode)<br>
+     <b>Add a new block-volume device choosing policy that looks at free space</b><br>
+     <blockquote>There is now a new option to have the DN take into account available disk space on each volume when choosing where to place a replica when performing an HDFS write. This can be enabled by setting the config "dfs.datanode.fsdataset.volume.choosing.policy" to the value "org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy".</blockquote></li>
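+<blockquote>A minimal hdfs-site.xml sketch enabling this policy on a datanode:
+<pre>
+&lt;!-- choose the target volume for new replicas based on available space --&gt;
+&lt;property&gt;
+  &lt;name&gt;dfs.datanode.fsdataset.volume.choosing.policy&lt;/name&gt;
+  &lt;value&gt;org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy&lt;/value&gt;
+&lt;/property&gt;
+</pre></blockquote>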
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-347">HDFS-347</a>.
+     Major improvement reported by George Porter and fixed by Colin Patrick McCabe (datanode , hdfs-client , performance)<br>
+     <b>DFS read performance suboptimal when client co-located on nodes with data</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9656">HADOOP-9656</a>.
+     Minor bug reported by Chuan Liu and fixed by Chuan Liu (test , tools)<br>
+     <b>Gridmix unit tests fail on Windows and Linux</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9649">HADOOP-9649</a>.
+     Blocker improvement reported by Zhijie Shen and fixed by Zhijie Shen <br>
+     <b>Promote YARN service life-cycle libraries into Hadoop Common</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9638">HADOOP-9638</a>.
+     Major bug reported by Chris Nauroth and fixed by Andrey Klochkov (test)<br>
+     <b>parallel test changes caused invalid test path for several HDFS tests on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9637">HADOOP-9637</a>.
+     Major bug reported by Chuan Liu and fixed by Chuan Liu <br>
+     <b>Adding Native Fstat for Windows as needed by YARN</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9632">HADOOP-9632</a>.
+     Minor bug reported by Chuan Liu and fixed by Chuan Liu <br>
+     <b>TestShellCommandFencer will fail if there is a 'host' machine in the network</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9630">HADOOP-9630</a>.
+     Major sub-task reported by Luke Lu and fixed by Junping Du (ipc)<br>
+     <b>Remove IpcSerializationType</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9625">HADOOP-9625</a>.
+     Minor improvement reported by Paul Han and fixed by  (bin , conf)<br>
+     <b>HADOOP_OPTS not picked up by hadoop command</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9624">HADOOP-9624</a>.
+     Minor test reported by Xi Fang and fixed by Xi Fang (test)<br>
+     <b>TestFSMainOperationsLocalFileSystem failed when the Hadoop test root path has "X" in its name</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9619">HADOOP-9619</a>.
+     Major sub-task reported by Sanjay Radia and fixed by Sanjay Radia (documentation)<br>
+     <b>Mark stability of .proto files</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9607">HADOOP-9607</a>.
+     Minor bug reported by Timothy St. Clair and fixed by  (documentation)<br>
+     <b>Fixes in Javadoc build</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9605">HADOOP-9605</a>.
+     Major improvement reported by Timothy St. Clair and fixed by  (build)<br>
+     <b>Update junit dependency</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9604">HADOOP-9604</a>.
+     Minor improvement reported by Jingguo Yao and fixed by Jingguo Yao (fs)<br>
+     <b>Wrong Javadoc of FSDataOutputStream</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9599">HADOOP-9599</a>.
+     Major bug reported by Mostafa Elhemali and fixed by Mostafa Elhemali <br>
+     <b>hadoop-config.cmd doesn't set JAVA_LIBRARY_PATH correctly</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9593">HADOOP-9593</a>.
+     Major bug reported by Steve Loughran and fixed by Steve Loughran (util)<br>
+     <b>stack trace printed at ERROR for all yarn clients without hadoop.home set</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9581">HADOOP-9581</a>.
+     Major bug reported by Ashwin Shankar and fixed by Ashwin Shankar (scripts)<br>
+     <b>hadoop --config non-existent directory should result in error </b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9574">HADOOP-9574</a>.
+     Major bug reported by Jian He and fixed by Jian He <br>
+     <b>Add new methods in AbstractDelegationTokenSecretManager for restoring RMDelegationTokens on RMRestart</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9566">HADOOP-9566</a>.
+     Major bug reported by Lenni Kuff and fixed by Colin Patrick McCabe (native)<br>
+     <b>Performing direct read using libhdfs sometimes raises SIGPIPE (which in turn throws SIGABRT) causing client crashes</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9563">HADOOP-9563</a>.
+     Major bug reported by Kihwal Lee and fixed by Tian Hong Wang (util)<br>
+     <b>Fix incompatibility introduced by HADOOP-9523</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9560">HADOOP-9560</a>.
+     Minor improvement reported by Tsuyoshi OZAWA and fixed by Tsuyoshi OZAWA (metrics)<br>
+     <b>metrics2#JvmMetrics should have max memory size of JVM</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9556">HADOOP-9556</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (ha , test)<br>
+     <b>disable HA tests on Windows that fail due to ZooKeeper client connection management bug</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9553">HADOOP-9553</a>.
+     Major bug reported by Arpit Agarwal and fixed by Arpit Agarwal (test)<br>
+     <b>TestAuthenticationToken fails on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9550">HADOOP-9550</a>.
+     Major bug reported by Karthik Kambatla and fixed by Karthik Kambatla <br>
+     <b>Remove aspectj dependency</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9549">HADOOP-9549</a>.
+     Blocker bug reported by Kihwal Lee and fixed by Daryn Sharp (security)<br>
+     <b>WebHdfsFileSystem hangs on close()</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9532">HADOOP-9532</a>.
+     Minor bug reported by Chris Nauroth and fixed by Chris Nauroth (bin)<br>
+     <b>HADOOP_CLIENT_OPTS is appended twice by Windows cmd scripts</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9526">HADOOP-9526</a>.
+     Major bug reported by Arpit Agarwal and fixed by Arpit Agarwal (test)<br>
+     <b>TestShellCommandFencer and TestShell fail on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9524">HADOOP-9524</a>.
+     Major bug reported by Arpit Agarwal and fixed by Arpit Agarwal (ha)<br>
+     <b>Fix ShellCommandFencer to work on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9523">HADOOP-9523</a>.
+     Major improvement reported by Tian Hong Wang and fixed by Tian Hong Wang <br>
+     <b>Provide a generic IBM java vendor flag in PlatformName.java to support non-Sun JREs</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9517">HADOOP-9517</a>.
+     Blocker bug reported by Arun C Murthy and fixed by Karthik Kambatla (documentation)<br>
+     <b>Document Hadoop Compatibility</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9511">HADOOP-9511</a>.
+     Major improvement reported by Omkar Vinit Joshi and fixed by Omkar Vinit Joshi <br>
+     <b>Adding support for additional input streams (FSDataInputStream and RandomAccessFile) in SecureIOUtils.</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9503">HADOOP-9503</a>.
+     Minor improvement reported by Varun Sharma and fixed by Varun Sharma (ipc)<br>
+     <b>Remove sleep between IPC client connect timeouts</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9500">HADOOP-9500</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (test)<br>
+     <b>TestUserGroupInformation#testGetServerSideGroups fails on Windows due to failure to find winutils.exe</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9496">HADOOP-9496</a>.
+     Critical bug reported by Gopal V and fixed by Harsh J (bin)<br>
+     <b>Bad merge of HADOOP-9450 on branch-2 breaks all bin/hadoop calls that need HADOOP_CLASSPATH </b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9490">HADOOP-9490</a>.
+     Major bug reported by Ivan Mitic and fixed by Ivan Mitic (fs)<br>
+     <b>LocalFileSystem#reportChecksumFailure not closing the checksum file handle before rename</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9488">HADOOP-9488</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (fs)<br>
+     <b>FileUtil#createJarWithClassPath only substitutes environment variables from current process environment/does not support overriding when launching new process</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9486">HADOOP-9486</a>.
+     Major bug reported by Vinod Kumar Vavilapalli and fixed by Chris Nauroth <br>
+     <b>Promote Windows and Shell related utils from YARN to Hadoop Common</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9485">HADOOP-9485</a>.
+     Minor bug reported by Colin Patrick McCabe and fixed by Colin Patrick McCabe (net)<br>
+     <b>No default value in the code for hadoop.rpc.socket.factory.class.default</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9483">HADOOP-9483</a>.
+     Major improvement reported by Chris Nauroth and fixed by Arpit Agarwal (util)<br>
+     <b>winutils support for readlink command</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9481">HADOOP-9481</a>.
+     Minor bug reported by Vadim Bondarev and fixed by Vadim Bondarev <br>
+     <b>Broken conditional logic with HADOOP_SNAPPY_LIBRARY</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9473">HADOOP-9473</a>.
+     Trivial bug reported by Glen Mazza and fixed by  (fs)<br>
+     <b>typo in FileUtil copy() method</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9469">HADOOP-9469</a>.
+     Major bug reported by Thomas Graves and fixed by Robert Parker <br>
+     <b>mapreduce/yarn source jars not included in dist tarball</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9459">HADOOP-9459</a>.
+     Critical bug reported by Vinay and fixed by Vinay (ha)<br>
+     <b>ActiveStandbyElector can join election even before Service HEALTHY, and results in null data at ActiveBreadCrumb</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9455">HADOOP-9455</a>.
+     Minor bug reported by Sangjin Lee and fixed by Chris Nauroth (bin)<br>
+     <b>HADOOP_CLIENT_OPTS appended twice causes JVM failures</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9451">HADOOP-9451</a>.
+     Major bug reported by Junping Du and fixed by Junping Du (net)<br>
+     <b>Node with one topology layer should be handled as fault topology when NodeGroup layer is enabled</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9450">HADOOP-9450</a>.
+     Major improvement reported by Mitch Wyle and fixed by Harsh J (scripts)<br>
+     <b>HADOOP_USER_CLASSPATH_FIRST is not honored; CLASSPATH is PREpended instead of APpended</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9443">HADOOP-9443</a>.
+     Major bug reported by Chuan Liu and fixed by Chuan Liu <br>
+     <b>Port winutils static code analysis change to trunk</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9437">HADOOP-9437</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (test)<br>
+     <b>TestNativeIO#testRenameTo fails on Windows due to assumption that POSIX errno is embedded in NativeIOException</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9430">HADOOP-9430</a>.
+     Major bug reported by Amir Sanjar and fixed by  (security)<br>
+     <b>TestSSLFactory fails on IBM JVM</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9429">HADOOP-9429</a>.
+     Major bug reported by Amir Sanjar and fixed by  (test)<br>
+     <b>TestConfiguration fails with IBM JAVA </b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9425">HADOOP-9425</a>.
+     Major sub-task reported by Sanjay Radia and fixed by Sanjay Radia (ipc)<br>
+     <b>Add error codes to rpc-response</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9421">HADOOP-9421</a>.
+     Blocker sub-task reported by Sanjay Radia and fixed by Daryn Sharp <br>
+     <b>Convert SASL to use ProtoBuf and provide negotiation capabilities</b><br>
+     <blockquote>Raw SASL protocol now uses protobufs wrapped with RPC headers.

+The negotiation sequence incorporates the state of the exchange.

+The server now has the ability to advertise its supported auth types.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9413">HADOOP-9413</a>.
+     Major bug reported by Ivan Mitic and fixed by Ivan Mitic <br>
+     <b>Introduce common utils for File#setReadable/Writable/Executable and File#canRead/Write/Execute that work cross-platform</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9408">HADOOP-9408</a>.
+     Minor bug reported by rajeshbabu and fixed by rajeshbabu (conf)<br>
+     <b>misleading description for net.topology.table.file.name property in core-default.xml</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9407">HADOOP-9407</a>.
+     Major bug reported by Sangjin Lee and fixed by Sangjin Lee (build)<br>
+     <b>commons-daemon 1.0.3 dependency has bad group id causing build issues</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9405">HADOOP-9405</a>.
+     Minor bug reported by Andrew Wang and fixed by Andrew Wang (test , tools)<br>
+     <b>TestGridmixSummary#testExecutionSummarizer is broken</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9401">HADOOP-9401</a>.
+     Major improvement reported by Karthik Kambatla and fixed by Karthik Kambatla <br>
+     <b>CodecPool: Add counters for number of (de)compressors leased out</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9399">HADOOP-9399</a>.
+     Minor bug reported by Todd Lipcon and fixed by Konstantin Boudnik (build)<br>
+     <b>protoc maven plugin doesn't work on mvn 3.0.2</b><br>
+     <blockquote>Committed to 2.0.4-alpha branch</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9397">HADOOP-9397</a>.
+     Major bug reported by Jason Lowe and fixed by Chris Nauroth (build)<br>
+     <b>Incremental dist tar build fails</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9388">HADOOP-9388</a>.
+     Major bug reported by Ivan Mitic and fixed by Ivan Mitic <br>
+     <b>TestFsShellCopy fails on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9380">HADOOP-9380</a>.
+     Major sub-task reported by Sanjay Radia and fixed by Sanjay Radia (ipc)<br>
+     <b>Add totalLength to rpc response</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9379">HADOOP-9379</a>.
+     Trivial improvement reported by Arpit Gupta and fixed by Arpit Gupta <br>
+     <b>capture the ulimit info after printing the log to the console</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9376">HADOOP-9376</a>.
+     Major bug reported by Ivan Mitic and fixed by Ivan Mitic <br>
+     <b>TestProxyUserFromEnv fails on a Windows domain joined machine</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9373">HADOOP-9373</a>.
+     Minor bug reported by Suresh Srinivas and fixed by Suresh Srinivas <br>
+     <b>Merge CHANGES.branch-trunk-win.txt to CHANGES.txt</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9369">HADOOP-9369</a>.
+     Major bug reported by Karthik Kambatla and fixed by Karthik Kambatla (net)<br>
+     <b>DNS#reverseDns() can return hostname with . appended at the end</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9365">HADOOP-9365</a>.
+     Major bug reported by Ivan Mitic and fixed by Ivan Mitic <br>
+     <b>TestHAZKUtil fails on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9364">HADOOP-9364</a>.
+     Major bug reported by Ivan Mitic and fixed by Ivan Mitic <br>
+     <b>PathData#expandAsGlob does not return correct results for absolute paths on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9358">HADOOP-9358</a>.
+     Major bug reported by Todd Lipcon and fixed by Todd Lipcon (ipc , security)<br>
+     <b>"Auth failed" log should include exception string</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9353">HADOOP-9353</a>.
+     Major bug reported by Arpit Agarwal and fixed by Arpit Agarwal (build)<br>
+     <b>Activate native-win profile by default on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9352">HADOOP-9352</a>.
+     Major improvement reported by Daryn Sharp and fixed by Daryn Sharp (security)<br>
+     <b>Expose UGI.setLoginUser for tests</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9349">HADOOP-9349</a>.
+     Major bug reported by Sandy Ryza and fixed by Sandy Ryza (tools)<br>
+     <b>Confusing output when running hadoop version from one hadoop installation when HADOOP_HOME points to another</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9343">HADOOP-9343</a>.
+     Major improvement reported by Siddharth Seth and fixed by Siddharth Seth <br>
+     <b>Allow additional exceptions through the RPC layer</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9342">HADOOP-9342</a>.
+     Major bug reported by Thomas Weise and fixed by Thomas Weise (build)<br>
+     <b>Remove jline from distribution</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9339">HADOOP-9339</a>.
+     Major bug reported by Daryn Sharp and fixed by Daryn Sharp (ipc)<br>
+     <b>IPC.Server incorrectly sets UGI auth type</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9338">HADOOP-9338</a>.
+     Major new feature reported by Nick White and fixed by Nick White (fs)<br>
+     <b>FsShell Copy Commands Should Optionally Preserve File Attributes</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9337">HADOOP-9337</a>.
+     Major bug reported by Ivan A. Veselovsky and fixed by Ivan A. Veselovsky <br>
+     <b>org.apache.hadoop.fs.DF.getMount() does not work on Mac OS</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9336">HADOOP-9336</a>.
+     Critical improvement reported by Daryn Sharp and fixed by Daryn Sharp (ipc)<br>
+     <b>Allow UGI of current connection to be queried</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9334">HADOOP-9334</a>.
+     Minor improvement reported by Nicolas Liochon and fixed by Nicolas Liochon (build)<br>
+     <b>Update netty version</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9323">HADOOP-9323</a>.
+     Minor bug reported by Hao Zhong and fixed by Suresh Srinivas (documentation , fs , io , record)<br>
+     <b>Typos in API documentation</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9322">HADOOP-9322</a>.
+     Minor improvement reported by Harsh J and fixed by Harsh J (security)<br>
+     <b>LdapGroupsMapping doesn't seem to set a timeout for its directory search</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9318">HADOOP-9318</a>.
+     Minor improvement reported by Colin Patrick McCabe and fixed by Colin Patrick McCabe <br>
+     <b>when exiting on a signal, print the signal name first</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9305">HADOOP-9305</a>.
+     Major bug reported by Aaron T. Myers and fixed by Aaron T. Myers (security)<br>
+     <b>Add support for running the Hadoop client on 64-bit AIX</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9304">HADOOP-9304</a>.
+     Major bug reported by Alejandro Abdelnur and fixed by Alejandro Abdelnur (build)<br>
+     <b>remove addition of avro generated-sources dirs to build</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9303">HADOOP-9303</a>.
+     Major bug reported by Thomas Graves and fixed by Andy Isaacson <br>
+     <b>command manual dfsadmin missing entry for restoreFailedStorage option</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9302">HADOOP-9302</a>.
+     Major bug reported by Thomas Graves and fixed by Andy Isaacson (documentation)<br>
+     <b>HDFS docs not linked from top level</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9299">HADOOP-9299</a>.
+     Blocker bug reported by Roman Shaposhnik and fixed by Daryn Sharp (security)<br>
+     <b>kerberos name resolution is kicking in even when kerberos is not configured</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9297">HADOOP-9297</a>.
+     Major bug reported by Alejandro Abdelnur and fixed by Alejandro Abdelnur <br>
+     <b>remove old record IO generation and tests</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9294">HADOOP-9294</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (test)<br>
+     <b>GetGroupsTestBase fails on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9290">HADOOP-9290</a>.
+     Major bug reported by Arpit Agarwal and fixed by Chris Nauroth (build , native)<br>
+     <b>Some tests cannot load native library</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9287">HADOOP-9287</a>.
+     Major test reported by Tsuyoshi OZAWA and fixed by Andrey Klochkov (test)<br>
+     <b>Parallel testing hadoop-common</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9283">HADOOP-9283</a>.
+     Major new feature reported by Aaron T. Myers and fixed by Aaron T. Myers (security)<br>
+     <b>Add support for running the Hadoop client on AIX</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9279">HADOOP-9279</a>.
+     Major improvement reported by Tsuyoshi OZAWA and fixed by Tsuyoshi OZAWA (build , documentation)<br>
+     <b>Document the need to build hadoop-maven-plugins for eclipse and separate project builds</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9267">HADOOP-9267</a>.
+     Minor bug reported by Andrew Wang and fixed by Andrew Wang <br>
+     <b>hadoop -help, -h, --help should show usage instructions</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9264">HADOOP-9264</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (fs)<br>
+     <b>port change to use Java untar API on Windows from branch-1-win to trunk</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9253">HADOOP-9253</a>.
+     Major improvement reported by Arpit Gupta and fixed by Arpit Gupta <br>
+     <b>Capture ulimit info in the logs at service start time</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9246">HADOOP-9246</a>.
+     Major bug reported by Karthik Kambatla and fixed by Karthik Kambatla (build)<br>
+     <b>Execution phase for hadoop-maven-plugin should be process-resources</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9245">HADOOP-9245</a>.
+     Major bug reported by Karthik Kambatla and fixed by Karthik Kambatla (build)<br>
+     <b>mvn clean without first running mvn install fails</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9233">HADOOP-9233</a>.
+     Major test reported by Vadim Bondarev and fixed by Vadim Bondarev <br>
+     <b>Cover package org.apache.hadoop.io.compress.zlib with unit tests</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9230">HADOOP-9230</a>.
+     Major bug reported by Karthik Kambatla and fixed by Karthik Kambatla (test)<br>
+     <b>TestUniformSizeInputFormat fails intermittently</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9222">HADOOP-9222</a>.
+     Major test reported by Vadim Bondarev and fixed by Vadim Bondarev <br>
+     <b>Cover package org.apache.hadoop.io.lz4 with unit tests</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9220">HADOOP-9220</a>.
+     Critical bug reported by Tom White and fixed by Tom White (ha)<br>
+     <b>Unnecessary transition to standby in ActiveStandbyElector</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9218">HADOOP-9218</a>.
+     Major sub-task reported by Sanjay Radia and fixed by Sanjay Radia (ipc)<br>
+     <b>Document the Rpc-wrappers used internally</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9211">HADOOP-9211</a>.
+     Major bug reported by Sarah Weissman and fixed by Plamen Jeliazkov (conf)<br>
+     <b>HADOOP_CLIENT_OPTS default setting fixes max heap size at 128m, disregards HADOOP_HEAPSIZE</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9209">HADOOP-9209</a>.
+     Major new feature reported by Todd Lipcon and fixed by Todd Lipcon (fs , tools)<br>
+     <b>Add shell command to dump file checksums</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9163">HADOOP-9163</a>.
+     Major sub-task reported by Sanjay Radia and fixed by Sanjay Radia (ipc)<br>
+     <b>The rpc msg in ProtobufRpcEngine.proto should be moved out to avoid an extra copy</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9154">HADOOP-9154</a>.
+     Major bug reported by Karthik Kambatla and fixed by Karthik Kambatla (io)<br>
+     <b>SortedMapWritable#putAll() doesn't add key/value classes to the map</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9151">HADOOP-9151</a>.
+     Major sub-task reported by Sanjay Radia and fixed by Sanjay Radia (ipc)<br>
+     <b>Include RPC error info in RpcResponseHeader instead of sending it separately</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9150">HADOOP-9150</a>.
+     Critical bug reported by Todd Lipcon and fixed by Todd Lipcon (fs/s3 , ha , performance , viewfs)<br>
+     <b>Unnecessary DNS resolution attempts for logical URIs</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9140">HADOOP-9140</a>.
+     Major sub-task reported by Sanjay Radia and fixed by Sanjay Radia (ipc)<br>
+     <b>Cleanup rpc PB protos</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9131">HADOOP-9131</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (test)<br>
+     <b>TestLocalFileSystem#testListStatusWithColons cannot run on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9125">HADOOP-9125</a>.
+     Major bug reported by Kai Zheng and fixed by Kai Zheng (security)<br>
+     <b>LdapGroupsMapping threw CommunicationException after some idle time</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9117">HADOOP-9117</a>.
+     Major improvement reported by Alejandro Abdelnur and fixed by Alejandro Abdelnur (build)<br>
+     <b>replace protoc ant plugin exec with a maven plugin</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9043">HADOOP-9043</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (util)<br>
+     <b>disallow creating symlinks with forward slashes in winutils</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8982">HADOOP-8982</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (net)<br>
+     <b>TestSocketIOWithTimeout fails on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8973">HADOOP-8973</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (util)<br>
+     <b>DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS ACLs</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8958">HADOOP-8958</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (viewfs)<br>
+     <b>ViewFs: Non-absolute mount name failures when running multiple tests on Windows</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8957">HADOOP-8957</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (fs)<br>
+     <b>AbstractFileSystem#IsValidName should be overridden for embedded file systems like ViewFs</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8924">HADOOP-8924</a>.
+     Major improvement reported by Chris Nauroth and fixed by Chris Nauroth (build)<br>
+     <b>Add maven plugin alternative to shell script to save package-info.java</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8917">HADOOP-8917</a>.
+     Major bug reported by Arpit Gupta and fixed by Arpit Gupta <br>
+     <b>add Locale.US to toLowerCase in SecurityUtil.replacePattern</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8886">HADOOP-8886</a>.
+     Major improvement reported by Eli Collins and fixed by Eli Collins (fs)<br>
+     <b>Remove KFS support</b><br>
+     <blockquote>Kosmos FS (KFS) is no longer maintained and Hadoop support has been removed. KFS has been replaced by QFS (HADOOP-8885).</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8711">HADOOP-8711</a>.
+     Major improvement reported by Brandon Li and fixed by Brandon Li (ipc)<br>
+     <b>provide an option for IPC server users to avoid printing stack information for certain exceptions</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8569">HADOOP-8569</a>.
+     Minor bug reported by Colin Patrick McCabe and fixed by Colin Patrick McCabe <br>
+     <b>CMakeLists.txt: define _GNU_SOURCE and _LARGEFILE_SOURCE</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8562">HADOOP-8562</a>.
+     Major new feature reported by Bikas Saha and fixed by Bikas Saha <br>
+     <b>Enhancements to support Hadoop on Windows Server and Windows Azure environments</b><br>
+     <blockquote>This umbrella jira makes enhancements to support Hadoop natively on Windows Server and Windows Azure environments.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8470">HADOOP-8470</a>.
+     Major sub-task reported by Junping Du and fixed by Junping Du <br>
+     <b>Implementation of 4-layer subclass of NetworkTopology (NetworkTopologyWithNodeGroup)</b><br>
+     <blockquote>This patch should be checked in together with (or after) HADOOP-8469: https://issues.apache.org/jira/browse/HADOOP-8469</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8469">HADOOP-8469</a>.
+     Major sub-task reported by Junping Du and fixed by Junping Du <br>
+     <b>Make NetworkTopology class pluggable</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8462">HADOOP-8462</a>.
+     Major improvement reported by Govind Kamat and fixed by Govind Kamat (io)<br>
+     <b>Native-code implementation of bzip2 codec</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8415">HADOOP-8415</a>.
+     Minor improvement reported by Jan van der Lugt and fixed by Jan van der Lugt (conf)<br>
+     <b>getDouble() and setDouble() in org.apache.hadoop.conf.Configuration</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7487">HADOOP-7487</a>.
+     Major bug reported by Todd Lipcon and fixed by Andrew Wang (fs)<br>
+     <b>DF should throw a more reasonable exception when mount cannot be determined</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7391">HADOOP-7391</a>.
+     Major bug reported by Sanjay Radia and fixed by Sanjay Radia <br>
+     <b>Document Interface Classification from HADOOP-5073</b><br>
+     <blockquote></blockquote></li>
+</ul>
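+For HADOOP-8415 above (getDouble() and setDouble() in org.apache.hadoop.conf.Configuration), the following is a minimal usage sketch of the new accessors; the property key "example.sampling.ratio" is hypothetical and used only for illustration.
+<blockquote>{code}
+import org.apache.hadoop.conf.Configuration;
+
+public class DoubleConfSketch {
+  public static void main(String[] args) {
+    Configuration conf = new Configuration();
+    // setDouble()/getDouble() are the typed accessors this change adds;
+    // the property key below is hypothetical.
+    conf.setDouble("example.sampling.ratio", 0.25d);
+    double ratio = conf.getDouble("example.sampling.ratio", 1.0d);
+    System.out.println("sampling ratio = " + ratio);
+  }
+}
+{code}</blockquote>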
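+For HADOOP-9209 above (shell command to dump file checksums), a programmatic sketch of the same information via the long-standing FileSystem#getFileChecksum API is shown below; the class name ChecksumSketch is hypothetical, and the exact output format of the new shell command is not restated here.
+<blockquote>{code}
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileChecksum;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.util.StringUtils;
+
+public class ChecksumSketch {
+  public static void main(String[] args) throws Exception {
+    Path path = new Path(args[0]);
+    FileSystem fs = path.getFileSystem(new Configuration());
+    // getFileChecksum() returns null for file systems that expose no checksum.
+    FileChecksum checksum = fs.getFileChecksum(path);
+    if (checksum == null) {
+      System.out.println("NONE");
+    } else {
+      System.out.println(checksum.getAlgorithmName() + "\t"
+          + StringUtils.byteToHexString(checksum.getBytes()));
+    }
+  }
+}
+{code}</blockquote>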
+</body></html>
+<META http-equiv="Content-Type" content="text/html; charset=UTF-8">
+<title>Hadoop  2.0.5-alpha Release Notes</title>
+<STYLE type="text/css">
+	H1 {font-family: sans-serif}
+	H2 {font-family: sans-serif; margin-left: 7mm}
+	TABLE {margin-left: 7mm}
+</STYLE>
+</head>
+<body>
+<h1>Hadoop  2.0.5-alpha Release Notes</h1>
+These release notes include new developer and user-facing incompatibilities, features, and major improvements. 
+<a name="changes"/>
+<h2>Changes since Hadoop 2.0.4-alpha</h2>
+<ul>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5240">MAPREDUCE-5240</a>.
+     Blocker bug reported by Roman Shaposhnik and fixed by Vinod Kumar Vavilapalli (mrv2)<br>
+     <b>inside of FileOutputCommitter the initialized Credentials cache appears to be empty</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4482">HDFS-4482</a>.
+     Blocker bug reported by Uma Maheswara Rao G and fixed by Uma Maheswara Rao G (namenode)<br>
+     <b>ReplicationMonitor thread can exit with NPE due to the race between delete and replication of same file.</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9407">HADOOP-9407</a>.
+     Major bug reported by Sangjin Lee and fixed by Sangjin Lee (build)<br>
+     <b>commons-daemon 1.0.3 dependency has bad group id causing build issues</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8419">HADOOP-8419</a>.
+     Major bug reported by Luke Lu and fixed by Yu Li (io)<br>
+     <b>GzipCodec NPE upon reset with IBM JDK</b><br>
+     <blockquote></blockquote></li>
+</ul>
+</body></html>
+<META http-equiv="Content-Type" content="text/html; charset=UTF-8">
+<title>Hadoop  2.0.4-alpha Release Notes</title>
+<STYLE type="text/css">
+	H1 {font-family: sans-serif}
+	H2 {font-family: sans-serif; margin-left: 7mm}
+	TABLE {margin-left: 7mm}
+</STYLE>
+</head>
+<body>
+<h1>Hadoop  2.0.4-alpha Release Notes</h1>
+These release notes include new developer and user-facing incompatibilities, features, and major improvements. 
+<a name="changes"/>
+<h2>Changes since Hadoop 2.0.3-alpha</h2>
+<ul>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-470">YARN-470</a>.
+     Major bug reported by Hitesh Shah and fixed by Siddharth Seth (nodemanager)<br>
+     <b>Support a way to disable resource monitoring on the NodeManager</b><br>
+     <blockquote>Currently, the memory monitoring check is disabled when maxMem is set to -1. However, maxMem is also sent to the RM when the NM registers with it (to define the maximum limit of allocatable resources).

+

+We need an explicit flag to disable monitoring, to avoid the problems caused by overloading the max-memory value; a configuration sketch appears after this list.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-449">YARN-449</a>.
+     Blocker bug reported by Siddharth Seth and fixed by  <br>
+     <b>HBase test failures when running against Hadoop 2</b><br>
+     <blockquote>Post YARN-429, unit tests for HBase continue to fail since the classpath for the MRAppMaster is not being set correctly.

+Reverting YARN-129 may fix this, but I'm not sure that's the correct solution. My guess is that, as Alejandro pointed out in YARN-129, Maven classloader magic is messing up java.class.path.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-443">YARN-443</a>.
+     Major improvement reported by Thomas Graves and fixed by Thomas Graves (nodemanager)<br>
+     <b>allow OS scheduling priority of NM to be different than the containers it launches</b><br>
+     <blockquote>It would be nice if we could have the nodemanager run at a different OS scheduling priority than the containers, so that you can still communicate with the nodemanager if the containers are out of control.

+

+On Linux we could launch the nodemanager at a higher priority, but then all the containers it launches would also be at that higher priority, so we need a way for the container executor to launch them at a lower priority.

+

+I'm not sure how this applies to Windows, if at all.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/YARN-429">YARN-429</a>.
+     Blocker bug reported by Siddharth Seth and fixed by Siddharth Seth (resourcemanager)<br>
+     <b>capacity-scheduler config missing from yarn-test artifact</b><br>
+     <blockquote>MiniYARNCluster and MiniMRCluster are unusable by downstream projects with the 2.0.3-alpha release, since the capacity-scheduler configuration is missing from the test artifact.

+hadoop-yarn-server-tests-3.0.0-SNAPSHOT-tests.jar should include the default capacity-scheduler configuration. Also, this doesn't need to be part of the default classpath - and should be moved out of the top level directory in the dist package.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5117">MAPREDUCE-5117</a>.
+     Blocker bug reported by Roman Shaposhnik and fixed by Siddharth Seth (security)<br>
+     <b>With security enabled HS delegation token renewer fails</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5094">MAPREDUCE-5094</a>.
+     Major bug reported by Siddharth Seth and fixed by Siddharth Seth <br>
+     <b>Disable mem monitoring by default in MiniMRYarnCluster</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5088">MAPREDUCE-5088</a>.
+     Blocker bug reported by Roman Shaposhnik and fixed by Daryn Sharp <br>
+     <b>MR Client gets a renewer token exception while Oozie is submitting a job</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5083">MAPREDUCE-5083</a>.
+     Major bug reported by Siddharth Seth and fixed by Siddharth Seth (mrv2)<br>
+     <b>MiniMRCluster should use a random component when creating an actual cluster</b><br>
+     <blockquote>Committed to branch-2.0.4. Modified changes.txt in trunk, branch-2 and branch-2.0.4 accordingly.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5053">MAPREDUCE-5053</a>.
+     Major bug reported by Robert Parker and fixed by Robert Parker <br>
+     <b>java.lang.InternalError from decompression codec causes reducer to fail</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5023">MAPREDUCE-5023</a>.
+     Critical bug reported by Kendall Thrapp and fixed by Ravi Prakash (jobhistoryserver , webapps)<br>
+     <b>History Server Web Services missing Job Counters</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-5006">MAPREDUCE-5006</a>.
+     Major bug reported by Alejandro Abdelnur and fixed by Sandy Ryza (contrib/streaming)<br>
+     <b>streaming tests failing</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4549">MAPREDUCE-4549</a>.
+     Blocker bug reported by Robert Joseph Evans and fixed by Robert Joseph Evans (mrv2)<br>
+     <b>Distributed cache conflicts break backwards compatibility</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3685">MAPREDUCE-3685</a>.
+     Critical bug reported by anty.rao and fixed by anty (mrv2)<br>
+     <b>There are some bugs in the implementation of MergeManager</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4649">HDFS-4649</a>.
+     Blocker bug reported by Daryn Sharp and fixed by Daryn Sharp (namenode , security , webhdfs)<br>
+     <b>Webhdfs cannot list large directories</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4646">HDFS-4646</a>.
+     Minor bug reported by Jagane Sundar and fixed by  (namenode)<br>
+     <b>createNNProxyWithClientProtocol ignores configured timeout value</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4581">HDFS-4581</a>.
+     Major bug reported by Rohit Kochar and fixed by Rohit Kochar (datanode)<br>
+     <b>DataNode#checkDiskError should not be called on network errors</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4577">HDFS-4577</a>.
+     Major sub-task reported by Daryn Sharp and fixed by Daryn Sharp (webhdfs)<br>
+     <b>Webhdfs operations should declare if authentication is required</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4571">HDFS-4571</a>.
+     Major bug reported by Alejandro Abdelnur and fixed by Alejandro Abdelnur (webhdfs)<br>
+     <b>WebHDFS should not set the service hostname on the server side</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4567">HDFS-4567</a>.
+     Major sub-task reported by Daryn Sharp and fixed by Daryn Sharp (webhdfs)<br>
+     <b>Webhdfs does not need a token for token operations</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4566">HDFS-4566</a>.
+     Major sub-task reported by Daryn Sharp and fixed by Daryn Sharp (webhdfs)<br>
+     <b>Webhdfs token cancellation should use authentication</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4560">HDFS-4560</a>.
+     Major sub-task reported by Daryn Sharp and fixed by Daryn Sharp (webhdfs)<br>
+     <b>Webhdfs cannot use tokens obtained by another user</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4548">HDFS-4548</a>.
+     Blocker sub-task reported by Daryn Sharp and fixed by Daryn Sharp <br>
+     <b>Webhdfs doesn't renegotiate SPNEGO token</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3344">HDFS-3344</a>.
+     Major bug reported by Tsz Wo (Nicholas), SZE and fixed by Kihwal Lee (namenode)<br>
+     <b>Unreliable corrupt blocks counting in TestProcessCorruptBlocks</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9471">HADOOP-9471</a>.
+     Major bug reported by Alejandro Abdelnur and fixed by Alejandro Abdelnur (build)<br>
+     <b>hadoop-client wrongfully excludes jetty-util JAR, breaking webhdfs</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9467">HADOOP-9467</a>.
+     Major bug reported by Chris Nauroth and fixed by Chris Nauroth (metrics)<br>
+     <b>Metrics2 record filtering (.record.filter.include/exclude) does not filter by name</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9444">HADOOP-9444</a>.
+     Blocker bug reported by Konstantin Boudnik and fixed by Roman Shaposhnik (conf)<br>
+     <b>$var shell substitution in properties is not expanded in hadoop-policy.xml</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9406">HADOOP-9406</a>.
+     Major bug reported by Alejandro Abdelnur and fixed by Alejandro Abdelnur (build)<br>
+     <b>hadoop-client leaks dependency on JDK tools jar</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9405">HADOOP-9405</a>.
+     Minor bug reported by Andrew Wang and fixed by Andrew Wang (test , tools)<br>
+     <b>TestGridmixSummary#testExecutionSummarizer is broken</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9399">HADOOP-9399</a>.
+     Minor bug reported by Todd Lipcon and fixed by Konstantin Boudnik (build)<br>
+     <b>protoc maven plugin doesn't work on mvn 3.0.2</b><br>
+     <blockquote>Committed to 2.0.4-alpha branch</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9379">HADOOP-9379</a>.
+     Trivial improvement reported by Arpit Gupta and fixed by Arpit Gupta <br>
+     <b>capture the ulimit info after printing the log to the console</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9374">HADOOP-9374</a>.
+     Major improvement reported by Daryn Sharp and fixed by Daryn Sharp (security)<br>
+     <b>Add tokens from -tokenCacheFile into UGI</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9301">HADOOP-9301</a>.
+     Blocker bug reported by Roman Shaposhnik and fixed by Alejandro Abdelnur (build)<br>
+     <b>hadoop client servlet/jsp/jetty/tomcat JARs creating conflicts in Oozie &amp; HttpFS</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9299">HADOOP-9299</a>.
+     Blocker bug reported by Roman Shaposhnik and fixed by Daryn Sharp (security)<br>
+     <b>Kerberos name resolution is kicking in even when Kerberos is not configured</b><br>
+     <blockquote></blockquote></li>
+</ul>
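+For YARN-470 above (disabling resource monitoring on the NodeManager), the following is a minimal sketch of turning the per-container memory checks off with explicit flags rather than by overloading the max-memory value. It assumes the yarn.nodemanager.pmem-check-enabled and yarn.nodemanager.vmem-check-enabled switches carried in yarn-default.xml for Hadoop 2.x; whether these are the exact flags this issue settled on is an assumption.
+<blockquote>{code}
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+
+public class DisableContainerMemChecks {
+  public static void main(String[] args) {
+    YarnConfiguration conf = new YarnConfiguration();
+    // Explicit switches for the NodeManager's physical/virtual memory
+    // monitors, independent of the advertised maximum memory.
+    conf.setBoolean("yarn.nodemanager.pmem-check-enabled", false);
+    conf.setBoolean("yarn.nodemanager.vmem-check-enabled", false);
+    System.out.println(conf.getBoolean("yarn.nodemanager.vmem-check-enabled", true));
+  }
+}
+{code}</blockquote>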
+</body></html>
+<META http-equiv="Content-Type" content="text/html; charset=UTF-8">
 <title>Hadoop  2.0.3-alpha Release Notes</title>
 <STYLE type="text/css">
 	H1 {font-family: sans-serif}