<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<!--
| Generated by Apache Maven Doxia at 2021-06-15
| Rendered using Apache Maven Stylus Skin 1.5
-->
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Apache Hadoop 3.3.1 &#x2013; Apache Hadoop 2.8.0 Release Notes</title>
<style type="text/css" media="all">
@import url("../../css/maven-base.css");
@import url("../../css/maven-theme.css");
@import url("../../css/site.css");
</style>
<link rel="stylesheet" href="../../css/print.css" type="text/css" media="print" />
<meta name="Date-Revision-yyyymmdd" content="20210615" />
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
</head>
<body class="composite">
<div id="banner">
<a href="http://hadoop.apache.org/" id="bannerLeft">
<img src="http://hadoop.apache.org/images/hadoop-logo.jpg" alt="" />
</a>
<a href="http://www.apache.org/" id="bannerRight">
<img src="http://www.apache.org/images/asf_logo_wide.png" alt="" />
</a>
<div class="clear">
<hr/>
</div>
</div>
<div id="breadcrumbs">
<div class="xleft">
<a href="http://www.apache.org/" class="externalLink">Apache</a>
&gt;
<a href="http://hadoop.apache.org/" class="externalLink">Hadoop</a>
&gt;
<a href="../../../index.html">Apache Hadoop Project Dist POM</a>
&gt;
<a href="../../index.html">Apache Hadoop 3.3.1</a>
&gt;
Apache Hadoop 2.8.0 Release Notes
</div>
<div class="xright"> <a href="http://wiki.apache.org/hadoop" class="externalLink">Wiki</a>
|
<a href="https://gitbox.apache.org/repos/asf/hadoop.git" class="externalLink">git</a>
|
<a href="http://hadoop.apache.org/" class="externalLink">Apache Hadoop</a>
&nbsp;| Last Published: 2021-06-15
&nbsp;| Version: 3.3.1
</div>
<div class="clear">
<hr/>
</div>
</div>
<div id="leftColumn">
<div id="navcolumn">
<h5>General</h5>
<ul>
<li class="none">
<a href="../../../../index.html">Overview</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-common/SingleCluster.html">Single Node Setup</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-common/ClusterSetup.html">Cluster Setup</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-common/CommandsManual.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-common/FileSystemShell.html">FileSystem Shell</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-common/Compatibility.html">Compatibility Specification</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-common/DownstreamDev.html">Downstream Developer's Guide</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-common/AdminCompatibilityGuide.html">Admin Compatibility Guide</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-common/InterfaceClassification.html">Interface Classification</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-common/filesystem/index.html">FileSystem Specification</a>
</li>
</ul>
<h5>Common</h5>
<ul>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-common/CLIMiniCluster.html">CLI Mini Cluster</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-common/FairCallQueue.html">Fair Call Queue</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-common/NativeLibraries.html">Native Libraries</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-common/Superusers.html">Proxy User</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-common/RackAwareness.html">Rack Awareness</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-common/SecureMode.html">Secure Mode</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-common/ServiceLevelAuth.html">Service Level Authorization</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-common/HttpAuthentication.html">HTTP Authentication</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-common/CredentialProviderAPI.html">Credential Provider API</a>
</li>
<li class="none">
<a href="../../../../hadoop-kms/index.html">Hadoop KMS</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-common/Tracing.html">Tracing</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-common/UnixShellGuide.html">Unix Shell Guide</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-common/registry/index.html">Registry</a>
</li>
</ul>
<h5>HDFS</h5>
<ul>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/HdfsDesign.html">Architecture</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html">User Guide</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/HDFSCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html">NameNode HA With QJM</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html">NameNode HA With NFS</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/ObserverNameNode.html">Observer NameNode</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/Federation.html">Federation</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/ViewFs.html">ViewFs</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/ViewFsOverloadScheme.html">ViewFsOverloadScheme</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html">Snapshots</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/HdfsEditsViewer.html">Edits Viewer</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/HdfsImageViewer.html">Image Viewer</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html">Permissions and HDFS</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/HdfsQuotaAdminGuide.html">Quotas and HDFS</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/LibHdfs.html">libhdfs (C API)</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/WebHDFS.html">WebHDFS (REST API)</a>
</li>
<li class="none">
<a href="../../../../hadoop-hdfs-httpfs/index.html">HttpFS</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/ShortCircuitLocalReads.html">Short Circuit Local Reads</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html">Centralized Cache Management</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html">NFS Gateway</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html">Rolling Upgrade</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/ExtendedAttributes.html">Extended Attributes</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html">Transparent Encryption</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/HdfsMultihoming.html">Multihoming</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html">Storage Policies</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/MemoryStorage.html">Memory Storage Support</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/SLGUserGuide.html">Synthetic Load Generator</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html">Erasure Coding</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/HDFSDiskbalancer.html">Disk Balancer</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/HdfsUpgradeDomain.html">Upgrade Domain</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/HdfsDataNodeAdminGuide.html">DataNode Admin</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs-rbf/HDFSRouterFederation.html">Router Federation</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/HdfsProvidedStorage.html">Provided Storage</a>
</li>
</ul>
<h5>MapReduce</h5>
<ul>
<li class="none">
<a href="../../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html">Tutorial</a>
</li>
<li class="none">
<a href="../../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduce_Compatibility_Hadoop1_Hadoop2.html">Compatibility with 1.x</a>
</li>
<li class="none">
<a href="../../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/EncryptedShuffle.html">Encrypted Shuffle</a>
</li>
<li class="none">
<a href="../../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/PluggableShuffleAndPluggableSort.html">Pluggable Shuffle/Sort</a>
</li>
<li class="none">
<a href="../../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/DistributedCacheDeploy.html">Distributed Cache Deploy</a>
</li>
<li class="none">
<a href="../../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/SharedCacheSupport.html">Support for YARN Shared Cache</a>
</li>
</ul>
<h5>MapReduce REST APIs</h5>
<ul>
<li class="none">
<a href="../../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredAppMasterRest.html">MR Application Master</a>
</li>
<li class="none">
<a href="../../../../hadoop-mapreduce-client/hadoop-mapreduce-client-hs/HistoryServerRest.html">MR History Server</a>
</li>
</ul>
<h5>YARN</h5>
<ul>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/YARN.html">Architecture</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/YarnCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html">Capacity Scheduler</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/FairScheduler.html">Fair Scheduler</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html">ResourceManager Restart</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html">ResourceManager HA</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/ResourceModel.html">Resource Model</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/NodeLabel.html">Node Labels</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/NodeAttributes.html">Node Attributes</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/WebApplicationProxy.html">Web Application Proxy</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/TimelineServer.html">Timeline Server</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/TimelineServiceV2.html">Timeline Service V.2</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html">Writing YARN Applications</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/YarnApplicationSecurity.html">YARN Application Security</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/NodeManager.html">NodeManager</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/DockerContainers.html">Running Applications in Docker Containers</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/RuncContainers.html">Running Applications in runC Containers</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/NodeManagerCgroups.html">Using CGroups</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/SecureContainer.html">Secure Containers</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/ReservationSystem.html">Reservation System</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/GracefulDecommission.html">Graceful Decommission</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html">Opportunistic Containers</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/Federation.html">YARN Federation</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/SharedCache.html">Shared Cache</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/UsingGpus.html">Using GPU</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/UsingFPGA.html">Using FPGA</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/PlacementConstraints.html">Placement Constraints</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/YarnUI2.html">YARN UI2</a>
</li>
</ul>
<h5>YARN REST APIs</h5>
<ul>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/WebServicesIntro.html">Introduction</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html">Resource Manager</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/NodeManagerRest.html">Node Manager</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/TimelineServer.html#Timeline_Server_REST_API_v1">Timeline Server</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/TimelineServiceV2.html#Timeline_Service_v.2_REST_API">Timeline Service V.2</a>
</li>
</ul>
<h5>YARN Service</h5>
<ul>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/yarn-service/Overview.html">Overview</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/yarn-service/QuickStart.html">QuickStart</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/yarn-service/Concepts.html">Concepts</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/yarn-service/YarnServiceAPI.html">Yarn Service API</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/yarn-service/ServiceDiscovery.html">Service Discovery</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-site/yarn-service/SystemServices.html">System Services</a>
</li>
</ul>
<h5>Hadoop Compatible File Systems</h5>
<ul>
<li class="none">
<a href="../../../../hadoop-aliyun/tools/hadoop-aliyun/index.html">Aliyun OSS</a>
</li>
<li class="none">
<a href="../../../../hadoop-aws/tools/hadoop-aws/index.html">Amazon S3</a>
</li>
<li class="none">
<a href="../../../../hadoop-azure/index.html">Azure Blob Storage</a>
</li>
<li class="none">
<a href="../../../../hadoop-azure-datalake/index.html">Azure Data Lake Storage</a>
</li>
<li class="none">
<a href="../../../../hadoop-openstack/index.html">OpenStack Swift</a>
</li>
<li class="none">
<a href="../../../../hadoop-cos/cloud-storage/index.html">Tencent COS</a>
</li>
</ul>
<h5>Auth</h5>
<ul>
<li class="none">
<a href="../../../../hadoop-auth/index.html">Overview</a>
</li>
<li class="none">
<a href="../../../../hadoop-auth/Examples.html">Examples</a>
</li>
<li class="none">
<a href="../../../../hadoop-auth/Configuration.html">Configuration</a>
</li>
<li class="none">
<a href="../../../../hadoop-auth/BuildingIt.html">Building</a>
</li>
</ul>
<h5>Tools</h5>
<ul>
<li class="none">
<a href="../../../../hadoop-streaming/HadoopStreaming.html">Hadoop Streaming</a>
</li>
<li class="none">
<a href="../../../../hadoop-archives/HadoopArchives.html">Hadoop Archives</a>
</li>
<li class="none">
<a href="../../../../hadoop-archive-logs/HadoopArchiveLogs.html">Hadoop Archive Logs</a>
</li>
<li class="none">
<a href="../../../../hadoop-distcp/DistCp.html">DistCp</a>
</li>
<li class="none">
<a href="../../../../hadoop-gridmix/GridMix.html">GridMix</a>
</li>
<li class="none">
<a href="../../../../hadoop-rumen/Rumen.html">Rumen</a>
</li>
<li class="none">
<a href="../../../../hadoop-resourceestimator/ResourceEstimator.html">Resource Estimator Service</a>
</li>
<li class="none">
<a href="../../../../hadoop-sls/SchedulerLoadSimulator.html">Scheduler Load Simulator</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-common/Benchmarking.html">Hadoop Benchmarking</a>
</li>
<li class="none">
<a href="../../../../hadoop-dynamometer/Dynamometer.html">Dynamometer</a>
</li>
</ul>
<h5>Reference</h5>
<ul>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-common/release/">Changelog and Release Notes</a>
</li>
<li class="none">
<a href="../../../../api/index.html">Java API docs</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-common/UnixShellAPI.html">Unix Shell API</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-common/Metrics.html">Metrics</a>
</li>
</ul>
<h5>Configuration</h5>
<ul>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-common/core-default.xml">core-default.xml</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs/hdfs-default.xml">hdfs-default.xml</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-hdfs-rbf/hdfs-rbf-default.xml">hdfs-rbf-default.xml</a>
</li>
<li class="none">
<a href="../../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml">mapred-default.xml</a>
</li>
<li class="none">
<a href="../../../../hadoop-yarn/hadoop-yarn-common/yarn-default.xml">yarn-default.xml</a>
</li>
<li class="none">
<a href="../../../../hadoop-kms/kms-default.html">kms-default.xml</a>
</li>
<li class="none">
<a href="../../../../hadoop-hdfs-httpfs/httpfs-default.html">httpfs-default.xml</a>
</li>
<li class="none">
<a href="../../../../hadoop-project-dist/hadoop-common/DeprecatedProperties.html">Deprecated Properties</a>
</li>
</ul>
<a href="http://maven.apache.org/" title="Built by Maven" class="poweredBy">
<img alt="Built by Maven" src="../../images/logos/maven-feather.png"/>
</a>
</div>
</div>
<div id="bodyColumn">
<div id="contentBox">
<!---
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
-->
<h1>Apache Hadoop 2.8.0 Release Notes</h1>
<p>These release notes cover new developer and user-facing incompatibilities, important issues, features, and major improvements.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-7713">HADOOP-7713</a> | <i>Trivial</i> | <b>dfs -count -q should label output column</b></li>
</ul>
<p>Added a -v option to the fs -count command to display a header record in the report.</p>
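<p>For illustration, a minimal Java sketch (not part of the release note itself) that invokes the new option programmatically through FsShell; the path /tmp is only an example:</p>
<div class="source"><pre>
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FsShell;
import org.apache.hadoop.util.ToolRunner;

public class CountWithHeader {
  public static void main(String[] args) throws Exception {
    // Equivalent to: hadoop fs -count -q -v /tmp
    int exitCode = ToolRunner.run(new FsShell(new Configuration()),
        new String[] {"-count", "-q", "-v", "/tmp"});
    System.exit(exitCode);
  }
}
</pre></div><hr />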
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-8934">HADOOP-8934</a> | <i>Minor</i> | <b>Shell command ls should include sort options</b></li>
</ul>
<p>Added options to sort the output of the fs -ls command: -t (mtime), -S (size), -u (atime), -r (reverse).</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-11226">HADOOP-11226</a> | <i>Major</i> | <b>Add a configuration to set ipc.Client&#x2019;s traffic class with IPTOS_LOWDELAY|IPTOS_RELIABILITY</b></li>
</ul>
<p>Use low-latency TCP connections for Hadoop IPC.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-9477">HADOOP-9477</a> | <i>Major</i> | <b>Add posixGroups support for LDAP groups mapping service</b></li>
</ul>
<p>Added posixGroups support for the LDAP groups mapping service. The change in LdapGroupsMapping is compatible with the previous behavior. In LDAP, the group mapping between posixAccount and posixGroup differs from the general LDAP group mapping; one of the differences is that the &#x201c;memberUid&#x201d; attribute is used to map posixAccount entries to posixGroup entries. The feature handles this mapping internally when the configuration hadoop.security.group.mapping.ldap.search.filter.user is set to &#x201c;posixAccount&#x201d; and hadoop.security.group.mapping.ldap.search.filter.group is set to &#x201c;posixGroup&#x201d;.</p>
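<p>As a hedged sketch, the relevant properties set programmatically via Configuration (in practice they normally live in core-site.xml); the values follow the note above:</p>
<div class="source"><pre>
import org.apache.hadoop.conf.Configuration;

public class LdapPosixMappingSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Use the LDAP-backed group mapping service
    conf.set("hadoop.security.group.mapping",
        "org.apache.hadoop.security.LdapGroupsMapping");
    // posix-style mapping: memberUid links posixAccount to posixGroup
    conf.set("hadoop.security.group.mapping.ldap.search.filter.user",
        "posixAccount");
    conf.set("hadoop.security.group.mapping.ldap.search.filter.group",
        "posixGroup");
  }
}
</pre></div><hr />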
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/YARN-3241">YARN-3241</a> | <i>Major</i> | <b>FairScheduler handles &#x201c;invalid&#x201d; queue names inconsistently</b></li>
</ul>
<p>FairScheduler no longer allows queue names with leading or trailing spaces, or empty sub-queue names.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-7501">HDFS-7501</a> | <i>Major</i> | <b>TransactionsSinceLastCheckpoint can be negative on SBNs</b></li>
</ul>
<p>Fixed a bug where the StandbyNameNode&#x2019;s TransactionsSinceLastCheckpoint metric may slide into a negative number after every subsequent checkpoint.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-11660">HADOOP-11660</a> | <i>Minor</i> | <b>Add support for hardware crc of HDFS checksums on ARM aarch64 architecture</b></li>
</ul>
<p>Added support for aarch64 CRC instructions.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-11731">HADOOP-11731</a> | <i>Major</i> | <b>Rework the changelog and releasenotes</b></li>
</ul><!-- markdown -->
<ul>
<li>
<p>The release notes now only contain JIRA issues with incompatible changes and actual release notes. The generated format has been changed from HTML to markdown.</p>
</li>
<li>
<p>The changelog is now automatically generated from data stored in JIRA rather than manually maintained. The format has been changed from pure text to markdown as well as containing more of the information that was previously stored in the release notes.</p>
</li>
<li>
<p>In order to generate the changes file, Python must be installed.</p>
</li>
<li>
<p>A new -Preleasedocs profile has been added to Maven in order to trigger this functionality.</p>
</li>
</ul><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/YARN-3365">YARN-3365</a> | <i>Major</i> | <b>Add support for using the &#x2018;tc&#x2019; tool via container-executor</b></li>
</ul>
<p>Added support for using the &#x2018;tc&#x2019; tool in batch mode via container-executor. This is a prerequisite for the traffic-shaping functionality needed to support outbound bandwidth as a resource in YARN.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/YARN-3443">YARN-3443</a> | <i>Major</i> | <b>Create a &#x2018;ResourceHandler&#x2019; subsystem to ease addition of support for new resource types on the NM</b></li>
</ul>
<p>The current cgroups implementation is closely tied to supporting CPU as a resource. This patch separates the CGroups implementation into a reusable class and provides a simple ResourceHandler subsystem that will enable support for new resource types on the NM, e.g. network, disk, etc.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-6666">HDFS-6666</a> | <i>Minor</i> | <b>Abort NameNode and DataNode startup if security is enabled but block access token is not enabled.</b></li>
</ul>
<p>NameNode and DataNode now abort during startup if they attempt to run in secure mode but block access tokens are not enabled by setting the configuration property dfs.block.access.token.enable to true in hdfs-site.xml. Previously, this case only logged a warning, even though it is an insecure configuration.</p>
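<p>A minimal sketch of the required pairing, shown programmatically via Configuration for illustration (these properties normally belong in the site XML files):</p>
<div class="source"><pre>
import org.apache.hadoop.conf.Configuration;

public class SecureModeTokenSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Running in secure mode...
    conf.set("hadoop.security.authentication", "kerberos");
    // ...now requires block access tokens, or NN/DN startup aborts
    conf.setBoolean("dfs.block.access.token.enable", true);
  }
}
</pre></div><hr />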
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/YARN-3021">YARN-3021</a> | <i>Major</i> | <b>YARN&#x2019;s delegation-token handling disallows certain trust setups to operate properly over DistCp</b></li>
</ul>
<p>The ResourceManager renews delegation tokens for applications. This behavior has been changed to renew tokens only if the token&#x2019;s renewer is a non-empty string. MapReduce jobs can instruct the ResourceManager to skip renewal of tokens obtained from certain hosts by specifying the hosts with the configuration mapreduce.job.hdfs-servers.token-renewal.exclude=&lt;host1&gt;,&lt;host2&gt;,..,&lt;hostN&gt;.</p>
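<p>For illustration, a sketch of a job opting out of renewal for tokens from two hosts; the host names are assumptions, not values from the release:</p>
<div class="source"><pre>
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class SkipTokenRenewalSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Ask the RM not to renew tokens obtained from these (example) hosts
    conf.set("mapreduce.job.hdfs-servers.token-renewal.exclude",
        "remote-nn1.example.com,remote-nn2.example.com");
    Job job = Job.getInstance(conf, "cross-cluster distcp-style job");
  }
}
</pre></div><hr />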
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-11746">HADOOP-11746</a> | <i>Major</i> | <b>rewrite test-patch.sh</b></li>
</ul><!-- markdown -->
<ul>
<li>test-patch.sh now has new output that is different from previous versions</li>
<li>test-patch.sh is now pluggable via the test-patch.d directory, with checkstyle and shellcheck tests included</li>
<li>JIRA comments now use much more markup to improve readability</li>
<li>test-patch.sh now supports either a file name, a URL, or a JIRA issue as input in developer mode</li>
<li>If part of the patch-testing code is changed, test-patch.sh will now attempt to re-execute itself using the new version.</li>
<li>Some logic to try to reduce the number of unnecessary tests. For example, patches that only modify markdown should not run the Java compilation tests.</li>
<li>Plugins for checkstyle, shellcheck, and whitespace now execute as necessary.</li>
<li>New test code for mvn site</li>
<li>A breakdown of the times needed to execute certain blocks as well as a total runtime is now reported to assist in fixing long running tests and optimize the entire process.</li>
<li>Several new options</li>
<li>--resetrepo will put test-patch.sh in destructive mode, similar to a normal Jenkins run</li>
<li>--testlist allows one to provide a comma-delimited list of test subsystems to forcibly execute</li>
<li>--modulelist to provide a comma-delimited list of module tests to execute in addition to the ones that are automatically detected</li>
<li>--offline mode to attempt to stop connecting to the Internet for certain operations</li>
<li>test-patch.sh now defaults to the POSIX equivalents on Solaris and Illumos-based operating systems</li>
<li>shelldocs.py may be used to generate test-patch.sh API information</li>
<li>FindBugs output is now listed on the JIRA comment</li>
<li>lots of general code cleanup, including attempts to remove any local state files to reduce potential race conditions</li>
<li>Some logic to determine if a patch is for a given major branch using several strategies as well as a particular git ref (using git+ref as part of the name).</li>
<li>Some logic to determine if a patch references a particular JIRA issue.</li>
<li>Unit tests are only flagged as necessary with native or Java code, since Hadoop has no framework in place yet for other types of unit tests.</li>
<li>test-patch now exits with a failure status if problems arise trying to do git checkouts. Previously the exit code was success.</li>
</ul><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/YARN-3366">YARN-3366</a> | <i>Major</i> | <b>Outbound network bandwidth : classify/shape traffic originating from YARN containers</b></li>
</ul>
<p>This adds: 1) a TrafficController class that provides an implementation of traffic shaping using tc; 2) a ResourceHandler implementation for outbound bandwidth as a resource, with isolation/enforcement using cgroups and tc.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-11861">HADOOP-11861</a> | <i>Major</i> | <b>test-patch.sh rewrite addendum patch</b></li>
</ul><!-- markdown -->
<ul>
<li>--build-native=false should work now</li>
<li>--branch option lets one specify a branch to test against on the command line</li>
<li>On certain Jenkins machines, the artifact directory sometimes gets deleted from outside the test-patch script. There is now some code to try to detect, alert, and quickly exit if that happens.</li>
<li>Various semi-critical output and bug fixes</li>
</ul><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-11843">HADOOP-11843</a> | <i>Major</i> | <b>Make setting up the build environment easier</b></li>
</ul>
<p>Includes a Docker-based solution for setting up a build environment with minimal effort.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-11813">HADOOP-11813</a> | <i>Minor</i> | <b>releasedocmaker.py should use today&#x2019;s date instead of unreleased</b></li>
</ul>
<p>Use today&#x2019;s date instead of &#x2018;Unreleased&#x2019; in releasedocmaker.py when --usetoday is given as an option.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-8226">HDFS-8226</a> | <i>Blocker</i> | <b>Non-HA rollback compatibility broken</b></li>
</ul>
<p>Non-HA rollback steps have been changed. Run the rollback command on the NameNode (bin/hdfs namenode -rollback) before starting the cluster with the &#x2018;-rollback&#x2019; option (sbin/start-dfs.sh -rollback).</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-6888">HDFS-6888</a> | <i>Major</i> | <b>Allow selectively audit logging ops</b></li>
</ul>
<p>Specific HDFS ops can be selectively excluded from audit logging via the &#x2018;dfs.namenode.audit.log.debug.cmdlist&#x2019; configuration.</p>
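<p>A hedged sketch of the setting; the excluded op names here (getfileinfo, listStatus) are examples, not values mandated by the change:</p>
<div class="source"><pre>
import org.apache.hadoop.conf.Configuration;

public class AuditLogExclusionSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Comma-separated ops to drop from the NameNode audit log;
    // normally set in hdfs-site.xml on the NameNode
    conf.set("dfs.namenode.audit.log.debug.cmdlist",
        "getfileinfo,listStatus");
  }
}
</pre></div><hr />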
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-8157">HDFS-8157</a> | <i>Major</i> | <b>Writes to RAM DISK reserve locked memory for block files</b></li>
</ul>
<p>This change requires setting the dfs.datanode.max.locked.memory configuration key to use the HDFS Lazy Persist feature. Its value limits the combined off-heap memory for blocks in RAM via caching and lazy persist writes.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-11772">HADOOP-11772</a> | <i>Major</i> | <b>RPC Invoker relies on static ClientCache which has synchronized(this) blocks</b></li>
</ul>
<p>The Client#call() methods that had been deprecated since 0.23 have been removed.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/YARN-3684">YARN-3684</a> | <i>Major</i> | <b>Change ContainerExecutor&#x2019;s primary lifecycle methods to use a more extensible mechanism for passing information.</b></li>
</ul>
<p>Modified key methods in ContainerExecutor to use context objects instead of an argument list. This is more extensible and less brittle.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/YARN-2336">YARN-2336</a> | <i>Major</i> | <b>Fair scheduler REST api returns a missing &#x2018;[&#x2019; bracket JSON for deep queue tree</b></li>
</ul>
<p>Fixed the FairScheduler REST API returning JSON with a missing &#x2018;[&#x2019; bracket for childQueues.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-8486">HDFS-8486</a> | <i>Blocker</i> | <b>DN startup may cause severe data loss</b></li>
</ul><!-- markdown -->
<p>Public service notice:</p>
<ul>
<li>Every restart of a 2.6.x or 2.7.0 DN incurs a risk of unwanted block deletion.</li>
<li>Apply this patch if you are running a pre-2.7.1 release.</li>
</ul><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-8270">HDFS-8270</a> | <i>Major</i> | <b>create() always retried with hardcoded timeout when file already exists with open lease</b></li>
</ul>
<p>Proxy-level retries will not be done on AlreadyBeingCreatedException for the create() op.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/YARN-41">YARN-41</a> | <i>Major</i> | <b>The RM should handle the graceful shutdown of the NM.</b></li>
</ul>
<p>The behavior of shutting down an NM has changed (if NM work-preserving recovery is not enabled): the NM now unregisters with the RM immediately rather than waiting for the timeout to be marked LOST. A new NodeStatus value, SHUTDOWN, has been introduced, which may affect the UI, CLI, and ClusterMetrics for a node&#x2019;s status.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-7139">HADOOP-7139</a> | <i>Major</i> | <b>Allow appending to existing SequenceFiles</b></li>
</ul>
<p>Existing sequence files can now be appended to.</p>
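<p>A minimal sketch using the appendIfExists writer option this change enables; the path and record types are assumptions for illustration:</p>
<div class="source"><pre>
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SequenceFileAppendSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path file = new Path("/tmp/example.seq"); // example path
    // appendIfExists(true) reopens an existing file for append instead
    // of failing; key/value classes must match the original file.
    SequenceFile.Writer writer = SequenceFile.createWriter(conf,
        SequenceFile.Writer.file(file),
        SequenceFile.Writer.keyClass(IntWritable.class),
        SequenceFile.Writer.valueClass(Text.class),
        SequenceFile.Writer.appendIfExists(true));
    writer.append(new IntWritable(1), new Text("appended record"));
    writer.close();
  }
}
</pre></div><hr />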
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-8582">HDFS-8582</a> | <i>Minor</i> | <b>Support getting a list of reconfigurable config properties and do not generate spurious reconfig warnings</b></li>
</ul>
<p>Add a new option &#x201c;properties&#x201d; to the &#x201c;dfsadmin -reconfig&#x201d; command to get a list of reconfigurable properties.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-6564">HDFS-6564</a> | <i>Major</i> | <b>Use slf4j instead of common-logging in hdfs-client</b></li>
</ul>
<p>Users may need to pay special attention to this change when upgrading to this version. Previously the HDFS client used commons-logging as its logging framework; with this change it uses the slf4j framework. For more details about slf4j, please see: <a class="externalLink" href="http://www.slf4j.org/manual.html">http://www.slf4j.org/manual.html</a>. Also, the public static member variable org.apache.hadoop.hdfs.protocol.CachePoolInfo#LOG has been removed as it was not used anywhere. Users need to correct their code if it references this variable. A named logger can be retrieved directly via the logging framework of choice, e.g., org.slf4j.Logger LOG = org.slf4j.LoggerFactory.getLogger(org.apache.hadoop.hdfs.protocol.CachePoolInfo.class);</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/YARN-3823">YARN-3823</a> | <i>Minor</i> | <b>Fix mismatch in default values for yarn.scheduler.maximum-allocation-vcores property</b></li>
</ul>
<p>Default value for &#x2018;yarn.scheduler.maximum-allocation-vcores&#x2019; changed from 32 to 4.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-5732">HADOOP-5732</a> | <i>Minor</i> | <b>Add SFTP FileSystem</b></li>
</ul>
<p>Added an SFTP filesystem using the JSch library.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/YARN-3069">YARN-3069</a> | <i>Major</i> | <b>Document missing properties in yarn-default.xml</b></li>
</ul>
<p>Documented missing properties and added the regression test to verify that there are no missing properties in yarn-default.xml.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/MAPREDUCE-6427">MAPREDUCE-6427</a> | <i>Minor</i> | <b>Fix typo in JobHistoryEventHandler</b></li>
</ul>
<p>The event string &#x201c;WORKFLOW_ID&#x201d; was misspelled as &#x201c;WORKLFOW_ID&#x201d;. The branch-2 change publishes both event strings for compatibility with consumers, but the misspelled string will be removed in trunk.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-12209">HADOOP-12209</a> | <i>Minor</i> | <b>Comparable type should be in FileStatus</b></li>
</ul>
<p><b>WARNING: No release note provided for this change.</b></p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-7582">HDFS-7582</a> | <i>Major</i> | <b>Enforce maximum number of ACL entries separately per access and default.</b></li>
</ul>
<p>The limit on the maximum number of ACL entries (32) is now enforced separately for access and default ACLs, so a single ACL spec may contain up to 64 entries in total.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-12269">HADOOP-12269</a> | <i>Major</i> | <b>Update aws-sdk dependency to 1.10.6; move to aws-sdk-s3</b></li>
</ul>
<p>The Maven dependency on aws-sdk has been changed to aws-sdk-s3 and the version bumped. Applications depending on transitive dependencies pulled in by aws-sdk and not aws-sdk-s3 might not work.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-12352">HADOOP-12352</a> | <i>Trivial</i> | <b>Delay in checkpointing Trash can leave trash for 2 intervals before deleting</b></li>
</ul>
<p>Fixes a Trash-related issue wherein a delay in the periodic checkpointing of one user&#x2019;s directory caused subsequent user directory checkpoints to carry a newer timestamp, thereby delaying their eventual deletion.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-8900">HDFS-8900</a> | <i>Major</i> | <b>Compact XAttrs to optimize memory footprint.</b></li>
</ul>
<p>The config key &#x201c;dfs.namenode.fs-limits.max-xattr-size&#x201d; can no longer be set to a value of 0 (previously used to indicate unlimited) or a value greater than 32KB. This is a constraint on xattr size similar to many local filesystems.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-8890">HDFS-8890</a> | <i>Major</i> | <b>Allow admin to specify which blockpools the balancer should run on</b></li>
</ul>
<p>Adds a new -blockpools flag to the balancer, allowing admins to specify which blockpools the balancer will run on. Usage: -blockpools &lt;comma-separated list of blockpool ids&gt;. The balancer will run only on blockpools included in this list.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/YARN-4087">YARN-4087</a> | <i>Major</i> | <b>Followup fixes after YARN-2019 regarding RM behavior when state-store error occurs</b></li>
</ul>
<p>YARN_FAIL_FAST is now false by default. If HA is enabled and a state-store error remains after the retry operation fails, the RM always transitions to standby state.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-12384">HADOOP-12384</a> | <i>Major</i> | <b>Add &#x201c;-direct&#x201d; flag option for fs copy so that user can choose not to create &#x201c;._COPYING_&#x201d; file</b></li>
</ul>
<p>An option &#x2018;-d&#x2019; has been added to all command-line copy commands to skip creation of the intermediate &#x2018;._COPYING_&#x2019; file.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-8929">HDFS-8929</a> | <i>Major</i> | <b>Add a metric to expose the timestamp of the last journal</b></li>
</ul>
<p>Exposed a metric &#x2018;LastJournalTimestamp&#x2019; for JournalNode.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-7116">HDFS-7116</a> | <i>Major</i> | <b>Add a command to get the balancer bandwidth</b></li>
</ul>
<p>Exposed a command &#x201c;-getBalancerBandwidth&#x201d; in dfsadmin to get the bandwidth of the balancer.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-8829">HDFS-8829</a> | <i>Major</i> | <b>Make SO_RCVBUF and SO_SNDBUF size configurable for DataTransferProtocol sockets and allow configuring auto-tuning</b></li>
</ul>
<p>HDFS-8829 introduces two new configuration settings: dfs.datanode.transfer.socket.send.buffer.size and dfs.datanode.transfer.socket.recv.buffer.size. These settings control the socket send and receive buffer sizes on the DataNode for client-DataNode and DataNode-DataNode connections. The default value of both settings is 128KB for backwards compatibility. For optimum performance it is recommended to set these values to zero so the OS networking stack can auto-tune the buffer sizes.</p>
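<p>A sketch of the recommended auto-tuning setup, set programmatically for illustration (these would normally go in hdfs-site.xml on each DataNode):</p>
<div class="source"><pre>
import org.apache.hadoop.conf.Configuration;

public class DnSocketBufferSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Zero lets the OS networking stack auto-tune buffer sizes
    conf.setInt("dfs.datanode.transfer.socket.send.buffer.size", 0);
    conf.setInt("dfs.datanode.transfer.socket.recv.buffer.size", 0);
  }
}
</pre></div><hr />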
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/YARN-313">YARN-313</a> | <i>Critical</i> | <b>Add Admin API for supporting node resource configuration in command line</b></li>
</ul>
<p>After this patch, the feature supporting dynamic NM resource configuration is complete, so that a user can configure an NM with new resources without bringing the NM down or decommissioning it. Two CLIs are provided to update resources on an individual node or a batch of nodes: 1. Update resources on a single node: yarn rmadmin -updateNodeResource [NodeID] [MemSize] [vCores]. 2. Update resources on a batch of nodes: yarn rmadmin -refreshNodesResources, which applies the nodes&#x2019; resource configuration defined in dynamic-resources.xml, loaded dynamically by the RM (like capacity-scheduler.xml or fair-scheduler.xml). The first version of the configuration format is:</p>
<div class="source"><pre>
&lt;configuration&gt;
  &lt;property&gt;
    &lt;name&gt;yarn.resource.dynamic.nodes&lt;/name&gt;
    &lt;value&gt;h1:1234&lt;/value&gt;
  &lt;/property&gt;
  &lt;property&gt;
    &lt;name&gt;yarn.resource.dynamic.h1:1234.vcores&lt;/name&gt;
    &lt;value&gt;16&lt;/value&gt;
  &lt;/property&gt;
  &lt;property&gt;
    &lt;name&gt;yarn.resource.dynamic.h1:1234.memory&lt;/name&gt;
    &lt;value&gt;1024&lt;/value&gt;
  &lt;/property&gt;
&lt;/configuration&gt;
</pre></div><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-12416">HADOOP-12416</a> | <i>Major</i> | <b>Trash messages should be handled by Logger instead of being delivered on System.out</b></li>
</ul>
<p>Trash messages are no longer printed to System.out; they are handled by a Logger instead.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-9063">HDFS-9063</a> | <i>Major</i> | <b>Correctly handle snapshot path for getContentSummary</b></li>
</ul>
<p>This jira made the following changes: 1. Fixed a bug so that newly-created files are excluded from quota usage calculation for a snapshot path. 2. The number of snapshots is no longer counted as a directory count in the getContentSummary result.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-12360">HADOOP-12360</a> | <i>Minor</i> | <b>Create StatsD metrics2 sink</b></li>
</ul>
<p>Added a StatsD metrics2 sink.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-9013">HDFS-9013</a> | <i>Major</i> | <b>Deprecate NameNodeMXBean#getNNStarted in branch2 and remove from trunk</b></li>
</ul>
<p>NameNodeMXBean#getNNStarted() metric is deprecated in branch-2 and removed from trunk.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-12437">HADOOP-12437</a> | <i>Major</i> | <b>Allow SecurityUtil to lookup alternate hostnames</b></li>
</ul>
<p>HADOOP-12437 introduces two new configuration settings: hadoop.security.dns.interface and hadoop.security.dns.nameserver. These settings can be used to control how Hadoop service instances look up their own hostname and may be required in some multi-homed environments where hosts are configured with multiple hostnames in DNS or hosts files. They supersede the existing settings dfs.datanode.dns.interface and dfs.datanode.dns.nameserver.</p>
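<p>For illustration, a sketch of the new settings; the interface name and nameserver address are assumptions, not defaults from the release:</p>
<div class="source"><pre>
import org.apache.hadoop.conf.Configuration;

public class MultihomedDnsSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Resolve this service's own hostname via a specific NIC and nameserver
    conf.set("hadoop.security.dns.interface", "eth0");        // example NIC
    conf.set("hadoop.security.dns.nameserver", "192.0.2.53"); // example IP
  }
}
</pre></div><hr />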
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-12446">HADOOP-12446</a> | <i>Major</i> | <b>Undeprecate createNonRecursive()</b></li>
</ul>
<p>FileSystem#createNonRecursive() is undeprecated.</p>
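<p>A minimal sketch of the undeprecated call; the path and parameter values are illustrative assumptions:</p>
<div class="source"><pre>
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CreateNonRecursiveSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Unlike create(), this fails if the parent directory is missing
    FSDataOutputStream out = fs.createNonRecursive(
        new Path("/tmp/existing-dir/part-00000"), // example path
        true,                 // overwrite
        4096,                 // buffer size
        (short) 3,            // replication
        128L * 1024 * 1024,   // block size
        null);                // progress callback
    out.writeUTF("hello");
    out.close();
  }
}
</pre></div><hr />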
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-8696">HDFS-8696</a> | <i>Major</i> | <b>Make the lower and higher watermark in the DN Netty server configurable</b></li>
</ul>
<p>Introduced two new configuration dfs.webhdfs.netty.low.watermark and dfs.webhdfs.netty.high.watermark to enable tuning the size of the buffers of the Netty server inside Datanodes.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-9184">HDFS-9184</a> | <i>Major</i> | <b>Logging HDFS operation&#x2019;s caller context into audit logs</b></li>
</ul>
<p>The feature is enabled by setting &#x201c;hadoop.caller.context.enabled&#x201d; to true. When the feature is used, additional fields are written into NameNode audit log records.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-9259">HDFS-9259</a> | <i>Major</i> | <b>Make SO_SNDBUF size configurable at DFSClient side for hdfs write scenario</b></li>
</ul>
<p>Introduces a new configuration setting dfs.client.socket.send.buffer.size to control the socket send buffer size for writes. Setting it to zero enables TCP auto-tuning on systems that support it.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-9311">HDFS-9311</a> | <i>Major</i> | <b>Support optional offload of NameNode HA service health checks to a separate RPC server.</b></li>
</ul>
<p>There is now support for offloading HA health check RPC activity to a separate RPC server endpoint running within the NameNode process. This may improve reliability of HA health checks and prevent spurious failovers in highly overloaded conditions. For more details, please refer to the hdfs-default.xml documentation for properties dfs.namenode.lifeline.rpc-address, dfs.namenode.lifeline.rpc-bind-host and dfs.namenode.lifeline.handler.count.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-6200">HDFS-6200</a> | <i>Major</i> | <b>Create a separate jar for hdfs-client</b></li>
</ul>
<p>Projects that access HDFS can depend on the hadoop-hdfs-client module instead of the hadoop-hdfs module to avoid pulling in unnecessary dependencies. Please note that the hadoop-hdfs-client module may be missing classes like ConfiguredFailoverProxyProvider, so clusters in an HA deployment should still use hadoop-hdfs instead.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-9057">HDFS-9057</a> | <i>Major</i> | <b>allow/disallow snapshots via webhdfs</b></li>
</ul>
<p>Snapshots can be allowed/disallowed on a directory via WebHdfs by users with superuser privilege.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/MAPREDUCE-5485">MAPREDUCE-5485</a> | <i>Critical</i> | <b>Allow repeating job commit by extending OutputCommitter API</b></li>
</ul>
<p>Previously, an MR job would fail if the AM was restarted for some reason (like node failure, etc.) while it was committing the job, regardless of whether the AM attempts had reached the maximum. This improvement adds a new API, isCommitJobRepeatable(), to the OutputCommitter interface to indicate whether the job&#x2019;s committer can run commitJob again if the previous commit work was interrupted by NM/AM failures, etc. An OutputCommitter that supports repeatable job commit (like FileOutputCommitter in algorithm 2) can allow the AM to continue the commitJob() after an AM restart as a new attempt.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-12313">HADOOP-12313</a> | <i>Critical</i> | <b>NPE in JvmPauseMonitor when calling stop() before start()</b></li>
</ul>
<p>JvmPauseMonitor now allows stop() to be called before start() has completed.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-9433">HDFS-9433</a> | <i>Major</i> | <b>DFS getEZForPath API on a non-existent file should throw FileNotFoundException</b></li>
</ul>
<p>Unified the behavior of the dfs.getEZForPath() API for a non-existent normal file and a non-existent file in an encryption zone: both now throw FileNotFoundException.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-8335">HDFS-8335</a> | <i>Major</i> | <b>FSNamesystem should construct FSPermissionChecker only if permission is enabled</b></li>
</ul>
<p>Permissions are now checked only when permissions are enabled in FSDirStatAndListingOp.getFileInfo() and getListingInt().</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-8831">HDFS-8831</a> | <i>Major</i> | <b>Trash Support for deletion in HDFS encryption zone</b></li>
</ul>
<p>Add Trash support for deleting files within encryption zones. Deleted files will remain encrypted and they will be moved to a &#x201c;.Trash&#x201d; subdirectory under the root of the encryption zone, prefixed by $USER/current. Checkpoint and expunge continue to work like the existing Trash.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-9214">HDFS-9214</a> | <i>Major</i> | <b>Support reconfiguring dfs.datanode.balance.max.concurrent.moves without DN restart</b></li>
</ul>
<p>Steps to reconfigure:</p>
<ol>
<li>Change the value of the parameter in the corresponding xml configuration file.</li>
<li>To reconfigure, run hdfs dfsadmin -reconfig datanode &lt;dn_addr&gt;:&lt;ipc_port&gt; start.</li>
<li>Repeat step 2 until all DNs are reconfigured.</li>
<li>To check the status of the most recent reconfigure operation, run hdfs dfsadmin -reconfig datanode &lt;dn_addr&gt;:&lt;ipc_port&gt; status.</li>
<li>To query the list of reconfigurable properties on a DN, run hdfs dfsadmin -reconfig datanode &lt;dn_addr&gt;:&lt;ipc_port&gt; properties.</li>
</ol><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/YARN-3623">YARN-3623</a> | <i>Major</i> | <b>We should have a config to indicate the Timeline Service version</b></li>
</ul>
<p>Added a new configuration, &#x201c;yarn.timeline-service.version&#x201d;, to indicate the current version of the running timeline service. For example, if &#x201c;yarn.timeline-service.version&#x201d; is 1.5 and &#x201c;yarn.timeline-service.enabled&#x201d; is true, the cluster will and should bring up timeline service v.1.5. On the client side, if the client uses the same version of the timeline service, it should succeed. If the client chooses to use a smaller version in spite of this, then depending on how robust the compatibility story is between versions, the results may vary.</p>
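<p>A hedged sketch of the pairing described above, set programmatically for illustration (normally these live in yarn-site.xml):</p>
<div class="source"><pre>
import org.apache.hadoop.conf.Configuration;

public class TimelineVersionSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setBoolean("yarn.timeline-service.enabled", true);
    // Declare that the cluster runs timeline service v1.5
    conf.setFloat("yarn.timeline-service.version", 1.5f);
  }
}
</pre></div><hr />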
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/YARN-4207">YARN-4207</a> | <i>Major</i> | <b>Add a non-judgemental YARN app completion status</b></li>
</ul>
<p>Adds the ENDED attribute to o.a.h.yarn.api.records.FinalApplicationStatus</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-12657">HADOOP-12657</a> | <i>Minor</i> | <b>Add a option to skip newline on empty files with getMerge -nl</b></li>
</ul>
<p>Added -skip-empty-file option to hadoop fs -getmerge command. With the option, delimiter (LF) is not printed for empty files even if -nl option is used.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-11252">HADOOP-11252</a> | <i>Critical</i> | <b>RPC client does not time out by default</b></li>
</ul>
<p>This fix includes public method interface change. A follow-up JIRA issue for this incompatibility for branch-2.7 is HADOOP-13579.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-9047">HDFS-9047</a> | <i>Major</i> | <b>Retire libwebhdfs</b></li>
</ul>
<p>libwebhdfs has been retired in 2.8.0 due to the lack of maintenance.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-11262">HADOOP-11262</a> | <i>Major</i> | <b>Enable YARN to use S3A</b></li>
</ul>
<p>S3A has been made accessible through the FileContext API.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-12635">HADOOP-12635</a> | <i>Major</i> | <b>Adding Append API support for WASB</b></li>
</ul>
<p>The Azure Blob Storage file system (WASB) now includes optional support for use of the append API by a single writer on a path. Please note that the implementation differs from the semantics of HDFS append. HDFS append internally guarantees that only a single writer may append to a path at a given time. WASB does not enforce this guarantee internally. Instead, the application must enforce access by a single writer, such as by running single-threaded or relying on some external locking mechanism to coordinate concurrent processes. Refer to the Azure Blob Storage documentation page for more details on enabling append in configuration.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-12651">HADOOP-12651</a> | <i>Major</i> | <b>Replace dev-support with wrappers to Yetus</b></li>
</ul><!-- markdown -->
<ul>
<li>Major portions of dev-support have been replaced with wrappers to Apache Yetus:</li>
<li>releasedocmaker.py is now dev-support/bin/releasedocmaker</li>
<li>shelldocs.py is now dev-support/bin/shelldocs</li>
<li>smart-apply-patch.sh is now dev-support/bin/smart-apply-patch</li>
<li>test-patch.sh is now dev-support/bin/test-patch</li>
<li>See the dev-support/README.md file for more details on how to control the wrappers to various degrees.</li>
</ul><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-9503">HDFS-9503</a> | <i>Major</i> | <b>Replace -namenode option with -fs for NNThroughputBenchmark</b></li>
</ul>
<p>The patch replaces the -namenode option with -fs for specifying the remote NameNode against which the benchmark runs. Before this patch, if &#x2018;-namenode&#x2019; was not given, the benchmark ran in standalone mode, ignoring &#x2018;fs.defaultFS&#x2019; in the config file even if it was remote. With this patch, the benchmark, like other tools, relies on the &#x2018;fs.defaultFS&#x2019; config, overridable by the -fs command option, to run in standalone or remote mode.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-12426">HADOOP-12426</a> | <i>Minor</i> | <b>Add Entry point for Kerberos health check</b></li>
</ul>
<p>Hadoop now includes a shell command named KDiag that helps with diagnosis of Kerberos misconfiguration problems. Please refer to the Secure Mode documentation for full details on usage of the command.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-12805">HADOOP-12805</a> | <i>Major</i> | <b>Annotate CanUnbuffer with @InterfaceAudience.Public</b></li>
</ul>
<p>Made the CanUnbuffer interface public for use in client applications.</p>
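<p>A minimal sketch of the capability on an open stream; the path is an illustrative assumption, and filesystems that do not support unbuffering will throw UnsupportedOperationException:</p>
<div class="source"><pre>
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class UnbufferSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FSDataInputStream in = fs.open(new Path("/tmp/data.bin")); // example path
    byte[] buf = new byte[1024];
    in.read(buf);
    // Release socket and buffer resources while keeping the stream open;
    // useful for applications that hold many idle open streams
    in.unbuffer();
    in.close();
  }
}
</pre></div><hr />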
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-12548">HADOOP-12548</a> | <i>Major</i> | <b>Read s3a creds from a Credential Provider</b></li>
</ul>
<p>The S3A Hadoop-compatible file system now supports reading its S3 credentials from the Hadoop Credential Provider API in addition to XML configuration files.</p>
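<p>For illustration, a sketch pointing S3A at a credential store; the keystore path and bucket are assumptions, and the fs.s3a.access.key / fs.s3a.secret.key entries would be created beforehand with the hadoop credential command:</p>
<div class="source"><pre>
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class S3ACredentialProviderSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Example keystore holding fs.s3a.access.key and fs.s3a.secret.key
    conf.set("hadoop.security.credential.provider.path",
        "jceks://hdfs@nn.example.com:9001/user/alice/s3a.jceks");
    FileSystem s3a = FileSystem.get(new URI("s3a://example-bucket/"), conf);
    System.out.println(s3a.getUri());
  }
}
</pre></div><hr />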
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-9711">HDFS-9711</a> | <i>Major</i> | <b>Integrate CSRF prevention filter in WebHDFS.</b></li>
</ul>
<p>WebHDFS now supports options to enforce cross-site request forgery (CSRF) prevention for HTTP requests to both the NameNode and the DataNode. Please refer to the updated WebHDFS documentation for a description of this feature and further details on how to configure it.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-12794">HADOOP-12794</a> | <i>Major</i> | <b>Support additional compression levels for GzipCodec</b></li>
</ul>
<p>Added new compression levels for GzipCodec that can be set via zlib.compress.level.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-9425">HDFS-9425</a> | <i>Major</i> | <b>Expose number of blocks per volume as a metric</b></li>
</ul>
<p>Number of blocks per volume is made available as a metric.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-12668">HADOOP-12668</a> | <i>Critical</i> | <b>Support excluding weak Ciphers in HttpServer2 through ssl-server.xml</b></li>
</ul>
<p>The code changes include the following:</p>
<ul>
<li>Modified DFSUtil.java in the Apache HDFS project to supply the new parameter ssl.server.exclude.cipher.list.</li>
<li>Modified HttpServer2.java in the Apache Hadoop Common project to work with the new parameter and exclude ciphers using the Jetty setExcludeCiphers method.</li>
<li>Modified the associated test classes to work with the existing code and also cover the new functionality in JUnit.</li>
</ul><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-12555">HADOOP-12555</a> | <i>Minor</i> | <b>WASB to read credentials from a credential provider</b></li>
</ul>
<p>The hadoop-azure file system now supports configuration of the Azure Storage account credentials using the standard Hadoop Credential Provider API. For details, please refer to the documentation on hadoop-azure and the Credential Provider API.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/MAPREDUCE-6622">MAPREDUCE-6622</a> | <i>Critical</i> | <b>Add capability to set JHS job cache to a task-based limit</b></li>
</ul>
<p>Two recommendations for the mapreduce.jobhistory.loadedtasks.cache.size property:</p>
<ol>
<li>For every 100,000 tasks of cache size, allow 1.2GB of Job History Server heap. For example, with mapreduce.jobhistory.loadedtasks.cache.size=500000, use a 6GB heap.</li>
<li>Make sure the cache size is larger than the number of tasks in the largest job run on the cluster. Setting the value slightly higher (say, 20%) allows for job size growth.</li>
</ol>
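<p>Putting the first recommendation into configuration, assuming the heap is set through the mapred-env.sh variable shown (values taken from the example above):</p>
<pre>
&lt;!-- mapred-site.xml: cache up to 500,000 tasks in the JHS job cache --&gt;
&lt;property&gt;
  &lt;name&gt;mapreduce.jobhistory.loadedtasks.cache.size&lt;/name&gt;
  &lt;value&gt;500000&lt;/value&gt;
&lt;/property&gt;
</pre>
<pre>
# mapred-env.sh: give the Job History Server a matching 6GB heap (value in MB)
export HADOOP_JOB_HISTORYSERVER_HEAPSIZE=6144
</pre><hr />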
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-12552">HADOOP-12552</a> | <i>Minor</i> | <b>Fix undeclared/unused dependency to httpclient</b></li>
</ul>
<p>The dependency on commons-httpclient::commons-httpclient was removed from hadoop-common. Downstream projects that used commons-httpclient transitively via hadoop-common need to add an explicit dependency to their POM. Since commons-httpclient is EOL, migrating to its successor, org.apache.httpcomponents:httpclient, is recommended.</p>
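<p>For example, a Maven project migrating to the successor would declare something like the following (the version is illustrative; pick one appropriate for your project):</p>
<pre>
&lt;dependency&gt;
  &lt;groupId&gt;org.apache.httpcomponents&lt;/groupId&gt;
  &lt;artifactId&gt;httpclient&lt;/artifactId&gt;
  &lt;version&gt;4.5.2&lt;/version&gt;
&lt;/dependency&gt;
</pre><hr />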
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-8791">HDFS-8791</a> | <i>Blocker</i> | <b>block ID-based DN storage layout can be very slow for datanode on ext4</b></li>
</ul>
<p>HDFS-8791 introduces a new datanode layout format. This layout is identical to the previous block-id-based layout except that it has a smaller 32x32 sub-directory structure in each data storage. On startup, the datanode automatically upgrades its storages to the new layout. Datanode layout changes support rolling upgrade; however, downgrade is not supported between layout changes, so a rollback would be required instead.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-9887">HDFS-9887</a> | <i>Major</i> | <b>WebHdfs socket timeouts should be configurable</b></li>
</ul>
<p>Added two new configuration options, dfs.webhdfs.socket.connect-timeout and dfs.webhdfs.socket.read-timeout, both defaulting to 60 seconds.</p>
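<p>A sketch of overriding both timeouts (the values are illustrative; the properties accept time-unit suffixes such as 60s):</p>
<pre>
&lt;!-- hdfs-site.xml: tighten WebHDFS client socket timeouts --&gt;
&lt;property&gt;
  &lt;name&gt;dfs.webhdfs.socket.connect-timeout&lt;/name&gt;
  &lt;value&gt;30s&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;dfs.webhdfs.socket.read-timeout&lt;/name&gt;
  &lt;value&gt;30s&lt;/value&gt;
&lt;/property&gt;
</pre><hr />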
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-11792">HADOOP-11792</a> | <i>Major</i> | <b>Remove all of the CHANGES.txt files</b></li>
</ul>
<p>With the introduction of the markdown-formatted and automatically built changes file, the CHANGES.txt files have been eliminated.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-9239">HDFS-9239</a> | <i>Major</i> | <b>DataNode Lifeline Protocol: an alternative protocol for reporting DataNode liveness</b></li>
</ul>
<p>This release adds a new feature called the DataNode Lifeline Protocol. If configured, DataNodes can report that they are still alive to the NameNode via a fallback protocol, separate from the existing heartbeat messages. This can prevent the NameNode from incorrectly marking DataNodes as stale or dead in highly overloaded clusters where heartbeat processing is suffering delays. For more information, please refer to the hdfs-default.xml documentation for several new configuration properties: dfs.namenode.lifeline.rpc-address, dfs.namenode.lifeline.rpc-bind-host, dfs.datanode.lifeline.interval.seconds, dfs.namenode.lifeline.handler.ratio and dfs.namenode.lifeline.handler.count.</p>
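<p>A minimal sketch that opens the dedicated lifeline RPC endpoint on the NameNode (host and port are illustrative; the remaining properties tune intervals and handler counts):</p>
<pre>
&lt;!-- hdfs-site.xml: enable the lifeline protocol by binding its RPC address --&gt;
&lt;property&gt;
  &lt;name&gt;dfs.namenode.lifeline.rpc-address&lt;/name&gt;
  &lt;value&gt;nn1.example.com:8050&lt;/value&gt;
&lt;/property&gt;
</pre><hr />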
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/YARN-4785">YARN-4785</a> | <i>Major</i> | <b>inconsistent value type of the &#x201c;type&#x201d; field for LeafQueueInfo in response of RM REST API - cluster/scheduler</b></li>
</ul>
<p>Fixes the inconsistent value type (String vs. Array) of the &#x201c;type&#x201d; field for LeafQueueInfo in responses of the RM REST API.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/MAPREDUCE-6670">MAPREDUCE-6670</a> | <i>Minor</i> | <b>TestJobListCache#testEviction sometimes fails on Windows with timeout</b></li>
</ul>
<p>The fix was backported to 2.7 and 2.8.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-9945">HDFS-9945</a> | <i>Major</i> | <b>Datanode command for evicting writers</b></li>
</ul>
<p>This new dfsadmin command, evictWriters, stops active block writing activities on a data node. The affected writes will continue without the node after a write pipeline recovery. This is useful when data node decommissioning is blocked by slow writers. If issued against a non-decommissioning data node, all current writers will be stopped, but new write requests will continue to be served.</p>
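<p>For example (the host is illustrative; the DataNode IPC port defaults to 50020):</p>
<pre>
# Evict active writers from a DataNode so decommissioning can proceed
hdfs dfsadmin -evictWriters datanode1.example.com:50020
</pre><hr />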
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-12963">HADOOP-12963</a> | <i>Minor</i> | <b>Allow using path style addressing for accessing the s3 endpoint</b></li>
</ul>
<p>Adds a new flag, fs.s3a.path.style.access, to enable path-style addressing for the S3A file system.</p>
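<p>A sketch of enabling it, useful for instance with S3-compatible endpoints that do not support virtual-host-style addressing:</p>
<pre>
&lt;!-- core-site.xml: use path-style rather than virtual-host-style requests --&gt;
&lt;property&gt;
  &lt;name&gt;fs.s3a.path.style.access&lt;/name&gt;
  &lt;value&gt;true&lt;/value&gt;
&lt;/property&gt;
</pre><hr />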
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-9412">HDFS-9412</a> | <i>Major</i> | <b>getBlocks occupies FSLock and takes too long to complete</b></li>
</ul>
<p>Skip blocks with size below dfs.balancer.getBlocks.min-block-size (default 10MB) when a balancer asks for a list of blocks.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-3702">HDFS-3702</a> | <i>Minor</i> | <b>Add an option for NOT writing the blocks locally if there is a datanode on the same box as the client</b></li>
</ul>
<p>This patch attempts to allocate all replicas on remote DataNodes by adding the local DataNode to the excluded DataNodes. If sufficient replicas cannot be obtained, it falls back to the default block placement policy, which writes one replica to the local DataNode.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-9902">HDFS-9902</a> | <i>Major</i> | <b>Support different values of dfs.datanode.du.reserved per storage type</b></li>
</ul>
<p>Reserved space can be configured independently for different storage types on clusters with heterogeneous storage. The &#x2018;dfs.datanode.du.reserved&#x2019; property name can be suffixed with a storage type (e.g. ram_disk, ssd, disk, or archive); for example, reserved space for RAM_DISK storage can be configured using the property &#x2018;dfs.datanode.du.reserved.ram_disk&#x2019;. If no storage-type-specific reservation is configured, the value specified by &#x2018;dfs.datanode.du.reserved&#x2019; is used for all volumes.</p>
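<p>For instance, to reserve space only on RAM_DISK volumes (the byte value is illustrative):</p>
<pre>
&lt;!-- hdfs-site.xml: reserve 1GB per volume on RAM_DISK storage only --&gt;
&lt;property&gt;
  &lt;name&gt;dfs.datanode.du.reserved.ram_disk&lt;/name&gt;
  &lt;value&gt;1073741824&lt;/value&gt;
&lt;/property&gt;
</pre><hr />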
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-10324">HDFS-10324</a> | <i>Major</i> | <b>Trash directory in an encryption zone should be pre-created with correct permissions</b></li>
</ul>
<p>HDFS will create a &#x201c;.Trash&#x201d; subdirectory when creating a new encryption zone to support soft delete for files deleted within the encryption zone. A new &#x201c;crypto -provisionTrash&#x201d; command has been introduced to provision trash directories for encryption zones created with Apache Hadoop minor releases prior to 2.8.0.</p>
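<p>For zones created on older releases, the trash directory can be provisioned after upgrading; the zone path below is illustrative:</p>
<pre>
# Pre-create .Trash with correct permissions inside an existing encryption zone
hdfs crypto -provisionTrash -path /zones/finance
</pre><hr />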
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-13122">HADOOP-13122</a> | <i>Minor</i> | <b>Customize User-Agent header sent in HTTP requests by S3A.</b></li>
</ul>
<p>S3A now includes the current Hadoop version in the User-Agent string passed through the AWS SDK to the S3 service. Users also may include optional additional information to identify their application. See the documentation of configuration property fs.s3a.user.agent.prefix for further details.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-12723">HADOOP-12723</a> | <i>Major</i> | <b>S3A: Add ability to plug in any AWSCredentialsProvider</b></li>
</ul>
<p>Users can integrate a custom credential provider with S3A. See documentation of configuration property fs.s3a.aws.credentials.provider for further details.</p>
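<p>A minimal sketch, assuming a custom provider class com.example.auth.MyAWSCredentialsProvider (hypothetical); bundled providers such as org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider can be named the same way:</p>
<pre>
&lt;!-- core-site.xml: plug a custom AWSCredentialsProvider into S3A --&gt;
&lt;property&gt;
  &lt;name&gt;fs.s3a.aws.credentials.provider&lt;/name&gt;
  &lt;value&gt;com.example.auth.MyAWSCredentialsProvider&lt;/value&gt;
&lt;/property&gt;
</pre><hr />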
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/MAPREDUCE-6607">MAPREDUCE-6607</a> | <i>Minor</i> | <b>Enable regex pattern matching when mapreduce.task.files.preserve.filepattern is set</b></li>
</ul>
<p>Before this fix, the files in the .staging directory were always preserved when mapreduce.task.files.preserve.filepattern was set. After this fix, the files in the .staging directory are preserved only if the name of the directory matches the regex pattern specified by mapreduce.task.files.preserve.filepattern.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/YARN-5035">YARN-5035</a> | <i>Major</i> | <b>FairScheduler: Adjust maxAssign dynamically when assignMultiple is turned on</b></li>
</ul>
<p>Introduces a new configuration, &#x201c;yarn.scheduler.fair.dynamic.max.assign&#x201d;, to dynamically determine the resources to assign per heartbeat when assignmultiple is turned on. When enabled, the scheduler allocates roughly half of the remaining resources, overriding any max.assign settings configured. This is turned on by default.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/YARN-5132">YARN-5132</a> | <i>Critical</i> | <b>Exclude generated protobuf sources from YARN Javadoc build</b></li>
</ul>
<p>Exclude javadocs for proto-generated java classes.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-13105">HADOOP-13105</a> | <i>Major</i> | <b>Support timeouts in LDAP queries in LdapGroupsMapping.</b></li>
</ul>
<p>This patch adds two new config keys for supporting timeouts in LDAP query operations. The property &#x201c;hadoop.security.group.mapping.ldap.connection.timeout.ms&#x201d; is the connection timeout (in milliseconds): if the LDAP provider does not establish a connection within this period, it aborts the connect attempt. The property &#x201c;hadoop.security.group.mapping.ldap.read.timeout.ms&#x201d; is the read timeout (in milliseconds): if the LDAP provider does not receive an LDAP response within this period, it aborts the read attempt.</p>
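<p>A sketch with both timeouts set to one minute (the values are illustrative, in milliseconds):</p>
<pre>
&lt;!-- core-site.xml: bound LDAP connect and read times for group lookups --&gt;
&lt;property&gt;
  &lt;name&gt;hadoop.security.group.mapping.ldap.connection.timeout.ms&lt;/name&gt;
  &lt;value&gt;60000&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;hadoop.security.group.mapping.ldap.read.timeout.ms&lt;/name&gt;
  &lt;value&gt;60000&lt;/value&gt;
&lt;/property&gt;
</pre><hr />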
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-13155">HADOOP-13155</a> | <i>Major</i> | <b>Implement TokenRenewer to renew and cancel delegation tokens in KMS</b></li>
</ul>
<p>Enables renewal and cancellation of KMS delegation tokens. hadoop.security.key.provider.path needs to be configured to reach the key provider.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-12807">HADOOP-12807</a> | <i>Minor</i> | <b>S3AFileSystem should read AWS credentials from environment variables</b></li>
</ul>
<p>Adds support to S3AFileSystem for reading AWS credentials from environment variables.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-10375">HDFS-10375</a> | <i>Trivial</i> | <b>Remove redundant TestMiniDFSCluster.testDualClusters</b></li>
</ul>
<p>Removed the redundant TestMiniDFSCluster.testDualClusters test to save time.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-10220">HDFS-10220</a> | <i>Major</i> | <b>A large number of expired leases can make namenode unresponsive and cause failover</b></li>
</ul>
<p>Two new configuration properties have been added, &#x201c;dfs.namenode.lease-recheck-interval-ms&#x201d; and &#x201c;dfs.namenode.max-lock-hold-to-release-lease-ms&#x201d;, to fine-tune the duty cycle with which the NameNode recovers old leases.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-13237">HADOOP-13237</a> | <i>Minor</i> | <b>s3a initialization against public bucket fails if caller lacks any credentials</b></li>
</ul>
<p>S3A now supports read access to a public S3 bucket even if the client does not configure any AWS credentials. See the documentation of configuration property fs.s3a.aws.credentials.provider for further details.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-12537">HADOOP-12537</a> | <i>Minor</i> | <b>S3A to support Amazon STS temporary credentials</b></li>
</ul>
<p>S3A now supports use of AWS Security Token Service temporary credentials for authentication to S3. Refer to the documentation of configuration property fs.s3a.session.token for further details.</p>
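<p>A minimal sketch, assuming session credentials already obtained from STS; all key and token values are placeholders:</p>
<pre>
&lt;!-- core-site.xml: authenticate to S3 with STS session credentials --&gt;
&lt;property&gt;
  &lt;name&gt;fs.s3a.aws.credentials.provider&lt;/name&gt;
  &lt;value&gt;org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;fs.s3a.access.key&lt;/name&gt;
  &lt;value&gt;ASIAEXAMPLEKEY&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;fs.s3a.secret.key&lt;/name&gt;
  &lt;value&gt;exampleSecret&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;fs.s3a.session.token&lt;/name&gt;
  &lt;value&gt;exampleSessionToken&lt;/value&gt;
&lt;/property&gt;
</pre><hr />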
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-12892">HADOOP-12892</a> | <i>Blocker</i> | <b>fix/rewrite create-release</b></li>
</ul>
<p>This rewrites the release process with a new dev-support/bin/create-release script. See <a class="externalLink" href="http://wiki.apache.org/hadoop/HowToRelease">http://wiki.apache.org/hadoop/HowToRelease</a> for updated instructions on how to use it.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-3733">HADOOP-3733</a> | <i>Minor</i> | <b>&#x201c;s3:&#x201d; URLs break when Secret Key contains a slash, even if encoded</b></li>
</ul>
<p>Allows userinfo component of URI authority to contain a slash (escaped as %2F). Especially useful for accessing AWS S3 with distcp or hadoop fs.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-13203">HADOOP-13203</a> | <i>Major</i> | <b>S3A: Support fadvise &#x201c;random&#x201d; mode for high performance readPositioned() reads</b></li>
</ul>
<p>S3A has added support for configurable input policies. Similar to fadvise, this configuration provides applications with a way to specify their expected access pattern (sequential or random) while reading a file. S3A then performs optimizations tailored to that access pattern. See site documentation of the fs.s3a.experimental.input.fadvise configuration property for more details. Please be advised that this feature is experimental and subject to backward-incompatible changes in future releases.</p>
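<p>For example, a client doing many positioned reads (such as columnar-format queries) might select the random policy:</p>
<pre>
&lt;!-- core-site.xml: optimize S3A input for random IO --&gt;
&lt;property&gt;
  &lt;name&gt;fs.s3a.experimental.input.fadvise&lt;/name&gt;
  &lt;value&gt;random&lt;/value&gt;
&lt;/property&gt;
</pre><hr />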
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-13263">HADOOP-13263</a> | <i>Major</i> | <b>Reload cached groups in background after expiry</b></li>
</ul>
<p>hadoop.security.groups.cache.background.reload can be set to true to enable background reload of expired groups cache entries. This setting can improve the performance of services that use Groups.java (e.g. the NameNode) when group lookups are slow. The setting is disabled by default.</p>
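<p>A one-property sketch enabling the background refresh:</p>
<pre>
&lt;!-- core-site.xml: refresh expired group cache entries off the request path --&gt;
&lt;property&gt;
  &lt;name&gt;hadoop.security.groups.cache.background.reload&lt;/name&gt;
  &lt;value&gt;true&lt;/value&gt;
&lt;/property&gt;
</pre><hr />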
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-10440">HDFS-10440</a> | <i>Major</i> | <b>Improve DataNode web UI</b></li>
</ul>
<p>The DataNode web UI has been improved with a new HTML5 page showing useful information.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-13139">HADOOP-13139</a> | <i>Major</i> | <b>Branch-2: S3a to use thread pool that blocks clients</b></li>
</ul>
<p>The configuration option &#x2018;fs.s3a.threads.core&#x2019; is no longer supported. The string is still defined in org.apache.hadoop.fs.s3a.Constants.CORE_THREADS, but its value is ignored. If it is set, a warning message is printed when the S3A filesystem is initialized.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-13382">HADOOP-13382</a> | <i>Major</i> | <b>remove unneeded commons-httpclient dependencies from POM files in Hadoop and sub-projects</b></li>
</ul>
<p>Dependencies on commons-httpclient have been removed. Projects with undeclared transitive dependencies on commons-httpclient, previously provided via hadoop-common or hadoop-client, may find this to be an incompatible change. Such projects are also potentially exposed to the commons-httpclient CVE, and should be fixed for that reason as well.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-7933">HDFS-7933</a> | <i>Major</i> | <b>fsck should also report decommissioning replicas.</b></li>
</ul>
<p>The output of hdfs fsck now also contains information about decommissioning replicas.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-13208">HADOOP-13208</a> | <i>Minor</i> | <b>S3A listFiles(recursive=true) to do a bulk listObjects instead of walking the pseudo-tree of directories</b></li>
</ul>
<p>S3A has optimized the listFiles method by doing a bulk listing of all entries under a path in a single S3 operation instead of recursively walking the directory tree. The listLocatedStatus method has been optimized by fetching results from S3 lazily as the caller traverses the returned iterator instead of doing an eager fetch of all possible results.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-13252">HADOOP-13252</a> | <i>Minor</i> | <b>Tune S3A provider plugin mechanism</b></li>
</ul>
<p>S3A now supports configuration of multiple credential provider classes for authenticating to S3. These are loaded and queried in sequence for a valid set of credentials. For more details, refer to the description of the fs.s3a.aws.credentials.provider configuration property or the S3A documentation page.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-8986">HDFS-8986</a> | <i>Major</i> | <b>Add option to -du to calculate directory space usage excluding snapshots</b></li>
</ul>
<p>Added a -x option to the &#x201c;hdfs dfs -du&#x201d; and &#x201c;hdfs dfs -count&#x201d; commands to exclude snapshots from the calculation.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-10760">HDFS-10760</a> | <i>Major</i> | <b>DataXceiver#run() should not log InvalidToken exception as an error</b></li>
</ul>
<p>Log InvalidTokenException at trace level in DataXceiver#run().</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/YARN-5549">YARN-5549</a> | <i>Critical</i> | <b>AMLauncher#createAMContainerLaunchContext() should not log the command to be launched indiscriminately</b></li>
</ul>
<p>Introduces a new configuration property, yarn.resourcemanager.amlauncher.log.command. If this property is set to true, then the AM command being launched will be masked in the RM log.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-10489">HDFS-10489</a> | <i>Minor</i> | <b>Deprecate dfs.encryption.key.provider.uri for HDFS encryption zones</b></li>
</ul>
<p>The configuration dfs.encryption.key.provider.uri is deprecated. To configure key provider in HDFS, please use hadoop.security.key.provider.path.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-10914">HDFS-10914</a> | <i>Critical</i> | <b>Move remnants of oah.hdfs.client to hadoop-hdfs-client</b></li>
</ul>
<p>The remaining classes in the org.apache.hadoop.hdfs.client package have been moved from hadoop-hdfs to hadoop-hdfs-client.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-12667">HADOOP-12667</a> | <i>Major</i> | <b>s3a: Support createNonRecursive API</b></li>
</ul>
<p>S3A now provides a working implementation of the FileSystem#createNonRecursive method.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-10609">HDFS-10609</a> | <i>Major</i> | <b>Uncaught InvalidEncryptionKeyException during pipeline recovery may abort downstream applications</b></li>
</ul>
<p>If pipeline recovery fails due to an expired encryption key, the client now attempts to refresh the key and retry.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-10797">HDFS-10797</a> | <i>Major</i> | <b>Disk usage summary of snapshots causes renamed blocks to get counted twice</b></li>
</ul>
<p>Disk usage summaries previously incorrectly counted files twice if they had been renamed (including files moved to Trash) since being snapshotted. Summaries now include current data plus snapshotted data that is no longer under the directory either due to deletion or being moved outside of the directory.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-10883">HDFS-10883</a> | <i>Major</i> | <b>`getTrashRoot`&#x2019;s behavior is not consistent in DFS after enabling EZ.</b></li>
</ul>
<p>If the root path / is an encryption zone, the old DistributedFileSystem#getTrashRoot(new Path(&#x201c;/&#x201d;)) returned /user/$USER/.Trash, which is wrong; the correct value is /.Trash/$USER.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-13560">HADOOP-13560</a> | <i>Major</i> | <b>S3ABlockOutputStream to support huge (many GB) file writes</b></li>
</ul>
<p>This mechanism replaces the (experimental) fast output stream of Hadoop 2.7.x, combining better scalability options with instrumentation. Consult the S3A documentation to see the extra configuration options.</p>
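<p>A sketch of enabling the block output stream with disk buffering (property names as documented for the 2.8 S3A connector; check the S3A docs for the full set of tuning options):</p>
<pre>
&lt;!-- core-site.xml: enable incremental block-based uploads for S3A --&gt;
&lt;property&gt;
  &lt;name&gt;fs.s3a.fast.upload&lt;/name&gt;
  &lt;value&gt;true&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;fs.s3a.fast.upload.buffer&lt;/name&gt;
  &lt;value&gt;disk&lt;/value&gt;
&lt;/property&gt;
</pre><hr />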
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-11018">HDFS-11018</a> | <i>Major</i> | <b>Incorrect check and message in FsDatasetImpl#invalidate</b></li>
</ul>
<p>Improves the error message when a datanode removes a replica that is not found.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/YARN-5767">YARN-5767</a> | <i>Major</i> | <b>Fix the order that resources are cleaned up from the local Public/Private caches</b></li>
</ul>
<p>This issue fixes a bug in how resources are evicted from the PUBLIC and PRIVATE yarn local caches used by the node manager for resource localization. In summary, the caches are now properly cleaned based on an LRU policy across both the public and private caches.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-11048">HDFS-11048</a> | <i>Major</i> | <b>Audit Log should escape control characters</b></li>
</ul>
<p>HDFS audit logs are formatted as individual lines, each of which contains several key-value pair fields. Some of the values come from the client request (e.g. src, dst). Before this patch, control characters such as \t and \n were not escaped in audit logs; they could break lines unexpectedly or introduce additional tab characters (in the worst case, both) within a field, so tools that parse audit logs had to handle this carefully. After this patch, control characters in the src/dst fields are escaped.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-10597">HADOOP-10597</a> | <i>Major</i> | <b>RPC Server signals backoff to clients when all request queues are full</b></li>
</ul>
<p>This change introduces a new configuration key used by the RPC server to decide whether to send a backoff signal to RPC clients when the RPC call queue is full. When the feature is enabled, the RPC server no longer blocks on the processing of RPC requests when the call queue is full, which helps improve quality of service when the service is under heavy load. The configuration key has the format &#x201c;ipc.#port#.backoff.enable&#x201d;, where #port# is the port number the RPC server listens on. For example, to enable the feature for the RPC server listening on port 8020, set ipc.8020.backoff.enable to true.</p>
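<p>The same example as a core-site.xml fragment:</p>
<pre>
&lt;!-- core-site.xml: signal backoff instead of blocking when the 8020 queue fills --&gt;
&lt;property&gt;
  &lt;name&gt;ipc.8020.backoff.enable&lt;/name&gt;
  &lt;value&gt;true&lt;/value&gt;
&lt;/property&gt;
</pre><hr />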
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-11056">HDFS-11056</a> | <i>Major</i> | <b>Concurrent append and read operations lead to checksum error</b></li>
</ul>
<p>Load last partial chunk checksum properly into memory when converting a finalized/temporary replica to rbw replica. This ensures concurrent reader reads the correct checksum that matches the data before the update.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-13812">HADOOP-13812</a> | <i>Blocker</i> | <b>Upgrade Tomcat to 6.0.48</b></li>
</ul>
<p>Tomcat 6.0.46 starts to filter weak ciphers. Some old SSL clients may be affected. It is recommended to upgrade the SSL client. Run the SSL client against <a class="externalLink" href="https://www.howsmyssl.com/a/check">https://www.howsmyssl.com/a/check</a> to find out its TLS version and cipher suites.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-11217">HDFS-11217</a> | <i>Major</i> | <b>Annotate NameNode and DataNode MXBean interfaces as Private/Stable</b></li>
</ul>
<p>The DataNode and NameNode MXBean interfaces have been marked as Private and Stable to indicate that although users should not be implementing these interfaces directly, the information exposed by these interfaces is part of the HDFS public API.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-11229">HDFS-11229</a> | <i>Blocker</i> | <b>HDFS-11056 failed to close meta file</b></li>
</ul>
<p>The fix for HDFS-11056 reads the meta file to load the last partial chunk checksum when a block is converted from finalized/temporary to rbw. However, it did not close the file explicitly, which could cause the number of open files to reach the system limit. This jira fixes it by closing the file explicitly after the meta file is read.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-11160">HDFS-11160</a> | <i>Major</i> | <b>VolumeScanner reports write-in-progress replicas as corrupt incorrectly</b></li>
</ul>
<p>Fixed a race condition that caused VolumeScanner to recognize a good replica as a bad one if the replica is also being written concurrently.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-13956">HADOOP-13956</a> | <i>Critical</i> | <b>Read ADLS credentials from Credential Provider</b></li>
</ul>
<p>The hadoop-azure-datalake file system now supports configuration of the Azure Data Lake Store account credentials using the standard Hadoop Credential Provider API. For details, please refer to the documentation on hadoop-azure-datalake and the Credential Provider API.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/YARN-5271">YARN-5271</a> | <i>Major</i> | <b>ATS client doesn&#x2019;t work with Jersey 2 on the classpath</b></li>
</ul>
<p>This is a workaround to avoid a dependency conflict with Spark 2, until a full classpath isolation solution is implemented: instantiation of a Timeline Service client is skipped if a NoClassDefFoundError is encountered.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-13929">HADOOP-13929</a> | <i>Major</i> | <b>ADLS connector should not check in contract-test-options.xml</b></li>
</ul>
<p>To run live unit tests, create src/test/resources/auth-keys.xml with the same properties as in the deprecated contract-test-options.xml.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/YARN-6177">YARN-6177</a> | <i>Major</i> | <b>Yarn client should exit with an informative error message if an incompatible Jersey library is used at client</b></li>
</ul>
<p>The YARN client now exits with an informative error message if an incompatible Jersey library is used on the client side.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-14138">HADOOP-14138</a> | <i>Critical</i> | <b>Remove S3A ref from META-INF service discovery, rely on existing core-default entry</b></li>
</ul>
<p>The class implementing the s3a filesystem is now named in core-default.xml. Attempting to instantiate an S3A filesystem instance using a Configuration instance which has not included the default resources will fail. Applications should not be doing this anyway, as it will lose other critical configuration options needed by the filesystem.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-11498">HDFS-11498</a> | <i>Major</i> | <b>Make RestCsrfPreventionHandler and WebHdfsHandler compatible with Netty 4.0</b></li>
</ul>
<p>This JIRA sets the Netty 4 dependency to 4.0.23. This is an incompatible change for the 3.0 release line, as 3.0.0-alpha1 and 3.0.0-alpha2 depended on Netty 4.1.0.Beta5.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HADOOP-13037">HADOOP-13037</a> | <i>Major</i> | <b>Refactor Azure Data Lake Store as an independent FileSystem</b></li>
</ul>
<p>Hadoop now supports integration with Azure Data Lake as an alternative Hadoop-compatible file system. Please refer to the Hadoop site documentation of Azure Data Lake for details on usage and configuration.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-11431">HDFS-11431</a> | <i>Blocker</i> | <b>hadoop-hdfs-client JAR does not include ConfiguredFailoverProxyProvider</b></li>
</ul>
<p>The hadoop-client POM now includes a leaner hdfs-client, stripping out all the transitive dependencies on JARs only needed for the Hadoop HDFS daemon itself. The specific jars now excluded are: leveldbjni-all, jetty-util, commons-daemon, xercesImpl, netty and servlet-api.</p>
<p>This should make downstream projects&#x2019; dependent JARs smaller, and avoid version conflict problems with the specific JARs now excluded.</p>
<p>Applications may encounter build problems if they depended on these JARs without explicitly including them. There are two fixes:</p>
<ul>
<li>Explicitly include the JARs, stating which version of them you want.</li>
<li>Add a dependency on hadoop-hdfs (see the Maven sketch after this list). For Hadoop 2.8+, this will add the missing dependencies. For builds against older versions of Hadoop, this is harmless, as hadoop-hdfs and all its dependencies are already pulled in by the hadoop-client POM.</li>
</ul>
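<p>A minimal Maven fragment for the second option (the version shown is illustrative; match it to your Hadoop release):</p>
<pre>
&lt;dependency&gt;
  &lt;groupId&gt;org.apache.hadoop&lt;/groupId&gt;
  &lt;artifactId&gt;hadoop-hdfs&lt;/artifactId&gt;
  &lt;version&gt;2.8.0&lt;/version&gt;
&lt;/dependency&gt;
</pre><hr />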
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-8818">HDFS-8818</a> | <i>Major</i> | <b>Allow Balancer to run faster</b></li>
</ul>
<p>Adds a new configuration property, &#x201c;dfs.balancer.max-size-to-move&#x201d;, so that Balancer.MAX_SIZE_TO_MOVE becomes configurable.</p><hr />
<ul>
<li><a class="externalLink" href="https://issues.apache.org/jira/browse/YARN-6959">YARN-6959</a> | <i>Major</i> | <b>RM may allocate wrong AM Container for new attempt</b></li>
</ul>
<p>ResourceManager will now record ResourceRequests from different attempts into different objects.</p>
</div>
</div>
<div class="clear">
<hr/>
</div>
<div id="footer">
<div class="xright">
&#169; 2008-2021
Apache Software Foundation
- <a href="http://maven.apache.org/privacy-policy.html">Privacy Policy</a>.
Apache Maven, Maven, Apache, the Apache feather logo, and the Apache Maven project logos are trademarks of The Apache Software Foundation.
</div>
<div class="clear">
<hr/>
</div>
</div>
</body>
</html>