<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<!-- Generated by Apache Maven Doxia at 2016-10-02 -->
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Apache Hadoop 2.6.5 -
HDFS Rolling Upgrade</title>
<style type="text/css" media="all">
@import url("./css/maven-base.css");
@import url("./css/maven-theme.css");
@import url("./css/site.css");
</style>
<link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
<meta name="Date-Revision-yyyymmdd" content="20161002" />
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
</head>
<body class="composite">
<div id="banner">
<a href="https://hadoop.apache.org/" id="bannerLeft">
<img src="https://hadoop.apache.org/images/hadoop-logo.jpg" alt="" />
</a>
<a href="https://www.apache.org/" id="bannerRight">
<img src="https://www.apache.org/images/asf_logo_wide.png" alt="" />
</a>
<div class="clear">
<hr/>
</div>
</div>
<div id="breadcrumbs">
<div class="xleft">
<a href="https://www.apache.org/" class="externalLink">Apache</a>
&gt;
<a href="https://hadoop.apache.org/" class="externalLink">Hadoop</a>
&gt;
<a href="../">Apache Hadoop Project Dist POM</a>
&gt;
Apache Hadoop 2.6.5
</div>
<div class="xright"> <a href="https://wiki.apache.org/hadoop" class="externalLink">Wiki</a>
|
<a href="https://svn.apache.org/repos/asf/hadoop/" class="externalLink">SVN</a>
|
<a href="https://hadoop.apache.org/" class="externalLink">Apache Hadoop</a>
&nbsp;| Last Published: 2016-10-02
&nbsp;| Version: 2.6.5
</div>
<div class="clear">
<hr/>
</div>
</div>
<div id="leftColumn">
<div id="navcolumn">
<h5>General</h5>
<ul>
<li class="none">
<a href="../../index.html">Overview</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/SingleCluster.html">Single Node Setup</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/ClusterSetup.html">Cluster Setup</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/CommandsManual.html">Hadoop Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/FileSystemShell.html">FileSystem Shell</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Compatibility.html">Hadoop Compatibility</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/filesystem/index.html">FileSystem Specification</a>
</li>
</ul>
<h5>Common</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/CLIMiniCluster.html">CLI Mini Cluster</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/NativeLibraries.html">Native Libraries</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Superusers.html">Superusers</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/SecureMode.html">Secure Mode</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/ServiceLevelAuth.html">Service Level Authorization</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/HttpAuthentication.html">HTTP Authentication</a>
</li>
<li class="none">
<a href="../../hadoop-kms/index.html">Hadoop KMS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Tracing.html">Tracing</a>
</li>
</ul>
<h5>HDFS</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html">HDFS User Guide</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSCommands.html">HDFS Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html">High Availability With QJM</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html">High Availability With NFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/Federation.html">Federation</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ViewFs.html">ViewFs Guide</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html">HDFS Snapshots</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsDesign.html">HDFS Architecture</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsEditsViewer.html">Edits Viewer</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsImageViewer.html">Image Viewer</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html">Permissions and HDFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsQuotaAdminGuide.html">Quotas and HDFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/Hftp.html">HFTP</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/LibHdfs.html">C API libhdfs</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/WebHDFS.html">WebHDFS REST API</a>
</li>
<li class="none">
<a href="../../hadoop-hdfs-httpfs/index.html">HttpFS Gateway</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ShortCircuitLocalReads.html">Short Circuit Local Reads</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html">Centralized Cache Management</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html">HDFS NFS Gateway</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html">HDFS Rolling Upgrade</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ExtendedAttributes.html">Extended Attributes</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html">Transparent Encryption</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsMultihoming.html">HDFS Support for Multihoming</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html">Archival Storage, SSD & Memory</a>
</li>
</ul>
<h5>MapReduce</h5>
<ul>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html">MapReduce Tutorial</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredCommands.html">MapReduce Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduce_Compatibility_Hadoop1_Hadoop2.html">Compatibilty between Hadoop 1.x and Hadoop 2.x</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/EncryptedShuffle.html">Encrypted Shuffle</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/PluggableShuffleAndPluggableSort.html">Pluggable Shuffle/Sort</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/DistributedCacheDeploy.html">Distributed Cache Deploy</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/HadoopStreaming.html">Hadoop Streaming</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/HadoopArchives.html">Hadoop Archives</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/DistCp.html">DistCp</a>
</li>
</ul>
<h5>MapReduce REST APIs</h5>
<ul>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredAppMasterRest.html">MR Application Master</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-hs/HistoryServerRest.html">MR History Server</a>
</li>
</ul>
<h5>YARN</h5>
<ul>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YARN.html">YARN Architecture</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html">Capacity Scheduler</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/FairScheduler.html">Fair Scheduler</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html">ResourceManager Restart</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html">ResourceManager HA</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/WebApplicationProxy.html">Web Application Proxy</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServer.html">YARN Timeline Server</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html">Writing YARN Applications</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YarnCommands.html">YARN Commands</a>
</li>
<li class="none">
<a href="../../hadoop-sls/SchedulerLoadSimulator.html">Scheduler Load Simulator</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeManagerRestart.html">NodeManager Restart</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/DockerContainerExecutor.html">DockerContainerExecutor</a>
</li>
</ul>
<h5>YARN REST APIs</h5>
<ul>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/WebServicesIntro.html">Introduction</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html">Resource Manager</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeManagerRest.html">Node Manager</a>
</li>
</ul>
<h5>Auth</h5>
<ul>
<li class="none">
<a href="../../hadoop-auth/index.html">Overview</a>
</li>
<li class="none">
<a href="../../hadoop-auth/Examples.html">Examples</a>
</li>
<li class="none">
<a href="../../hadoop-auth/Configuration.html">Configuration</a>
</li>
<li class="none">
<a href="../../hadoop-auth/BuildingIt.html">Building</a>
</li>
</ul>
<h5>Reference</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/releasenotes.html">Release Notes</a>
</li>
<li class="none">
<a href="../../api/index.html">API docs</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/CHANGES.txt">Common CHANGES.txt</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/CHANGES.txt">HDFS CHANGES.txt</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-mapreduce/CHANGES.txt">MapReduce CHANGES.txt</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-yarn/CHANGES.txt">YARN CHANGES.txt</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Metrics.html">Metrics</a>
</li>
</ul>
<h5>Configuration</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/core-default.xml">core-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/hdfs-default.xml">hdfs-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml">mapred-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-common/yarn-default.xml">yarn-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/DeprecatedProperties.html">Deprecated Properties</a>
</li>
</ul>
<a href="https://maven.apache.org/" title="Built by Maven" class="poweredBy">
<img alt="Built by Maven" src="./images/logos/maven-feather.png"/>
</a>
</div>
</div>
<div id="bodyColumn">
<div id="contentBox">
<!-- Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. -->
<h1>HDFS Rolling Upgrade</h1>
<ul>
<li><a href="#Introduction">Introduction</a></li>
<li><a href="#Upgrade">Upgrade</a>
<ul>
<li><a href="#Upgrade_without_Downtime">Upgrade without Downtime</a>
<ul>
<li><a href="#Upgrading_Non-Federated_Clusters">Upgrading Non-Federated Clusters</a></li>
<li><a href="#Upgrading_Federated_Clusters">Upgrading Federated Clusters</a></li></ul></li>
<li><a href="#Upgrade_with_Downtime">Upgrade with Downtime</a>
<ul>
<li><a href="#Upgrading_Non-HA_Clusters">Upgrading Non-HA Clusters</a></li></ul></li></ul></li>
<li><a href="#Downgrade_and_Rollback">Downgrade and Rollback</a></li>
<li><a href="#Downgrade">Downgrade</a>
<ul>
<li><a href="#Downgrade_without_Downtime">Downgrade without Downtime</a></li>
<li><a href="#Downgrade_with_Downtime">Downgrade with Downtime</a></li></ul></li>
<li><a href="#Rollback">Rollback</a></li>
<li><a href="#Commands_and_Startup_Options_for_Rolling_Upgrade">Commands and Startup Options for Rolling Upgrade</a>
<ul>
<li><a href="#DFSAdmin_Commands">DFSAdmin Commands</a>
<ul>
<li><a href="#dfsadmin_-rollingUpgrade">dfsadmin -rollingUpgrade</a></li>
<li><a href="#dfsadmin_-getDatanodeInfo">dfsadmin -getDatanodeInfo</a></li>
<li><a href="#dfsadmin_-shutdownDatanode">dfsadmin -shutdownDatanode</a></li></ul></li>
<li><a href="#NameNode_Startup_Options">NameNode Startup Options</a>
<ul>
<li><a href="#namenode_-rollingUpgrade">namenode -rollingUpgrade</a></li></ul></li></ul></li></ul>
<a name="Introduction"></a>
<div class="section" id="Introduction">
<h2>Introduction<a name="Introduction"></a></h2>
<p>
<i>HDFS rolling upgrade</i> allows upgrading individual HDFS daemons.
For example, the datanodes can be upgraded independently of the namenodes.
A namenode can be upgraded independently of the other namenodes.
The namenodes can be upgraded independently of the datanodes and journal nodes.
</p>
</div>
<a name="Upgrade"></a>
<div class="section" id="Upgrade">
<h2>Upgrade<a name="Upgrade"></a></h2>
<p>
In Hadoop v2, HDFS supports highly available (HA) namenode services and wire compatibility.
These two capabilities make it feasible to upgrade HDFS without incurring HDFS downtime.
In order to upgrade an HDFS cluster without downtime, the cluster must be set up with HA.
</p>
<a name="UpgradeWithoutDowntime"></a>
<div class="section" id="UpgradeWithoutDowntime">
<h3>Upgrade without Downtime<a name="Upgrade_without_Downtime"></a></h3>
<p>
In an HA cluster, there are two or more <i>NameNodes (NNs)</i>, many <i>DataNodes (DNs)</i>,
a few <i>JournalNodes (JNs)</i> and a few <i>ZooKeeperNodes (ZKNs)</i>.
<i>JNs</i> are relatively stable and, in most cases, do not require an upgrade when upgrading HDFS.
In the rolling upgrade procedure described here,
only <i>NNs</i> and <i>DNs</i> are considered but <i>JNs</i> and <i>ZKNs</i> are not.
Upgrading <i>JNs</i> and <i>ZKNs</i> may incur cluster downtime.
</p>
<div class="section">
<h4>Upgrading Non-Federated Clusters<a name="Upgrading_Non-Federated_Clusters"></a></h4>
<p>
Suppose there are two namenodes <i>NN1</i> and <i>NN2</i>,
where <i>NN1</i> and <i>NN2</i> are respectively in active and standby states.
The following are the steps for upgrading an HA cluster (a command-line sketch follows the list):
</p>
<ol style="list-style-type: decimal">
<li>Prepare Rolling Upgrade
<ol style="list-style-type: decimal">
<li>Run &quot;<tt><a href="#dfsadmin_-rollingUpgrade">hdfs dfsadmin -rollingUpgrade prepare</a></tt>&quot;
to create a fsimage for rollback.
</li>
<li>Run &quot;<tt><a href="#dfsadmin_-rollingUpgrade">hdfs dfsadmin -rollingUpgrade query</a></tt>&quot;
to check the status of the rollback image.
Wait and re-run the command until
the &quot;<tt>Proceed with rolling upgrade</tt>&quot; message is shown.
</li>
</ol></li>
<li>Upgrade Active and Standby <i>NNs</i>
<ol style="list-style-type: decimal">
<li>Shutdown and upgrade <i>NN2</i>.</li>
<li>Start <i>NN2</i> as standby with the
&quot;<a href="#namenode_-rollingUpgrade"><tt>-rollingUpgrade started</tt></a>&quot; option.</li>
<li>Failover from <i>NN1</i> to <i>NN2</i>
so that <i>NN2</i> becomes active and <i>NN1</i> becomes standby.</li>
<li>Shutdown and upgrade <i>NN1</i>.</li>
<li>Start <i>NN1</i> as standby with the
&quot;<a href="#namenode_-rollingUpgrade"><tt>-rollingUpgrade started</tt></a>&quot; option.</li>
</ol></li>
<li>Upgrade <i>DNs</i>
<ol style="list-style-type: decimal">
<li>Choose a small subset of datanodes (e.g. all datanodes under a particular rack).</li>
<ol style="list-style-type: decimal">
<li>Run &quot;<tt><a href="#dfsadmin_-shutdownDatanode">hdfs dfsadmin -shutdownDatanode &lt;DATANODE_HOST:IPC_PORT&gt; upgrade</a></tt>&quot;
to shutdown one of the chosen datanodes.</li>
<li>Run &quot;<tt><a href="#dfsadmin_-getDatanodeInfo">hdfs dfsadmin -getDatanodeInfo &lt;DATANODE_HOST:IPC_PORT&gt;</a></tt>&quot;
to check and wait for the datanode to shutdown.</li>
<li>Upgrade and restart the datanode.</li>
<li>Perform the above steps for all the chosen datanodes in the subset in parallel.</li>
</ol>
<li>Repeat the above steps until all datanodes in the cluster are upgraded.</li>
</ol></li>
<li>Finalize Rolling Upgrade
<ul>
<li>Run &quot;<tt><a href="#dfsadmin_-rollingUpgrade">hdfs dfsadmin -rollingUpgrade finalize</a></tt>&quot;
to finalize the rolling upgrade.</li>
</ul></li>
</ol>
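<p>
For illustration only, the steps above are sketched below as shell commands.
The NameNode IDs <tt>nn1</tt> and <tt>nn2</tt>, the datanode address
<tt>dn1.example.com:50020</tt> and the use of foreground daemon commands are
assumptions for this example; how daemons are actually stopped, upgraded and
restarted depends on how the cluster is deployed.
</p>
<div class="source">
<pre># 1. Prepare the rolling upgrade and wait for the rollback image
hdfs dfsadmin -rollingUpgrade prepare
hdfs dfsadmin -rollingUpgrade query      # repeat until "Proceed with rolling upgrade" is shown

# 2. Upgrade the standby NN2 first, fail over, then upgrade NN1
#    (on nn2: stop the namenode, install the new release, then)
hdfs namenode -rollingUpgrade started    # NN2 rejoins as standby
hdfs haadmin -failover nn1 nn2           # NN2 becomes active, NN1 becomes standby
#    (on nn1: stop the namenode, install the new release, then)
hdfs namenode -rollingUpgrade started    # NN1 rejoins as standby

# 3. Upgrade datanodes, one small subset (e.g. one rack) at a time
hdfs dfsadmin -shutdownDatanode dn1.example.com:50020 upgrade
hdfs dfsadmin -getDatanodeInfo dn1.example.com:50020    # repeat until the datanode stops responding
#    (on dn1: install the new release and restart the datanode)

# 4. Finalize once every daemon runs the new release
hdfs dfsadmin -rollingUpgrade finalize</pre></div>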
</div>
<div class="section">
<h4>Upgrading Federated Clusters<a name="Upgrading_Federated_Clusters"></a></h4>
<p>
In a federated cluster, there are multiple namespaces
and a pair of active and standby <i>NNs</i> for each namespace.
The procedure for upgrading a federated cluster is similar to that for a non-federated cluster,
except that Step 1 and Step 4 are performed on each namespace
and Step 2 is performed on each pair of active and standby <i>NNs</i> (a command sketch follows the list), i.e.
</p>
<ol style="list-style-type: decimal">
<li>Prepare Rolling Upgrade for Each Namespace</li>
<li>Upgrade Active and Standby <i>NN</i> pairs for Each Namespace</li>
<li>Upgrade <i>DNs</i></li>
<li>Finalize Rolling Upgrade for Each Namespace</li>
</ol>
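<p>
As an illustration, the per-namespace steps can be driven by pointing
<tt>dfsadmin</tt> at each nameservice with the generic <tt>-fs</tt> option.
The nameservice IDs <tt>ns1</tt> and <tt>ns2</tt> below are placeholders for this sketch.
</p>
<div class="source">
<pre># Step 1: prepare the rolling upgrade on every namespace
hdfs dfsadmin -fs hdfs://ns1 -rollingUpgrade prepare
hdfs dfsadmin -fs hdfs://ns2 -rollingUpgrade prepare

# Steps 2 and 3: upgrade each active/standby NN pair and then the datanodes,
# as in the non-federated procedure above

# Step 4: finalize the rolling upgrade on every namespace
hdfs dfsadmin -fs hdfs://ns1 -rollingUpgrade finalize
hdfs dfsadmin -fs hdfs://ns2 -rollingUpgrade finalize</pre></div>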
</div></div>
<a name="UpgradeWithDowntime"></a>
<div class="section" id="UpgradeWithDowntime">
<h3>Upgrade with Downtime<a name="Upgrade_with_Downtime"></a></h3>
<p>
For non-HA clusters,
it is impossible to upgrade HDFS without downtime since doing so requires restarting the namenode.
However, datanodes can still be upgraded in a rolling manner.
</p>
<div class="section">
<h4>Upgrading Non-HA Clusters<a name="Upgrading_Non-HA_Clusters"></a></h4>
<p>
A non-HA cluster consists of a <i>NameNode (NN)</i>, a <i>SecondaryNameNode (SNN)</i>
and many <i>DataNodes (DNs)</i>.
The procedure for upgrading a non-HA cluster is similar to upgrading an HA cluster
except that Step 2 &quot;Upgrade Active and Standby <i>NNs</i>&quot; is replaced by the following (a command sketch follows the list):
</p>
<ul>
<li>Upgrade <i>NN</i> and <i>SNN</i>
<ol style="list-style-type: decimal">
<li>Shutdown <i>SNN</i></li>
<li>Shutdown and upgrade <i>NN</i>.</li>
<li>Start <i>NN</i> with the
&quot;<a href="#namenode_-rollingUpgrade"><tt>-rollingUpgrade started</tt></a>&quot; option.</li>
<li>Upgrade and restart <i>SNN</i></li>
</ol></li>
</ul>
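<p>
A minimal sketch of this modified Step 2, assuming the daemons are run directly
from the <tt>hdfs</tt> script (in practice they are usually managed through the
cluster's daemon scripts):
</p>
<div class="source">
<pre># a. stop the SecondaryNameNode
# b. stop the NameNode, install the new release on its host, then restart it with
hdfs namenode -rollingUpgrade started
# c. install the new release on the SNN host and restart the SecondaryNameNode
hdfs secondarynamenode</pre></div>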
</div></div>
</div>
<a name="DowngradeAndRollback"></a>
<div class="section" id="DowngradeAndRollback">
<h2>Downgrade and Rollback<a name="Downgrade_and_Rollback"></a></h2>
<p>
When the upgraded release is undesirable
or, in the unlikely case that the upgrade fails (due to bugs in the newer release),
administrators may choose to downgrade HDFS back to the pre-upgrade release,
or roll back HDFS to the pre-upgrade release and the pre-upgrade state.
</p>
<p>
Note that downgrade can be done in a rolling fashion but rollback cannot.
Rollback requires cluster downtime.
</p>
<p>
Note also that downgrade and rollback are possible only after a rolling upgrade is started and
before the upgrade is terminated.
An upgrade can be terminated by either finalize, downgrade or rollback.
Therefore, it may not be possible to perform rollback after finalize or downgrade,
or to perform downgrade after finalize.
</p>
</div>
<a name="Downgrade"></a>
<div class="section" id="Downgrade">
<h2>Downgrade<a name="Downgrade"></a></h2>
<p>
<i>Downgrade</i> restores the software back to the pre-upgrade release
and preserves the user data.
Suppose time <i>T</i> is the rolling upgrade start time and the upgrade is terminated by downgrade.
Then, the files created before or after <i>T</i> remain available in HDFS.
The files deleted before or after <i>T</i> remain deleted in HDFS.
</p>
<p>
A newer release is downgradable to the pre-upgrade release
only if both the namenode layout version and the datanode layout version
are not changed between these two releases.
</p>
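<p>
One way to verify this condition is to compare the <tt>layoutVersion</tt> values
recorded by the two releases in the storage directories.
The paths below are placeholders for the directories configured by
<tt>dfs.namenode.name.dir</tt> and <tt>dfs.datanode.data.dir</tt>.
</p>
<div class="source">
<pre># layout versions are recorded in the VERSION file of each storage directory
grep layoutVersion /path/to/namenode/name/dir/current/VERSION
grep layoutVersion /path/to/datanode/data/dir/current/VERSION</pre></div>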
<a name="DowngradeWithoutDowntime"></a>
<div class="section" id="DowngradeWithoutDowntime">
<h3>Downgrade without Downtime<a name="Downgrade_without_Downtime"></a></h3>
<p>
In an HA cluster,
when a rolling upgrade from an old software release to a new software release is in progress,
it is possible to downgrade, in a rolling fashion, the upgraded machines back to the old software release.
Same as before, suppose <i>NN1</i> and <i>NN2</i> are respectively in active and standby states.
Below are the steps for rolling downgrade (a command sketch appears at the end of this subsection):
</p>
<ol style="list-style-type: decimal">
<li>Downgrade <i>DNs</i>
<ol style="list-style-type: decimal">
<li>Choose a small subset of datanodes (e.g. all datanodes under a particular rack).</li>
<ol style="list-style-type: decimal">
<li>Run &quot;<tt><a href="#dfsadmin_-shutdownDatanode">hdfs dfsadmin -shutdownDatanode &lt;DATANODE_HOST:IPC_PORT&gt; upgrade</a></tt>&quot;
to shutdown one of the chosen datanodes.</li>
<li>Run &quot;<tt><a href="#dfsadmin_-getDatanodeInfo">hdfs dfsadmin -getDatanodeInfo &lt;DATANODE_HOST:IPC_PORT&gt;</a></tt>&quot;
to check and wait for the datanode to shutdown.</li>
<li>Downgrade and restart the datanode.</li>
<li>Perform the above steps for all the chosen datanodes in the subset in parallel.</li>
</ol>
<li>Repeat the above steps until all upgraded datanodes in the cluster are downgraded.</li>
</ol></li>
<li>Downgrade Active and Standby <i>NNs</i>
<ol style="list-style-type: decimal">
<li>Shutdown and downgrade <i>NN2</i>.</li>
<li>Start <i>NN2</i> as standby normally. (Note that it is incorrect to use the
&quot;<a href="#namenode_-rollingUpgrade"><tt>-rollingUpgrade downgrade</tt></a>&quot;
option here.)
</li>
<li>Failover from <i>NN1</i> to <i>NN2</i>
so that <i>NN2</i> becomes active and <i>NN1</i> becomes standby.</li>
<li>Shutdown and downgrade <i>NN1</i>.</li>
<li>Start <i>NN1</i> as standby normally. (Note that it is incorrect to use the
&quot;<a href="#namenode_-rollingUpgrade"><tt>-rollingUpgrade downgrade</tt></a>&quot;
option here.)
</li>
</ol></li>
<li>Finalize Rolling Downgrade
<ul>
<li>Run &quot;<tt><a href="#dfsadmin_-rollingUpgrade">hdfs dfsadmin -rollingUpgrade finalize</a></tt>&quot;
to finalize the rolling downgrade.</li>
</ul></li>
</ol>
<p>
Note that the datanodes must be downgraded before downgrading the namenodes
since protocols may be changed in a backward-compatible but not forward-compatible manner,
i.e. old datanodes can talk to the new namenodes but not vice versa.
</p>
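<p>
For illustration, the rolling downgrade mirrors the upgrade commands, with the
datanodes handled first and the namenodes restarted without any
<tt>-rollingUpgrade</tt> option. The hosts, IDs and port below are placeholders.
</p>
<div class="source">
<pre># 1. Downgrade datanodes first, one small subset at a time
hdfs dfsadmin -shutdownDatanode dn1.example.com:50020 upgrade
hdfs dfsadmin -getDatanodeInfo dn1.example.com:50020    # repeat until the datanode stops responding
#    (on dn1: reinstall the old release and restart the datanode)

# 2. Downgrade NN2, fail over, then downgrade NN1; start each namenode normally
hdfs haadmin -failover nn1 nn2

# 3. Finalize the rolling downgrade
hdfs dfsadmin -rollingUpgrade finalize</pre></div>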
</div>
<a name="DowngradeWithDowntime"></a>
<div class="section" id="DowngradeWithDowntime">
<h3>Downgrade with Downtime<a name="Downgrade_with_Downtime"></a></h3>
<p>
Administrators may choose to first shut down the cluster and then downgrade it.
The following are the steps (a command sketch follows the list):
</p>
<ol style="list-style-type: decimal">
<li>Shutdown all <i>NNs</i> and <i>DNs</i>.</li>
<li>Restore the pre-upgrade release in all machines.</li>
<li>Start <i>NNs</i> with the
&quot;<a href="#namenode_-rollingUpgrade"><tt>-rollingUpgrade downgrade</tt></a>&quot; option.</li>
<li>Start <i>DNs</i> normally.</li>
</ol>
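<p>
A minimal sketch of the restart commands, shown in foreground form for
illustration (the daemons are normally started through the cluster's
daemon scripts):
</p>
<div class="source">
<pre># after shutting down all NNs and DNs and restoring the pre-upgrade release
hdfs namenode -rollingUpgrade downgrade    # on each namenode
hdfs datanode                              # on each datanode, started normally</pre></div>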
</div>
</div>
<a name="Rollback"></a>
<div class="section" id="Rollback">
<h2>Rollback<a name="Rollback"></a></h2>
<p>
<i>Rollback</i> restores the software back to the pre-upgrade release
but also reverts the user data back to the pre-upgrade state.
Suppose time <i>T</i> is the rolling upgrade start time and the upgrade is terminated by rollback.
The files created before <i>T</i> remain available in HDFS but the files created after <i>T</i> become unavailable.
The files deleted before <i>T</i> remain deleted in HDFS but the files deleted after <i>T</i> are restored.
</p>
<p>
Rollback from a newer release to the pre-upgrade release is always supported.
However, it cannot be done in a rolling fashion. It requires cluster downtime.
Below are the steps for rollback (a command sketch follows the list):
</p>
<ul>
<li>Rollback HDFS
<ol style="list-style-type: decimal">
<li>Shutdown all <i>NNs</i> and <i>DNs</i>.</li>
<li>Restore the pre-upgrade release in all machines.</li>
<li>Start <i>NNs</i> with the
&quot;<a href="#namenode_-rollingUpgrade"><tt>-rollingUpgrade rollback</tt></a>&quot; option.</li>
<li>Start <i>DNs</i> with the &quot;<tt>-rollback</tt>&quot; option.</li>
</ol></li>
</ul>
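<p>
A minimal sketch of the restart commands, shown in foreground form for
illustration:
</p>
<div class="source">
<pre># after shutting down all NNs and DNs and restoring the pre-upgrade release
hdfs namenode -rollingUpgrade rollback     # on each namenode
hdfs datanode -rollback                    # on each datanode</pre></div>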
</div>
<a name="dfsadminCommands"></a>
<div class="section" id="dfsadminCommands">
<h2>Commands and Startup Options for Rolling Upgrade<a name="Commands_and_Startup_Options_for_Rolling_Upgrade"></a></h2>
<a name="dfsadminCommands"></a>
<div class="section" id="dfsadminCommands">
<h3>DFSAdmin Commands<a name="DFSAdmin_Commands"></a></h3>
<div class="section">
<h4><tt>dfsadmin -rollingUpgrade</tt><a name="dfsadmin_-rollingUpgrade"></a></h4>
<div class="source">
<pre>hdfs dfsadmin -rollingUpgrade &lt;query|prepare|finalize&gt;</pre></div>
<p>
Execute a rolling upgrade action.
</p>
<ul>
<li>Options:
<table border="0" class="bodyTable">
<tr class="a">
<td><tt>query</tt></td>
<td>Query the current rolling upgrade status.</td></tr>
<tr class="b">
<td><tt>prepare</tt></td>
<td>Prepare a new rolling upgrade.</td></tr>
<tr class="a">
<td><tt>finalize</tt></td>
<td>Finalize the current rolling upgrade.</td></tr>
</table></li></ul>
</div>
<div class="section">
<h4><tt>dfsadmin -getDatanodeInfo</tt><a name="dfsadmin_-getDatanodeInfo"></a></h4>
<div class="source">
<pre>hdfs dfsadmin -getDatanodeInfo &lt;DATANODE_HOST:IPC_PORT&gt;</pre></div>
<p>
Get information about the given datanode.
This command can be used to check whether a datanode is alive,
similar to the Unix <tt>ping</tt> command.
</p>
</div>
<div class="section">
<h4><tt>dfsadmin -shutdownDatanode</tt><a name="dfsadmin_-shutdownDatanode"></a></h4>
<div class="source">
<pre>hdfs dfsadmin -shutdownDatanode &lt;DATANODE_HOST:IPC_PORT&gt; [upgrade]</pre></div>
<p>
Submit a shutdown request for the given datanode.
If the optional <tt>upgrade</tt> argument is specified,
clients accessing the datanode will be advised to wait for it to restart
and the fast start-up mode will be enabled.
If the restart does not happen in time, clients will time out and ignore the datanode.
In such a case, the fast start-up mode will also be disabled.
</p>
<p>
Note that the command does not wait for the datanode shutdown to complete.
The &quot;<a href="#dfsadmin_-getDatanodeInfo">dfsadmin -getDatanodeInfo</a>&quot;
command can be used for checking if the datanode shutdown is completed.
</p>
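<p>
As an illustration, the two commands are often combined into a small wait loop
when restarting a datanode for upgrade. The datanode address below is a
placeholder, and the loop relies on <tt>dfsadmin</tt> returning a non-zero exit
code once the datanode stops responding.
</p>
<div class="source">
<pre>DN=dn1.example.com:50020
hdfs dfsadmin -shutdownDatanode $DN upgrade
# poll until the datanode no longer answers
while hdfs dfsadmin -getDatanodeInfo $DN ; do
  sleep 1
done
# now upgrade the software on the datanode host and restart the datanode</pre></div>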
</div></div>
<a name="dfsadminCommands"></a>
<div class="section" id="dfsadminCommands">
<h3>NameNode Startup Options<a name="NameNode_Startup_Options"></a></h3>
<div class="section">
<h4><tt>namenode -rollingUpgrade</tt><a name="namenode_-rollingUpgrade"></a></h4>
<div class="source">
<pre>hdfs namenode -rollingUpgrade &lt;downgrade|rollback|started&gt;</pre></div>
<p>
When a rolling upgrade is in progress,
the <tt>-rollingUpgrade</tt> namenode startup option is used to specify
various rolling upgrade options.
</p>
<ul>
<li>Options:
<table border="0" class="bodyTable">
<tr class="a">
<td><tt>downgrade</tt></td>
<td>Restores the namenode back to the pre-upgrade release
and preserves the user data.</td>
</tr>
<tr class="b">
<td><tt>rollback</tt></td>
<td>Restores the namenode back to the pre-upgrade release
but also reverts the user data back to the pre-upgrade state.</td>
</tr>
<tr class="a">
<td><tt>started</tt></td>
<td>Specifies that a rolling upgrade has already started,
so that the namenode allows image directories
with different layout versions during startup.</td>
</tr>
</table></li></ul>
</div></div>
</div>
</div>
</div>
<div class="clear">
<hr/>
</div>
<div id="footer">
<div class="xright">&#169; 2016
Apache Software Foundation
- <a href="https://maven.apache.org/privacy-policy.html">Privacy Policy</a></div>
<div class="clear">
<hr/>
</div>
</div>
</body>
</html>