<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<!--
| Generated by Apache Maven Doxia at 2021-06-15
| Rendered using Apache Maven Stylus Skin 1.5
-->
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Apache Hadoop 3.3.1 &#x2013; Unix Shell Guide</title>
<style type="text/css" media="all">
@import url("./css/maven-base.css");
@import url("./css/maven-theme.css");
@import url("./css/site.css");
</style>
<link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
<meta name="Date-Revision-yyyymmdd" content="20210615" />
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
</head>
<body class="composite">
<div id="banner">
<a href="http://hadoop.apache.org/" id="bannerLeft">
<img src="http://hadoop.apache.org/images/hadoop-logo.jpg" alt="" />
</a>
<a href="http://www.apache.org/" id="bannerRight">
<img src="http://www.apache.org/images/asf_logo_wide.png" alt="" />
</a>
<div class="clear">
<hr/>
</div>
</div>
<div id="breadcrumbs">
<div class="xleft">
<a href="http://www.apache.org/" class="externalLink">Apache</a>
&gt;
<a href="http://hadoop.apache.org/" class="externalLink">Hadoop</a>
&gt;
<a href="../index.html">Apache Hadoop Project Dist POM</a>
&gt;
<a href="index.html">Apache Hadoop 3.3.1</a>
&gt;
Unix Shell Guide
</div>
<div class="xright"> <a href="http://wiki.apache.org/hadoop" class="externalLink">Wiki</a>
|
<a href="https://gitbox.apache.org/repos/asf/hadoop.git" class="externalLink">git</a>
|
<a href="http://hadoop.apache.org/" class="externalLink">Apache Hadoop</a>
&nbsp;| Last Published: 2021-06-15
&nbsp;| Version: 3.3.1
</div>
<div class="clear">
<hr/>
</div>
</div>
<div id="leftColumn">
<div id="navcolumn">
<h5>General</h5>
<ul>
<li class="none">
<a href="../../index.html">Overview</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/SingleCluster.html">Single Node Setup</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/ClusterSetup.html">Cluster Setup</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/CommandsManual.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/FileSystemShell.html">FileSystem Shell</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Compatibility.html">Compatibility Specification</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/DownstreamDev.html">Downstream Developer's Guide</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/AdminCompatibilityGuide.html">Admin Compatibility Guide</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/InterfaceClassification.html">Interface Classification</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/filesystem/index.html">FileSystem Specification</a>
</li>
</ul>
<h5>Common</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/CLIMiniCluster.html">CLI Mini Cluster</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/FairCallQueue.html">Fair Call Queue</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/NativeLibraries.html">Native Libraries</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Superusers.html">Proxy User</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/RackAwareness.html">Rack Awareness</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/SecureMode.html">Secure Mode</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/ServiceLevelAuth.html">Service Level Authorization</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/HttpAuthentication.html">HTTP Authentication</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/CredentialProviderAPI.html">Credential Provider API</a>
</li>
<li class="none">
<a href="../../hadoop-kms/index.html">Hadoop KMS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Tracing.html">Tracing</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/UnixShellGuide.html">Unix Shell Guide</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/registry/index.html">Registry</a>
</li>
</ul>
<h5>HDFS</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsDesign.html">Architecture</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html">User Guide</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html">NameNode HA With QJM</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html">NameNode HA With NFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ObserverNameNode.html">Observer NameNode</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/Federation.html">Federation</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ViewFs.html">ViewFs</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ViewFsOverloadScheme.html">ViewFsOverloadScheme</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html">Snapshots</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsEditsViewer.html">Edits Viewer</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsImageViewer.html">Image Viewer</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html">Permissions and HDFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsQuotaAdminGuide.html">Quotas and HDFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/LibHdfs.html">libhdfs (C API)</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/WebHDFS.html">WebHDFS (REST API)</a>
</li>
<li class="none">
<a href="../../hadoop-hdfs-httpfs/index.html">HttpFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ShortCircuitLocalReads.html">Short Circuit Local Reads</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html">Centralized Cache Management</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html">NFS Gateway</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html">Rolling Upgrade</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ExtendedAttributes.html">Extended Attributes</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html">Transparent Encryption</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsMultihoming.html">Multihoming</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html">Storage Policies</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/MemoryStorage.html">Memory Storage Support</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/SLGUserGuide.html">Synthetic Load Generator</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html">Erasure Coding</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSDiskbalancer.html">Disk Balancer</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsUpgradeDomain.html">Upgrade Domain</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsDataNodeAdminGuide.html">DataNode Admin</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs-rbf/HDFSRouterFederation.html">Router Federation</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsProvidedStorage.html">Provided Storage</a>
</li>
</ul>
<h5>MapReduce</h5>
<ul>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html">Tutorial</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduce_Compatibility_Hadoop1_Hadoop2.html">Compatibility with 1.x</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/EncryptedShuffle.html">Encrypted Shuffle</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/PluggableShuffleAndPluggableSort.html">Pluggable Shuffle/Sort</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/DistributedCacheDeploy.html">Distributed Cache Deploy</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/SharedCacheSupport.html">Support for YARN Shared Cache</a>
</li>
</ul>
<h5>MapReduce REST APIs</h5>
<ul>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredAppMasterRest.html">MR Application Master</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-hs/HistoryServerRest.html">MR History Server</a>
</li>
</ul>
<h5>YARN</h5>
<ul>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YARN.html">Architecture</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YarnCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html">Capacity Scheduler</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/FairScheduler.html">Fair Scheduler</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html">ResourceManager Restart</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html">ResourceManager HA</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceModel.html">Resource Model</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeLabel.html">Node Labels</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeAttributes.html">Node Attributes</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/WebApplicationProxy.html">Web Application Proxy</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServer.html">Timeline Server</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServiceV2.html">Timeline Service V.2</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html">Writing YARN Applications</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YarnApplicationSecurity.html">YARN Application Security</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeManager.html">NodeManager</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/DockerContainers.html">Running Applications in Docker Containers</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/RuncContainers.html">Running Applications in runC Containers</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeManagerCgroups.html">Using CGroups</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/SecureContainer.html">Secure Containers</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ReservationSystem.html">Reservation System</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/GracefulDecommission.html">Graceful Decommission</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html">Opportunistic Containers</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/Federation.html">YARN Federation</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/SharedCache.html">Shared Cache</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/UsingGpus.html">Using GPU</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/UsingFPGA.html">Using FPGA</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/PlacementConstraints.html">Placement Constraints</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YarnUI2.html">YARN UI2</a>
</li>
</ul>
<h5>YARN REST APIs</h5>
<ul>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/WebServicesIntro.html">Introduction</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html">Resource Manager</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeManagerRest.html">Node Manager</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServer.html#Timeline_Server_REST_API_v1">Timeline Server</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServiceV2.html#Timeline_Service_v.2_REST_API">Timeline Service V.2</a>
</li>
</ul>
<h5>YARN Service</h5>
<ul>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/Overview.html">Overview</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/QuickStart.html">QuickStart</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/Concepts.html">Concepts</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/YarnServiceAPI.html">Yarn Service API</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/ServiceDiscovery.html">Service Discovery</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/SystemServices.html">System Services</a>
</li>
</ul>
<h5>Hadoop Compatible File Systems</h5>
<ul>
<li class="none">
<a href="../../hadoop-aliyun/tools/hadoop-aliyun/index.html">Aliyun OSS</a>
</li>
<li class="none">
<a href="../../hadoop-aws/tools/hadoop-aws/index.html">Amazon S3</a>
</li>
<li class="none">
<a href="../../hadoop-azure/index.html">Azure Blob Storage</a>
</li>
<li class="none">
<a href="../../hadoop-azure-datalake/index.html">Azure Data Lake Storage</a>
</li>
<li class="none">
<a href="../../hadoop-openstack/index.html">OpenStack Swift</a>
</li>
<li class="none">
<a href="../../hadoop-cos/cloud-storage/index.html">Tencent COS</a>
</li>
</ul>
<h5>Auth</h5>
<ul>
<li class="none">
<a href="../../hadoop-auth/index.html">Overview</a>
</li>
<li class="none">
<a href="../../hadoop-auth/Examples.html">Examples</a>
</li>
<li class="none">
<a href="../../hadoop-auth/Configuration.html">Configuration</a>
</li>
<li class="none">
<a href="../../hadoop-auth/BuildingIt.html">Building</a>
</li>
</ul>
<h5>Tools</h5>
<ul>
<li class="none">
<a href="../../hadoop-streaming/HadoopStreaming.html">Hadoop Streaming</a>
</li>
<li class="none">
<a href="../../hadoop-archives/HadoopArchives.html">Hadoop Archives</a>
</li>
<li class="none">
<a href="../../hadoop-archive-logs/HadoopArchiveLogs.html">Hadoop Archive Logs</a>
</li>
<li class="none">
<a href="../../hadoop-distcp/DistCp.html">DistCp</a>
</li>
<li class="none">
<a href="../../hadoop-gridmix/GridMix.html">GridMix</a>
</li>
<li class="none">
<a href="../../hadoop-rumen/Rumen.html">Rumen</a>
</li>
<li class="none">
<a href="../../hadoop-resourceestimator/ResourceEstimator.html">Resource Estimator Service</a>
</li>
<li class="none">
<a href="../../hadoop-sls/SchedulerLoadSimulator.html">Scheduler Load Simulator</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Benchmarking.html">Hadoop Benchmarking</a>
</li>
<li class="none">
<a href="../../hadoop-dynamometer/Dynamometer.html">Dynamometer</a>
</li>
</ul>
<h5>Reference</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/release/">Changelog and Release Notes</a>
</li>
<li class="none">
<a href="../../api/index.html">Java API docs</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/UnixShellAPI.html">Unix Shell API</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Metrics.html">Metrics</a>
</li>
</ul>
<h5>Configuration</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/core-default.xml">core-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/hdfs-default.xml">hdfs-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs-rbf/hdfs-rbf-default.xml">hdfs-rbf-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml">mapred-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-common/yarn-default.xml">yarn-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-kms/kms-default.html">kms-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-hdfs-httpfs/httpfs-default.html">httpfs-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/DeprecatedProperties.html">Deprecated Properties</a>
</li>
</ul>
<a href="http://maven.apache.org/" title="Built by Maven" class="poweredBy">
<img alt="Built by Maven" src="./images/logos/maven-feather.png"/>
</a>
</div>
</div>
<div id="bodyColumn">
<div id="contentBox">
<!---
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<h1>Unix Shell Guide</h1>
<ul>
<li><a href="#Important_End-User_Environment_Variables">Important End-User Environment Variables</a>
<ul>
<li><a href="#HADOOP_CLIENT_OPTS">HADOOP_CLIENT_OPTS</a></li>
<li><a href="#a.28command.29_.28subcommand.29_OPTS">(command)_(subcommand)_OPTS</a></li>
<li><a href="#HADOOP_CLASSPATH">HADOOP_CLASSPATH</a></li>
<li><a href="#Auto-setting_of_Variables">Auto-setting of Variables</a></li></ul></li>
<li><a href="#Administrator_Environment">Administrator Environment</a>
<ul>
<li><a href="#a.28command.29_.28subcommand.29_OPTS">(command)_(subcommand)_OPTS</a></li>
<li><a href="#a.28command.29_.28subcommand.29_USER">(command)_(subcommand)_USER</a></li></ul></li>
<li><a href="#Developer_and_Advanced_Administrator_Environment">Developer and Advanced Administrator Environment</a>
<ul>
<li><a href="#Shell_Profiles">Shell Profiles</a></li>
<li><a href="#Shell_API">Shell API</a></li>
<li><a href="#User-level_API_Access">User-level API Access</a></li>
<li><a href="#Dynamic_Subcommands">Dynamic Subcommands</a></li>
<li><a href="#Running_with_Privilege_.28Secure_Mode.29">Running with Privilege (Secure Mode)</a></li></ul></li></ul>
<p>Much of Apache Hadoop&#x2019;s functionality is controlled via <a href="CommandsManual.html">the shell</a>. There are several ways to modify the default behavior of how these commands execute.</p>
<div class="section">
<h2><a name="Important_End-User_Environment_Variables"></a>Important End-User Environment Variables</h2>
<p>Apache Hadoop has many environment variables that control various aspects of the software. (See <tt>hadoop-env.sh</tt> and related files.) Some of these environment variables are dedicated to helping end users manage their runtime.</p>
<div class="section">
<h3><a name="HADOOP_CLIENT_OPTS"></a><tt>HADOOP_CLIENT_OPTS</tt></h3>
<p>This environment variable is used for all end-user, non-daemon operations. It can be used to set any Java options as well as any Apache Hadoop options via a system property definition. For example:</p>
<div>
<div>
<pre class="source">HADOOP_CLIENT_OPTS=&quot;-Xmx1g -Dhadoop.socks.server=localhost:4000&quot; hadoop fs -ls /tmp
</pre></div></div>
<p>will increase the memory and send this command via a SOCKS proxy server.</p>
<p>NOTE: If &#x2018;YARN_CLIENT_OPTS&#x2019; is defined, it will replace &#x2018;HADOOP_CLIENT_OPTS&#x2019; when commands are run with &#x2018;yarn&#x2019;.</p></div>
<div class="section">
<h3><a name="a.28command.29_.28subcommand.29_OPTS"></a><tt>(command)_(subcommand)_OPTS</tt></h3>
<p>It is also possible to set options on a per-subcommand basis. This allows one to create special options for particular cases. The first part of the pattern is the command being used, in all uppercase. The second part is the subcommand being used, also in uppercase, followed by the string <tt>_OPTS</tt>.</p>
<p>For example, to configure <tt>mapred distcp</tt> to use a 2GB heap, one would use:</p>
<div>
<div>
<pre class="source">MAPRED_DISTCP_OPTS=&quot;-Xmx2g&quot;
</pre></div></div>
<p>These options will appear <i>after</i> <tt>HADOOP_CLIENT_OPTS</tt> during execution and will generally take precedence.</p></div>
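<p>For example (a sketch; the source and destination paths are placeholders), running:</p>
<div>
<div>
<pre class="source">HADOOP_CLIENT_OPTS=&quot;-Xmx1g&quot; MAPRED_DISTCP_OPTS=&quot;-Xmx2g&quot; mapred distcp hdfs://nn1:8020/src hdfs://nn2:8020/dst
</pre></div></div>
<p>launches the client JVM with <tt>-Xmx1g</tt> followed by <tt>-Xmx2g</tt>; since the JVM honors the last heap flag it sees, the subcommand-specific 2GB setting is the one that takes effect.</p>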
<div class="section">
<h3><a name="HADOOP_CLASSPATH"></a><tt>HADOOP_CLASSPATH</tt></h3>
<p>NOTE: Site-wide settings should be configured via a shell profile entry and permanent user-wide settings should be configured via <tt>${HOME}/.hadooprc</tt> using the <tt>hadoop_add_classpath</tt> function. See below for more information.</p>
<p>The Apache Hadoop scripts have the capability to inject more content into the classpath of the running command by setting this environment variable. It should be a colon-delimited list of directories, files, or wildcard locations.</p>
<div>
<div>
<pre class="source">HADOOP_CLASSPATH=${HOME}/lib/myjars/*.jar hadoop classpath
</pre></div></div>
<p>A user can provide hints about the placement of these paths via the <tt>HADOOP_USER_CLASSPATH_FIRST</tt> variable. Setting this to any value will tell the system to try to push these paths near the front of the classpath.</p></div>
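<p>For example, to ask the system to place the user-supplied paths near the front of the generated classpath (a sketch using the same jars as above):</p>
<div>
<div>
<pre class="source">HADOOP_USER_CLASSPATH_FIRST=true HADOOP_CLASSPATH=${HOME}/lib/myjars/*.jar hadoop classpath
</pre></div></div>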
<div class="section">
<h3><a name="Auto-setting_of_Variables"></a>Auto-setting of Variables</h3>
<p>If a user has a common set of settings, they can be put into the <tt>${HOME}/.hadoop-env</tt> file. This file is always read to initialize and override any variables that the user may want to customize. It uses bash syntax, similar to the <tt>.bashrc</tt> file.</p>
<p>For example:</p>
<div>
<div>
<pre class="source">#
# my custom Apache Hadoop settings!
#
HADOOP_CLIENT_OPTS=&quot;-Xmx1g&quot;
MAPRED_DISTCP_OPTS=&quot;-Xmx2g&quot;
HADOOP_DISTCP_OPTS=&quot;-Xmx2g&quot;
</pre></div></div>
<p>The <tt>.hadoop-env</tt> file can also be used to extend functionality and teach Apache Hadoop new tricks. For example, to run hadoop commands accessing the server referenced in the environment variable <tt>${HADOOP_SERVER}</tt>, the following in the <tt>.hadoop-env</tt> will do just that:</p>
<div>
<div>
<pre class="source">if [[ -n ${HADOOP_SERVER} ]]; then
  HADOOP_CONF_DIR=/etc/hadoop.${HADOOP_SERVER}
fi
</pre></div></div>
<p>One word of warning: not all of the Unix Shell API routines are available or work correctly in <tt>.hadoop-env</tt>. See below for more information on <tt>.hadooprc</tt>.</p></div></div>
<div class="section">
<h2><a name="Administrator_Environment"></a>Administrator Environment</h2>
<p>In addition to the various XML files, there are two key capabilities for administrators to configure Apache Hadoop when using the Unix Shell:</p>
<ul>
<li>
<p>Many environment variables that impact how the system operates. This guide will only highlight some key ones. There is generally more information in the various <tt>*-env.sh</tt> files.</p>
</li>
<li>
<p>Supplement or make platform-specific changes to the existing scripts. Apache Hadoop provides the capability to override functions so that the existing code base may be changed in place without copying and maintaining a modified set of scripts. Replacing functions is covered later under the Shell API documentation.</p>
</li>
</ul>
<div class="section">
<h3><a name="a.28command.29_.28subcommand.29_OPTS"></a><tt>(command)_(subcommand)_OPTS</tt></h3>
<p>By far, the most important are the series of <tt>_OPTS</tt> variables that control how daemons work. These variables should contain all of the relevant settings for those daemons.</p>
<p>Similar to the user commands above, all daemons will honor the <tt>(command)_(subcommand)_OPTS</tt> pattern. It is generally recommended that these be set in <tt>hadoop-env.sh</tt> to guarantee that the system will know which settings it should use on restart. Unlike user-facing subcommands, daemons will <i>NOT</i> honor <tt>HADOOP_CLIENT_OPTS</tt>.</p>
<p>In addition, daemons that run in an extra security mode also support <tt>(command)_(subcommand)_SECURE_EXTRA_OPTS</tt>. These options are <i>supplemental</i> to the generic <tt>*_OPTS</tt> and will appear after, therefore generally taking precedence.</p></div>
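<p>As a sketch, the corresponding entries in <tt>hadoop-env.sh</tt> might look like the following (the heap sizes and the jsvc flag are illustrative only):</p>
<div>
<div>
<pre class="source"># Daemon-specific options; daemons do not honor HADOOP_CLIENT_OPTS.
HDFS_NAMENODE_OPTS=&quot;-Xmx4g&quot;
YARN_RESOURCEMANAGER_OPTS=&quot;-Xmx2g&quot;

# Supplemental options appended only when the DataNode runs in its privileged (secure) mode.
HDFS_DATANODE_SECURE_EXTRA_OPTS=&quot;-jvm server&quot;
</pre></div></div>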
<div class="section">
<h3><a name="a.28command.29_.28subcommand.29_USER"></a><tt>(command)_(subcommand)_USER</tt></h3>
<p>Apache Hadoop provides a way to do a user check per-subcommand. While this method is easily circumvented and should not be considered a security feature, it does provide a mechanism by which to prevent accidents. For example, setting <tt>HDFS_NAMENODE_USER=hdfs</tt> will make the <tt>hdfs namenode</tt> and <tt>hdfs --daemon start namenode</tt> commands verify that the user running the commands is the hdfs user by checking the <tt>USER</tt> environment variable. This also works for non-daemons. Setting <tt>HADOOP_DISTCP_USER=jane</tt> will verify that <tt>USER</tt> is set to <tt>jane</tt> before being allowed to execute the <tt>hadoop distcp</tt> command.</p>
<p>If a <tt>_USER</tt> environment variable exists and commands are run with a privilege (e.g., as root; see <tt>hadoop_privilege_check</tt> in the API documentation), execution will switch to the specified user first. For commands that support user account switching for security reasons and therefore have a <tt>_SECURE_USER</tt> variable (see more below), the base <tt>_USER</tt> variable needs to be set to the user that is expected to perform the switch to the <tt>_SECURE_USER</tt> account. For example:</p>
<div>
<div>
<pre class="source">HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
</pre></div></div>
<p>will force <tt>hdfs --daemon start datanode</tt> to be root, but will eventually switch to the hdfs user after the privileged work has been completed.</p>
<p>Be aware that if the <tt>--workers</tt> flag is used, the user switch happens <i>after</i> ssh is invoked. The multi-daemon start and stop commands in sbin will, however, switch (if appropriate) prior to invoking ssh and will therefore use the ssh keys of the specified <tt>_USER</tt>.</p></div></div>
<div class="section">
<h2><a name="Developer_and_Advanced_Administrator_Environment"></a>Developer and Advanced Administrator Environment</h2>
<div class="section">
<h3><a name="Shell_Profiles"></a>Shell Profiles</h3>
<p>Apache Hadoop allows for third parties to easily add new features through a variety of pluggable interfaces. This includes a shell code subsystem that makes it easy to inject the necessary content into the base installation.</p>
<p>Core to this functionality is the concept of a shell profile. Shell profiles are shell snippets that can do things such as add jars to the classpath, configure Java system properties and more.</p>
<p>Shell profiles may be installed in either <tt>${HADOOP_CONF_DIR}/shellprofile.d</tt> or <tt>${HADOOP_HOME}/libexec/shellprofile.d</tt>. Shell profiles in the <tt>libexec</tt> directory are part of the base installation and cannot be overridden by the user. Shell profiles in the configuration directory may be ignored if the end user changes the configuration directory at runtime.</p>
<p>An example of a shell profile is in the libexec directory.</p></div>
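<p>As a rough sketch (the profile name, hook function, and jar path below are illustrative and follow the pattern of the bundled example), a file such as <tt>${HADOOP_CONF_DIR}/shellprofile.d/hadoop-myprofile.sh</tt> might contain:</p>
<div>
<div>
<pre class="source"># Register the profile so that its hook functions are invoked during shell bootstrap.
hadoop_add_profile myprofile

# Hook invoked while the final classpath is being assembled.
function _myprofile_hadoop_classpath
{
  hadoop_add_classpath &quot;${HOME}/lib/myextras/extra.jar&quot;
}
</pre></div></div>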
<div class="section">
<h3><a name="Shell_API"></a>Shell API</h3>
<p>Apache Hadoop&#x2019;s shell code has a <a href="./UnixShellAPI.html">function library</a> that is open for administrators and developers to use to assist in their configuration and advanced feature management. These APIs follow the standard <a href="./InterfaceClassification.html">Apache Hadoop Interface Classification</a>, with one addition: Replaceable.</p>
<p>The shell code allows for core functions to be overridden. However, not all functions can be or are safe to be replaced. If a function is not safe to replace, it will have an attribute of Replaceable: No. If a function is safe to replace, it will have the attribute of Replaceable: Yes.</p>
<p>In order to replace a function, create a file called <tt>hadoop-user-functions.sh</tt> in the <tt>${HADOOP_CONF_DIR}</tt> directory. Simply define the new, replacement function in this file and the system will pick it up automatically. There may be as many replacement functions as needed in this file. Examples of function replacement are in the <tt>hadoop-user-functions.sh.examples</tt> file.</p>
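<p>As a sketch, assuming the function being replaced is marked Replaceable: Yes, a <tt>hadoop-user-functions.sh</tt> might redefine <tt>hadoop_error</tt> (used here purely as an illustration) to timestamp error messages:</p>
<div>
<div>
<pre class="source">function hadoop_error
{
  # Print the message to stderr with a timestamp prefix.
  echo &quot;$(date +%Y-%m-%dT%H:%M:%S) $*&quot; 1&gt;&amp;2
}
</pre></div></div>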
<p>Functions that are marked Public and Stable are safe to use in shell profiles as-is. Other functions may change in a minor release.</p></div>
<div class="section">
<h3><a name="User-level_API_Access"></a>User-level API Access</h3>
<p>In addition to <tt>.hadoop-env</tt>, which allows individual users to override <tt>hadoop-env.sh</tt>, users may also use <tt>.hadooprc</tt>. This file is read after the Apache Hadoop shell environment has been configured and allows the full set of shell API function calls.</p>
<p>For example:</p>
<div>
<div>
<pre class="source">hadoop_add_classpath /some/path/custom.jar
</pre></div></div>
<p>would go into <tt>.hadooprc</tt>.</p></div>
<div class="section">
<h3><a name="Dynamic_Subcommands"></a>Dynamic Subcommands</h3>
<p>Utilizing the Shell API, it is possible for third parties to add their own subcommands to the primary Hadoop shell scripts (hadoop, hdfs, mapred, yarn).</p>
<p>Prior to executing a subcommand, the primary scripts will check for the existence of a (scriptname)_subcommand_(subcommand) function. This function gets executed with the parameters set to all remaining command line arguments. For example, if the following function is defined:</p>
<div>
<div>
<pre class="source">function yarn_subcommand_hello
{
  echo &quot;$@&quot;
  exit $?
}
</pre></div></div>
<p>then executing <tt>yarn --debug hello world I see you</tt> will activate script debugging and call the <tt>yarn_subcommand_hello</tt> function as:</p>
<div>
<div>
<pre class="source">yarn_subcommand_hello world I see you
</pre></div></div>
<p>which will result in the output of:</p>
<div>
<div>
<pre class="source">world I see you
</pre></div></div>
<p>It is also possible to add the new subcommands to the usage output. The <tt>hadoop_add_subcommand</tt> function adds text to the usage output. Utilizing the standard HADOOP_SHELL_EXECNAME variable, we can limit which command gets our new function.</p>
<div>
<div>
<pre class="source">if [[ &quot;${HADOOP_SHELL_EXECNAME}&quot; = &quot;yarn&quot; ]]; then
  hadoop_add_subcommand &quot;hello&quot; client &quot;Print some text to the screen&quot;
fi
</pre></div></div>
<p>We set the subcommand type to be &#x201c;client&#x201d; as there are no special restrictions, extra capabilities, etc. This functionality may also be used to override the built-ins. For example, defining:</p>
<div>
<div>
<pre class="source">function hdfs_subcommand_fetchdt
{
  ...
}
</pre></div></div>
<p>&#x2026; will replace the existing <tt>hdfs fetchdt</tt> subcommand with a custom one.</p>
<p>Some key environment variables for Dynamic Subcommands:</p>
<ul>
<li>HADOOP_CLASSNAME</li>
</ul>
<p>This is the name of the Java class to use when program execution continues.</p>
<ul>
<li>HADOOP_PRIV_CLASSNAME</li>
</ul>
<p>This is the name of the Java class to use when a daemon is expected to be run in a privileged mode. (See more below.)</p>
<ul>
<li>HADOOP_SHELL_EXECNAME</li>
</ul>
<p>This is the name of the script that is being executed. It will be one of hadoop, hdfs, mapred, or yarn.</p>
<ul>
<li>HADOOP_SUBCMD</li>
</ul>
<p>This is the subcommand that was passed on the command line.</p>
<ul>
<li>HADOOP_SUBCMD_ARGS</li>
</ul>
<p>This array contains the argument list after the Apache Hadoop common argument processing has taken place and is the same list that is passed to the subcommand function as arguments. For example, if <tt>hadoop --debug subcmd 1 2 3</tt> has been executed on the command line, then <tt>${HADOOP_SUBCMD_ARGS[0]}</tt> will be 1 and <tt>hadoop_subcommand_subcmd</tt> will also have $1 equal to 1. This array list MAY be modified by subcommand functions to add or delete values from the argument list for further processing.</p>
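<p>For example, inside a subcommand function the array can be edited with normal bash array operations (a sketch; the <tt>--myflag</tt> option is hypothetical):</p>
<div>
<div>
<pre class="source"># Consume a leading --myflag ourselves so that downstream processing never sees it.
if [[ &quot;${HADOOP_SUBCMD_ARGS[0]}&quot; = &quot;--myflag&quot; ]]; then
  MYFLAG=true
  HADOOP_SUBCMD_ARGS=(&quot;${HADOOP_SUBCMD_ARGS[@]:1}&quot;)
fi
</pre></div></div>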
<ul>
<li>HADOOP_SECURE_CLASSNAME</li>
</ul>
<p>If this subcommand runs a service that supports the secure mode, this variable should be set to the classname of the secure version.</p>
<ul>
<li>HADOOP_SUBCMD_SECURESERVICE</li>
</ul>
<p>Setting this to true will force the subcommand to run in secure mode regardless of hadoop_detect_priv_subcmd. It is expected that HADOOP_SECURE_USER will be set to the user that will be executing the final process. See more about secure mode.</p>
<ul>
<li>HADOOP_SUBCMD_SUPPORTDAEMONIZATION</li>
</ul>
<p>If this command can be executed as a daemon, set this to true.</p>
<ul>
<li>HADOOP_USER_PARAMS</li>
</ul>
<p>This is the full content of the command line, prior to any parsing being done. It will contain flags such as <tt>--debug</tt>. It MAY NOT be manipulated.</p>
<p>The Apache Hadoop runtime facilities require that functions exit if no further processing is required. For example, in the hello example above, Java and other facilities were not required so a simple <tt>exit $?</tt> was sufficient. However, if the function were to utilize <tt>HADOOP_CLASSNAME</tt>, then program execution must continue so that Java with the Apache Hadoop-specific parameters will be launched against the given Java class. Another example would be in the case of an unrecoverable error. It is the function&#x2019;s responsibility to print an appropriate message (preferably using the hadoop_error API call) and exit appropriately.</p></div>
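<p>As a sketch (the subcommand name, jar path, and class name are hypothetical), a dynamic subcommand that hands off to Java simply sets <tt>HADOOP_CLASSNAME</tt> and returns without exiting:</p>
<div>
<div>
<pre class="source">function hdfs_subcommand_mytool
{
  # Returning without calling exit lets the framework continue and launch
  # Java against this class with the remaining command line arguments.
  hadoop_add_classpath &quot;${HOME}/lib/mytool.jar&quot;
  HADOOP_CLASSNAME=org.example.MyTool
}
</pre></div></div>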
<div class="section">
<h3><a name="Running_with_Privilege_.28Secure_Mode.29"></a>Running with Privilege (Secure Mode)</h3>
<p>Some daemons, such as the DataNode and the NFS gateway, may be run in a privileged mode. This means that they are expected to be launched as root and (by default) switched to another userid via jsvc. This allows these daemons to grab a low, privileged port and then drop superuser privileges during normal execution. Running with privilege is also possible for third parties utilizing Dynamic Subcommands. If the following are true:</p>
<ul>
<li>(command)_(subcommand)_SECURE_USER environment variable is defined and points to a valid username</li>
<li>HADOOP_SECURE_CLASSNAME is defined and points to a valid Java class</li>
</ul>
<p>then the shell scripts will attempt to run the class as a command with privilege as it would for the built-ins. In general, users are expected to define the <tt>_SECURE_USER</tt> variable and developers define the <tt>_CLASSNAME</tt> in their shell script bootstrap.</p></div></div>
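<p>As a sketch (the subcommand, class names, and user name are hypothetical), the developer side sets the classnames in the shell script bootstrap while the administrator supplies the user mapping in <tt>hadoop-env.sh</tt>:</p>
<div>
<div>
<pre class="source"># Developer side, in the shell script bootstrap:
function hdfs_subcommand_mypriv
{
  HADOOP_SUBCMD_SUPPORTDAEMONIZATION=true
  HADOOP_CLASSNAME=org.example.MyPrivService
  HADOOP_SECURE_CLASSNAME=org.example.MyPrivSecureService
}

# Administrator side, in hadoop-env.sh:
HDFS_MYPRIV_SECURE_USER=myprivuser
</pre></div></div>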
</div>
</div>
<div class="clear">
<hr/>
</div>
<div id="footer">
<div class="xright">
&#169; 2008-2021
Apache Software Foundation
- <a href="http://maven.apache.org/privacy-policy.html">Privacy Policy</a>.
Apache Maven, Maven, Apache, the Apache feather logo, and the Apache Maven project logos are trademarks of The Apache Software Foundation.
</div>
<div class="clear">
<hr/>
</div>
</div>
</body>
</html>