<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<!--
| Generated by Apache Maven Doxia at 2021-06-15
| Rendered using Apache Maven Stylus Skin 1.5
-->
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Apache Hadoop 3.3.1 &#x2013; HDFS NFS Gateway</title>
<style type="text/css" media="all">
@import url("./css/maven-base.css");
@import url("./css/maven-theme.css");
@import url("./css/site.css");
</style>
<link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
<meta name="Date-Revision-yyyymmdd" content="20210615" />
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
</head>
<body class="composite">
<div id="banner">
<a href="http://hadoop.apache.org/" id="bannerLeft">
<img src="http://hadoop.apache.org/images/hadoop-logo.jpg" alt="" />
</a>
<a href="http://www.apache.org/" id="bannerRight">
<img src="http://www.apache.org/images/asf_logo_wide.png" alt="" />
</a>
<div class="clear">
<hr/>
</div>
</div>
<div id="breadcrumbs">
<div class="xleft">
<a href="http://www.apache.org/" class="externalLink">Apache</a>
&gt;
<a href="http://hadoop.apache.org/" class="externalLink">Hadoop</a>
&gt;
<a href="../index.html">Apache Hadoop Project Dist POM</a>
&gt;
<a href="index.html">Apache Hadoop 3.3.1</a>
&gt;
HDFS NFS Gateway
</div>
<div class="xright"> <a href="http://wiki.apache.org/hadoop" class="externalLink">Wiki</a>
|
<a href="https://gitbox.apache.org/repos/asf/hadoop.git" class="externalLink">git</a>
|
<a href="http://hadoop.apache.org/" class="externalLink">Apache Hadoop</a>
&nbsp;| Last Published: 2021-06-15
&nbsp;| Version: 3.3.1
</div>
<div class="clear">
<hr/>
</div>
</div>
<div id="leftColumn">
<div id="navcolumn">
<h5>General</h5>
<ul>
<li class="none">
<a href="../../index.html">Overview</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/SingleCluster.html">Single Node Setup</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/ClusterSetup.html">Cluster Setup</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/CommandsManual.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/FileSystemShell.html">FileSystem Shell</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Compatibility.html">Compatibility Specification</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/DownstreamDev.html">Downstream Developer's Guide</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/AdminCompatibilityGuide.html">Admin Compatibility Guide</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/InterfaceClassification.html">Interface Classification</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/filesystem/index.html">FileSystem Specification</a>
</li>
</ul>
<h5>Common</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/CLIMiniCluster.html">CLI Mini Cluster</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/FairCallQueue.html">Fair Call Queue</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/NativeLibraries.html">Native Libraries</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Superusers.html">Proxy User</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/RackAwareness.html">Rack Awareness</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/SecureMode.html">Secure Mode</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/ServiceLevelAuth.html">Service Level Authorization</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/HttpAuthentication.html">HTTP Authentication</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/CredentialProviderAPI.html">Credential Provider API</a>
</li>
<li class="none">
<a href="../../hadoop-kms/index.html">Hadoop KMS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Tracing.html">Tracing</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/UnixShellGuide.html">Unix Shell Guide</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/registry/index.html">Registry</a>
</li>
</ul>
<h5>HDFS</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsDesign.html">Architecture</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html">User Guide</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html">NameNode HA With QJM</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html">NameNode HA With NFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ObserverNameNode.html">Observer NameNode</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/Federation.html">Federation</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ViewFs.html">ViewFs</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ViewFsOverloadScheme.html">ViewFsOverloadScheme</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html">Snapshots</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsEditsViewer.html">Edits Viewer</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsImageViewer.html">Image Viewer</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html">Permissions and HDFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsQuotaAdminGuide.html">Quotas and HDFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/LibHdfs.html">libhdfs (C API)</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/WebHDFS.html">WebHDFS (REST API)</a>
</li>
<li class="none">
<a href="../../hadoop-hdfs-httpfs/index.html">HttpFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ShortCircuitLocalReads.html">Short Circuit Local Reads</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html">Centralized Cache Management</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html">NFS Gateway</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html">Rolling Upgrade</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ExtendedAttributes.html">Extended Attributes</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html">Transparent Encryption</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsMultihoming.html">Multihoming</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html">Storage Policies</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/MemoryStorage.html">Memory Storage Support</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/SLGUserGuide.html">Synthetic Load Generator</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html">Erasure Coding</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSDiskbalancer.html">Disk Balancer</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsUpgradeDomain.html">Upgrade Domain</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsDataNodeAdminGuide.html">DataNode Admin</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs-rbf/HDFSRouterFederation.html">Router Federation</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsProvidedStorage.html">Provided Storage</a>
</li>
</ul>
<h5>MapReduce</h5>
<ul>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html">Tutorial</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduce_Compatibility_Hadoop1_Hadoop2.html">Compatibility with 1.x</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/EncryptedShuffle.html">Encrypted Shuffle</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/PluggableShuffleAndPluggableSort.html">Pluggable Shuffle/Sort</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/DistributedCacheDeploy.html">Distributed Cache Deploy</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/SharedCacheSupport.html">Support for YARN Shared Cache</a>
</li>
</ul>
<h5>MapReduce REST APIs</h5>
<ul>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredAppMasterRest.html">MR Application Master</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-hs/HistoryServerRest.html">MR History Server</a>
</li>
</ul>
<h5>YARN</h5>
<ul>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YARN.html">Architecture</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YarnCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html">Capacity Scheduler</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/FairScheduler.html">Fair Scheduler</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html">ResourceManager Restart</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html">ResourceManager HA</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceModel.html">Resource Model</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeLabel.html">Node Labels</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeAttributes.html">Node Attributes</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/WebApplicationProxy.html">Web Application Proxy</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServer.html">Timeline Server</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServiceV2.html">Timeline Service V.2</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html">Writing YARN Applications</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YarnApplicationSecurity.html">YARN Application Security</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeManager.html">NodeManager</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/DockerContainers.html">Running Applications in Docker Containers</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/RuncContainers.html">Running Applications in runC Containers</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeManagerCgroups.html">Using CGroups</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/SecureContainer.html">Secure Containers</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ReservationSystem.html">Reservation System</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/GracefulDecommission.html">Graceful Decommission</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html">Opportunistic Containers</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/Federation.html">YARN Federation</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/SharedCache.html">Shared Cache</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/UsingGpus.html">Using GPU</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/UsingFPGA.html">Using FPGA</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/PlacementConstraints.html">Placement Constraints</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YarnUI2.html">YARN UI2</a>
</li>
</ul>
<h5>YARN REST APIs</h5>
<ul>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/WebServicesIntro.html">Introduction</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html">Resource Manager</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeManagerRest.html">Node Manager</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServer.html#Timeline_Server_REST_API_v1">Timeline Server</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServiceV2.html#Timeline_Service_v.2_REST_API">Timeline Service V.2</a>
</li>
</ul>
<h5>YARN Service</h5>
<ul>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/Overview.html">Overview</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/QuickStart.html">QuickStart</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/Concepts.html">Concepts</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/YarnServiceAPI.html">Yarn Service API</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/ServiceDiscovery.html">Service Discovery</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/SystemServices.html">System Services</a>
</li>
</ul>
<h5>Hadoop Compatible File Systems</h5>
<ul>
<li class="none">
<a href="../../hadoop-aliyun/tools/hadoop-aliyun/index.html">Aliyun OSS</a>
</li>
<li class="none">
<a href="../../hadoop-aws/tools/hadoop-aws/index.html">Amazon S3</a>
</li>
<li class="none">
<a href="../../hadoop-azure/index.html">Azure Blob Storage</a>
</li>
<li class="none">
<a href="../../hadoop-azure-datalake/index.html">Azure Data Lake Storage</a>
</li>
<li class="none">
<a href="../../hadoop-openstack/index.html">OpenStack Swift</a>
</li>
<li class="none">
<a href="../../hadoop-cos/cloud-storage/index.html">Tencent COS</a>
</li>
</ul>
<h5>Auth</h5>
<ul>
<li class="none">
<a href="../../hadoop-auth/index.html">Overview</a>
</li>
<li class="none">
<a href="../../hadoop-auth/Examples.html">Examples</a>
</li>
<li class="none">
<a href="../../hadoop-auth/Configuration.html">Configuration</a>
</li>
<li class="none">
<a href="../../hadoop-auth/BuildingIt.html">Building</a>
</li>
</ul>
<h5>Tools</h5>
<ul>
<li class="none">
<a href="../../hadoop-streaming/HadoopStreaming.html">Hadoop Streaming</a>
</li>
<li class="none">
<a href="../../hadoop-archives/HadoopArchives.html">Hadoop Archives</a>
</li>
<li class="none">
<a href="../../hadoop-archive-logs/HadoopArchiveLogs.html">Hadoop Archive Logs</a>
</li>
<li class="none">
<a href="../../hadoop-distcp/DistCp.html">DistCp</a>
</li>
<li class="none">
<a href="../../hadoop-gridmix/GridMix.html">GridMix</a>
</li>
<li class="none">
<a href="../../hadoop-rumen/Rumen.html">Rumen</a>
</li>
<li class="none">
<a href="../../hadoop-resourceestimator/ResourceEstimator.html">Resource Estimator Service</a>
</li>
<li class="none">
<a href="../../hadoop-sls/SchedulerLoadSimulator.html">Scheduler Load Simulator</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Benchmarking.html">Hadoop Benchmarking</a>
</li>
<li class="none">
<a href="../../hadoop-dynamometer/Dynamometer.html">Dynamometer</a>
</li>
</ul>
<h5>Reference</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/release/">Changelog and Release Notes</a>
</li>
<li class="none">
<a href="../../api/index.html">Java API docs</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/UnixShellAPI.html">Unix Shell API</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Metrics.html">Metrics</a>
</li>
</ul>
<h5>Configuration</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/core-default.xml">core-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/hdfs-default.xml">hdfs-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs-rbf/hdfs-rbf-default.xml">hdfs-rbf-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml">mapred-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-common/yarn-default.xml">yarn-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-kms/kms-default.html">kms-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-hdfs-httpfs/httpfs-default.html">httpfs-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/DeprecatedProperties.html">Deprecated Properties</a>
</li>
</ul>
<a href="http://maven.apache.org/" title="Built by Maven" class="poweredBy">
<img alt="Built by Maven" src="./images/logos/maven-feather.png"/>
</a>
</div>
</div>
<div id="bodyColumn">
<div id="contentBox">
<!---
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<h1>HDFS NFS Gateway</h1>
<ul>
<li><a href="#Overview">Overview</a></li>
<li><a href="#Configuration">Configuration</a></li>
<li><a href="#Start_and_stop_NFS_gateway_service">Start and stop NFS gateway service</a></li>
<li><a href="#Verify_validity_of_NFS_related_services">Verify validity of NFS related services</a></li>
<li><a href="#Mount_the_export_.E2.80.9C.2F.E2.80.9D">Mount the export &#x201c;/&#x201d;</a></li>
<li><a href="#Allow_mounts_from_unprivileged_clients">Allow mounts from unprivileged clients</a></li>
<li><a href="#User_authentication_and_mapping">User authentication and mapping</a></li></ul>
<div class="section">
<h2><a name="Overview"></a>Overview</h2>
<p>The NFS Gateway supports NFSv3 and allows HDFS to be mounted as part of the client&#x2019;s local file system. Currently NFS Gateway supports and enables the following usage patterns:</p>
<ul>
<li>Users can browse the HDFS file system through their local file system on operating systems with an NFSv3-compatible client.</li>
<li>Users can download files from the HDFS file system onto their local file system.</li>
<li>Users can upload files from their local file system directly to the HDFS file system.</li>
<li>Users can stream data directly to HDFS through the mount point. File append is supported but random write is not supported.</li>
</ul>
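<p>For example, once the export is mounted (mounting is described later in this guide), these usage patterns look like ordinary file operations. The following is a minimal sketch; the mount point /hdfs_mount and the HDFS paths are placeholders:</p>
<div>
<div>
<pre class="source">[user]$ ls /hdfs_mount/user/alice                              # browse HDFS
[user]$ cp /hdfs_mount/user/alice/report.csv /tmp/             # download from HDFS to the local file system
[user]$ cp /tmp/data.txt /hdfs_mount/user/alice/               # upload from the local file system to HDFS
[user]$ cat /tmp/more.txt &gt;&gt; /hdfs_mount/user/alice/data.txt   # stream/append (random writes are not supported)
</pre></div></div>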
<p>The NFS gateway machine needs everything required to run an HDFS client, such as the Hadoop JAR files and the HADOOP_CONF directory. The NFS gateway can be on the same host as a DataNode, a NameNode, or any HDFS client.</p></div>
<div class="section">
<h2><a name="Configuration"></a>Configuration</h2>
<p>The NFS gateway uses a proxy user to proxy all the users accessing the NFS mounts. In non-secure mode, the user running the gateway is the proxy user, while in secure mode the user in the Kerberos keytab is the proxy user. Suppose the proxy user is &#x2018;nfsserver&#x2019; and users belonging to the groups &#x2018;users-group1&#x2019; and &#x2018;users-group2&#x2019; use the NFS mounts. Then, in core-site.xml of the NameNode, the following two properties must be set, and only the NameNode needs to be restarted after the configuration change (NOTE: replace the string &#x2018;nfsserver&#x2019; with the proxy user name in your cluster):</p>
<div>
<div>
<pre class="source">&lt;property&gt;
&lt;name&gt;hadoop.proxyuser.nfsserver.groups&lt;/name&gt;
&lt;value&gt;root,users-group1,users-group2&lt;/value&gt;
&lt;description&gt;
The 'nfsserver' user is allowed to proxy all members of the 'users-group1' and
'users-group2' groups. Note that in most cases you will need to include the
group &quot;root&quot; because the user &quot;root&quot; (which usually belongs to the &quot;root&quot; group) will
generally be the user that initially executes the mount on the NFS client system.
Set this to '*' to allow the nfsserver user to proxy any group.
&lt;/description&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;hadoop.proxyuser.nfsserver.hosts&lt;/name&gt;
&lt;value&gt;nfs-client-host1.com&lt;/value&gt;
&lt;description&gt;
This is the host where the nfs gateway is running. Set this to '*' to allow
requests from any hosts to be proxied.
&lt;/description&gt;
&lt;/property&gt;
</pre></div></div>
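<p>Depending on the cluster, this change can often be applied by refreshing the proxy-user settings on the running NameNode instead of performing a full restart. A minimal sketch, assuming the command is run by an HDFS administrator:</p>
<div>
<div>
<pre class="source">[hdfs]$ $HADOOP_HOME/bin/hdfs dfsadmin -refreshSuperUserGroupsConfiguration
</pre></div></div>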
<p>The above are the only required configurations for the NFS gateway in non-secure mode. For Kerberized Hadoop clusters, the following configurations need to be added to hdfs-site.xml for the gateway (NOTE: replace the string &#x201c;nfsserver&#x201d; with the proxy user name and ensure the user contained in the keytab is also the same proxy user):</p>
<div>
<div>
<pre class="source"> &lt;property&gt;
&lt;name&gt;nfs.keytab.file&lt;/name&gt;
&lt;value&gt;/etc/hadoop/conf/nfsserver.keytab&lt;/value&gt; &lt;!-- path to the nfs gateway keytab --&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;nfs.kerberos.principal&lt;/name&gt;
&lt;value&gt;nfsserver/_HOST@YOUR-REALM.COM&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>
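<p>Before starting the gateway in secure mode, it can help to confirm that the keytab is readable by the user who will run the gateway and that it contains the expected principal. A minimal sketch, reusing the example keytab path from the configuration above:</p>
<div>
<div>
<pre class="source">[root]&gt; ls -l /etc/hadoop/conf/nfsserver.keytab      # confirm the gateway user can read the keytab
[root]&gt; klist -kt /etc/hadoop/conf/nfsserver.keytab  # confirm it contains the nfsserver/_HOST principal
</pre></div></div>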
<p>The rest of the NFS gateway configurations are optional for both secure and non-secure mode.</p>
<ul>
<li>
<p>The AIX NFS client has a <a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-6549">few known issues</a> that prevent it from working correctly by default with the HDFS NFS Gateway. If you want to be able to access the HDFS NFS Gateway from AIX, you should set the following configuration setting to enable work-arounds for these issues:</p>
<div>
<div>
<pre class="source">&lt;property&gt;
&lt;name&gt;nfs.aix.compatibility.mode.enabled&lt;/name&gt;
&lt;value&gt;true&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>
<p>Note that regular, non-AIX clients should NOT enable AIX compatibility mode. The work-arounds implemented by AIX compatibility mode effectively disable safeguards to ensure that listing of directory contents via NFS returns consistent results, and that all data sent to the NFS server can be assured to have been committed.</p>
</li>
<li>
<p>The HDFS super-user is the user with the same identity as the NameNode process itself, and permission checks never fail for the super-user. If the following property is configured, the superuser on the NFS client can access any file on HDFS. By default, the super user is not configured in the gateway. Note that, even if the superuser is configured, &#x201c;nfs.exports.allowed.hosts&#x201d; still takes effect. For example, the superuser will not have write access to HDFS files through the gateway if the NFS client host is not allowed to have write access in &#x201c;nfs.exports.allowed.hosts&#x201d;.</p>
<div>
<div>
<pre class="source">&lt;property&gt;
&lt;name&gt;nfs.superuser&lt;/name&gt;
&lt;value&gt;the_name_of_hdfs_superuser&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>
</li>
</ul>
<p>Users are strongly recommended to update a few configuration properties based on their use cases. All of the following configuration properties can be added or updated in hdfs-site.xml.</p>
<ul>
<li>
<p>If the client mounts the export with access time updates allowed, make sure the following property is not disabled in the configuration file. Only the NameNode needs to be restarted after this property is changed. On some Unix systems, the user can disable access time updates by mounting the export with &#x201c;noatime&#x201d; (see the mount example after the property below); in that case the user does not need to change the following property and thus does not need to restart the NameNode.</p>
<div>
<div>
<pre class="source">&lt;property&gt;
&lt;name&gt;dfs.namenode.accesstime.precision&lt;/name&gt;
&lt;value&gt;3600000&lt;/value&gt;
&lt;description&gt;The access time for an HDFS file is precise up to this value.
The default value is 1 hour. Setting a value of 0 disables
access times for HDFS.
&lt;/description&gt;
&lt;/property&gt;
</pre></div></div>
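<p>If you prefer to leave the property at its default and disable access time updates on the client instead, a mount with &#x201c;noatime&#x201d; looks like the following sketch ($server and $mount_point are placeholders):</p>
<div>
<div>
<pre class="source">[root]&gt; mount -t nfs -o vers=3,proto=tcp,nolock,noacl,noatime,sync $server:/ $mount_point
</pre></div></div>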
</li>
<li>
<p>Users are expected to update the file dump directory. The NFS client often reorders writes, especially when the export is not mounted with the &#x201c;sync&#x201d; option, so sequential writes can arrive at the NFS gateway in random order. This directory is used to temporarily save out-of-order writes before they are written to HDFS. For each file, the out-of-order writes are dumped after they accumulate in memory beyond a certain threshold (e.g., 1 MB). Make sure the directory has enough space. For example, if an application uploads 10 files of 100 MB each, it is recommended that this directory have roughly 1 GB of space in case a worst-case write reorder happens to every file. Only the NFS gateway needs to be restarted after this property is updated.</p>
<div>
<div>
<pre class="source"> &lt;property&gt;
&lt;name&gt;nfs.dump.dir&lt;/name&gt;
&lt;value&gt;/tmp/.hdfs-nfs&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>
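<p>Before starting the gateway, make sure the dump directory exists, is writable by the user running the gateway, and sits on a volume with enough free space. A minimal sketch, assuming the gateway runs as the &#x2018;nfsserver&#x2019; proxy user and uses the default directory:</p>
<div>
<div>
<pre class="source">[root]&gt; mkdir -p /tmp/.hdfs-nfs                    # create the dump directory
[root]&gt; chown nfsserver:nfsserver /tmp/.hdfs-nfs   # make it writable by the gateway user
[root]&gt; df -h /tmp                                 # check that the volume has enough free space
</pre></div></div>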
</li>
<li>
<p>By default, the export can be mounted by any client. To better control access, users can update the following property. The value string contains a machine name and an access privilege, separated by whitespace characters. The machine name format can be a single host, a &#x201c;*&#x201d;, a Java regular expression, or an IPv4 address. The access privilege uses rw or ro to specify read/write or read-only access of the machines to the export. If the access privilege is not provided, the default is read-only. Entries are separated by &#x201c;;&#x201d;. For example: &#x201c;192.168.0.0/22 rw ; \\w*\\.example\\.com ; host1.test.org ro;&#x201d;. Only the NFS gateway needs to be restarted after this property is updated. Note that the Java regular expression here is different from the regular expression used in the Linux NFS export table; for example, use &#x201c;\\w*\\.example\\.com&#x201d; instead of &#x201c;*.example.com&#x201d;, and &#x201c;192\\.168\\.0\\.(11|22)&#x201d; instead of &#x201c;192.168.0.[11|22]&#x201d;.</p>
<div>
<div>
<pre class="source">&lt;property&gt;
&lt;name&gt;nfs.exports.allowed.hosts&lt;/name&gt;
&lt;value&gt;* rw&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>
</li>
<li>
<p>Metrics. Like other HDFS daemons, the gateway exposes runtime metrics. They are available at <tt>http://gateway-ip:50079/jmx</tt> as a JSON document. The NFS handler related metrics are exposed under the name &#x201c;Nfs3Metrics&#x201d;. The latency histograms can be enabled by adding the following property to the hdfs-site.xml file.</p>
<div>
<div>
<pre class="source">&lt;property&gt;
&lt;name&gt;nfs.metrics.percentiles.intervals&lt;/name&gt;
&lt;value&gt;100&lt;/value&gt;
&lt;description&gt;Enable the latency histograms for read, write and
commit requests. The interval is 100 seconds in this example.
&lt;/description&gt;
&lt;/property&gt;
</pre></div></div>
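<p>For example, the Nfs3Metrics can be inspected from the command line; a minimal sketch, where gateway-ip is a placeholder for the gateway host:</p>
<div>
<div>
<pre class="source">[user]$ curl -s http://gateway-ip:50079/jmx | grep -i nfs3   # dump the JMX JSON document and filter for the NFS metrics
</pre></div></div>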
</li>
<li>
<p>JVM and log settings. You can export JVM settings (e.g., heap size and GC log) in HDFS_NFS3_OPTS. More NFS related settings can be found in hadoop-env.sh. To get an NFS debug trace, you can edit the log4j.properties file to add the following. Note that the debug trace, especially for ONCRPC, can be very verbose.</p>
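<p>For example, a heap size and GC logging setting could be exported in hadoop-env.sh as shown below; the values are illustrative only, not recommendations:</p>
<div>
<div>
<pre class="source"># in hadoop-env.sh (illustrative values, adjust for your workload)
export HDFS_NFS3_OPTS="-Xms1g -Xmx2g -verbose:gc"
</pre></div></div>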
<p>To change logging level:</p>
<div>
<div>
<pre class="source"> log4j.logger.org.apache.hadoop.hdfs.nfs=DEBUG
</pre></div></div>
<p>To get more details of ONCRPC requests:</p>
<div>
<div>
<pre class="source"> log4j.logger.org.apache.hadoop.oncrpc=DEBUG
</pre></div></div>
</li>
<li>
<p>Export point. One can specify the NFS export point of HDFS. Exactly one export point is supported. A full path is required when configuring the export point. By default, the export point is the root directory &#x201c;/&#x201d;.</p>
<div>
<div>
<pre class="source">&lt;property&gt;
&lt;name&gt;nfs.export.point&lt;/name&gt;
&lt;value&gt;/&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>
</li>
</ul></div>
<div class="section">
<h2><a name="Start_and_stop_NFS_gateway_service"></a>Start and stop NFS gateway service</h2>
<p>Three daemons are required to provide NFS service: rpcbind (or portmap), mountd and nfsd. The NFS gateway process includes both nfsd and mountd. It shares the HDFS root &#x201c;/&#x201d; as the only export. It is recommended to use the portmap included in the NFS gateway package. Even though the NFS gateway works with the portmap/rpcbind provided by most Linux distributions, the portmap included in the package is needed on some Linux systems such as RHEL 6.2 and SLES 11, the former due to an <a class="externalLink" href="https://bugzilla.redhat.com/show_bug.cgi?id=731542">rpcbind bug</a>. More detailed discussions can be found in <a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-4763">HDFS-4763</a>.</p>
<ol style="list-style-type: decimal">
<li>
<p>Stop nfsv3 and rpcbind/portmap services provided by the platform (commands can be different on various Unix platforms):</p>
<div>
<div>
<pre class="source">[root]&gt; service nfs stop
[root]&gt; service rpcbind stop
</pre></div></div>
</li>
<li>
<p>Start Hadoop&#x2019;s portmap (needs root privileges):</p>
<div>
<div>
<pre class="source">[root]&gt; $HADOOP_HOME/bin/hdfs --daemon start portmap
</pre></div></div>
</li>
<li>
<p>Start mountd and nfsd.</p>
<p>No root privileges are required for this command. In non-secure mode, the NFS gateway should be started by the proxy user mentioned at the beginning of this user guide. In secure mode, any user can start the NFS gateway as long as the user has read access to the Kerberos keytab defined in &#x201c;nfs.keytab.file&#x201d;.</p>
<div>
<div>
<pre class="source">[hdfs]$ $HADOOP_HOME/bin/hdfs --daemon start nfs3
</pre></div></div>
</li>
<li>
<p>Stop NFS gateway services.</p>
<div>
<div>
<pre class="source">[hdfs]$ $HADOOP_HOME/bin/hdfs --daemon stop nfs3
[root]&gt; $HADOOP_HOME/bin/hdfs --daemon stop portmap
</pre></div></div>
</li>
</ol>
<p>Optionally, you can forgo running the Hadoop-provided portmap daemon and instead use the system portmap daemon on all operating systems if you start the NFS Gateway as root. This allows the HDFS NFS Gateway to work around the aforementioned bug and still register with the system portmap daemon. To do so, start the NFS gateway daemon as you normally would, but make sure to do so as the &#x201c;root&#x201d; user, and also set the &#x201c;HDFS_NFS3_SECURE_USER&#x201d; environment variable to an unprivileged user. In this mode the NFS Gateway starts as root to perform its initial registration with the system portmap, and then drops privileges to the user specified by HDFS_NFS3_SECURE_USER for the remainder of the NFS Gateway process&#x2019;s lifetime. Note that if you choose this route, you should skip steps 1 and 2 above.</p>
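<p>A minimal sketch of this mode, assuming &#x201c;nfsserver&#x201d; is the unprivileged user that should own the gateway process after registration:</p>
<div>
<div>
<pre class="source"># in hadoop-env.sh: the unprivileged user the gateway drops to ("nfsserver" is a placeholder)
export HDFS_NFS3_SECURE_USER=nfsserver

# start the gateway as root so it can register with the system portmap/rpcbind
[root]&gt; $HADOOP_HOME/bin/hdfs --daemon start nfs3
</pre></div></div>
</div>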
<div class="section">
<h2><a name="Verify_validity_of_NFS_related_services"></a>Verify validity of NFS related services</h2>
<ol style="list-style-type: decimal">
<li>
<p>Execute the following command to verify if all the services are up and running:</p>
<div>
<div>
<pre class="source">[root]&gt; rpcinfo -p $nfs_server_ip
</pre></div></div>
<p>You should see output similar to the following:</p>
<div>
<div>
<pre class="source"> program vers proto port
100005 1 tcp 4242 mountd
100005 2 udp 4242 mountd
100005 2 tcp 4242 mountd
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100005 3 udp 4242 mountd
100005 1 udp 4242 mountd
100003 3 tcp 2049 nfs
100005 3 tcp 4242 mountd
</pre></div></div>
</li>
<li>
<p>Verify if the HDFS namespace is exported and can be mounted.</p>
<div>
<div>
<pre class="source">[root]&gt; showmount -e $nfs_server_ip
</pre></div></div>
<p>You should see output similar to the following:</p>
<div>
<div>
<pre class="source"> Exports list on $nfs_server_ip :
/ (everyone)
</pre></div></div>
</li>
</ol></div>
<div class="section">
<h2><a name="Mount_the_export_.E2.80.9C.2F.E2.80.9D"></a>Mount the export &#x201c;/&#x201d;</h2>
<p>Currently NFS v3 only uses TCP as the transport protocol. NLM is not supported, so the mount option &#x201c;nolock&#x201d; is needed. The mount option &#x201c;sync&#x201d; is strongly recommended since it can minimize or avoid reordered writes, which results in more predictable throughput. Not specifying the sync option may cause unreliable behavior when uploading large files. It is recommended to use a hard mount, because even after the client sends all data to the NFS gateway, it may take the NFS gateway some extra time to transfer data to HDFS when writes have been reordered by the NFS client kernel.</p>
<p>If a soft mount has to be used, the user should give it a relatively long timeout (at least no less than the default timeout on the host).</p>
<p>The users can mount the HDFS namespace as shown below:</p>
<div>
<div>
<pre class="source"> [root]&gt;mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync $server:/ $mount_point
</pre></div></div>
<p>Then the users can access HDFS as part of the local file system, except that hard links and random writes are not supported yet. To optimize the performance of large file I/O, one can increase the NFS transfer size (rsize and wsize) during mount. By default, the NFS gateway supports 1 MB as the maximum transfer size. For a larger data transfer size, one needs to update &#x201c;nfs.rtmax&#x201d; and &#x201c;nfs.wtmax&#x201d; in hdfs-site.xml.</p>
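<p>For example, a mount that explicitly requests the default 1 MB maximum transfer size might look like the sketch below; rsize and wsize must not exceed the &#x201c;nfs.rtmax&#x201d; and &#x201c;nfs.wtmax&#x201d; configured on the gateway, and $server and $mount_point are placeholders:</p>
<div>
<div>
<pre class="source">[root]&gt; mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync,rsize=1048576,wsize=1048576 $server:/ $mount_point
</pre></div></div>
</div>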
<div class="section">
<h2><a name="Allow_mounts_from_unprivileged_clients"></a>Allow mounts from unprivileged clients</h2>
<p>In environments where root access on client machines is not generally available, some measure of security can be obtained by ensuring that only NFS clients originating from privileged ports can connect to the NFS server. This feature is referred to as &#x201c;port monitoring.&#x201d; This feature is not enabled by default in the HDFS NFS Gateway, but can be optionally enabled by setting the following config in hdfs-site.xml on the NFS Gateway machine:</p>
<div>
<div>
<pre class="source">&lt;property&gt;
&lt;name&gt;nfs.port.monitoring.disabled&lt;/name&gt;
&lt;value&gt;false&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>
</div>
<div class="section">
<h2><a name="User_authentication_and_mapping"></a>User authentication and mapping</h2>
<p>The NFS gateway in this release uses AUTH_UNIX style authentication. When the user on the NFS client accesses the mount point, the NFS client passes the UID to the NFS gateway. The NFS gateway does a lookup to find the user name from the UID, and then passes the user name to HDFS along with the HDFS requests. For example, if the NFS client&#x2019;s current user is &#x201c;admin&#x201d;, when the user accesses the mounted directory, the NFS gateway will access HDFS as the user &#x201c;admin&#x201d;. To access HDFS as the user &#x201c;hdfs&#x201d;, one needs to switch the current user to &#x201c;hdfs&#x201d; on the client system when accessing the mounted directory.</p>
<p>The system administrator must ensure that the user on the NFS client host has the same name and UID as that on the NFS gateway host. This is usually not a problem if the same user management system (e.g., LDAP/NIS) is used to create and deploy users on the HDFS nodes and the NFS client node. If the user account is created manually on different hosts, one might need to modify the UID (e.g., do &#x201c;usermod -u 123 myusername&#x201d;) on either the NFS client or the NFS gateway host in order to make it the same on both sides. More technical details of RPC AUTH_UNIX can be found in the <a class="externalLink" href="http://tools.ietf.org/html/rfc1057">RPC specification</a>.</p>
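<p>For example, the following sketch checks whether a user&#x2019;s UID matches on both hosts and, if needed, aligns it with the usermod command mentioned above (&#x201c;myusername&#x201d; and the UID 123 are placeholders):</p>
<div>
<div>
<pre class="source">[root]&gt; id -u myusername            # run on both the NFS client and the NFS gateway host and compare
[root]&gt; usermod -u 123 myusername   # if the UIDs differ, change the UID on one of the hosts
</pre></div></div>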
<p>Optionally, the system administrator can configure a custom static mapping file for cases where the HDFS NFS Gateway needs to be accessed from a system with a completely disparate set of UIDs/GIDs. By default this file is located at &#x201c;/etc/nfs.map&#x201d;, but a custom location can be configured by setting the &#x201c;static.id.mapping.file&#x201d; property to the path of the static mapping file. The format of the static mapping file is similar to what is described in the exports(5) manual page, but roughly it is:</p>
<div>
<div>
<pre class="source"># Mapping for clients accessing the NFS gateway
uid 10 100 # Map the remote UID 10 to the local UID 100
gid 11 101 # Map the remote GID 11 to the local GID 101
</pre></div></div></div>
</div>
</div>
<div class="clear">
<hr/>
</div>
<div id="footer">
<div class="xright">
&#169; 2008-2021
Apache Software Foundation
- <a href="http://maven.apache.org/privacy-policy.html">Privacy Policy</a>.
Apache Maven, Maven, Apache, the Apache feather logo, and the Apache Maven project logos are trademarks of The Apache Software Foundation.
</div>
<div class="clear">
<hr/>
</div>
</div>
</body>
</html>