<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<!--
| Generated by Apache Maven Doxia at 2021-06-15
| Rendered using Apache Maven Stylus Skin 1.5
-->
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Apache Hadoop Amazon Web Services support &#x2013; Working with IAM Assumed Roles</title>
<style type="text/css" media="all">
@import url("../../css/maven-base.css");
@import url("../../css/maven-theme.css");
@import url("../../css/site.css");
</style>
<link rel="stylesheet" href="../../css/print.css" type="text/css" media="print" />
<meta name="Date-Revision-yyyymmdd" content="20210615" />
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
</head>
<body class="composite">
<div id="banner">
<a href="http://hadoop.apache.org/" id="bannerLeft">
<img src="http://hadoop.apache.org/images/hadoop-logo.jpg" alt="" />
</a>
<a href="http://www.apache.org/" id="bannerRight">
<img src="http://www.apache.org/images/asf_logo_wide.png" alt="" />
</a>
<div class="clear">
<hr/>
</div>
</div>
<div id="breadcrumbs">
<div class="xleft">
<a href="http://www.apache.org/" class="externalLink">Apache</a>
&gt;
<a href="http://hadoop.apache.org/" class="externalLink">Hadoop</a>
&gt;
<a href="../../index.html">Apache Hadoop Amazon Web Services support</a>
&gt;
Working with IAM Assumed Roles
</div>
<div class="xright"> <a href="http://wiki.apache.org/hadoop" class="externalLink">Wiki</a>
|
<a href="https://gitbox.apache.org/repos/asf/hadoop.git" class="externalLink">git</a>
&nbsp;| Last Published: 2021-06-15
&nbsp;| Version: 3.3.1
</div>
<div class="clear">
<hr/>
</div>
</div>
<div id="leftColumn">
<div id="navcolumn">
<h5>General</h5>
<ul>
<li class="none">
<a href="../../../index.html">Overview</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/SingleCluster.html">Single Node Setup</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/ClusterSetup.html">Cluster Setup</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/CommandsManual.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/FileSystemShell.html">FileSystem Shell</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/Compatibility.html">Compatibility Specification</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/DownstreamDev.html">Downstream Developer's Guide</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/AdminCompatibilityGuide.html">Admin Compatibility Guide</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/InterfaceClassification.html">Interface Classification</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/filesystem/index.html">FileSystem Specification</a>
</li>
</ul>
<h5>Common</h5>
<ul>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/CLIMiniCluster.html">CLI Mini Cluster</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/FairCallQueue.html">Fair Call Queue</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/NativeLibraries.html">Native Libraries</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/Superusers.html">Proxy User</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/RackAwareness.html">Rack Awareness</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/SecureMode.html">Secure Mode</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/ServiceLevelAuth.html">Service Level Authorization</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/HttpAuthentication.html">HTTP Authentication</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/CredentialProviderAPI.html">Credential Provider API</a>
</li>
<li class="none">
<a href="../../../hadoop-kms/index.html">Hadoop KMS</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/Tracing.html">Tracing</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/UnixShellGuide.html">Unix Shell Guide</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/registry/index.html">Registry</a>
</li>
</ul>
<h5>HDFS</h5>
<ul>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsDesign.html">Architecture</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html">User Guide</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HDFSCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html">NameNode HA With QJM</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html">NameNode HA With NFS</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/ObserverNameNode.html">Observer NameNode</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/Federation.html">Federation</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/ViewFs.html">ViewFs</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/ViewFsOverloadScheme.html">ViewFsOverloadScheme</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html">Snapshots</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsEditsViewer.html">Edits Viewer</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsImageViewer.html">Image Viewer</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html">Permissions and HDFS</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsQuotaAdminGuide.html">Quotas and HDFS</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/LibHdfs.html">libhdfs (C API)</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/WebHDFS.html">WebHDFS (REST API)</a>
</li>
<li class="none">
<a href="../../../hadoop-hdfs-httpfs/index.html">HttpFS</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/ShortCircuitLocalReads.html">Short Circuit Local Reads</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html">Centralized Cache Management</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html">NFS Gateway</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html">Rolling Upgrade</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/ExtendedAttributes.html">Extended Attributes</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html">Transparent Encryption</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsMultihoming.html">Multihoming</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html">Storage Policies</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/MemoryStorage.html">Memory Storage Support</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/SLGUserGuide.html">Synthetic Load Generator</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html">Erasure Coding</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HDFSDiskbalancer.html">Disk Balancer</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsUpgradeDomain.html">Upgrade Domain</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsDataNodeAdminGuide.html">DataNode Admin</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs-rbf/HDFSRouterFederation.html">Router Federation</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsProvidedStorage.html">Provided Storage</a>
</li>
</ul>
<h5>MapReduce</h5>
<ul>
<li class="none">
<a href="../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html">Tutorial</a>
</li>
<li class="none">
<a href="../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduce_Compatibility_Hadoop1_Hadoop2.html">Compatibility with 1.x</a>
</li>
<li class="none">
<a href="../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/EncryptedShuffle.html">Encrypted Shuffle</a>
</li>
<li class="none">
<a href="../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/PluggableShuffleAndPluggableSort.html">Pluggable Shuffle/Sort</a>
</li>
<li class="none">
<a href="../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/DistributedCacheDeploy.html">Distributed Cache Deploy</a>
</li>
<li class="none">
<a href="../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/SharedCacheSupport.html">Support for YARN Shared Cache</a>
</li>
</ul>
<h5>MapReduce REST APIs</h5>
<ul>
<li class="none">
<a href="../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredAppMasterRest.html">MR Application Master</a>
</li>
<li class="none">
<a href="../../../hadoop-mapreduce-client/hadoop-mapreduce-client-hs/HistoryServerRest.html">MR History Server</a>
</li>
</ul>
<h5>YARN</h5>
<ul>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/YARN.html">Architecture</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/YarnCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html">Capacity Scheduler</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/FairScheduler.html">Fair Scheduler</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html">ResourceManager Restart</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html">ResourceManager HA</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/ResourceModel.html">Resource Model</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/NodeLabel.html">Node Labels</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/NodeAttributes.html">Node Attributes</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/WebApplicationProxy.html">Web Application Proxy</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/TimelineServer.html">Timeline Server</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/TimelineServiceV2.html">Timeline Service V.2</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html">Writing YARN Applications</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/YarnApplicationSecurity.html">YARN Application Security</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/NodeManager.html">NodeManager</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/DockerContainers.html">Running Applications in Docker Containers</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/RuncContainers.html">Running Applications in runC Containers</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/NodeManagerCgroups.html">Using CGroups</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/SecureContainer.html">Secure Containers</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/ReservationSystem.html">Reservation System</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/GracefulDecommission.html">Graceful Decommission</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html">Opportunistic Containers</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/Federation.html">YARN Federation</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/SharedCache.html">Shared Cache</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/UsingGpus.html">Using GPU</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/UsingFPGA.html">Using FPGA</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/PlacementConstraints.html">Placement Constraints</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/YarnUI2.html">YARN UI2</a>
</li>
</ul>
<h5>YARN REST APIs</h5>
<ul>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/WebServicesIntro.html">Introduction</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html">Resource Manager</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/NodeManagerRest.html">Node Manager</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/TimelineServer.html#Timeline_Server_REST_API_v1">Timeline Server</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/TimelineServiceV2.html#Timeline_Service_v.2_REST_API">Timeline Service V.2</a>
</li>
</ul>
<h5>YARN Service</h5>
<ul>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/yarn-service/Overview.html">Overview</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/yarn-service/QuickStart.html">QuickStart</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/yarn-service/Concepts.html">Concepts</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/yarn-service/YarnServiceAPI.html">Yarn Service API</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/yarn-service/ServiceDiscovery.html">Service Discovery</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/yarn-service/SystemServices.html">System Services</a>
</li>
</ul>
<h5>Hadoop Compatible File Systems</h5>
<ul>
<li class="none">
<a href="../../../hadoop-aliyun/tools/hadoop-aliyun/index.html">Aliyun OSS</a>
</li>
<li class="none">
<a href="../../../hadoop-aws/tools/hadoop-aws/index.html">Amazon S3</a>
</li>
<li class="none">
<a href="../../../hadoop-azure/index.html">Azure Blob Storage</a>
</li>
<li class="none">
<a href="../../../hadoop-azure-datalake/index.html">Azure Data Lake Storage</a>
</li>
<li class="none">
<a href="../../../hadoop-openstack/index.html">OpenStack Swift</a>
</li>
<li class="none">
<a href="../../../hadoop-cos/cloud-storage/index.html">Tencent COS</a>
</li>
</ul>
<h5>Auth</h5>
<ul>
<li class="none">
<a href="../../../hadoop-auth/index.html">Overview</a>
</li>
<li class="none">
<a href="../../../hadoop-auth/Examples.html">Examples</a>
</li>
<li class="none">
<a href="../../../hadoop-auth/Configuration.html">Configuration</a>
</li>
<li class="none">
<a href="../../../hadoop-auth/BuildingIt.html">Building</a>
</li>
</ul>
<h5>Tools</h5>
<ul>
<li class="none">
<a href="../../../hadoop-streaming/HadoopStreaming.html">Hadoop Streaming</a>
</li>
<li class="none">
<a href="../../../hadoop-archives/HadoopArchives.html">Hadoop Archives</a>
</li>
<li class="none">
<a href="../../../hadoop-archive-logs/HadoopArchiveLogs.html">Hadoop Archive Logs</a>
</li>
<li class="none">
<a href="../../../hadoop-distcp/DistCp.html">DistCp</a>
</li>
<li class="none">
<a href="../../../hadoop-gridmix/GridMix.html">GridMix</a>
</li>
<li class="none">
<a href="../../../hadoop-rumen/Rumen.html">Rumen</a>
</li>
<li class="none">
<a href="../../../hadoop-resourceestimator/ResourceEstimator.html">Resource Estimator Service</a>
</li>
<li class="none">
<a href="../../../hadoop-sls/SchedulerLoadSimulator.html">Scheduler Load Simulator</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/Benchmarking.html">Hadoop Benchmarking</a>
</li>
<li class="none">
<a href="../../../hadoop-dynamometer/Dynamometer.html">Dynamometer</a>
</li>
</ul>
<h5>Reference</h5>
<ul>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/release/">Changelog and Release Notes</a>
</li>
<li class="none">
<a href="../../../api/index.html">Java API docs</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/UnixShellAPI.html">Unix Shell API</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/Metrics.html">Metrics</a>
</li>
</ul>
<h5>Configuration</h5>
<ul>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/core-default.xml">core-default.xml</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/hdfs-default.xml">hdfs-default.xml</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs-rbf/hdfs-rbf-default.xml">hdfs-rbf-default.xml</a>
</li>
<li class="none">
<a href="../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml">mapred-default.xml</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-common/yarn-default.xml">yarn-default.xml</a>
</li>
<li class="none">
<a href="../../../hadoop-kms/kms-default.html">kms-default.xml</a>
</li>
<li class="none">
<a href="../../../hadoop-hdfs-httpfs/httpfs-default.html">httpfs-default.xml</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/DeprecatedProperties.html">Deprecated Properties</a>
</li>
</ul>
<a href="http://maven.apache.org/" title="Built by Maven" class="poweredBy">
<img alt="Built by Maven" src="../../images/logos/maven-feather.png"/>
</a>
</div>
</div>
<div id="bodyColumn">
<div id="contentBox">
<!---
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<h1>Working with IAM Assumed Roles</h1>
<ul>
<li><a href="#Using_IAM_Assumed_Roles"> Using IAM Assumed Roles</a>
<ul>
<li><a href="#Before_You_Begin">Before You Begin</a></li>
<li><a href="#How_the_S3A_connector_supports_IAM_Assumed_Roles."> How the S3A connector supports IAM Assumed Roles.</a></li>
<li><a href="#Configuring_Assumed_Roles"> Configuring Assumed Roles</a></li>
<li><a href="#Assumed_Role_Configuration_Options">Assumed Role Configuration Options</a></li></ul></li>
<li><a href="#Restricting_S3A_operations_through_AWS_Policies"> Restricting S3A operations through AWS Policies</a>
<ul>
<li><a href="#Read_Access_Permissions"> Read Access Permissions</a></li>
<li><a href="#Write_Access_Permissions"> Write Access Permissions</a></li>
<li><a href="#SSE-KMS_Permissions"> SSE-KMS Permissions</a></li>
<li><a href="#S3Guard_Permissions"> S3Guard Permissions</a></li>
<li><a href="#Mixed_Permissions_in_a_single_S3_Bucket"> Mixed Permissions in a single S3 Bucket</a></li>
<li><a href="#Example:_Read_access_to_the_base.2C_R.2FW_to_the_path_underneath">Example: Read access to the base, R/W to the path underneath</a></li></ul></li>
<li><a href="#Troubleshooting_Assumed_Roles"> Troubleshooting Assumed Roles</a>
<ul>
<li><a href="#IOException:_.E2.80.9CUnset_property_fs.s3a.assumed.role.arn.E2.80.9D"> IOException: &#x201c;Unset property fs.s3a.assumed.role.arn&#x201d;</a></li>
<li><a href="#a.E2.80.9CNot_authorized_to_perform_sts:AssumeRole.E2.80.9D"> &#x201c;Not authorized to perform sts:AssumeRole&#x201d;</a></li>
<li><a href="#a.E2.80.9CRoles_may_not_be_assumed_by_root_accounts.E2.80.9D"> &#x201c;Roles may not be assumed by root accounts&#x201d;</a></li>
<li><a href="#Member_must_have_value_greater_than_or_equal_to_900"> Member must have value greater than or equal to 900</a></li>
<li><a href="#Error_.E2.80.9CThe_requested_DurationSeconds_exceeds_the_MaxSessionDuration_set_for_this_role.E2.80.9D"> Error &#x201c;The requested DurationSeconds exceeds the MaxSessionDuration set for this role&#x201d;</a></li>
<li><a href="#a.E2.80.9CValue_.E2.80.98345600.E2.80.99_at_.E2.80.98durationSeconds.E2.80.99_failed_to_satisfy_constraint:_Member_must_have_value_less_than_or_equal_to_43200.E2.80.9D">&#x201c;Value &#x2018;345600&#x2019; at &#x2018;durationSeconds&#x2019; failed to satisfy constraint: Member must have value less than or equal to 43200&#x201d;</a></li>
<li><a href="#MalformedPolicyDocumentException_.E2.80.9CThe_policy_is_not_in_the_valid_JSON_format.E2.80.9D"> MalformedPolicyDocumentException &#x201c;The policy is not in the valid JSON format&#x201d;</a></li>
<li><a href="#MalformedPolicyDocumentException_.E2.80.9CSyntax_errors_in_policy.E2.80.9D"> MalformedPolicyDocumentException &#x201c;Syntax errors in policy&#x201d;</a></li>
<li><a href="#IOException:_.E2.80.9CAssumedRoleCredentialProvider_cannot_be_in_fs.s3a.assumed.role.credentials.provider.E2.80.9D"> IOException: &#x201c;AssumedRoleCredentialProvider cannot be in fs.s3a.assumed.role.credentials.provider&#x201d;</a></li>
<li><a href="#AWSBadRequestException:_.E2.80.9Cnot_a_valid_key.3Dvalue_pair.E2.80.9D"> AWSBadRequestException: &#x201c;not a valid key=value pair&#x201d;</a></li>
<li><a href="#AccessDeniedException.2FInvalidClientTokenId:_.E2.80.9CThe_security_token_included_in_the_request_is_invalid.E2.80.9D"> AccessDeniedException/InvalidClientTokenId: &#x201c;The security token included in the request is invalid&#x201d;</a></li>
<li><a href="#AWSSecurityTokenServiceExceptiond:_.E2.80.9CMember_must_satisfy_regular_expression_pattern:_.5B.5Cw.2B.3D.2C..40-.5D.2A.E2.80.9D"> AWSSecurityTokenServiceExceptiond: &#x201c;Member must satisfy regular expression pattern: [\w+=,.@-]*&#x201d;</a></li>
<li><a href="#java.nio.file.AccessDeniedException_within_a_FileSystem_API_call"> java.nio.file.AccessDeniedException within a FileSystem API call</a></li>
<li><a href="#AccessDeniedException_When_working_with_KMS-encrypted_data"> AccessDeniedException When working with KMS-encrypted data</a></li>
<li><a href="#AccessDeniedException_.2B_AmazonDynamoDBException"> AccessDeniedException + AmazonDynamoDBException</a></li>
<li><a href="#Error_Unable_to_execute_HTTP_request">Error Unable to execute HTTP request</a></li>
<li><a href="#Error_.E2.80.9CCredential_should_be_scoped_to_a_valid_region.E2.80.9D"> Error &#x201c;Credential should be scoped to a valid region&#x201d;</a></li></ul></li></ul>
<p>AWS <a class="externalLink" href="http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html">&#x201c;IAM Assumed Roles&#x201d;</a> allows applications to change the AWS role with which to authenticate with AWS services. The assumed roles can have different rights from the main user login.</p>
<p>The S3A connector supports assumed roles for authentication with AWS. A full set of login credentials must be provided, which will be used to obtain the assumed role and refresh it regularly. By using per-filesystem configuration, it is possible to use different assumed roles for different buckets.</p>
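<p>As a sketch of such per-bucket configuration, the standard <tt>fs.s3a.bucket.NAME.option</tt> override mechanism described in the main S3A documentation can bind one role to one bucket; the bucket name and ARN below are hypothetical:</p>
<div>
<div>
<pre class="source">&lt;!-- All access to this one bucket goes through the assumed role. --&gt;
&lt;property&gt;
  &lt;name&gt;fs.s3a.bucket.example-analytics.aws.credentials.provider&lt;/name&gt;
  &lt;value&gt;org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider&lt;/value&gt;
&lt;/property&gt;

&lt;property&gt;
  &lt;name&gt;fs.s3a.bucket.example-analytics.assumed.role.arn&lt;/name&gt;
  &lt;value&gt;arn:aws:iam::123456789012:role/analytics-restricted&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>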
<p><i>IAM Assumed Roles are unlikely to be supported by third-party systems supporting the S3 APIs.</i></p>
<div class="section">
<h2><a name="Using_IAM_Assumed_Roles"></a><a name="using_assumed_roles"></a> Using IAM Assumed Roles</h2>
<div class="section">
<h3><a name="Before_You_Begin"></a>Before You Begin</h3>
<p>This document assumes you know about IAM Assumed roles, what they are, how to configure their policies, etc.</p>
<ul>
<li>You need a role to assume, and know its &#x201c;ARN&#x201d;.</li>
<li>You need a pair of long-lived IAM User credentials, not the root account set.</li>
<li>Have the AWS CLI installed, and test that it works with those credentials (see the sample check after this list).</li>
<li>Give the role access to S3, and, if using S3Guard, to DynamoDB.</li>
<li>For working with data encrypted with SSE-KMS, the role must have access to the appropriate KMS keys.</li>
</ul>
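<p>One way to verify these prerequisites outside Hadoop is a quick check with the AWS CLI; the role ARN below is hypothetical:</p>
<div>
<div>
<pre class="source"># confirm which IAM user the long-lived credentials map to
aws sts get-caller-identity

# confirm that this user is permitted to assume the target role
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/s3-restricted \
  --role-session-name sanity-check
</pre></div></div>
<p>If the second call fails, fix the trust policy of the role and the permissions of the user before going anywhere near the S3A configuration.</p>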
<p>Trying to learn how IAM Assumed Roles work by debugging stack traces from the S3A client is &#x201c;suboptimal&#x201d;.</p></div>
<div class="section">
<h3><a name="How_the_S3A_connector_supports_IAM_Assumed_Roles."></a><a name="how_it_works"></a> How the S3A connector supports IAM Assumed Roles.</h3>
<p>The S3A connector supports IAM Assumed Roles in two ways:</p>
<ol style="list-style-type: decimal">
<li>Using the full credentials on the client to request credentials for a specific role, credentials which are then used for all the store operations. This can be used to verify that a specific role has the access permissions you need, or to &#x201c;su&#x201d; into a role which has permissions the full account does not directly hold, such as access to a KMS key.</li>
<li>Using the full credentials to request role credentials which are then propagated into a launched application as delegation tokens. This extends the previous use as it allows the jobs to be submitted to a shared cluster with the permissions of the requested role, rather than those of the VMs/Containers of the deployed cluster.</li>
</ol>
<p>For Delegation Token integration, see <a href="delegation_tokens.html">Delegation Tokens</a>.</p>
<p>To use Assumed Role authentication, the client must be configured to use the <i>Assumed Role Credential Provider</i>, <tt>org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider</tt>, in the configuration option <tt>fs.s3a.aws.credentials.provider</tt>.</p>
<p>This AWS Credential provider will read in the <tt>fs.s3a.assumed.role</tt> options needed to connect to the Security Token Service <a class="externalLink" href="https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html">Assumed Role API</a>, first authenticating with the full credentials, then assuming the specific role specified. It will then refresh this login at the rate set in <tt>fs.s3a.assumed.role.session.duration</tt>.</p>
<p>To authenticate with the <a class="externalLink" href="https://docs.aws.amazon.com/STS/latest/APIReference/Welcome.html">AWS STS service</a> both for the initial credential retrieval and for background refreshes, a different credential provider must be created, one which uses long-lived credentials (secret keys, environment variables). Short-lived credentials (e.g. other session tokens, EC2 instance credentials) cannot be used.</p>
<p>A list of providers can be set in <tt>fs.s3a.assumed.role.credentials.provider</tt>; if unset the standard <tt>SimpleAWSCredentialsProvider</tt> credential provider is used, which uses <tt>fs.s3a.access.key</tt> and <tt>fs.s3a.secret.key</tt>.</p>
<p>Note: although you can list other AWS credential providers for the Assumed Role Credential Provider to use, doing so can only cause confusion.</p></div>
<div class="section">
<h3><a name="Configuring_Assumed_Roles"></a><a name="using"></a> Configuring Assumed Roles</h3>
<p>To use assumed roles, the S3A client credentials provider must be set to the <tt>AssumedRoleCredentialProvider</tt>, and <tt>fs.s3a.assumed.role.arn</tt> to the previously created ARN.</p>
<div>
<div>
<pre class="source">&lt;property&gt;
&lt;name&gt;fs.s3a.aws.credentials.provider&lt;/name&gt;
&lt;value&gt;org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;fs.s3a.assumed.role.arn&lt;/name&gt;
&lt;value&gt;arn:aws:iam::90066806600238:role/s3-restricted&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>
<p>The STS service itself needs the caller to be authenticated, <i>which can only be done with a set of long-lived credentials</i>. This means the normal <tt>fs.s3a.access.key</tt> and <tt>fs.s3a.secret.key</tt> pair, environment variables, or some other supplier of long-lived secrets.</p>
<p>The default is the <tt>fs.s3a.access.key</tt> and <tt>fs.s3a.secret.key</tt> pair. If you wish to use a different authentication mechanism, set it in the property <tt>fs.s3a.assumed.role.credentials.provider</tt>.</p>
<div>
<div>
<pre class="source">&lt;property&gt;
&lt;name&gt;fs.s3a.assumed.role.credentials.provider&lt;/name&gt;
&lt;value&gt;org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>
<p>Requirements for long-lived credentials notwithstanding, this option takes the same values as <tt>fs.s3a.aws.credentials.provider</tt>.</p>
<p>The safest way to manage AWS secrets is via <a href="index.html#hadoop_credential_providers">Hadoop Credential Providers</a>.</p>
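<p>A sketch of that approach, with a hypothetical JCEKS file on HDFS: store the two secrets with the <tt>hadoop credential</tt> command, then reference the file through <tt>hadoop.security.credential.provider.path</tt> in the cluster configuration.</p>
<div>
<div>
<pre class="source"># store the long-lived key pair in a Hadoop credential store
# (key values shown are placeholders)
hadoop credential create fs.s3a.access.key -value AKIA... \
    -provider jceks://hdfs@namenode/user/alice/s3.jceks

hadoop credential create fs.s3a.secret.key -value mySecretKey \
    -provider jceks://hdfs@namenode/user/alice/s3.jceks
</pre></div></div>
</div>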
<div class="section">
<h3><a name="Assumed_Role_Configuration_Options"></a><a name="configuration"></a>Assumed Role Configuration Options</h3>
<p>Here is the full set of configuration options.</p>
<div>
<div>
<pre class="source">&lt;property&gt;
&lt;name&gt;fs.s3a.assumed.role.arn&lt;/name&gt;
&lt;value /&gt;
&lt;description&gt;
AWS ARN for the role to be assumed.
Required if the fs.s3a.aws.credentials.provider contains
org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider
&lt;/description&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;fs.s3a.assumed.role.session.name&lt;/name&gt;
&lt;value /&gt;
&lt;description&gt;
Session name for the assumed role, must be valid characters according to
the AWS APIs.
Only used if AssumedRoleCredentialProvider is the AWS credential provider.
If not set, one is generated from the current Hadoop/Kerberos username.
&lt;/description&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;fs.s3a.assumed.role.policy&lt;/name&gt;
&lt;value/&gt;
&lt;description&gt;
JSON policy to apply to the role.
Only used if AssumedRoleCredentialProvider is the AWS credential provider.
&lt;/description&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;fs.s3a.assumed.role.session.duration&lt;/name&gt;
&lt;value&gt;30m&lt;/value&gt;
&lt;description&gt;
Duration of assumed roles before a refresh is attempted.
Only used if AssumedRoleCredentialProvider is the AWS credential provider.
Range: 15m to 1h
&lt;/description&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;fs.s3a.assumed.role.sts.endpoint&lt;/name&gt;
&lt;value/&gt;
&lt;description&gt;
AWS Security Token Service Endpoint. If unset, uses the default endpoint.
Only used if AssumedRoleCredentialProvider is the AWS credential provider.
&lt;/description&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;fs.s3a.assumed.role.sts.endpoint.region&lt;/name&gt;
&lt;value&gt;us-west-1&lt;/value&gt;
&lt;description&gt;
AWS Security Token Service Endpoint's region;
Needed if fs.s3a.assumed.role.sts.endpoint points to an endpoint
other than the default one and the v4 signature is used.
Only used if AssumedRoleCredentialProvider is the AWS credential provider.
&lt;/description&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;fs.s3a.assumed.role.credentials.provider&lt;/name&gt;
&lt;value&gt;org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider,
com.amazonaws.auth.EnvironmentVariableCredentialsProvider
&lt;/value&gt;
&lt;description&gt;
List of credential providers to authenticate with the STS endpoint and
retrieve short-lived role credentials.
Used by AssumedRoleCredentialProvider and the S3A Session Delegation Token
and S3A Role Delegation Token bindings.
&lt;/description&gt;
&lt;/property&gt;
</pre></div></div>
</div></div>
<div class="section">
<h2><a name="Restricting_S3A_operations_through_AWS_Policies"></a><a name="polices"></a> Restricting S3A operations through AWS Policies</h2>
<p>The S3A client needs to be granted specific permissions in order to work with a bucket. Here is a non-normative list of the permissions which must be granted for FileSystem operations to work.</p>
<p><i>Disclaimer</i> The specific set of actions which the S3A connector needs will change over time.</p>
<p>As more operations are added to the S3A connector, and as the means by which existing operations are implemented change, the AWS actions which are required by the client will change.</p>
<p>These lists represent the minimum set of actions which the client&#x2019;s principal must be granted in order to work with a bucket.</p>
<div class="section">
<h3><a name="Read_Access_Permissions"></a><a name="read-permissions"></a> Read Access Permissions</h3>
<p>Permissions which must be granted when reading from a bucket:</p>
<div>
<div>
<pre class="source">s3:Get*
s3:ListBucket
</pre></div></div>
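<p>As a sketch, these actions map to an IAM policy statement such as the one below; the bucket name is hypothetical. Note that <tt>s3:ListBucket</tt> is granted on the bucket itself, while <tt>s3:Get*</tt> applies to the objects underneath it.</p>
<div>
<div>
<pre class="source">{
  &quot;Version&quot; : &quot;2012-10-17&quot;,
  &quot;Statement&quot; : [ {
    &quot;Sid&quot; : &quot;ReadOnly&quot;,
    &quot;Effect&quot; : &quot;Allow&quot;,
    &quot;Action&quot; : [ &quot;s3:Get*&quot;, &quot;s3:ListBucket&quot; ],
    &quot;Resource&quot; : [
      &quot;arn:aws:s3:::example-bucket&quot;,
      &quot;arn:aws:s3:::example-bucket/*&quot;
    ]
  } ]
}
</pre></div></div>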
<p>When using S3Guard, the client needs the appropriate <a href="#s3guard-permissions">DynamoDB access permissions</a>.</p>
<p>To use SSE-KMS encryption, the client needs the <a href="#sse-kms-permissions">SSE-KMS Permissions</a> to access the KMS key(s).</p></div>
<div class="section">
<h3><a name="Write_Access_Permissions"></a><a name="write-permissions"></a> Write Access Permissions</h3>
<p>These permissions must all be granted for write access:</p>
<div>
<div>
<pre class="source">s3:Get*
s3:Delete*
s3:Put*
s3:ListBucket
s3:ListBucketMultipartUploads
s3:AbortMultipartUpload
</pre></div></div>
</div>
<div class="section">
<h3><a name="SSE-KMS_Permissions"></a><a name="sse-kms-permissions"></a> SSE-KMS Permissions</h3>
<p>To read data encrypted using SSE-KMS, the client must have <tt>kms:Decrypt</tt> permission for the specific key a file was encrypted with.</p>
<div>
<div>
<pre class="source">kms:Decrypt
</pre></div></div>
<p>To write data using SSE-KMS, the client must have all the following permissions.</p>
<div>
<div>
<pre class="source">kms:Decrypt
kms:GenerateDataKey
</pre></div></div>
<p>This includes renaming: renamed files are encrypted with the encryption key of the current S3A client; it must decrypt the source file first.</p>
<p>If the caller doesn&#x2019;t have these permissions, the operation will fail with an <tt>AccessDeniedException</tt>: the S3 store does not provide the specifics of the cause of the failure.</p>
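<p>A sketch of a matching policy statement, with a hypothetical KMS key ARN:</p>
<div>
<div>
<pre class="source">{
  &quot;Version&quot; : &quot;2012-10-17&quot;,
  &quot;Statement&quot; : [ {
    &quot;Sid&quot; : &quot;KMSReadWrite&quot;,
    &quot;Effect&quot; : &quot;Allow&quot;,
    &quot;Action&quot; : [ &quot;kms:Decrypt&quot;, &quot;kms:GenerateDataKey&quot; ],
    &quot;Resource&quot; : &quot;arn:aws:kms:us-west-1:123456789012:key/00000000-1111-2222-3333-444444444444&quot;
  } ]
}
</pre></div></div>
</div>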
<div class="section">
<h3><a name="S3Guard_Permissions"></a><a name="s3guard-permissions"></a> S3Guard Permissions</h3>
<p>To use S3Guard, all clients must have a subset of the <a class="externalLink" href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/api-permissions-reference.html">AWS DynamoDB Permissions</a>.</p>
<p>To work with buckets protected with S3Guard, the client must have all the following rights on the DynamoDB Table used to protect that bucket.</p>
<div>
<div>
<pre class="source">dynamodb:BatchGetItem
dynamodb:BatchWriteItem
dynamodb:DeleteItem
dynamodb:DescribeTable
dynamodb:GetItem
dynamodb:PutItem
dynamodb:Query
dynamodb:UpdateItem
</pre></div></div>
<p>This is true, <i>even if the client only has read access to the data</i>.</p>
<p>For the <tt>hadoop s3guard</tt> table management commands, <i>extra</i> permissions are required:</p>
<div>
<div>
<pre class="source">dynamodb:CreateTable
dynamodb:DescribeLimits
dynamodb:DeleteTable
dynamodb:Scan
dynamodb:TagResource
dynamodb:UntagResource
dynamodb:UpdateTable
</pre></div></div>
<p>Without these permissions, tables cannot be created, destroyed or have their IO capacity changed through the <tt>s3guard set-capacity</tt> call. The <tt>dynamodb:Scan</tt> permission is needed for <tt>s3guard prune</tt>.</p>
<p>The <tt>dynamodb:CreateTable</tt> permission is needed by a client when it tries to create the DynamoDB table on startup, that is, when <tt>fs.s3a.s3guard.ddb.table.create</tt> is <tt>true</tt> and the table does not already exist.</p>
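<p>A sketch of the relevant switch; with this set, the client will try to create a missing table at startup, which is when the <tt>dynamodb:CreateTable</tt> permission is exercised:</p>
<div>
<div>
<pre class="source">&lt;property&gt;
  &lt;name&gt;fs.s3a.s3guard.ddb.table.create&lt;/name&gt;
  &lt;value&gt;true&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>
</div>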
<div class="section">
<h3><a name="Mixed_Permissions_in_a_single_S3_Bucket"></a><a name="mixed-permissions"></a> Mixed Permissions in a single S3 Bucket</h3>
<p>Mixing permissions down the &#x201c;directory tree&#x201d; is supported only to the extent of allowing writeable directories under read-only parent paths.</p>
<p><i>Disclaimer:</i> When a client lacks write access up the entire directory tree, there are no guarantees of consistent filesystem views or operations.</p>
<p>Particular troublespots are &#x201c;directory markers&#x201d; and failures of non-atomic operations, particularly <tt>rename()</tt> and <tt>delete()</tt>.</p>
<p>A directory marker such as <tt>/users/</tt> will not be deleted if the user <tt>alice</tt> creates a directory <tt>/users/alice</tt> <i>and</i> she only has access to <tt>/users/alice</tt>.</p>
<p>When a path or directory is deleted, the parent directory may not exist afterwards. In the example above, if <tt>alice</tt> deletes <tt>/users/alice</tt> and there are no other entries under <tt>/users/</tt>, then the directory marker <tt>/users/</tt> cannot be created. The directory <tt>/users</tt> will not exist in listings, <tt>getFileStatus(&quot;/users&quot;)</tt> or similar.</p>
<p>Rename will fail if it cannot delete the items it has just copied; that is, <tt>rename(read-only-source, writeable-dest)</tt> will fail, but only after performing the COPY of the data. Even though the operation fails, for a single file copy the destination file will exist. For a directory copy, only a partial copy of the source data may take place before the permission failure is raised.</p>
<p><i>S3Guard</i>: if <a href="s3guard.html">S3Guard</a> is used to manage the directory listings, then after partial failures of rename/copy the DynamoDB tables can get out of sync.</p></div>
<div class="section">
<h3><a name="Example:_Read_access_to_the_base.2C_R.2FW_to_the_path_underneath"></a>Example: Read access to the base, R/W to the path underneath</h3>
<p>This example makes the base bucket read-only, with a directory underneath, <tt>/users/alice/</tt>, granted full R/W access.</p>
<div>
<div>
<pre class="source">{
&quot;Version&quot; : &quot;2012-10-17&quot;,
&quot;Statement&quot; : [ {
&quot;Sid&quot; : &quot;4&quot;,
&quot;Effect&quot; : &quot;Allow&quot;,
&quot;Action&quot; : [
&quot;s3:ListBucket&quot;,
&quot;s3:ListBucketMultipartUploads&quot;,
&quot;s3:Get*&quot;
],
&quot;Resource&quot; : &quot;arn:aws:s3:::example-bucket/*&quot;
}, {
&quot;Sid&quot; : &quot;5&quot;,
&quot;Effect&quot; : &quot;Allow&quot;,
&quot;Action&quot; : [
&quot;s3:Get*&quot;,
&quot;s3:PutObject&quot;,
&quot;s3:DeleteObject&quot;,
&quot;s3:AbortMultipartUpload&quot;,
&quot;s3:ListMultipartUploadParts&quot; ],
&quot;Resource&quot; : [
&quot;arn:aws:s3:::example-bucket/users/alice/*&quot;,
&quot;arn:aws:s3:::example-bucket/users/alice&quot;,
&quot;arn:aws:s3:::example-bucket/users/alice/&quot;
]
} ]
}
</pre></div></div>
<p>Note how three resources are provided to represent the path <tt>/users/alice</tt>:</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th> Path </th>
<th> Matches </th></tr>
</thead><tbody>
<tr class="b">
<td> <tt>/users/alice</tt> </td>
<td> Any file <tt>alice</tt> created under <tt>/users</tt> </td></tr>
<tr class="a">
<td> <tt>/users/alice/</tt> </td>
<td> The directory marker <tt>alice/</tt> created under <tt>/users</tt> </td></tr>
<tr class="b">
<td> <tt>/users/alice/*</tt> </td>
<td> All files and directories under the path <tt>/users/alice</tt> </td></tr>
</tbody>
</table>
<p>Note that the resource <tt>arn:aws:s3:::example-bucket/users/alice*</tt> cannot be used to refer to all of these paths, because it would also cover adjacent paths like <tt>/users/alice2</tt> and <tt>/users/alicebob</tt>.</p></div></div>
<div class="section">
<h2><a name="Troubleshooting_Assumed_Roles"></a><a name="troubleshooting"></a> Troubleshooting Assumed Roles</h2>
<ol style="list-style-type: decimal">
<li>Make sure the role works and the user trying to enter it can do so from the AWS command line before trying to use the S3A client.</li>
<li>Try to access the S3 bucket with reads and writes from the AWS CLI.</li>
<li>With the Hadoop configuration set to use the role, try to read data from the <tt>hadoop fs</tt> CLI: <tt>hadoop fs -ls s3a://bucket/</tt></li>
<li>With the hadoop CLI, try to create a new directory with a request such as <tt>hadoop fs -mkdir -p s3a://bucket/path/p1/</tt> (see the combined sketch after this list).</li>
</ol>
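<p>A sketch of steps 2 to 4 together, with a hypothetical bucket name:</p>
<div>
<div>
<pre class="source"># step 2: reads and writes from the AWS CLI
aws s3 ls s3://example-bucket/
aws s3 cp /tmp/hello.txt s3://example-bucket/hello.txt

# step 3: read through the hadoop CLI with the role configured
hadoop fs -ls s3a://example-bucket/

# step 4: create a new directory
hadoop fs -mkdir -p s3a://example-bucket/path/p1/
</pre></div></div>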
<div class="section">
<h3><a name="IOException:_.E2.80.9CUnset_property_fs.s3a.assumed.role.arn.E2.80.9D"></a><a name="no_role"></a> IOException: &#x201c;Unset property fs.s3a.assumed.role.arn&#x201d;</h3>
<p>The Assumed Role Credential Provider is enabled, but <tt>fs.s3a.assumed.role.arn</tt> is unset.</p>
<div>
<div>
<pre class="source">java.io.IOException: Unset property fs.s3a.assumed.role.arn
at org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider.&lt;init&gt;(AssumedRoleCredentialProvider.java:76)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProvider(S3AUtils.java:583)
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProviderSet(S3AUtils.java:520)
at org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(DefaultS3ClientFactory.java:52)
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:252)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:474)
</pre></div></div>
</div>
<div class="section">
<h3><a name="a.E2.80.9CNot_authorized_to_perform_sts:AssumeRole.E2.80.9D"></a><a name="not_authorized_for_assumed_role"></a> &#x201c;Not authorized to perform sts:AssumeRole&#x201d;</h3>
<p>This can arise if the role ARN set in <tt>fs.s3a.assumed.role.arn</tt> is invalid or one to which the caller has no access.</p>
<div>
<div>
<pre class="source">java.nio.file.AccessDeniedException: : Instantiate org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider
on : com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
Not authorized to perform sts:AssumeRole (Service: AWSSecurityTokenService; Status Code: 403;
Error Code: AccessDenied; Request ID: aad4e59a-f4b0-11e7-8c78-f36aaa9457f6):AccessDenied
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:215)
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProvider(S3AUtils.java:616)
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProviderSet(S3AUtils.java:520)
at org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(DefaultS3ClientFactory.java:52)
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:252)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:474)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
</pre></div></div>
</div>
<div class="section">
<h3><a name="a.E2.80.9CRoles_may_not_be_assumed_by_root_accounts.E2.80.9D"></a><a name="root_account"></a> &#x201c;Roles may not be assumed by root accounts&#x201d;</h3>
<p>You can&#x2019;t assume a role with the root account of an AWS account; you need to create a new user and give it the permission to change into the role.</p>
<div>
<div>
<pre class="source">java.nio.file.AccessDeniedException: : Instantiate org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider
on : com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
Roles may not be assumed by root accounts. (Service: AWSSecurityTokenService; Status Code: 403; Error Code: AccessDenied;
Request ID: e86dfd8f-e758-11e7-88e7-ad127c04b5e2):
No AWS Credentials provided by AssumedRoleCredentialProvider :
com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
Roles may not be assumed by root accounts. (Service: AWSSecurityTokenService;
Status Code: 403; Error Code: AccessDenied; Request ID: e86dfd8f-e758-11e7-88e7-ad127c04b5e2)
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:215)
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProvider(S3AUtils.java:616)
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProviderSet(S3AUtils.java:520)
at org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(DefaultS3ClientFactory.java:52)
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:252)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:474)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
... 22 more
Caused by: com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
Roles may not be assumed by root accounts.
(Service: AWSSecurityTokenService; Status Code: 403; Error Code: AccessDenied;
Request ID: e86dfd8f-e758-11e7-88e7-ad127c04b5e2)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1638)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1303)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1055)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
</pre></div></div>
</div>
<div class="section">
<h3><a name="Member_must_have_value_greater_than_or_equal_to_900"></a><a name="invalid_duration"></a> <tt>Member must have value greater than or equal to 900</tt></h3>
<p>The value of <tt>fs.s3a.assumed.role.session.duration</tt> is too low.</p>
<div>
<div>
<pre class="source">org.apache.hadoop.fs.s3a.AWSBadRequestException: request role credentials:
com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
1 validation error detected: Value '20' at 'durationSeconds' failed to satisfy constraint:
Member must have value greater than or equal to 900 (Service: AWSSecurityTokenService;
Status Code: 400; Error Code: ValidationError;
Request ID: b9a82403-d0a7-11e8-98ef-596679ee890d)
</pre></div></div>
<p>Fix: increase the duration to at least 900 seconds (15 minutes).</p>
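<p>A sketch of a valid setting at the minimum permitted duration:</p>
<div>
<div>
<pre class="source">&lt;property&gt;
  &lt;name&gt;fs.s3a.assumed.role.session.duration&lt;/name&gt;
  &lt;!-- must be at least 15 minutes (900 seconds) --&gt;
  &lt;value&gt;15m&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>
</div>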
<div class="section">
<h3><a name="Error_.E2.80.9CThe_requested_DurationSeconds_exceeds_the_MaxSessionDuration_set_for_this_role.E2.80.9D"></a><a name="duration_too_high"></a> Error &#x201c;The requested DurationSeconds exceeds the MaxSessionDuration set for this role&#x201d;</h3>
<p>The value of <tt>fs.s3a.assumed.role.session.duration</tt> is too high.</p>
<div>
<div>
<pre class="source">org.apache.hadoop.fs.s3a.AWSBadRequestException: request role credentials:
com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
The requested DurationSeconds exceeds the MaxSessionDuration set for this role.
(Service: AWSSecurityTokenService; Status Code: 400;
Error Code: ValidationError; Request ID: 17875165-d0a7-11e8-b85f-d15a599a7f6d)
</pre></div></div>
<p>There are two solutions to this:</p>
<ul>
<li>Decrease the duration value.</li>
<li>Increase the duration of a role in the <a class="externalLink" href="https://console.aws.amazon.com/iam/home#/roles">AWS IAM Console</a>.</li>
</ul></div>
<div class="section">
<h3><a name="a.E2.80.9CValue_.E2.80.98345600.E2.80.99_at_.E2.80.98durationSeconds.E2.80.99_failed_to_satisfy_constraint:_Member_must_have_value_less_than_or_equal_to_43200.E2.80.9D"></a>&#x201c;Value &#x2018;345600&#x2019; at &#x2018;durationSeconds&#x2019; failed to satisfy constraint: Member must have value less than or equal to 43200&#x201d;</h3>
<p>Irrespective of the maximum duration of a role, the AWS role API only permits callers to request any role for up to 12h; attempting to use a larger number will fail.</p>
<div>
<div>
<pre class="source">Caused by: com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
1 validation error detected:
Value '345600' at 'durationSeconds' failed to satisfy constraint:
Member must have value less than or equal to 43200
(Service: AWSSecurityTokenService;
Status Code: 400; Error Code:
ValidationError;
Request ID: dec1ca6b-d0aa-11e8-ac8c-4119b3ea9f7f)
</pre></div></div>
<p>For full sessions, the duration limit is 129600 seconds: 36h.</p>
<div>
<div>
<pre class="source">org.apache.hadoop.fs.s3a.AWSBadRequestException: request session credentials:
com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
1 validation error detected: Value '345600' at 'durationSeconds' failed to satisfy constraint:
Member must have value less than or equal to 129600
(Service: AWSSecurityTokenService; Status Code: 400; Error Code: ValidationError;
Request ID: a6e73d44-d0aa-11e8-95ed-c5bba29f0635)
</pre></div></div>
<p>For both these errors, the sole fix is to request a shorter duration in <tt>fs.s3a.assumed.role.session.duration</tt>.</p></div>
<div class="section">
<h3><a name="MalformedPolicyDocumentException_.E2.80.9CThe_policy_is_not_in_the_valid_JSON_format.E2.80.9D"></a><a name="malformed_policy"></a> <tt>MalformedPolicyDocumentException</tt> &#x201c;The policy is not in the valid JSON format&#x201d;</h3>
<p>The policy set in <tt>fs.s3a.assumed.role.policy</tt> is not valid according to the AWS specification of Role Policies.</p>
<div>
<div>
<pre class="source">org.apache.hadoop.fs.s3a.AWSBadRequestException: Instantiate org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider on :
com.amazonaws.services.securitytoken.model.MalformedPolicyDocumentException:
The policy is not in the valid JSON format. (Service: AWSSecurityTokenService; Status Code: 400;
Error Code: MalformedPolicyDocument; Request ID: baf8cb62-f552-11e7-9768-9df3b384e40c):
MalformedPolicyDocument: The policy is not in the valid JSON format.
(Service: AWSSecurityTokenService; Status Code: 400; Error Code: MalformedPolicyDocument;
Request ID: baf8cb62-f552-11e7-9768-9df3b384e40c)
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:209)
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProvider(S3AUtils.java:616)
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProviderSet(S3AUtils.java:520)
at org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(DefaultS3ClientFactory.java:52)
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:252)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:474)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
Caused by: com.amazonaws.services.securitytoken.model.MalformedPolicyDocumentException:
The policy is not in the valid JSON format.
(Service: AWSSecurityTokenService; Status Code: 400;
Error Code: MalformedPolicyDocument; Request ID: baf8cb62-f552-11e7-9768-9df3b384e40c)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1638)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1303)
</pre></div></div>
</div>
<div class="section">
<h3><a name="MalformedPolicyDocumentException_.E2.80.9CSyntax_errors_in_policy.E2.80.9D"></a><a name="policy_syntax_error"></a> <tt>MalformedPolicyDocumentException</tt> &#x201c;Syntax errors in policy&#x201d;</h3>
<p>The policy set in <tt>fs.s3a.assumed.role.policy</tt> is not valid JSON.</p>
<div>
<div>
<pre class="source">org.apache.hadoop.fs.s3a.AWSBadRequestException:
Instantiate org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider on :
com.amazonaws.services.securitytoken.model.MalformedPolicyDocumentException:
Syntax errors in policy. (Service: AWSSecurityTokenService;
Status Code: 400; Error Code: MalformedPolicyDocument;
Request ID: 24a281e8-f553-11e7-aa91-a96becfb4d45):
MalformedPolicyDocument: Syntax errors in policy.
(Service: AWSSecurityTokenService; Status Code: 400; Error Code: MalformedPolicyDocument;
Request ID: 24a281e8-f553-11e7-aa91-a96becfb4d45)
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:209)
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProvider(S3AUtils.java:616)
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProviderSet(S3AUtils.java:520)
at org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(DefaultS3ClientFactory.java:52)
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:252)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:474)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
(Service: AWSSecurityTokenService; Status Code: 400; Error Code: MalformedPolicyDocument;
Request ID: 24a281e8-f553-11e7-aa91-a96becfb4d45)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1638)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1303)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1055)
... 19 more
</pre></div></div>
</div>
<div class="section">
<h3><a name="IOException:_.E2.80.9CAssumedRoleCredentialProvider_cannot_be_in_fs.s3a.assumed.role.credentials.provider.E2.80.9D"></a><a name="recursive_auth"></a> <tt>IOException</tt>: &#x201c;AssumedRoleCredentialProvider cannot be in fs.s3a.assumed.role.credentials.provider&#x201d;</h3>
<p>You can&#x2019;t use the Assumed Role Credential Provider as the provider in <tt>fs.s3a.assumed.role.credentials.provider</tt>.</p>
<div>
<div>
<pre class="source">java.io.IOException: AssumedRoleCredentialProvider cannot be in fs.s3a.assumed.role.credentials.provider
at org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider.&lt;init&gt;(AssumedRoleCredentialProvider.java:86)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProvider(S3AUtils.java:583)
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProviderSet(S3AUtils.java:520)
at org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(DefaultS3ClientFactory.java:52)
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:252)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:474)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
</pre></div></div>
</div>
<div class="section">
<h3><a name="AWSBadRequestException:_.E2.80.9Cnot_a_valid_key.3Dvalue_pair.E2.80.9D"></a><a name="invalid_keypair"></a> <tt>AWSBadRequestException</tt>: &#x201c;not a valid key=value pair&#x201d;</h3>
<p>There&#x2019;s a space or other typo in the <tt>fs.s3a.access.key</tt> or <tt>fs.s3a.secret.key</tt> values used for the inner authentication, which is breaking signature creation.</p>
<div>
<div>
<pre class="source"> org.apache.hadoop.fs.s3a.AWSBadRequestException: Instantiate org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider
on : com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
'valid/20180109/us-east-1/sts/aws4_request' not a valid key=value pair (missing equal-sign) in Authorization header:
'AWS4-HMAC-SHA256 Credential=not valid/20180109/us-east-1/sts/aws4_request,
SignedHeaders=amz-sdk-invocation-id;amz-sdk-retry;host;user-agent;x-amz-date.
(Service: AWSSecurityTokenService; Status Code: 400; Error Code:
IncompleteSignature; Request ID: c4a8841d-f556-11e7-99f9-af005a829416):IncompleteSignature:
'valid/20180109/us-east-1/sts/aws4_request' not a valid key=value pair (missing equal-sign)
in Authorization header: 'AWS4-HMAC-SHA256 Credential=not valid/20180109/us-east-1/sts/aws4_request,
SignedHeaders=amz-sdk-invocation-id;amz-sdk-retry;host;user-agent;x-amz-date,
(Service: AWSSecurityTokenService; Status Code: 400; Error Code: IncompleteSignature;
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:209)
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProvider(S3AUtils.java:616)
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProviderSet(S3AUtils.java:520)
at org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(DefaultS3ClientFactory.java:52)
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:252)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:474)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
Caused by: com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
'valid/20180109/us-east-1/sts/aws4_request' not a valid key=value pair (missing equal-sign)
in Authorization header: 'AWS4-HMAC-SHA256 Credential=not valid/20180109/us-east-1/sts/aws4_request,
SignedHeaders=amz-sdk-invocation-id;amz-sdk-retry;host;user-agent;x-amz-date,
(Service: AWSSecurityTokenService; Status Code: 400; Error Code: IncompleteSignature;
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1638)
</pre></div></div>
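<p>Re-enter the two values, checking for stray whitespace or line breaks around them. A sketch with placeholder values:</p>
<div>
<div>
<pre class="source">&lt;property&gt;
  &lt;name&gt;fs.s3a.access.key&lt;/name&gt;
  &lt;!-- placeholder; the real value must contain no spaces or line breaks --&gt;
  &lt;value&gt;AKIAEXAMPLEACCESSKEY&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;fs.s3a.secret.key&lt;/name&gt;
  &lt;value&gt;example-secret-key&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>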
</div>
<div class="section">
<h3><a name="AccessDeniedException.2FInvalidClientTokenId:_.E2.80.9CThe_security_token_included_in_the_request_is_invalid.E2.80.9D"></a><a name="invalid_token"></a> <tt>AccessDeniedException/InvalidClientTokenId</tt>: &#x201c;The security token included in the request is invalid&#x201d;</h3>
<p>The credentials used to authenticate with the AWS Security Token Service are invalid.</p>
<div>
<div>
<pre class="source">[ERROR] Failures:
[ERROR] java.nio.file.AccessDeniedException: : Instantiate org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider on :
com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
The security token included in the request is invalid.
(Service: AWSSecurityTokenService; Status Code: 403; Error Code: InvalidClientTokenId;
Request ID: 74aa7f8a-f557-11e7-850c-33d05b3658d7):InvalidClientTokenId
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:215)
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProvider(S3AUtils.java:616)
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProviderSet(S3AUtils.java:520)
at org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(DefaultS3ClientFactory.java:52)
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:252)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:474)
Caused by: com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
The security token included in the request is invalid.
(Service: AWSSecurityTokenService; Status Code: 403; Error Code: InvalidClientTokenId;
Request ID: 74aa7f8a-f557-11e7-850c-33d05b3658d7)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1638)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1303)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1055)
... 25 more
</pre></div></div>
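<p>Regenerate or re-copy the key pair. One possible cause is supplying temporary credentials without their matching session token; a hedged sketch of the extra property needed in that case (placeholder value):</p>
<div>
<div>
<pre class="source">&lt;property&gt;
  &lt;name&gt;fs.s3a.session.token&lt;/name&gt;
  &lt;!-- placeholder; temporary access/secret keys are only valid together with their token --&gt;
  &lt;value&gt;example-session-token&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>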
</div>
<div class="section">
<h3><a name="AWSSecurityTokenServiceExceptiond:_.E2.80.9CMember_must_satisfy_regular_expression_pattern:_.5B.5Cw.2B.3D.2C..40-.5D.2A.E2.80.9D"></a><a name="invalid_session"></a> <tt>AWSSecurityTokenServiceExceptiond</tt>: &#x201c;Member must satisfy regular expression pattern: <tt>[\w+=,.@-]*</tt>&#x201d;</h3>
<p>The session name, as set in <tt>fs.s3a.assumed.role.session.name</tt>, must match the regular expression pattern <tt>[\w+=,.@-]*</tt>.</p>
<p>If the property is unset, it is extracted from the current username and then sanitized to match these constraints. If set explicitly, it must be valid.</p>
<div>
<div>
<pre class="source">org.apache.hadoop.fs.s3a.AWSBadRequestException:
Instantiate org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider on
com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
1 validation error detected: Value 'Session Names cannot Hava Spaces!' at 'roleSessionName'
failed to satisfy constraint: Member must satisfy regular expression pattern: [\w+=,.@-]*
(Service: AWSSecurityTokenService; Status Code: 400; Error Code: ValidationError;
Request ID: 7c437acb-f55d-11e7-9ad8-3b5e4f701c20):ValidationError:
1 validation error detected: Value 'Session Names cannot Hava Spaces!' at 'roleSessionName'
failed to satisfy constraint: Member must satisfy regular expression pattern: [\w+=,.@-]*
(Service: AWSSecurityTokenService; Status Code: 400; Error Code: ValidationError;
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:209)
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProvider(S3AUtils.java:616)
at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProviderSet(S3AUtils.java:520)
at org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(DefaultS3ClientFactory.java:52)
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:252)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:474)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
Caused by: com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
1 validation error detected: Value 'Session Names cannot Hava Spaces!' at 'roleSessionName'
failed to satisfy constraint:
Member must satisfy regular expression pattern: [\w+=,.@-]*
(Service: AWSSecurityTokenService; Status Code: 400; Error Code: ValidationError;
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1638)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1303)
</pre></div></div>
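<p>If setting the session name explicitly, use only word characters and <tt>+=,.@-</tt>, with no spaces. An arbitrary sample value:</p>
<div>
<div>
<pre class="source">&lt;property&gt;
  &lt;name&gt;fs.s3a.assumed.role.session.name&lt;/name&gt;
  &lt;!-- sample value matching [\w+=,.@-]* --&gt;
  &lt;value&gt;jenkins-nightly-run&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>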
</div>
<div class="section">
<h3><a name="java.nio.file.AccessDeniedException_within_a_FileSystem_API_call"></a><a name="access_denied"></a> <tt>java.nio.file.AccessDeniedException</tt> within a FileSystem API call</h3>
<p>If an operation fails with an <tt>AccessDeniedException</tt>, the role does not have permission for the S3 operation invoked during the call.</p>
<div>
<div>
<pre class="source">java.nio.file.AccessDeniedException: s3a://bucket/readonlyDir:
rename(s3a://bucket/readonlyDir, s3a://bucket/renameDest)
on s3a://bucket/readonlyDir:
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied
(Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 2805F2ABF5246BB1;
S3 Extended Request ID: iEXDVzjIyRbnkAc40MS8Sjv+uUQNvERRcqLsJsy9B0oyrjHLdkRKwJ/phFfA17Kjn483KSlyJNw=),
S3 Extended Request ID: iEXDVzjIyRbnkAc40MS8Sjv+uUQNvERRcqLsJsy9B0oyrjHLdkRKwJ/phFfA17Kjn483KSlyJNw=:AccessDenied
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:216)
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:143)
at org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:853)
...
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied
(Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 2805F2ABF5246BB1;
S3 Extended Request ID: iEXDVzjIyRbnkAc40MS8Sjv+uUQNvERRcqLsJsy9B0oyrjHLdkRKwJ/phFfA17Kjn483KSlyJNw=),
S3 Extended Request ID: iEXDVzjIyRbnkAc40MS8Sjv+uUQNvERRcqLsJsy9B0oyrjHLdkRKwJ/phFfA17Kjn483KSlyJNw=
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1638)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1303)
</pre></div></div>
<p>This is the policy restriction behaving as intended: the caller is trying to perform an action which is forbidden.</p>
<ol style="list-style-type: decimal">
<li>
<p>If a policy has been set in <tt>fs.s3a.assumed.role.policy</tt>, then it must declare <i>all</i> permissions which the caller is allowed to perform. The existing role policies act as an outer constraint on what the caller can perform, but they are not inherited. (A policy sketch follows this list.)</p>
</li>
<li>
<p>If the policy for a bucket is set up with complex rules on different paths, check the path for the operation.</p>
</li>
<li>
<p>The policy may have omitted one or more required actions. Make sure that read and write permissions are granted for every bucket and path to which data is written, and read permissions for every bucket and path read from.</p>
</li>
</ol>
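<p>A hedged sketch of a policy declared in <tt>fs.s3a.assumed.role.policy</tt>; the bucket name is a placeholder and the action list is illustrative rather than complete:</p>
<div>
<div>
<pre class="source">&lt;property&gt;
  &lt;name&gt;fs.s3a.assumed.role.policy&lt;/name&gt;
  &lt;value&gt;{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::example-bucket", "arn:aws:s3:::example-bucket/*"]
    }]
  }&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>
</div>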
<div class="section">
<h3><a name="AccessDeniedException_When_working_with_KMS-encrypted_data"></a><a name="access_denied_kms"></a> <tt>AccessDeniedException</tt> When working with KMS-encrypted data</h3>
<p>If the bucket is using SSE-KMS to encrypt data:</p>
<ol style="list-style-type: decimal">
<li>The caller must have the <tt>kms:Decrypt</tt> permission to read the data.</li>
<li>The caller needs <tt>kms:Decrypt</tt> and <tt>kms:GenerateDataKey</tt> to write data.</li>
</ol>
<p>Without permissions, the request fails <i>and there is no explicit message indicating that this is an encryption-key issue</i>.</p>
<p>This problem is most obvious when a write fails during a &#x201c;Writing Object&#x201d; operation.</p>
<p>If the client does have write access to the bucket, verify that the caller has <tt>kms:GenerateDataKey</tt> permissions for the encryption key in use.</p>
<div>
<div>
<pre class="source">java.nio.file.AccessDeniedException: test/testDTFileSystemClient: Writing Object on test/testDTFileSystemClient:
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403;
Error Code: AccessDenied; Request ID: E86544FF1D029857)
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:243)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:111)
at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:314)
at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:406)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:310)
at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:285)
at org.apache.hadoop.fs.s3a.WriteOperationHelper.retry(WriteOperationHelper.java:150)
at org.apache.hadoop.fs.s3a.WriteOperationHelper.putObject(WriteOperationHelper.java:460)
at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.lambda$putObject$0(S3ABlockOutputStream.java:438)
at org.apache.hadoop.util.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:219)
at org.apache.hadoop.util.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:219)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403;
Error Code: AccessDenied; Request ID: E86544FF1D029857)
</pre></div></div>
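<p>A hedged sketch of the KMS statement such a policy may need; the key ARN, account and region are placeholders, and a real deployment would combine this with the S3 permissions:</p>
<div>
<div>
<pre class="source">&lt;property&gt;
  &lt;name&gt;fs.s3a.assumed.role.policy&lt;/name&gt;
  &lt;value&gt;{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
      "Resource": "arn:aws:kms:us-west-1:123456789012:key/example-key-id"
    }]
  }&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>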
<p>Note: the ability to read encrypted data in the store does not guarantee that the caller can encrypt new data. It is a separate permission.</p></div>
<div class="section">
<h3><a name="AccessDeniedException_.2B_AmazonDynamoDBException"></a><a name="dynamodb_exception"></a> <tt>AccessDeniedException</tt> + <tt>AmazonDynamoDBException</tt></h3>
<div>
<div>
<pre class="source">java.nio.file.AccessDeniedException: bucket1:
com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException:
User: arn:aws:sts::980678866538:assumed-role/s3guard-test-role/test is not authorized to perform:
dynamodb:DescribeTable on resource: arn:aws:dynamodb:us-west-1:980678866538:table/bucket1
(Service: AmazonDynamoDBv2; Status Code: 400;
</pre></div></div>
<p>The caller is trying to access an S3 bucket which uses S3Guard, but the caller lacks the relevant DynamoDB access permissions.</p>
<p>The <tt>dynamodb:DescribeTable</tt> operation is the first one S3Guard uses to access the DynamoDB table, so it is often the first to fail. This can be a sign that the role has no permissions at all to access the table named in the exception, or just that this specific permission has been omitted.</p>
<p>If the role policy requested for the assumed role did not include any DynamoDB permissions, this is where every attempt to work with an S3Guard-protected bucket will fail. Check the value of <tt>fs.s3a.assumed.role.policy</tt>; a sketch of the permissions which may need adding follows.</p>
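<p>A hedged sketch of the DynamoDB statement to add; the table ARN is taken from the stack trace above and the action list is illustrative, not complete:</p>
<div>
<div>
<pre class="source">&lt;property&gt;
  &lt;name&gt;fs.s3a.assumed.role.policy&lt;/name&gt;
  &lt;value&gt;{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["dynamodb:DescribeTable", "dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
      "Resource": "arn:aws:dynamodb:us-west-1:980678866538:table/bucket1"
    }]
  }&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>
</div>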
<div class="section">
<h3><a name="Error_Unable_to_execute_HTTP_request"></a>Error <tt>Unable to execute HTTP request</tt></h3>
<p>This is a low-level networking error. Possible causes include:</p>
<ul>
<li>The endpoint set in <tt>fs.s3a.assumed.role.sts.endpoint</tt> is invalid.</li>
<li>There are underlying network problems.</li>
</ul>
<div>
<div>
<pre class="source">org.apache.hadoop.fs.s3a.AWSClientIOException: request session credentials:
com.amazonaws.SdkClientException:
Unable to execute HTTP request: null: Unable to execute HTTP request: null
at com.amazonaws.thirdparty.apache.http.impl.conn.DefaultRoutePlanner.determineRoute(DefaultRoutePlanner.java:88)
at com.amazonaws.thirdparty.apache.http.impl.client.InternalHttpClient.determineRoute(InternalHttpClient.java:124)
at com.amazonaws.thirdparty.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:183)
at com.amazonaws.thirdparty.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at com.amazonaws.thirdparty.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
</pre></div></div>
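<p>Verify the endpoint against the published list of AWS STS endpoints. A sketch of an explicit regional endpoint, assuming the us-west-1 region:</p>
<div>
<div>
<pre class="source">&lt;property&gt;
  &lt;name&gt;fs.s3a.assumed.role.sts.endpoint&lt;/name&gt;
  &lt;value&gt;sts.us-west-1.amazonaws.com&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>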
</div>
<div class="section">
<h3><a name="Error_.E2.80.9CCredential_should_be_scoped_to_a_valid_region.E2.80.9D"></a><a name="credential_scope"></a> Error &#x201c;Credential should be scoped to a valid region&#x201d;</h3>
<p>This is caused by a conflict between the values of <tt>fs.s3a.assumed.role.sts.endpoint</tt> and <tt>fs.s3a.assumed.role.sts.endpoint.region</tt>. There are two variants, distinguished by the region string quoted in the error message.</p>
<p>Variant 1: <tt>Credential should be scoped to a valid region, not 'us-west-1'</tt> (or other string)</p>
<div>
<div>
<pre class="source">java.nio.file.AccessDeniedException: : request session credentials:
com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
Credential should be scoped to a valid region, not 'us-west-1'.
(Service: AWSSecurityTokenService; Status Code: 403; Error Code: SignatureDoesNotMatch; Request ID: d9065cc4-e2b9-11e8-8b7b-f35cb8d7aea4):SignatureDoesNotMatch
</pre></div></div>
<p>One of the following is true (a configuration sketch follows the list):</p>
<ul>
<li>the value of <tt>fs.s3a.assumed.role.sts.endpoint.region</tt> is not a valid region</li>
<li>the value of <tt>fs.s3a.assumed.role.sts.endpoint.region</tt> is not the signing region of the endpoint set in <tt>fs.s3a.assumed.role.sts.endpoint</tt></li>
</ul>
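<p>A sketch of a consistent pair of settings, assuming the us-west-1 STS endpoint; the region must be the signing region of the chosen endpoint:</p>
<div>
<div>
<pre class="source">&lt;property&gt;
  &lt;name&gt;fs.s3a.assumed.role.sts.endpoint&lt;/name&gt;
  &lt;value&gt;sts.us-west-1.amazonaws.com&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;fs.s3a.assumed.role.sts.endpoint.region&lt;/name&gt;
  &lt;!-- must match the signing region of the endpoint above --&gt;
  &lt;value&gt;us-west-1&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>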
<p>Variant 2: <tt>Credential should be scoped to a valid region, not ''</tt></p>
<div>
<div>
<pre class="source">java.nio.file.AccessDeniedException: : request session credentials:
com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException:
Credential should be scoped to a valid region, not ''. (
Service: AWSSecurityTokenService; Status Code: 403; Error Code: SignatureDoesNotMatch;
Request ID: bd3e5121-e2ac-11e8-a566-c1a4d66b6a16):SignatureDoesNotMatch
</pre></div></div>
<p>This should be intercepted earlier: an endpoint has been specified but not a region.</p>
<p>There is special handling for the central endpoint <tt>sts.amazonaws.com</tt>: when it is declared as the value of <tt>fs.s3a.assumed.role.sts.endpoint</tt>, there is no need to declare a region, and whatever value the region option has is ignored.</p></div></div>
</div>
</div>
<div class="clear">
<hr/>
</div>
<div id="footer">
<div class="xright">
&#169; 2008-2021
Apache Software Foundation
- <a href="http://maven.apache.org/privacy-policy.html">Privacy Policy</a>.
Apache Maven, Maven, Apache, the Apache feather logo, and the Apache Maven project logos are trademarks of The Apache Software Foundation.
</div>
<div class="clear">
<hr/>
</div>
</div>
</body>
</html>