<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<!--
| Generated by Apache Maven Doxia at 2021-06-15
| Rendered using Apache Maven Stylus Skin 1.5
-->
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Apache Hadoop 3.3.1 &#x2013; interface </title>
<style type="text/css" media="all">
@import url("../css/maven-base.css");
@import url("../css/maven-theme.css");
@import url("../css/site.css");
</style>
<link rel="stylesheet" href="../css/print.css" type="text/css" media="print" />
<meta name="Date-Revision-yyyymmdd" content="20210615" />
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
</head>
<body class="composite">
<div id="banner">
<a href="http://hadoop.apache.org/" id="bannerLeft">
<img src="http://hadoop.apache.org/images/hadoop-logo.jpg" alt="" />
</a>
<a href="http://www.apache.org/" id="bannerRight">
<img src="http://www.apache.org/images/asf_logo_wide.png" alt="" />
</a>
<div class="clear">
<hr/>
</div>
</div>
<div id="breadcrumbs">
<div class="xleft">
<a href="http://www.apache.org/" class="externalLink">Apache</a>
&gt;
<a href="http://hadoop.apache.org/" class="externalLink">Hadoop</a>
&gt;
<a href="../../index.html">Apache Hadoop Project Dist POM</a>
&gt;
<a href="../index.html">Apache Hadoop 3.3.1</a>
&gt;
interface
</div>
<div class="xright"> <a href="http://wiki.apache.org/hadoop" class="externalLink">Wiki</a>
|
<a href="https://gitbox.apache.org/repos/asf/hadoop.git" class="externalLink">git</a>
|
<a href="http://hadoop.apache.org/" class="externalLink">Apache Hadoop</a>
&nbsp;| Last Published: 2021-06-15
&nbsp;| Version: 3.3.1
</div>
<div class="clear">
<hr/>
</div>
</div>
<div id="leftColumn">
<div id="navcolumn">
<h5>General</h5>
<ul>
<li class="none">
<a href="../../../index.html">Overview</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/SingleCluster.html">Single Node Setup</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/ClusterSetup.html">Cluster Setup</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/CommandsManual.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/FileSystemShell.html">FileSystem Shell</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/Compatibility.html">Compatibility Specification</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/DownstreamDev.html">Downstream Developer's Guide</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/AdminCompatibilityGuide.html">Admin Compatibility Guide</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/InterfaceClassification.html">Interface Classification</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/filesystem/index.html">FileSystem Specification</a>
</li>
</ul>
<h5>Common</h5>
<ul>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/CLIMiniCluster.html">CLI Mini Cluster</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/FairCallQueue.html">Fair Call Queue</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/NativeLibraries.html">Native Libraries</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/Superusers.html">Proxy User</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/RackAwareness.html">Rack Awareness</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/SecureMode.html">Secure Mode</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/ServiceLevelAuth.html">Service Level Authorization</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/HttpAuthentication.html">HTTP Authentication</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/CredentialProviderAPI.html">Credential Provider API</a>
</li>
<li class="none">
<a href="../../../hadoop-kms/index.html">Hadoop KMS</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/Tracing.html">Tracing</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/UnixShellGuide.html">Unix Shell Guide</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/registry/index.html">Registry</a>
</li>
</ul>
<h5>HDFS</h5>
<ul>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsDesign.html">Architecture</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html">User Guide</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HDFSCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html">NameNode HA With QJM</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html">NameNode HA With NFS</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/ObserverNameNode.html">Observer NameNode</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/Federation.html">Federation</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/ViewFs.html">ViewFs</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/ViewFsOverloadScheme.html">ViewFsOverloadScheme</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html">Snapshots</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsEditsViewer.html">Edits Viewer</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsImageViewer.html">Image Viewer</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html">Permissions and HDFS</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsQuotaAdminGuide.html">Quotas and HDFS</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/LibHdfs.html">libhdfs (C API)</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/WebHDFS.html">WebHDFS (REST API)</a>
</li>
<li class="none">
<a href="../../../hadoop-hdfs-httpfs/index.html">HttpFS</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/ShortCircuitLocalReads.html">Short Circuit Local Reads</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html">Centralized Cache Management</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html">NFS Gateway</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html">Rolling Upgrade</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/ExtendedAttributes.html">Extended Attributes</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html">Transparent Encryption</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsMultihoming.html">Multihoming</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html">Storage Policies</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/MemoryStorage.html">Memory Storage Support</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/SLGUserGuide.html">Synthetic Load Generator</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html">Erasure Coding</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HDFSDiskbalancer.html">Disk Balancer</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsUpgradeDomain.html">Upgrade Domain</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsDataNodeAdminGuide.html">DataNode Admin</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs-rbf/HDFSRouterFederation.html">Router Federation</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/HdfsProvidedStorage.html">Provided Storage</a>
</li>
</ul>
<h5>MapReduce</h5>
<ul>
<li class="none">
<a href="../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html">Tutorial</a>
</li>
<li class="none">
<a href="../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduce_Compatibility_Hadoop1_Hadoop2.html">Compatibility with 1.x</a>
</li>
<li class="none">
<a href="../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/EncryptedShuffle.html">Encrypted Shuffle</a>
</li>
<li class="none">
<a href="../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/PluggableShuffleAndPluggableSort.html">Pluggable Shuffle/Sort</a>
</li>
<li class="none">
<a href="../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/DistributedCacheDeploy.html">Distributed Cache Deploy</a>
</li>
<li class="none">
<a href="../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/SharedCacheSupport.html">Support for YARN Shared Cache</a>
</li>
</ul>
<h5>MapReduce REST APIs</h5>
<ul>
<li class="none">
<a href="../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredAppMasterRest.html">MR Application Master</a>
</li>
<li class="none">
<a href="../../../hadoop-mapreduce-client/hadoop-mapreduce-client-hs/HistoryServerRest.html">MR History Server</a>
</li>
</ul>
<h5>YARN</h5>
<ul>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/YARN.html">Architecture</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/YarnCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html">Capacity Scheduler</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/FairScheduler.html">Fair Scheduler</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html">ResourceManager Restart</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html">ResourceManager HA</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/ResourceModel.html">Resource Model</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/NodeLabel.html">Node Labels</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/NodeAttributes.html">Node Attributes</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/WebApplicationProxy.html">Web Application Proxy</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/TimelineServer.html">Timeline Server</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/TimelineServiceV2.html">Timeline Service V.2</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html">Writing YARN Applications</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/YarnApplicationSecurity.html">YARN Application Security</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/NodeManager.html">NodeManager</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/DockerContainers.html">Running Applications in Docker Containers</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/RuncContainers.html">Running Applications in runC Containers</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/NodeManagerCgroups.html">Using CGroups</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/SecureContainer.html">Secure Containers</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/ReservationSystem.html">Reservation System</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/GracefulDecommission.html">Graceful Decommission</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html">Opportunistic Containers</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/Federation.html">YARN Federation</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/SharedCache.html">Shared Cache</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/UsingGpus.html">Using GPU</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/UsingFPGA.html">Using FPGA</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/PlacementConstraints.html">Placement Constraints</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/YarnUI2.html">YARN UI2</a>
</li>
</ul>
<h5>YARN REST APIs</h5>
<ul>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/WebServicesIntro.html">Introduction</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html">Resource Manager</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/NodeManagerRest.html">Node Manager</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/TimelineServer.html#Timeline_Server_REST_API_v1">Timeline Server</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/TimelineServiceV2.html#Timeline_Service_v.2_REST_API">Timeline Service V.2</a>
</li>
</ul>
<h5>YARN Service</h5>
<ul>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/yarn-service/Overview.html">Overview</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/yarn-service/QuickStart.html">QuickStart</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/yarn-service/Concepts.html">Concepts</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/yarn-service/YarnServiceAPI.html">Yarn Service API</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/yarn-service/ServiceDiscovery.html">Service Discovery</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-site/yarn-service/SystemServices.html">System Services</a>
</li>
</ul>
<h5>Hadoop Compatible File Systems</h5>
<ul>
<li class="none">
<a href="../../../hadoop-aliyun/tools/hadoop-aliyun/index.html">Aliyun OSS</a>
</li>
<li class="none">
<a href="../../../hadoop-aws/tools/hadoop-aws/index.html">Amazon S3</a>
</li>
<li class="none">
<a href="../../../hadoop-azure/index.html">Azure Blob Storage</a>
</li>
<li class="none">
<a href="../../../hadoop-azure-datalake/index.html">Azure Data Lake Storage</a>
</li>
<li class="none">
<a href="../../../hadoop-openstack/index.html">OpenStack Swift</a>
</li>
<li class="none">
<a href="../../../hadoop-cos/cloud-storage/index.html">Tencent COS</a>
</li>
</ul>
<h5>Auth</h5>
<ul>
<li class="none">
<a href="../../../hadoop-auth/index.html">Overview</a>
</li>
<li class="none">
<a href="../../../hadoop-auth/Examples.html">Examples</a>
</li>
<li class="none">
<a href="../../../hadoop-auth/Configuration.html">Configuration</a>
</li>
<li class="none">
<a href="../../../hadoop-auth/BuildingIt.html">Building</a>
</li>
</ul>
<h5>Tools</h5>
<ul>
<li class="none">
<a href="../../../hadoop-streaming/HadoopStreaming.html">Hadoop Streaming</a>
</li>
<li class="none">
<a href="../../../hadoop-archives/HadoopArchives.html">Hadoop Archives</a>
</li>
<li class="none">
<a href="../../../hadoop-archive-logs/HadoopArchiveLogs.html">Hadoop Archive Logs</a>
</li>
<li class="none">
<a href="../../../hadoop-distcp/DistCp.html">DistCp</a>
</li>
<li class="none">
<a href="../../../hadoop-gridmix/GridMix.html">GridMix</a>
</li>
<li class="none">
<a href="../../../hadoop-rumen/Rumen.html">Rumen</a>
</li>
<li class="none">
<a href="../../../hadoop-resourceestimator/ResourceEstimator.html">Resource Estimator Service</a>
</li>
<li class="none">
<a href="../../../hadoop-sls/SchedulerLoadSimulator.html">Scheduler Load Simulator</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/Benchmarking.html">Hadoop Benchmarking</a>
</li>
<li class="none">
<a href="../../../hadoop-dynamometer/Dynamometer.html">Dynamometer</a>
</li>
</ul>
<h5>Reference</h5>
<ul>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/release/">Changelog and Release Notes</a>
</li>
<li class="none">
<a href="../../../api/index.html">Java API docs</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/UnixShellAPI.html">Unix Shell API</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/Metrics.html">Metrics</a>
</li>
</ul>
<h5>Configuration</h5>
<ul>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/core-default.xml">core-default.xml</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs/hdfs-default.xml">hdfs-default.xml</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-hdfs-rbf/hdfs-rbf-default.xml">hdfs-rbf-default.xml</a>
</li>
<li class="none">
<a href="../../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml">mapred-default.xml</a>
</li>
<li class="none">
<a href="../../../hadoop-yarn/hadoop-yarn-common/yarn-default.xml">yarn-default.xml</a>
</li>
<li class="none">
<a href="../../../hadoop-kms/kms-default.html">kms-default.xml</a>
</li>
<li class="none">
<a href="../../../hadoop-hdfs-httpfs/httpfs-default.html">httpfs-default.xml</a>
</li>
<li class="none">
<a href="../../../hadoop-project-dist/hadoop-common/DeprecatedProperties.html">Deprecated Properties</a>
</li>
</ul>
<a href="http://maven.apache.org/" title="Built by Maven" class="poweredBy">
<img alt="Built by Maven" src="../images/logos/maven-feather.png"/>
</a>
</div>
</div>
<div id="bodyColumn">
<div id="contentBox">
<!---
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<!-- ============================================================= -->
<!-- INTERFACE: MultipartUploader -->
<!-- ============================================================= -->
<h1>interface <tt>org.apache.hadoop.fs.MultipartUploader</tt></h1>
<ul>
<li><a href="#Invariants">Invariants</a></li>
<li><a href="#Concurrency">Concurrency</a></li>
<li><a href="#Model">Model</a></li>
<li><a href="#Asynchronous_API">Asynchronous API</a>
<ul>
<li><a href="#close.28.29">close()</a></li></ul></li>
<li><a href="#State_Changing_Operations">State Changing Operations</a>
<ul>
<li><a href="#CompletableFuture.3CUploadHandle.3E_startUpload.28Path.29">CompletableFuture&lt;UploadHandle&gt; startUpload(Path)</a></li>
<li><a href="#CompletableFuture.3CPartHandle.3E_putPart.28UploadHandle_uploadHandle.2C_int_partNumber.2C_Path_filePath.2C_InputStream_inputStream.2C_long_lengthInBytes.29">CompletableFuture&lt;PartHandle&gt; putPart(UploadHandle uploadHandle, int partNumber, Path filePath, InputStream inputStream, long lengthInBytes)</a></li>
<li><a href="#CompletableFuture.3CPathHandle.3E_complete.28UploadHandle_uploadId.2C_Path_filePath.2C_Map.3CInteger.2C_PartHandle.3E_handles.29">CompletableFuture&lt;PathHandle&gt; complete(UploadHandle uploadId, Path filePath, Map&lt;Integer, PartHandle&gt; handles)</a></li>
<li><a href="#CompletableFuture.3CVoid.3E_abort.28UploadHandle_uploadId.2C_Path_filePath.29">CompletableFuture&lt;Void&gt; abort(UploadHandle uploadId, Path filePath)</a></li>
<li><a href="#CompletableFuture.3CInteger.3E_abortUploadsUnderPath.28Path_path.29">CompletableFuture&lt;Integer&gt; abortUploadsUnderPath(Path path)</a></li></ul></li></ul>
<p>The <tt>MultipartUploader</tt> can upload a file using multiple parts to Hadoop-supported filesystems. The benefit of a multipart upload is that the file can be uploaded from multiple clients or processes in parallel, and the results will not be visible to other clients until the <tt>complete</tt> function is called.</p>
<p>When implemented by an object store, uploaded data may incur storage charges even before it is visible in the filesystem. Users of this API must be diligent and always make a best-effort attempt to complete or abort each upload. The <tt>abortUploadsUnderPath(path)</tt> operation can help here.</p>
<div class="section">
<h2><a name="Invariants"></a>Invariants</h2>
<p>All the requirements of a valid <tt>MultipartUploader</tt> are considered implicit preconditions and postconditions:</p>
<p>The operations of a single multipart upload may take place across different instances of a multipart uploader, across different processes and hosts. It is therefore a requirement that:</p>
<ol style="list-style-type: decimal">
<li>
<p>All state needed to upload a part, complete an upload or abort an upload must be contained within or retrievable from an upload handle.</p>
</li>
<li>
<p>That handle MUST be serializable; it MUST be deserializable in different processes executing the exact same version of Hadoop.</p>
</li>
<li>
<p>Different hosts/processes MAY upload different parts, sequentially or simultaneously. The order in which they are uploaded to the filesystem MUST NOT constrain the order in which the data is stored in the final file.</p>
</li>
<li>
<p>An upload MAY be completed on a different instance than any which uploaded parts.</p>
</li>
<li>
<p>The output of an upload MUST NOT be visible at the final destination until the upload completes.</p>
</li>
<li>
<p>It is not an error if a single multipart uploader instance initiates or completes multiple uploads to the same destination sequentially, irrespective of whether or not the store supports concurrent uploads.</p>
</li>
</ol></div>
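<p>Invariant 2 can be illustrated with a minimal sketch: an opaque, non-empty byte-sequence handle that round-trips through serialization unchanged. The <tt>OpaqueHandle</tt> class below is hypothetical and not part of the Hadoop API; it only models the contract that a handle serialized on one host can be reconstructed, byte for byte, on another.</p>

```java
import java.util.Arrays;

// Hypothetical opaque handle, illustrating invariant 2: all state needed
// to continue an upload travels as a serializable, non-empty byte sequence.
class OpaqueHandle {
    private final byte[] bytes;

    OpaqueHandle(byte[] bytes) {
        if (bytes.length == 0) {
            throw new IllegalArgumentException("handle must be non-empty");
        }
        this.bytes = bytes.clone();  // defensive copy: callers treat it as opaque
    }

    // Serialize to bytes on hostA...
    byte[] toByteArray() {
        return bytes.clone();
    }

    // ...and reconstruct the identical handle on hostB.
    static OpaqueHandle fromByteArray(byte[] serialized) {
        return new OpaqueHandle(serialized);
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof OpaqueHandle
            && Arrays.equals(bytes, ((OpaqueHandle) o).bytes);
    }

    @Override
    public int hashCode() {
        return Arrays.hashCode(bytes);
    }
}
```

<p>Because the handle is just bytes, clients never interpret its contents; only the implementation which issued it needs to understand them.</p>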
<div class="section">
<h2><a name="Concurrency"></a>Concurrency</h2>
<p>Multiple processes may upload parts of a multipart upload simultaneously.</p>
<p>If a call is made to <tt>startUpload(path)</tt> for a destination where an active upload is in progress, implementations MUST perform one of the following two operations:</p>
<ul>
<li>Reject the call as a duplicate.</li>
<li>Permit both to proceed, with the final output of the file being that of <i>exactly one of the two uploads</i>.</li>
</ul>
<p>Which upload succeeds is undefined. Users MUST NOT expect consistent behavior across filesystems, across filesystem instances, or even across different requests.</p>
<p>If a multipart upload is completed or aborted while a part upload is in progress, the in-progress upload, if it has not completed, MUST NOT be included in the final file, in whole or in part. Implementations SHOULD raise an error in the <tt>putPart()</tt> operation.</p>
<h2><a name="Serialization_Compatibility"></a>Serialization Compatibility</h2>
<p>Users MUST NOT expect that serialized <tt>PathHandle</tt> versions are compatible across:</p>
<ul>
<li>different multipart uploader implementations.</li>
<li>different versions of the same implementation.</li>
</ul>
<p>That is: all clients MUST use the exact same version of Hadoop.</p></div>
<div class="section">
<h2><a name="Model"></a>Model</h2>
<p>A FileSystem/FileContext which supports Multipart Uploads extends the existing model <tt>(Directories, Files, Symlinks)</tt> to one of <tt>(Directories, Files, Symlinks, Uploads)</tt>, with <tt>Uploads</tt> of type <tt>Map[UploadHandle -&gt; Map[PartHandle -&gt; UploadPart]]</tt>.</p>
<p>The <tt>Uploads</tt> element of the state tuple is a map of all active uploads.</p>
<div>
<div>
<pre class="source">Uploads: Map[UploadHandle -&gt; Map[PartHandle -&gt; UploadPart]]
</pre></div></div>
<p>An UploadHandle is a non-empty list of bytes.</p>
<div>
<div>
<pre class="source">UploadHandle: List[byte]
len(UploadHandle) &gt; 0
</pre></div></div>
<p>Clients <i>MUST</i> treat this as opaque. What is core to this feature's design is that the handle is valid across clients: the handle may be serialized on host <tt>hostA</tt>, deserialized on <tt>hostB</tt>, and still used to extend or complete the upload.</p>
<div>
<div>
<pre class="source">UploadPart = (Path: path, parts: Map[PartHandle -&gt; byte[]])
</pre></div></div>
<p>Similarly, the <tt>PartHandle</tt> type is also a non-empty list of opaque bytes, again, marshallable between hosts.</p>
<div>
<div>
<pre class="source">PartHandle: List[byte]
</pre></div></div>
<p>It is implicit that each <tt>UploadHandle</tt> in <tt>FS.Uploads</tt> is unique. Similarly, each <tt>PartHandle</tt> in the map of <tt>[PartHandle -&gt; UploadPart]</tt> must also be unique.</p>
<ol style="list-style-type: decimal">
<li>There is no requirement that Part Handles are unique across uploads.</li>
<li>There is no requirement that Upload Handles are unique over time. However, if Upload Handles are rapidly recycled, there is a risk that the nominally idempotent operation <tt>abort(FS, uploadHandle)</tt> could unintentionally cancel a successor operation which used the same Upload Handle.</li>
</ol></div>
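<p>The state model above can be sketched as a toy in-memory structure. The <tt>UploadsModel</tt> class below is purely illustrative (handles are modelled as strings rather than opaque byte lists, and there is no persistence or concurrency control); it shows the shape of <tt>Uploads: Map[UploadHandle -&gt; Map[PartHandle -&gt; UploadPart]]</tt> and the uniqueness of handles within it. It is not how any Hadoop filesystem implements uploads.</p>

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Toy in-memory rendering of the model's state element:
//   Uploads: Map[UploadHandle -> Map[PartHandle -> UploadPart]]
// A sketch of the specification's state, not of any real store.
class UploadsModel {
    private final Map<String, Map<String, byte[]>> uploads = new HashMap<>();
    private final AtomicLong counter = new AtomicLong();

    // startUpload: record a new active upload under a fresh, unique handle.
    String startUpload() {
        String handle = "upload-" + counter.incrementAndGet();
        uploads.put(handle, new HashMap<>());
        return handle;
    }

    // putPart: store part data under a fresh part handle within the upload.
    String putPart(String uploadHandle, byte[] data) {
        Map<String, byte[]> parts = uploads.get(uploadHandle);
        if (parts == null) {
            throw new IllegalStateException("unknown upload: " + uploadHandle);
        }
        String partHandle = "part-" + counter.incrementAndGet();
        parts.put(partHandle, data.clone());
        return partHandle;
    }

    boolean isActive(String uploadHandle) {
        return uploads.containsKey(uploadHandle);
    }

    // abort: afterwards the upload handle is no longer known.
    void abort(String uploadHandle) {
        uploads.remove(uploadHandle);
    }
}
```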
<div class="section">
<h2><a name="Asynchronous_API"></a>Asynchronous API</h2>
<p>All operations return <tt>CompletableFuture&lt;&gt;</tt> types which must be subsequently evaluated to get their return values.</p>
<ol style="list-style-type: decimal">
<li>The execution of the operation MAY be a blocking operation on the calling thread.</li>
<li>If not, it SHALL be executed in a separate thread and MUST complete by the time the future evaluation returns.</li>
<li>Some or all preconditions MAY be evaluated at the time of initial invocation.</li>
<li>Any preconditions not evaluated at that time MUST be evaluated during the execution of the future.</li>
</ol>
<p>What this means is that when an implementation interacts with a fast filesystem/store, all preconditions, including the existence of files, MAY be evaluated early, whereas an implementation interacting with a remote object store whose probes are slow MAY verify preconditions in the asynchronous phase, especially those which interact with the remote store.</p>
<p>Java CompletableFutures do not work well with checked exceptions. The Hadoop codebase is still evolving the details of the exception handling here, as more use is made of the asynchronous APIs. Assume that any precondition failure which declares that an <tt>IOException</tt> MUST be raised may have that exception wrapped in a <tt>RuntimeException</tt> of some form if evaluated in the future; this also holds for any other <tt>IOException</tt> raised during the operations.</p>
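<p>This wrapping can be demonstrated with plain <tt>CompletableFuture</tt> code, independent of Hadoop: a checked <tt>IOException</tt> raised in the asynchronous phase surfaces as an unchecked <tt>CompletionException</tt> when the future is evaluated, with the original exception on the cause chain. The failing supplier below is a stand-in for a slow precondition probe, not a real uploader call.</p>

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;

// A checked IOException cannot cross a Supplier lambda directly, so the
// asynchronous phase wraps it; evaluating the future rethrows it as an
// unchecked CompletionException with the IOException on the cause chain.
class AsyncFailureDemo {
    static CompletableFuture<Object> failingPrecondition() {
        return CompletableFuture.supplyAsync(() -> {
            // Stand-in for a slow precondition probe against a remote store.
            throw new UncheckedIOException(
                new IOException("destination is a directory"));
        });
    }

    // Evaluate the future and report the class of the root cause.
    static Class<?> rootCauseOfFailure() {
        try {
            failingPrecondition().join();
            return null;  // not reached: join() rethrows the failure
        } catch (CompletionException e) {
            return e.getCause().getCause().getClass();
        }
    }
}
```

<p>Callers therefore need to walk the cause chain to recover the underlying <tt>IOException</tt>, whichever phase raised it.</p>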
<div class="section">
<h3><a name="close.28.29"></a><tt>close()</tt></h3>
<p>Applications MUST call <tt>close()</tt> after using an uploader, so that it may release resources, update statistics, etc.</p></div></div>
<div class="section">
<h2><a name="State_Changing_Operations"></a>State Changing Operations</h2>
<div class="section">
<h3><a name="CompletableFuture.3CUploadHandle.3E_startUpload.28Path.29"></a><tt>CompletableFuture&lt;UploadHandle&gt; startUpload(Path)</tt></h3>
<p>Starts a Multipart Upload, ultimately returning an <tt>UploadHandle</tt> for use in subsequent operations.</p>
<div class="section">
<h4><a name="Preconditions"></a>Preconditions</h4>
<div>
<div>
<pre class="source">if path == &quot;/&quot; : raise IOException
if exists(FS, path) and not isFile(FS, path) raise PathIsDirectoryException, IOException
</pre></div></div>
<p>If a filesystem does not support concurrent uploads to a destination, then the following precondition is added:</p>
<div>
<div>
<pre class="source">if path in values(FS.Uploads) raise PathExistsException, IOException
</pre></div></div>
</div>
<div class="section">
<h4><a name="Postconditions"></a>Postconditions</h4>
<p>Once the initialization operation completes, the filesystem state is updated with a new active upload under a new handle; this handle is returned to the caller.</p>
<div>
<div>
<pre class="source">handle' = UploadHandle where not handle' in keys(FS.Uploads)
FS' = FS where FS'.Uploads(handle') == {}
result = handle'
</pre></div></div>
</div></div>
<div class="section">
<h3><a name="CompletableFuture.3CPartHandle.3E_putPart.28UploadHandle_uploadHandle.2C_int_partNumber.2C_Path_filePath.2C_InputStream_inputStream.2C_long_lengthInBytes.29"></a><tt>CompletableFuture&lt;PartHandle&gt; putPart(UploadHandle uploadHandle, int partNumber, Path filePath, InputStream inputStream, long lengthInBytes)</tt></h3>
<p>Upload a part for the specified multipart upload; the future eventually returns an opaque part handle representing this part of the upload.</p>
<div class="section">
<h4><a name="Preconditions"></a>Preconditions</h4>
<div>
<div>
<pre class="source">uploadHandle in keys(FS.Uploads)
partNumber &gt;= 1
lengthInBytes &gt;= 0
len(inputStream) &gt;= lengthInBytes
</pre></div></div>
</div>
<div class="section">
<h4><a name="Postconditions"></a>Postconditions</h4>
<div>
<div>
<pre class="source">data' = inputStream(0..lengthInBytes)
partHandle' = byte[] where not partHandle' in keys(FS.uploads(uploadHandle).parts)
FS' = FS where FS'.uploads(uploadHandle).parts(partHandle') == data'
result = partHandle'
</pre></div></div>
<p>The data is stored in the filesystem, pending completion. It MUST NOT be visible at the destination path. It MAY be visible in a temporary path somewhere in the file system; This is implementation-specific and MUST NOT be relied upon.</p></div></div>
<div class="section">
<h3><a name="CompletableFuture.3CPathHandle.3E_complete.28UploadHandle_uploadId.2C_Path_filePath.2C_Map.3CInteger.2C_PartHandle.3E_handles.29"></a><tt>CompletableFuture&lt;PathHandle&gt; complete(UploadHandle uploadId, Path filePath, Map&lt;Integer, PartHandle&gt; handles)</tt></h3>
<p>Complete the multipart upload.</p>
<p>A filesystem MAY enforce a minimum size for each part, excluding the last part uploaded.</p>
<p>If a part is outside this range, an <tt>IOException</tt> MUST be raised.</p>
<div class="section">
<h4><a name="Preconditions"></a>Preconditions</h4>
<div>
<div>
<pre class="source">uploadHandle in keys(FS.Uploads) else raise FileNotFoundException
FS.Uploads(uploadHandle).path == path
if exists(FS, path) and not isFile(FS, path) raise PathIsDirectoryException, IOException
parts.size() &gt; 0
forall k in keys(parts): k &gt; 0
forall k in keys(parts):
not exists(k2 in keys(parts)) where (parts[k] == parts[k2])
</pre></div></div>
<p>All keys MUST be greater than zero, and there MUST NOT be any duplicate references to the same part handle. These validations MAY be performed at any point during the operation. After a failure, there is no guarantee that a <tt>complete()</tt> call for this upload with a valid map of parts will complete. Callers SHOULD invoke <tt>abort()</tt> after any such failure to ensure cleanup.</p>
<p>If <tt>putPart()</tt> operations for this <tt>uploadHandle</tt> were performed but their <tt>PartHandle</tt>s were not included in this request, the omitted parts SHALL NOT be a part of the resulting file.</p>
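<p>These validations can be sketched in a few lines of Java, applied to the <tt>Map&lt;Integer, PartHandle&gt;</tt> argument of <tt>complete()</tt>. Part handles are modelled here as byte arrays; the class and method names are illustrative and not part of the Hadoop API.</p>

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of the complete() precondition checks: the part map must be
// non-empty, every key must be greater than zero, and no two keys may
// reference the same part handle.
class CompleteValidation {
    static void validate(Map<Integer, byte[]> parts) {
        if (parts.isEmpty()) {
            throw new IllegalArgumentException("empty part map");
        }
        Set<String> seen = new HashSet<>();
        for (Map.Entry<Integer, byte[]> e : parts.entrySet()) {
            if (e.getKey() <= 0) {
                throw new IllegalArgumentException(
                    "part number must be greater than zero: " + e.getKey());
            }
            // Detect duplicate references to the same part handle by content.
            if (!seen.add(Arrays.toString(e.getValue()))) {
                throw new IllegalArgumentException(
                    "duplicate part handle at key " + e.getKey());
            }
        }
    }

    static boolean isValid(Map<Integer, byte[]> parts) {
        try {
            validate(parts);
            return true;
        } catch (IllegalArgumentException e) {
            return false;
        }
    }
}
```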
<p>The MultipartUploader MUST clean up any such outstanding entries.</p>
<p>In the case of backing stores that support directories (local filesystem, HDFS, etc.), if, at the point of completion, there is now a directory at the destination, then a <tt>PathIsDirectoryException</tt> or other <tt>IOException</tt> MUST be thrown.</p></div>
<div class="section">
<h4><a name="Postconditions"></a>Postconditions</h4>
<div>
<div>
<pre class="source">UploadData' == ordered concatenation of all data in the map of parts, ordered by key
exists(FS', path') and result = PathHandle(path')
FS' = FS where FS.Files(path) == UploadData' and not uploadHandle in keys(FS'.uploads)
</pre></div></div>
<p>The <tt>PathHandle</tt> is returned by the complete operation so subsequent operations will be able to identify that the data has not changed in the meantime.</p>
<p>The order of parts in the uploaded file is that of the natural order of parts in the map: part 1 is ahead of part 2, etc.</p></div></div>
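<p>The ordering postcondition can be sketched in a few lines: whatever order the parts were uploaded in, the completed file is the concatenation of their data in ascending key order. The helper below is illustrative only, with part data standing in for the bytes referenced by each <tt>PartHandle</tt>.</p>

```java
import java.io.ByteArrayOutputStream;
import java.util.Map;
import java.util.TreeMap;

// Sketch of the complete() postcondition: the resulting file is the
// concatenation of all part data, ordered by the parts' integer keys.
class PartOrdering {
    static byte[] concatenate(Map<Integer, byte[]> parts) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // TreeMap iterates its keys in natural (ascending) order,
        // regardless of the order in which parts were inserted.
        for (byte[] data : new TreeMap<>(parts).values()) {
            out.write(data, 0, data.length);
        }
        return out.toByteArray();
    }
}
```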
<div class="section">
<h3><a name="CompletableFuture.3CVoid.3E_abort.28UploadHandle_uploadId.2C_Path_filePath.29"></a><tt>CompletableFuture&lt;Void&gt; abort(UploadHandle uploadId, Path filePath)</tt></h3>
<p>Abort a multipart upload. The handle becomes invalid and not subject to reuse.</p>
<div class="section">
<h4><a name="Preconditions"></a>Preconditions</h4>
<div>
<div>
<pre class="source">uploadHandle in keys(FS.Uploads) else raise FileNotFoundException
</pre></div></div>
</div>
<div class="section">
<h4><a name="Postconditions"></a>Postconditions</h4>
<p>The upload handle is no longer known.</p>
<div>
<div>
<pre class="source">FS' = FS where not uploadHandle in keys(FS'.uploads)
</pre></div></div>
<p>A subsequent call to <tt>abort()</tt> with the same handle will fail, unless the handle has been recycled.</p></div></div>
<div class="section">
<h3><a name="CompletableFuture.3CInteger.3E_abortUploadsUnderPath.28Path_path.29"></a><tt>CompletableFuture&lt;Integer&gt; abortUploadsUnderPath(Path path)</tt></h3>
<p>Perform a best-effort cleanup of all uploads under a path.</p>
<p>Returns a future which resolves to:</p>
<div>
<div>
<pre class="source">-1 if unsupported
&gt;= 0 if supported
</pre></div></div>
<p>Because it is best-effort, a strict postcondition isn&#x2019;t possible. The ideal postcondition is that all uploads under the path are aborted, and the count is the number of uploads aborted:</p>
<div>
<div>
<pre class="source">FS'.uploads = forall upload in FS.uploads:
    not isDescendant(FS, path, upload.path)
return len(forall upload in FS.uploads:
    isDescendant(FS, path, upload.path))
</pre></div></div></div></div>
</div>
</div>
<div class="clear">
<hr/>
</div>
<div id="footer">
<div class="xright">
&#169; 2008-2021
Apache Software Foundation
- <a href="http://maven.apache.org/privacy-policy.html">Privacy Policy</a>.
Apache Maven, Maven, Apache, the Apache feather logo, and the Apache Maven project logos are trademarks of The Apache Software Foundation.
</div>
<div class="clear">
<hr/>
</div>
</div>
</body>
</html>