<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<!--
| Generated by Apache Maven Doxia at 2021-06-15
| Rendered using Apache Maven Stylus Skin 1.5
-->
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Apache Hadoop 3.3.1 &#x2013; Hadoop: Capacity Scheduler</title>
<style type="text/css" media="all">
@import url("./css/maven-base.css");
@import url("./css/maven-theme.css");
@import url("./css/site.css");
</style>
<link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
<meta name="Date-Revision-yyyymmdd" content="20210615" />
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
</head>
<body class="composite">
<div id="banner">
<a href="http://hadoop.apache.org/" id="bannerLeft">
<img src="http://hadoop.apache.org/images/hadoop-logo.jpg" alt="" />
</a>
<a href="http://www.apache.org/" id="bannerRight">
<img src="http://www.apache.org/images/asf_logo_wide.png" alt="" />
</a>
<div class="clear">
<hr/>
</div>
</div>
<div id="breadcrumbs">
<div class="xleft">
<a href="http://www.apache.org/" class="externalLink">Apache</a>
&gt;
<a href="http://hadoop.apache.org/" class="externalLink">Hadoop</a>
&gt;
<a href="../index.html">Apache Hadoop YARN</a>
&gt;
<a href="index.html">Apache Hadoop 3.3.1</a>
&gt;
Hadoop: Capacity Scheduler
</div>
<div class="xright"> <a href="http://wiki.apache.org/hadoop" class="externalLink">Wiki</a>
|
<a href="https://gitbox.apache.org/repos/asf/hadoop.git" class="externalLink">git</a>
|
<a href="http://hadoop.apache.org/" class="externalLink">Apache Hadoop</a>
&nbsp;| Last Published: 2021-06-15
&nbsp;| Version: 3.3.1
</div>
<div class="clear">
<hr/>
</div>
</div>
<div id="leftColumn">
<div id="navcolumn">
<h5>General</h5>
<ul>
<li class="none">
<a href="../../index.html">Overview</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/SingleCluster.html">Single Node Setup</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/ClusterSetup.html">Cluster Setup</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/CommandsManual.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/FileSystemShell.html">FileSystem Shell</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Compatibility.html">Compatibility Specification</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/DownstreamDev.html">Downstream Developer's Guide</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/AdminCompatibilityGuide.html">Admin Compatibility Guide</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/InterfaceClassification.html">Interface Classification</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/filesystem/index.html">FileSystem Specification</a>
</li>
</ul>
<h5>Common</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/CLIMiniCluster.html">CLI Mini Cluster</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/FairCallQueue.html">Fair Call Queue</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/NativeLibraries.html">Native Libraries</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Superusers.html">Proxy User</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/RackAwareness.html">Rack Awareness</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/SecureMode.html">Secure Mode</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/ServiceLevelAuth.html">Service Level Authorization</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/HttpAuthentication.html">HTTP Authentication</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/CredentialProviderAPI.html">Credential Provider API</a>
</li>
<li class="none">
<a href="../../hadoop-kms/index.html">Hadoop KMS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Tracing.html">Tracing</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/UnixShellGuide.html">Unix Shell Guide</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/registry/index.html">Registry</a>
</li>
</ul>
<h5>HDFS</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsDesign.html">Architecture</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html">User Guide</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html">NameNode HA With QJM</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html">NameNode HA With NFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ObserverNameNode.html">Observer NameNode</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/Federation.html">Federation</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ViewFs.html">ViewFs</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ViewFsOverloadScheme.html">ViewFsOverloadScheme</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html">Snapshots</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsEditsViewer.html">Edits Viewer</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsImageViewer.html">Image Viewer</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html">Permissions and HDFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsQuotaAdminGuide.html">Quotas and HDFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/LibHdfs.html">libhdfs (C API)</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/WebHDFS.html">WebHDFS (REST API)</a>
</li>
<li class="none">
<a href="../../hadoop-hdfs-httpfs/index.html">HttpFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ShortCircuitLocalReads.html">Short Circuit Local Reads</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html">Centralized Cache Management</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html">NFS Gateway</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html">Rolling Upgrade</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ExtendedAttributes.html">Extended Attributes</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html">Transparent Encryption</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsMultihoming.html">Multihoming</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html">Storage Policies</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/MemoryStorage.html">Memory Storage Support</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/SLGUserGuide.html">Synthetic Load Generator</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html">Erasure Coding</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSDiskbalancer.html">Disk Balancer</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsUpgradeDomain.html">Upgrade Domain</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsDataNodeAdminGuide.html">DataNode Admin</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs-rbf/HDFSRouterFederation.html">Router Federation</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsProvidedStorage.html">Provided Storage</a>
</li>
</ul>
<h5>MapReduce</h5>
<ul>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html">Tutorial</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduce_Compatibility_Hadoop1_Hadoop2.html">Compatibility with 1.x</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/EncryptedShuffle.html">Encrypted Shuffle</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/PluggableShuffleAndPluggableSort.html">Pluggable Shuffle/Sort</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/DistributedCacheDeploy.html">Distributed Cache Deploy</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/SharedCacheSupport.html">Support for YARN Shared Cache</a>
</li>
</ul>
<h5>MapReduce REST APIs</h5>
<ul>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredAppMasterRest.html">MR Application Master</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-hs/HistoryServerRest.html">MR History Server</a>
</li>
</ul>
<h5>YARN</h5>
<ul>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YARN.html">Architecture</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YarnCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html">Capacity Scheduler</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/FairScheduler.html">Fair Scheduler</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html">ResourceManager Restart</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html">ResourceManager HA</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceModel.html">Resource Model</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeLabel.html">Node Labels</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeAttributes.html">Node Attributes</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/WebApplicationProxy.html">Web Application Proxy</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServer.html">Timeline Server</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServiceV2.html">Timeline Service V.2</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html">Writing YARN Applications</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YarnApplicationSecurity.html">YARN Application Security</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeManager.html">NodeManager</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/DockerContainers.html">Running Applications in Docker Containers</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/RuncContainers.html">Running Applications in runC Containers</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeManagerCgroups.html">Using CGroups</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/SecureContainer.html">Secure Containers</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ReservationSystem.html">Reservation System</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/GracefulDecommission.html">Graceful Decommission</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html">Opportunistic Containers</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/Federation.html">YARN Federation</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/SharedCache.html">Shared Cache</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/UsingGpus.html">Using GPU</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/UsingFPGA.html">Using FPGA</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/PlacementConstraints.html">Placement Constraints</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YarnUI2.html">YARN UI2</a>
</li>
</ul>
<h5>YARN REST APIs</h5>
<ul>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/WebServicesIntro.html">Introduction</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html">Resource Manager</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeManagerRest.html">Node Manager</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServer.html#Timeline_Server_REST_API_v1">Timeline Server</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServiceV2.html#Timeline_Service_v.2_REST_API">Timeline Service V.2</a>
</li>
</ul>
<h5>YARN Service</h5>
<ul>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/Overview.html">Overview</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/QuickStart.html">QuickStart</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/Concepts.html">Concepts</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/YarnServiceAPI.html">Yarn Service API</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/ServiceDiscovery.html">Service Discovery</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/SystemServices.html">System Services</a>
</li>
</ul>
<h5>Hadoop Compatible File Systems</h5>
<ul>
<li class="none">
<a href="../../hadoop-aliyun/tools/hadoop-aliyun/index.html">Aliyun OSS</a>
</li>
<li class="none">
<a href="../../hadoop-aws/tools/hadoop-aws/index.html">Amazon S3</a>
</li>
<li class="none">
<a href="../../hadoop-azure/index.html">Azure Blob Storage</a>
</li>
<li class="none">
<a href="../../hadoop-azure-datalake/index.html">Azure Data Lake Storage</a>
</li>
<li class="none">
<a href="../../hadoop-openstack/index.html">OpenStack Swift</a>
</li>
<li class="none">
<a href="../../hadoop-cos/cloud-storage/index.html">Tencent COS</a>
</li>
</ul>
<h5>Auth</h5>
<ul>
<li class="none">
<a href="../../hadoop-auth/index.html">Overview</a>
</li>
<li class="none">
<a href="../../hadoop-auth/Examples.html">Examples</a>
</li>
<li class="none">
<a href="../../hadoop-auth/Configuration.html">Configuration</a>
</li>
<li class="none">
<a href="../../hadoop-auth/BuildingIt.html">Building</a>
</li>
</ul>
<h5>Tools</h5>
<ul>
<li class="none">
<a href="../../hadoop-streaming/HadoopStreaming.html">Hadoop Streaming</a>
</li>
<li class="none">
<a href="../../hadoop-archives/HadoopArchives.html">Hadoop Archives</a>
</li>
<li class="none">
<a href="../../hadoop-archive-logs/HadoopArchiveLogs.html">Hadoop Archive Logs</a>
</li>
<li class="none">
<a href="../../hadoop-distcp/DistCp.html">DistCp</a>
</li>
<li class="none">
<a href="../../hadoop-gridmix/GridMix.html">GridMix</a>
</li>
<li class="none">
<a href="../../hadoop-rumen/Rumen.html">Rumen</a>
</li>
<li class="none">
<a href="../../hadoop-resourceestimator/ResourceEstimator.html">Resource Estimator Service</a>
</li>
<li class="none">
<a href="../../hadoop-sls/SchedulerLoadSimulator.html">Scheduler Load Simulator</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Benchmarking.html">Hadoop Benchmarking</a>
</li>
<li class="none">
<a href="../../hadoop-dynamometer/Dynamometer.html">Dynamometer</a>
</li>
</ul>
<h5>Reference</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/release/">Changelog and Release Notes</a>
</li>
<li class="none">
<a href="../../api/index.html">Java API docs</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/UnixShellAPI.html">Unix Shell API</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Metrics.html">Metrics</a>
</li>
</ul>
<h5>Configuration</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/core-default.xml">core-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/hdfs-default.xml">hdfs-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs-rbf/hdfs-rbf-default.xml">hdfs-rbf-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml">mapred-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-common/yarn-default.xml">yarn-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-kms/kms-default.html">kms-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-hdfs-httpfs/httpfs-default.html">httpfs-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/DeprecatedProperties.html">Deprecated Properties</a>
</li>
</ul>
<a href="http://maven.apache.org/" title="Built by Maven" class="poweredBy">
<img alt="Built by Maven" src="./images/logos/maven-feather.png"/>
</a>
</div>
</div>
<div id="bodyColumn">
<div id="contentBox">
<!---
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<h1>Hadoop: Capacity Scheduler</h1>
<ul>
<li><a href="#Purpose">Purpose</a></li>
<li><a href="#Overview">Overview</a></li>
<li><a href="#Features">Features</a></li>
<li><a href="#Configuration">Configuration</a>
<ul>
<li><a href="#Setting_up_ResourceManager_to_use_CapacityScheduler">Setting up ResourceManager to use CapacityScheduler</a></li>
<li><a href="#Setting_up_queues">Setting up queues</a></li>
<li><a href="#Queue_Properties">Queue Properties</a></li>
<li><a href="#Setup_for_application_priority.">Setup for application priority.</a></li>
<li><a href="#Capacity_Scheduler_container_preemption">Capacity Scheduler container preemption</a></li>
<li><a href="#Reservation_Properties">Reservation Properties</a></li>
<li><a href="#Configuring_ReservationSystem_with_CapacityScheduler">Configuring ReservationSystem with CapacityScheduler</a></li>
<li><a href="#Dynamic_Auto-Creation_and_Management_of_Leaf_Queues">Dynamic Auto-Creation and Management of Leaf Queues</a></li>
<li><a href="#Other_Properties">Other Properties</a></li>
<li><a href="#Reviewing_the_configuration_of_the_CapacityScheduler">Reviewing the configuration of the CapacityScheduler</a></li></ul></li>
<li><a href="#Changing_Queue_Configuration">Changing Queue Configuration</a>
<ul>
<li><a href="#Changing_queue_configuration_via_file">Changing queue configuration via file</a>
<ul>
<li><a href="#Deleting_queue_via_file">Deleting queue via file</a></li></ul></li>
<li><a href="#Changing_queue_configuration_via_API">Changing queue configuration via API</a></li></ul></li>
<li><a href="#Updating_a_Container_.28Experimental_-_API_may_change_in_the_future.29">Updating a Container (Experimental - API may change in the future)</a></li>
<li><a href="#Activities">Activities</a>
<ul>
<li><a href="#Scheduler_Activities">Scheduler Activities</a></li>
<li><a href="#Application_Activities">Application Activities</a></li>
<li><a href="#Configuration">Configuration</a></li>
<li><a href="#Web_UI">Web UI</a></li></ul></li></ul>
<div class="section">
<h2><a name="Purpose"></a>Purpose</h2>
<p>This document describes the <tt>CapacityScheduler</tt>, a pluggable scheduler for Hadoop that allows multiple tenants to securely share a large cluster such that their applications are allocated resources in a timely manner, under the constraints of the allocated capacities.</p></div>
<div class="section">
<h2><a name="Overview"></a>Overview</h2>
<p>The <tt>CapacityScheduler</tt> is designed to run Hadoop applications on a shared, multi-tenant cluster in an operator-friendly manner while maximizing the throughput and utilization of the cluster.</p>
<p>Traditionally each organization has its own private set of compute resources that has sufficient capacity to meet the organization&#x2019;s SLA under peak or near-peak conditions. This generally leads to poor average utilization and the overhead of managing multiple independent clusters, one per organization. Sharing clusters between organizations is a cost-effective manner of running large Hadoop installations since this allows them to reap the benefits of economies of scale without creating private clusters. However, organizations are concerned about sharing a cluster because they are worried about others using the resources that are critical for their SLAs.</p>
<p>The <tt>CapacityScheduler</tt> is designed to allow sharing a large cluster while giving each organization capacity guarantees. The central idea is that the available resources in the Hadoop cluster are shared among multiple organizations who collectively fund the cluster based on their computing needs. There is an added benefit that an organization can access any excess capacity not being used by others. This provides elasticity for the organizations in a cost-effective manner.</p>
<p>Sharing clusters across organizations necessitates strong support for multi-tenancy since each organization must be guaranteed capacity and safe-guards to ensure the shared cluster is impervious to a single rogue application or user, or sets thereof. The <tt>CapacityScheduler</tt> provides a stringent set of limits to ensure that a single application, user or queue cannot consume a disproportionate amount of resources in the cluster. Also, the <tt>CapacityScheduler</tt> provides limits on initialized and pending applications from a single user and queue to ensure fairness and stability of the cluster.</p>
<p>The primary abstraction provided by the <tt>CapacityScheduler</tt> is the concept of <i>queues</i>. These queues are typically set up by administrators to reflect the economics of the shared cluster.</p>
<p>To provide further control and predictability on sharing of resources, the <tt>CapacityScheduler</tt> supports <i>hierarchical queues</i> to ensure resources are shared among the sub-queues of an organization before other queues are allowed to use free resources, thereby providing <i>affinity</i> for sharing free resources among applications of a given organization.</p></div>
<div class="section">
<h2><a name="Features"></a>Features</h2>
<p>The <tt>CapacityScheduler</tt> supports the following features:</p>
<ul>
<li>
<p><b>Hierarchical Queues</b> - Hierarchy of queues is supported to ensure resources are shared among the sub-queues of an organization before other queues are allowed to use free resources, thereby providing more control and predictability.</p>
</li>
<li>
<p><b>Capacity Guarantees</b> - Queues are allocated a fraction of the capacity of the grid in the sense that a certain capacity of resources will be at their disposal. All applications submitted to a queue will have access to the capacity allocated to the queue. Administrators can configure soft limits and optional hard limits on the capacity allocated to each queue.</p>
</li>
<li>
<p><b>Security</b> - Each queue has strict ACLs which control which users can submit applications to individual queues. There are also safe-guards to ensure that users cannot view and/or modify applications from other users. In addition, per-queue and system administrator roles are supported.</p>
</li>
<li>
<p><b>Elasticity</b> - Free resources can be allocated to any queue beyond its capacity. When there is demand for these resources from queues running below capacity at a future point in time, as tasks scheduled on these resources complete, they will be assigned to applications on queues running below the capacity (preemption is also supported). This ensures that resources are available in a predictable and elastic manner to queues, thus preventing artificial silos of resources in the cluster which helps utilization.</p>
</li>
<li>
<p><b>Multi-tenancy</b> - A comprehensive set of limits is provided to prevent a single application, user or queue from monopolizing the resources of the queue or the cluster as a whole, ensuring that the cluster isn&#x2019;t overwhelmed.</p>
</li>
<li>
<p><b>Operability</b></p>
<ul>
<li>
<p>Runtime Configuration - Queue definitions and properties such as capacity and ACLs can be changed at runtime by administrators in a secure manner to minimize disruption to users. Also, a console is provided for users and administrators to view the current allocation of resources to the various queues in the system. Administrators can <i>add additional queues</i> at runtime, but queues cannot be <i>deleted</i> at runtime unless the queue is STOPPED and has no pending/running apps.</p>
</li>
<li>
<p>Drain applications - Administrators can <i>stop</i> queues at runtime to ensure that while existing applications run to completion, no new applications can be submitted. If a queue is in <tt>STOPPED</tt> state, new applications cannot be submitted to <i>itself</i> or <i>any of its child queues</i>. Existing applications continue to completion, thus the queue can be <i>drained</i> gracefully. Administrators can also <i>start</i> the stopped queues.</p>
</li>
</ul>
</li>
<li>
<p><b>Resource-based Scheduling</b> - Support for resource-intensive applications, wherein an application can optionally specify higher resource requirements than the default, thereby accommodating applications with differing resource requirements. Currently, <i>memory</i> is the resource requirement supported.</p>
</li>
<li>
<p><b>Queue Mapping Interface based on Default or User Defined Placement Rules</b> - This feature allows users to map a job to a specific queue based on a default placement rule, for instance based on user &amp; group, or on application name. Users can also define their own placement rules.</p>
</li>
<li>
<p><b>Priority Scheduling</b> - This feature allows applications to be submitted and scheduled with different priorities. A higher integer value indicates a higher priority for an application. Currently, application priority is supported only for the FIFO ordering policy.</p>
</li>
<li>
<p><b>Absolute Resource Configuration</b> - Administrators can specify absolute resources for a queue instead of providing percentage-based values. This gives administrators better control when configuring the required amount of resources for a given queue.</p>
</li>
<li>
<p><b>Dynamic Auto-Creation and Management of Leaf Queues</b> - This feature supports auto-creation of <b>leaf queues</b> in conjunction with <b>queue-mapping</b> which currently supports <b>user-group</b> based queue mappings for application placement to a queue. The scheduler also supports capacity management for these queues based on a policy configured on the parent queue.</p>
</li>
</ul></div>
<div class="section">
<h2><a name="Configuration"></a>Configuration</h2>
<div class="section">
<h3><a name="Setting_up_ResourceManager_to_use_CapacityScheduler"></a>Setting up <tt>ResourceManager</tt> to use <tt>CapacityScheduler</tt></h3>
<p>To configure the <tt>ResourceManager</tt> to use the <tt>CapacityScheduler</tt>, set the following property in the <b>conf/yarn-site.xml</b>:</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Value </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <tt>yarn.resourcemanager.scheduler.class</tt> </td>
<td align="left"> <tt>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</tt> </td></tr>
</tbody>
</table></div>
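<p>For example, a minimal snippet in <b>conf/yarn-site.xml</b> that selects the <tt>CapacityScheduler</tt> could look like the following (a sketch using the property and class name from the table above):</p>
<div>
<div>
<pre class="source">&lt;property&gt;
&lt;name&gt;yarn.resourcemanager.scheduler.class&lt;/name&gt;
&lt;value&gt;org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>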
<div class="section">
<h3><a name="Setting_up_queues"></a>Setting up queues</h3>
<p><tt>etc/hadoop/capacity-scheduler.xml</tt> is the configuration file for the <tt>CapacityScheduler</tt>.</p>
<p>The <tt>CapacityScheduler</tt> has a predefined queue called <i>root</i>. All queues in the system are children of the root queue.</p>
<p>Further queues can be setup by configuring <tt>yarn.scheduler.capacity.root.queues</tt> with a list of comma-separated child queues.</p>
<p>The configuration for <tt>CapacityScheduler</tt> uses a concept called <i>queue path</i> to configure the hierarchy of queues. The <i>queue path</i> is the full path of the queue&#x2019;s hierarchy, starting at <i>root</i>, with . (dot) as the delimiter.</p>
<p>A given queue&#x2019;s children can be defined with the configuration knob: <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.queues</tt>. Children do not inherit properties directly from the parent unless otherwise noted.</p>
<p>Here is an example with three top-level child-queues <tt>a</tt>, <tt>b</tt> and <tt>c</tt> and some sub-queues for <tt>a</tt> and <tt>b</tt>:</p>
<div>
<div>
<pre class="source">&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.root.queues&lt;/name&gt;
&lt;value&gt;a,b,c&lt;/value&gt;
&lt;description&gt;The queues at this level (root is the root queue).
&lt;/description&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.root.a.queues&lt;/name&gt;
&lt;value&gt;a1,a2&lt;/value&gt;
&lt;description&gt;The queues at this level (root is the root queue).
&lt;/description&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.root.b.queues&lt;/name&gt;
&lt;value&gt;b1,b2,b3&lt;/value&gt;
&lt;description&gt;The queues at this level (root is the root queue).
&lt;/description&gt;
&lt;/property&gt;
</pre></div></div>
</div>
<div class="section">
<h3><a name="Queue_Properties"></a>Queue Properties</h3>
<ul>
<li>Resource Allocation</li>
</ul>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.capacity</tt> </td>
<td align="left"> Queue <i>capacity</i> in percentage (%) as a float (e.g. 12.5) OR as absolute resource queue minimum capacity. The sum of capacities for all queues, at each level, must be equal to 100. However if absolute resource is configured, sum of absolute resources of child queues could be less than it&#x2019;s parent absolute resource capacity. Applications in the queue may consume more resources than the queue&#x2019;s capacity if there are free resources, providing elasticity. </td></tr>
<tr class="a">
<td align="left"> <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.maximum-capacity</tt> </td>
<td align="left"> Maximum queue capacity in percentage (%) as a float OR as absolute resource queue maximum capacity. This limits the <i>elasticity</i> for applications in the queue. 1) Value is between 0 and 100. 2) Admin needs to make sure absolute maximum capacity &gt;= absolute capacity for each queue. Also, setting this value to -1 sets maximum capacity to 100%. </td></tr>
<tr class="b">
<td align="left"> <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.minimum-user-limit-percent</tt> </td>
<td align="left"> Each queue enforces a limit on the percentage of resources allocated to a user at any given time, if there is demand for resources. The user limit can vary between a minimum and maximum value. The former (the minimum value) is set to this property value and the latter (the maximum value) depends on the number of users who have submitted applications. For e.g., suppose the value of this property is 25. If two users have submitted applications to a queue, no single user can use more than 50% of the queue resources. If a third user submits an application, no single user can use more than 33% of the queue resources. With 4 or more users, no user can use more than 25% of the queues resources. A value of 100 implies no user limits are imposed. The default is 100. Value is specified as a integer. </td></tr>
<tr class="a">
<td align="left"> <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.user-limit-factor</tt> </td>
<td align="left"> The multiple of the queue capacity which can be configured to allow a single user to acquire more resources. By default this is set to 1 which ensures that a single user can never take more than the queue&#x2019;s configured capacity irrespective of how idle the cluster is. Value is specified as a float. </td></tr>
<tr class="b">
<td align="left"> <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.maximum-allocation-mb</tt> </td>
<td align="left"> The per queue maximum limit of memory to allocate to each container request at the Resource Manager. This setting overrides the cluster configuration <tt>yarn.scheduler.maximum-allocation-mb</tt>. This value must be smaller than or equal to the cluster maximum. </td></tr>
<tr class="a">
<td align="left"> <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.maximum-allocation-vcores</tt> </td>
<td align="left"> The per queue maximum limit of virtual cores to allocate to each container request at the Resource Manager. This setting overrides the cluster configuration <tt>yarn.scheduler.maximum-allocation-vcores</tt>. This value must be smaller than or equal to the cluster maximum. </td></tr>
<tr class="b">
<td align="left"> <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.user-settings.&lt;user-name&gt;.weight</tt> </td>
<td align="left"> This floating point value is used when calculating the user limit resource values for users in a queue. This value will weight each user more or less than the other users in the queue. For example, if user A should receive 50% more resources in a queue than users B and C, this property will be set to 1.5 for user A. Users B and C will default to 1.0. </td></tr>
</tbody>
</table>
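<p>For instance, building on the earlier example with the top-level queues <tt>a</tt>, <tt>b</tt> and <tt>c</tt>, a sketch of percentage-based capacities in <tt>capacity-scheduler.xml</tt> might look like the following (the 40/30/30 split and the maximum-capacity value are purely illustrative; the capacities at each level must sum to 100):</p>
<div>
<div>
<pre class="source">&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.root.a.capacity&lt;/name&gt;
&lt;value&gt;40&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.root.b.capacity&lt;/name&gt;
&lt;value&gt;30&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.root.c.capacity&lt;/name&gt;
&lt;value&gt;30&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.root.a.maximum-capacity&lt;/name&gt;
&lt;value&gt;60&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>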
<ul>
<li>Resource Allocation using Absolute Resources configuration</li>
</ul>
<p><tt>CapacityScheduler</tt> supports configuration of absolute resources instead of providing queue <i>capacity</i> as a percentage. As mentioned in the configuration section above for <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.capacity</tt> and <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.maximum-capacity</tt>, an administrator can specify an absolute resource value like <tt>[memory=10240,vcores=12]</tt>. This is a valid configuration which indicates 10GB Memory and 12 VCores.</p>
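<p>As a sketch, the absolute-resource form for queue <tt>a</tt> from the earlier example might look like the following (the memory/vcore figures are the illustrative values mentioned above, and the maximum value is a hypothetical choice):</p>
<div>
<div>
<pre class="source">&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.root.a.capacity&lt;/name&gt;
&lt;value&gt;[memory=10240,vcores=12]&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.root.a.maximum-capacity&lt;/name&gt;
&lt;value&gt;[memory=20480,vcores=24]&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>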
<ul>
<li>Running and Pending Application Limits</li>
</ul>
<p>The <tt>CapacityScheduler</tt> supports the following parameters to control the running and pending applications:</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <tt>yarn.scheduler.capacity.maximum-applications</tt> / <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.maximum-applications</tt> </td>
<td align="left"> Maximum number of applications in the system which can be concurrently active both running and pending. Limits on each queue are directly proportional to their queue capacities and user limits. This is a hard limit and any applications submitted when this limit is reached will be rejected. Default is 10000. This can be set for all queues with <tt>yarn.scheduler.capacity.maximum-applications</tt> and can also be overridden on a per queue basis by setting <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.maximum-applications</tt>. Integer value expected. </td></tr>
<tr class="a">
<td align="left"> <tt>yarn.scheduler.capacity.maximum-am-resource-percent</tt> / <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.maximum-am-resource-percent</tt> </td>
<td align="left"> Maximum percent of resources in the cluster which can be used to run application masters - controls number of concurrent active applications. Limits on each queue are directly proportional to their queue capacities and user limits. Specified as a float - ie 0.5 = 50%. Default is 10%. This can be set for all queues with <tt>yarn.scheduler.capacity.maximum-am-resource-percent</tt> and can also be overridden on a per queue basis by setting <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.maximum-am-resource-percent</tt> </td></tr>
<tr class="b">
<td align="left"> <tt>yarn.scheduler.capacity.max-parallel-apps</tt> / <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.max-parallel-apps</tt> </td>
<td align="left"> Maximum number of applications that can run at the same time. Unlike to <tt>maximum-applications</tt>, application submissions are <i>not</i> rejected when this limit is reached. Instead they stay in <tt>ACCEPTED</tt> state until they are eligible to run. This can be set for all queues with <tt>yarn.scheduler.capacity.max-parallel-apps</tt> and can also be overridden on a per queue basis by setting <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.max-parallel-apps</tt>. Integer value is expected. By default, there is no limit. </td></tr>
</tbody>
</table>
<p>You can also limit the number of parallel applications on a per user basis.</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <tt>yarn.scheduler.capacity.user.max-parallel-apps</tt> </td>
<td align="left"> Maximum number of applications that can run at the same time for all users. Default value is unlimited. </td></tr>
<tr class="a">
<td align="left"> <tt>yarn.scheduler.capacity.user.&lt;username&gt;.max-parallel-apps</tt> </td>
<td align="left"> Maximum number of applications that can run at the same for a specific user. This overrides the global setting. </td></tr>
</tbody>
</table>
<p>The evaluation of these limits happens in the following order:</p>
<ol style="list-style-type: decimal">
<li>
<p><tt>maximum-applications</tt> check - if the limit is exceeded, the submission is rejected immediately.</p>
</li>
<li>
<p><tt>max-parallel-apps</tt> check - the submission is accepted, but the application will not transition to <tt>RUNNING</tt> state. It stays in <tt>ACCEPTED</tt> until the queue / user limits are satisfied.</p>
</li>
<li>
<p><tt>maximum-am-resource-percent</tt> check - if there are too many Application Masters running, the application stays in <tt>ACCEPTED</tt> state until there is enough room for it.</p>
</li>
</ol>
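<p>A sketch that combines these limits for queue <tt>a</tt> in <tt>capacity-scheduler.xml</tt> (the numbers are illustrative, not recommendations):</p>
<div>
<div>
<pre class="source">&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.root.a.maximum-applications&lt;/name&gt;
&lt;value&gt;5000&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.root.a.max-parallel-apps&lt;/name&gt;
&lt;value&gt;100&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.root.a.maximum-am-resource-percent&lt;/name&gt;
&lt;value&gt;0.2&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.user.max-parallel-apps&lt;/name&gt;
&lt;value&gt;50&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>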
<ul>
<li>Queue Administration &amp; Permissions</li>
</ul>
<p>The <tt>CapacityScheduler</tt> supports the following parameters to administer the queues:</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.state</tt> </td>
<td align="left"> The <i>state</i> of the queue. Can be one of <tt>RUNNING</tt> or <tt>STOPPED</tt>. If a queue is in <tt>STOPPED</tt> state, new applications cannot be submitted to <i>itself</i> or <i>any of its child queues</i>. Thus, if the <i>root</i> queue is <tt>STOPPED</tt> no applications can be submitted to the entire cluster. Existing applications continue to completion, thus the queue can be <i>drained</i> gracefully. Value is specified as Enumeration. </td></tr>
<tr class="a">
<td align="left"> <tt>yarn.scheduler.capacity.root.&lt;queue-path&gt;.acl_submit_applications</tt> </td>
<td align="left"> The <i>ACL</i> which controls who can <i>submit</i> applications to the given queue. If the given user/group has necessary ACLs on the given queue or <i>one of the parent queues in the hierarchy</i> they can submit applications. <i>ACLs</i> for this property <i>are</i> inherited from the parent queue if not specified. </td></tr>
<tr class="b">
<td align="left"> <tt>yarn.scheduler.capacity.root.&lt;queue-path&gt;.acl_administer_queue</tt> </td>
<td align="left"> The <i>ACL</i> which controls who can <i>administer</i> applications on the given queue. If the given user/group has necessary ACLs on the given queue or <i>one of the parent queues in the hierarchy</i> they can administer applications. <i>ACLs</i> for this property <i>are</i> inherited from the parent queue if not specified. </td></tr>
</tbody>
</table>
<p><b>Note:</b> An <i>ACL</i> is of the form <i>user1</i>,<i>user2</i> <i>space</i> <i>group1</i>,<i>group2</i>. The special value of * implies <i>anyone</i>. The special value of <i>space</i> implies <i>no one</i>. The default is * for the root queue if not specified.</p>
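<p>For example, a sketch of queue ACLs for queue <tt>a</tt> (the user and group names are placeholders):</p>
<div>
<div>
<pre class="source">&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.root.a.acl_submit_applications&lt;/name&gt;
&lt;value&gt;user1,user2 group1&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.root.a.acl_administer_queue&lt;/name&gt;
&lt;value&gt;admin1 admingroup&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>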
<ul>
<li>Queue Mapping based on User or Group, Application Name or user defined placement rules</li>
</ul>
<p>The <tt>CapacityScheduler</tt> supports the following parameters to configure the queue mapping based on user or group, user &amp; group, or application name. Users can also define their own placement rules:</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <tt>yarn.scheduler.capacity.queue-mappings</tt> </td>
<td align="left"> This configuration specifies the mapping of user or group to a specific queue. You can map a single user or a list of users to queues. Syntax: <tt>[u or g]:[name]:[queue_name][,next_mapping]*</tt>. Here, <i>u or g</i> indicates whether the mapping is for a user or group. The value is <i>u</i> for user and <i>g</i> for group. <i>name</i> indicates the user name or group name. To specify the user who has submitted the application, %user can be used. <i>queue_name</i> indicates the queue name for which the application has to be mapped. To specify queue name same as user name, <i>%user</i> can be used. To specify queue name same as the name of the primary group for which the user belongs to, <i>%primary_group</i> can be used. Secondary group can be referenced as <i>%secondary_group</i> </td></tr>
<tr class="a">
<td align="left"> <tt>yarn.scheduler.queue-placement-rules.app-name</tt> </td>
<td align="left"> This configuration specifies the mapping of application_name to a specific queue. You can map a single application or a list of applications to queues. Syntax: <tt>[app_name]:[queue_name][,next_mapping]*</tt>. Here, <i>app_name</i> indicates the application name you want to do the mapping. <i>queue_name</i> indicates the queue name for which the application has to be mapped. To specify the current application&#x2019;s name as the app_name, %application can be used.</td></tr>
<tr class="b">
<td align="left"> <tt>yarn.scheduler.capacity.queue-mappings-override.enable</tt> </td>
<td align="left"> This function is used to specify whether the user specified queues can be overridden. This is a Boolean value and the default value is <i>false</i>. </td></tr>
</tbody>
</table>
<p>Example:</p>
<p>The example below covers each mapping separately. In the case of multiple mappings with comma-separated values, they are evaluated from left to right and the first valid mapping is used. The order of the examples below reflects the actual order of evaluation at runtime when multiple mappings are configured.</p>
<div>
<div>
<pre class="source"> &lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.queue-mappings&lt;/name&gt;
&lt;value&gt;u:%user:%primary_group.%user&lt;/value&gt;
&lt;description&gt;Maps users to queue with the same name as user but
parent queue name should be same as primary group of the user&lt;/description&gt;
&lt;/property&gt;
...
&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.queue-mappings&lt;/name&gt;
&lt;value&gt;u:%user:%secondary_group.%user&lt;/value&gt;
&lt;description&gt;Maps users to queue with the same name as user but
parent queue name should be same as any secondary group of the user&lt;/description&gt;
&lt;/property&gt;
...
&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.queue-mappings&lt;/name&gt;
&lt;value&gt;u:%user:%user&lt;/value&gt;
&lt;description&gt;Maps users to queues with the same name as user&lt;/description&gt;
&lt;/property&gt;
...
&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.queue-mappings&lt;/name&gt;
&lt;value&gt;u:user2:%primary_group&lt;/value&gt;
&lt;description&gt;user2 is mapped to queue name same as primary group&lt;/description&gt;
&lt;/property&gt;
...
&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.queue-mappings&lt;/name&gt;
&lt;value&gt;u:user3:%secondary_group&lt;/value&gt;
&lt;description&gt;user3 is mapped to queue name same as secondary group&lt;/description&gt;
&lt;/property&gt;
...
&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.queue-mappings&lt;/name&gt;
&lt;value&gt;u:user1:queue1&lt;/value&gt;
&lt;description&gt;user1 is mapped to queue1&lt;/description&gt;
&lt;/property&gt;
...
&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.queue-mappings&lt;/name&gt;
&lt;value&gt;g:group1:queue2&lt;/value&gt;
&lt;description&gt;group1 is mapped to queue2&lt;/description&gt;
&lt;/property&gt;
...
&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.queue-mappings&lt;/name&gt;
&lt;value&gt;u:user1:queue1,u:user2:queue2&lt;/value&gt;
&lt;description&gt;Here, &lt;user1&gt; is mapped to &lt;queue1&gt;, &lt;user2&gt; is mapped to &lt;queue2&gt; respectively&lt;/description&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;yarn.scheduler.queue-placement-rules.app-name&lt;/name&gt;
&lt;value&gt;appName1:queue1,%application:%application&lt;/value&gt;
&lt;description&gt;
Here, &lt;appName1&gt; is mapped to &lt;queue1&gt;, maps applications to queues with
the same name as application respectively. The mappings will be
evaluated from left to right, and the first valid mapping will be used.
&lt;/description&gt;
&lt;/property&gt;
</pre></div></div>
<ul>
<li>Queue lifetime for applications
<p>The <tt>CapacityScheduler</tt> supports the following parameters to control the lifetime of an application:</p></li>
</ul>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.maximum-application-lifetime</tt> </td>
<td align="left"> Maximum lifetime (in seconds) of an application which is submitted to a queue. Any value less than or equal to zero will be considered as disabled. The default is -1. If positive value is configured then any application submitted to this queue will be killed after it exceeds the configured lifetime. User can also specify lifetime per application in application submission context. However, user lifetime will be overridden if it exceeds queue maximum lifetime. It is point-in-time configuration. Note: This feature can be set at any level in the queue hierarchy. Child queues will inherit their parent&#x2019;s value unless overridden at the child level. A value of 0 means no max lifetime and will override a parent&#x2019;s max lifetime. If this property is not set or is set to a negative number, then this queue&#x2019;s max lifetime value will be inherited from it&#x2019;s parent.</td></tr>
<tr class="a">
<td align="left"> <tt>yarn.scheduler.capacity.root.&lt;queue-path&gt;.default-application-lifetime</tt> </td>
<td align="left"> Default lifetime (in seconds) of an application which is submitted to a queue. Any value less than or equal to zero will be considered as disabled. If the user has not submitted application with lifetime value then this value will be taken. It is point-in-time configuration. This feature can be set at any level in the queue hierarchy. Child queues will inherit their parent&#x2019;s value unless overridden at the child level. If set to less than or equal to 0, the queue&#x2019;s max value must also be unlimited. Default lifetime can&#x2019;t exceed maximum lifetime. </td></tr>
</tbody>
</table></div>
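<p>As an illustration, a sketch that caps applications in queue <tt>a</tt> at one hour and defaults them to 30 minutes (the values are illustrative; the default lifetime must not exceed the maximum lifetime):</p>
<div>
<div>
<pre class="source">&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.root.a.maximum-application-lifetime&lt;/name&gt;
&lt;value&gt;3600&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.root.a.default-application-lifetime&lt;/name&gt;
&lt;value&gt;1800&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>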
<div class="section">
<h3><a name="Setup_for_application_priority."></a>Setup for application priority.</h3>
<p>Application priority works only with the FIFO ordering policy. The default ordering policy is FIFO.</p>
<p>The default priority for an application can be set at the cluster level and at the queue level.</p>
<ul>
<li>Cluster-level priority : Any application submitted with a priority greater than the cluster-max priority will have its priority reset to the cluster-max priority. $HADOOP_HOME/etc/hadoop/yarn-site.xml is the configuration file for cluster-max priority.</li>
</ul>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <tt>yarn.cluster.max-application-priority</tt> </td>
<td align="left"> Defines maximum application priority in a cluster. </td></tr>
</tbody>
</table>
<ul>
<li>Leaf Queue-level priority : The administrator can configure a default priority for each leaf queue. The queue&#x2019;s default priority will be used for any application submitted without a specified priority. $HADOOP_HOME/etc/hadoop/capacity-scheduler.xml is the configuration file for queue-level priority.</li>
</ul>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <tt>yarn.scheduler.capacity.root.&lt;leaf-queue-path&gt;.default-application-priority</tt> </td>
<td align="left"> Defines default application priority in a leaf queue. </td></tr>
</tbody>
</table>
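<p>A sketch combining both levels, with the cluster-max priority in yarn-site.xml and the queue default in capacity-scheduler.xml for the leaf queue <tt>a1</tt> from the earlier example (the priority values are illustrative):</p>
<div>
<div>
<pre class="source">&lt;!-- yarn-site.xml --&gt;
&lt;property&gt;
&lt;name&gt;yarn.cluster.max-application-priority&lt;/name&gt;
&lt;value&gt;10&lt;/value&gt;
&lt;/property&gt;

&lt;!-- capacity-scheduler.xml --&gt;
&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.root.a.a1.default-application-priority&lt;/name&gt;
&lt;value&gt;5&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>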
<p><b>Note:</b> The priority of an application will not change when the application is moved to a different queue.</p></div>
<div class="section">
<h3><a name="Capacity_Scheduler_container_preemption"></a>Capacity Scheduler container preemption</h3>
<p>The <tt>CapacityScheduler</tt> supports preemption of containers from queues whose resource usage is more than their guaranteed capacity. The following configuration parameters need to be enabled in yarn-site.xml to support preemption of application containers.</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <tt>yarn.resourcemanager.scheduler.monitor.enable</tt> </td>
<td align="left"> Enable a set of periodic monitors (specified in yarn.resourcemanager.scheduler.monitor.policies) that affect the scheduler. Default value is false. </td></tr>
<tr class="a">
<td align="left"> <tt>yarn.resourcemanager.scheduler.monitor.policies</tt> </td>
<td align="left"> The list of SchedulingEditPolicy classes that interact with the scheduler. Configured policies need to be compatible with the scheduler. Default value is <tt>org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy</tt> which is compatible with <tt>CapacityScheduler</tt> </td></tr>
</tbody>
</table>
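<p>For example, a sketch of the yarn-site.xml entries that enable the preemption monitor with the default policy:</p>
<div>
<div>
<pre class="source">&lt;property&gt;
&lt;name&gt;yarn.resourcemanager.scheduler.monitor.enable&lt;/name&gt;
&lt;value&gt;true&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;yarn.resourcemanager.scheduler.monitor.policies&lt;/name&gt;
&lt;value&gt;org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>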
<p>The following configuration parameters can be configured in yarn-site.xml to control the preemption of containers when the <tt>ProportionalCapacityPreemptionPolicy</tt> class is configured for <tt>yarn.resourcemanager.scheduler.monitor.policies</tt>.</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <tt>yarn.resourcemanager.monitor.capacity.preemption.observe_only</tt> </td>
<td align="left"> If true, run the policy but do not affect the cluster with preemption and kill events. Default value is false </td></tr>
<tr class="a">
<td align="left"> <tt>yarn.resourcemanager.monitor.capacity.preemption.monitoring_interval</tt> </td>
<td align="left"> Time in milliseconds between invocations of this ProportionalCapacityPreemptionPolicy policy. Default value is 3000 </td></tr>
<tr class="b">
<td align="left"> <tt>yarn.resourcemanager.monitor.capacity.preemption.max_wait_before_kill</tt> </td>
<td align="left"> Time in milliseconds between requesting a preemption from an application and killing the container. Default value is 15000 </td></tr>
<tr class="a">
<td align="left"> <tt>yarn.resourcemanager.monitor.capacity.preemption.total_preemption_per_round</tt> </td>
<td align="left"> Maximum percentage of resources preempted in a single round. By controlling this value one can throttle the pace at which containers are reclaimed from the cluster. After computing the total desired preemption, the policy scales it back within this limit. Default value is <tt>0.1</tt> </td></tr>
<tr class="b">
<td align="left"> <tt>yarn.resourcemanager.monitor.capacity.preemption.max_ignored_over_capacity</tt> </td>
<td align="left"> Maximum amount of resources above the target capacity ignored for preemption. This defines a deadzone around the target capacity that helps prevent thrashing and oscillations around the computed target balance. High values would slow the time to capacity and (absent natural.completions) it might prevent convergence to guaranteed capacity. Default value is <tt>0.1</tt> </td></tr>
<tr class="a">
<td align="left"> <tt>yarn.resourcemanager.monitor.capacity.preemption.natural_termination_factor</tt> </td>
<td align="left"> Given a computed preemption target, account for containers naturally expiring and preempt only this percentage of the delta. This determines the rate of geometric convergence into the deadzone (<tt>MAX_IGNORED_OVER_CAPACITY</tt>). For example, a termination factor of 0.5 will reclaim almost 95% of resources within 5 * #<tt>WAIT_TIME_BEFORE_KILL</tt>, even absent natural termination. Default value is <tt>0.2</tt> </td></tr>
</tbody>
</table>
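<p>As a sketch, the following yarn-site.xml entries simply restate the documented defaults for three of these knobs; clusters typically tune them rather than copy them verbatim:</p>
<div>
<div>
<pre class="source">&lt;property&gt;
&lt;name&gt;yarn.resourcemanager.monitor.capacity.preemption.monitoring_interval&lt;/name&gt;
&lt;value&gt;3000&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;yarn.resourcemanager.monitor.capacity.preemption.max_wait_before_kill&lt;/name&gt;
&lt;value&gt;15000&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;yarn.resourcemanager.monitor.capacity.preemption.total_preemption_per_round&lt;/name&gt;
&lt;value&gt;0.1&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>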
<p>The <tt>CapacityScheduler</tt> supports the following configurations in capacity-scheduler.xml to control the preemption of application containers submitted to a queue.</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.disable_preemption</tt> </td>
<td align="left"> This configuration can be set to <tt>true</tt> to selectively disable preemption of application containers submitted to a given queue. This property applies only when system wide preemption is enabled by configuring <tt>yarn.resourcemanager.scheduler.monitor.enable</tt> to <i>true</i> and <tt>yarn.resourcemanager.scheduler.monitor.policies</tt> to <i>ProportionalCapacityPreemptionPolicy</i>. If this property is not set for a queue, then the property value is inherited from the queue&#x2019;s parent. Default value is false.</td></tr>
<tr class="a">
<td align="left"> <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.intra-queue-preemption.disable_preemption</tt> </td>
<td align="left"> This configuration can be set to <i>true</i> to selectively disable intra-queue preemption of application containers submitted to a given queue. This property applies only when system wide preemption is enabled by configuring <tt>yarn.resourcemanager.scheduler.monitor.enable</tt> to <i>true</i>, <tt>yarn.resourcemanager.scheduler.monitor.policies</tt> to <i>ProportionalCapacityPreemptionPolicy</i>, and <tt>yarn.resourcemanager.monitor.capacity.preemption.intra-queue-preemption.enabled</tt> to <i>true</i>. If this property is not set for a queue, then the property value is inherited from the queue&#x2019;s parent. Default value is <i>false</i>.</td></tr>
</tbody>
</table></div>
<div class="section">
<h3><a name="Reservation_Properties"></a>Reservation Properties</h3>
<ul>
<li>Reservation Administration &amp; Permissions</li>
</ul>
<p>The <tt>CapacityScheduler</tt> supports the following parameters to control the creation, deletion, update, and listing of reservations. Note that any user can update, delete, or list their own reservations. If reservation ACLs are enabled but not defined, everyone will have access. In the examples below, &lt;queue&gt; is the queue name. For example, to set the reservation ACL to administer reservations on the default queue, use the property <tt>yarn.scheduler.capacity.root.default.acl_administer_reservations</tt></p>
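<p>For instance, a sketch of such an entry in capacity-scheduler.xml for the default queue (the user and group names are placeholders):</p>
<div>
<div>
<pre class="source">&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.root.default.acl_administer_reservations&lt;/name&gt;
&lt;value&gt;user1,user2 group1&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>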
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <tt>yarn.scheduler.capacity.root.&lt;queue&gt;.acl_administer_reservations</tt> </td>
<td align="left"> The ACL which controls who can <i>administer</i> reservations to the given queue. If the given user/group has necessary ACLs on the given queue or they can submit, delete, update and list all reservations. ACLs for this property <i>are not</i> inherited from the parent queue if not specified. </td></tr>
<tr class="a">
<td align="left"> <tt>yarn.scheduler.capacity.root.&lt;queue&gt;.acl_list_reservations</tt> </td>
<td align="left"> The ACL which controls who can <i>list</i> reservations to the given queue. If the given user/group has necessary ACLs on the given queue they can list all applications. ACLs for this property <i>are not</i> inherited from the parent queue if not specified. </td></tr>
<tr class="b">
<td align="left"> <tt>yarn.scheduler.capacity.root.&lt;queue&gt;.acl_submit_reservations</tt> </td>
<td align="left"> The ACL which controls who can <i>submit</i> reservations to the given queue. If the given user/group has necessary ACLs on the given queue they can submit reservations. ACLs for this property <i>are not</i> inherited from the parent queue if not specified. </td></tr>
</tbody>
</table></div>
<div class="section">
<h3><a name="Configuring_ReservationSystem_with_CapacityScheduler"></a>Configuring <tt>ReservationSystem</tt> with <tt>CapacityScheduler</tt></h3>
<p>The <tt>CapacityScheduler</tt> supports the <b>ReservationSystem</b> which allows users to reserve resources ahead of time. The application can request the reserved resources at runtime by specifying the <tt>reservationId</tt> during submission. The following configuration parameters can be configured in yarn-site.xml for <tt>ReservationSystem</tt>.</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <tt>yarn.resourcemanager.reservation-system.enable</tt> </td>
<td align="left"> <i>Mandatory</i> parameter: to enable the <tt>ReservationSystem</tt> in the <b>ResourceManager</b>. Boolean value expected. The default value is <i>false</i>, i.e. <tt>ReservationSystem</tt> is not enabled by default. </td></tr>
<tr class="a">
<td align="left"> <tt>yarn.resourcemanager.reservation-system.class</tt> </td>
<td align="left"> <i>Optional</i> parameter: the class name of the <tt>ReservationSystem</tt>. The default value is picked based on the configured Scheduler, i.e. if <tt>CapacityScheduler</tt> is configured, then it is <tt>CapacityReservationSystem</tt>. </td></tr>
<tr class="b">
<td align="left"> <tt>yarn.resourcemanager.reservation-system.plan.follower</tt> </td>
<td align="left"> <i>Optional</i> parameter: the class name of the <tt>PlanFollower</tt> that runs on a timer, and synchronizes the <tt>CapacityScheduler</tt> with the <tt>Plan</tt> and viceversa. The default value is picked based on the configured Scheduler, i.e. if <tt>CapacityScheduler</tt> is configured, then it is <tt>CapacitySchedulerPlanFollower</tt>. </td></tr>
<tr class="a">
<td align="left"> <tt>yarn.resourcemanager.reservation-system.planfollower.time-step</tt> </td>
<td align="left"> <i>Optional</i> parameter: the frequency in milliseconds of the <tt>PlanFollower</tt> timer. Long value expected. The default value is <i>1000</i>. </td></tr>
</tbody>
</table>
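<p>For example, a minimal yarn-site.xml sketch that enables the <tt>ReservationSystem</tt> and keeps the default plan-follower frequency might look like:</p>
<div>
<div>
<pre class="source"> &lt;property&gt;
   &lt;name&gt;yarn.resourcemanager.reservation-system.enable&lt;/name&gt;
   &lt;value&gt;true&lt;/value&gt;
 &lt;/property&gt;
 &lt;!-- Optional: PlanFollower timer frequency in milliseconds (1000 is the default). --&gt;
 &lt;property&gt;
   &lt;name&gt;yarn.resourcemanager.reservation-system.planfollower.time-step&lt;/name&gt;
   &lt;value&gt;1000&lt;/value&gt;
 &lt;/property&gt;
</pre></div></div>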
<p>The <tt>ReservationSystem</tt> is integrated with the <tt>CapacityScheduler</tt> queue hierarchy and can currently be configured for any <b>LeafQueue</b>. The <tt>CapacityScheduler</tt> supports the following parameters to tune the <tt>ReservationSystem</tt>:</p>
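<p>As a sketch, assuming a pre-configured leaf queue named <tt>root.dedicated</tt> (a hypothetical name), the following capacity-scheduler.xml snippet marks the queue as reservable and shows its reservation queues in the UI, using the properties described in the table below:</p>
<div>
<div>
<pre class="source"> &lt;!-- root.dedicated is a hypothetical, pre-configured leaf queue. --&gt;
 &lt;property&gt;
   &lt;name&gt;yarn.scheduler.capacity.root.dedicated.reservable&lt;/name&gt;
   &lt;value&gt;true&lt;/value&gt;
 &lt;/property&gt;
 &lt;!-- Optional: show the reservation queues in the Scheduler UI. --&gt;
 &lt;property&gt;
   &lt;name&gt;yarn.scheduler.capacity.root.dedicated.show-reservations-as-queues&lt;/name&gt;
   &lt;value&gt;true&lt;/value&gt;
 &lt;/property&gt;
</pre></div></div>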
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.reservable</tt> </td>
<td align="left"> <i>Mandatory</i> parameter: indicates to the <tt>ReservationSystem</tt> that the queue&#x2019;s resources is available for users to reserve. Boolean value expected. The default value is <i>false</i>, i.e. reservations are not enabled in <i>LeafQueues</i> by default. </td></tr>
<tr class="a">
<td align="left"> <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.reservation-agent</tt> </td>
<td align="left"> <i>Optional</i> parameter: the class name that will be used to determine the implementation of the <tt>ReservationAgent</tt> which will attempt to place the user&#x2019;s reservation request in the <tt>Plan</tt>. The default value is <i>org.apache.hadoop.yarn.server.resourcemanager.reservation.planning.AlignedPlannerWithGreedy</i>. </td></tr>
<tr class="b">
<td align="left"> <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.reservation-move-on-expiry</tt> </td>
<td align="left"> <i>Optional</i> parameter to specify to the <tt>ReservationSystem</tt> whether the applications should be moved or killed to the parent reservable queue (configured above) when the associated reservation expires. Boolean value expected. The default value is <i>true</i> indicating that the application will be moved to the reservable queue. </td></tr>
<tr class="a">
<td align="left"> <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.show-reservations-as-queues</tt> </td>
<td align="left"> <i>Optional</i> parameter to show or hide the reservation queues in the Scheduler UI. Boolean value expected. The default value is <i>false</i>, i.e. reservation queues will be hidden. </td></tr>
<tr class="b">
<td align="left"> <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.reservation-policy</tt> </td>
<td align="left"> <i>Optional</i> parameter: the class name that will be used to determine the implementation of the <tt>SharingPolicy</tt> which will validate if the new reservation doesn&#x2019;t violate any invariants.. The default value is <i>org.apache.hadoop.yarn.server.resourcemanager.reservation.CapacityOverTimePolicy</i>. </td></tr>
<tr class="a">
<td align="left"> <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.reservation-window</tt> </td>
<td align="left"> <i>Optional</i> parameter representing the time in milliseconds for which the <tt>SharingPolicy</tt> will validate if the constraints in the Plan are satisfied. Long value expected. The default value is one day. </td></tr>
<tr class="b">
<td align="left"> <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.instantaneous-max-capacity</tt> </td>
<td align="left"> <i>Optional</i> parameter: maximum capacity at any time in percentage (%) as a float that the <tt>SharingPolicy</tt> allows a single user to reserve. The default value is 1, i.e. 100%. </td></tr>
<tr class="a">
<td align="left"> <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.average-capacity</tt> </td>
<td align="left"> <i>Optional</i> parameter: the average allowed capacity which will aggregated over the <i>ReservationWindow</i> in percentage (%) as a float that the <tt>SharingPolicy</tt> allows a single user to reserve. The default value is 1, i.e. 100%. </td></tr>
<tr class="b">
<td align="left"> <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.reservation-planner</tt> </td>
<td align="left"> <i>Optional</i> parameter: the class name that will be used to determine the implementation of the <i>Planner</i> which will be invoked if the <tt>Plan</tt> capacity fall below (due to scheduled maintenance or node failures) the user reserved resources. The default value is <i>org.apache.hadoop.yarn.server.resourcemanager.reservation.planning.SimpleCapacityReplanner</i> which scans the <tt>Plan</tt> and greedily removes reservations in reversed order of acceptance (LIFO) till the reserved resources are within the <tt>Plan</tt> capacity </td></tr>
<tr class="a">
<td align="left"> <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.reservation-enforcement-window</tt> </td>
<td align="left"> <i>Optional</i> parameter representing the time in milliseconds for which the <tt>Planner</tt> will validate if the constraints in the Plan are satisfied. Long value expected. The default value is one hour. </td></tr>
</tbody>
</table></div>
<div class="section">
<h3><a name="Dynamic_Auto-Creation_and_Management_of_Leaf_Queues"></a>Dynamic Auto-Creation and Management of Leaf Queues</h3>
<p>The <tt>CapacityScheduler</tt> supports auto-creation of <b>leaf queues</b> under parent queues which have been configured to enable this feature.</p>
<ul>
<li>Setup for dynamic auto-created leaf queues through queue mapping</li>
</ul>
<p><b>user-group queue mapping(s)</b> listed in <tt>yarn.scheduler.capacity.queue-mappings</tt> need to specify an additional parent queue parameter to identify which parent queue the auto-created leaf queues should be created under. Refer to the <tt>Queue Mapping based on User or Group</tt> section above for more details. Please note that such parent queues also need to enable auto-creation of child queues, as mentioned in the <tt>Parent queue configuration for dynamic leaf queue creation and management</tt> section below.</p>
<p>Example:</p>
<div>
<div>
<pre class="source"> &lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.queue-mappings&lt;/name&gt;
&lt;value&gt;u:user1:queue1,g:group1:queue2,u:user2:%primary_group,u:%user:parent1.%user&lt;/value&gt;
&lt;description&gt;
Here, u:%user:parent1.%user mapping allows any &lt;user&gt; other than user1,
user2 to be mapped to its own user specific leaf queue which
will be auto-created under &lt;parent1&gt;.
&lt;/description&gt;
&lt;/property&gt;
</pre></div></div>
<ul>
<li>Parent queue configuration for dynamic leaf queue auto-creation and management</li>
</ul>
<p>The <tt>Dynamic Queue Auto-Creation and Management</tt> feature is integrated with the <tt>CapacityScheduler</tt> queue hierarchy and can currently be configured on a <b>ParentQueue</b> to auto-create leaf queues. Such parent queues do not allow other pre-configured queues to co-exist with the auto-created queues. The <tt>CapacityScheduler</tt> supports the following parameters to enable auto-creation of queues.</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.auto-create-child-queue.enabled</tt> </td>
<td align="left"> <i>Mandatory</i> parameter: Indicates to the <tt>CapacityScheduler</tt> that auto leaf queue creation needs to be enabled for the specified parent queue. Boolean value expected. The default value is <i>false</i>, i.e. auto leaf queue creation is not enabled in <i>ParentQueue</i> by default. </td></tr>
<tr class="a">
<td align="left"> <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.auto-create-child-queue.management-policy</tt> </td>
<td align="left"> <i>Optional</i> parameter: the class name that will be used to determine the implementation of the <tt>AutoCreatedQueueManagementPolicy</tt> which will manage leaf queues and their capacities dynamically under this parent queue. The default value is <i>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.queuemanagement.GuaranteedOrZeroCapacityOverTimePolicy</i>. Users or groups might submit applications to the auto-created leaf queues for a limited time and stop using them. Hence there could be more number of leaf queues auto-created under the parent queue than its guaranteed capacity. The current policy implementation allots either configured or zero capacity on a <b>best-effort</b> basis based on availability of capacity on the parent queue and the application submission order across leaf queues. </td></tr>
</tbody>
</table>
<ul>
<li>Configuring <tt>Auto-Created Leaf Queues</tt> with <tt>CapacityScheduler</tt></li>
</ul>
<p>A parent queue which has been enabled for auto leaf queue creation supports the configuration of template parameters for automatic configuration of the auto-created leaf queues. The auto-created queues support all of the leaf queue configuration parameters except for the <b>Queue ACL</b> and <b>Absolute Resource</b> configurations. Queue ACLs are currently inherited from the parent queue, i.e. they are not configurable on the leaf queue template.</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.leaf-queue-template.capacity</tt> </td>
<td align="left"> <i>Mandatory</i> parameter: Specifies the minimum guaranteed capacity for the auto-created leaf queues. Currently <i>Absolute Resource</i> configurations are not supported on auto-created leaf queues </td></tr>
<tr class="a">
<td align="left"> <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.leaf-queue-template.&lt;leaf-queue-property&gt;</tt> </td>
<td align="left"> <i>Optional</i> parameter: For other queue parameters that can be configured on auto-created leaf queues like maximum-capacity, user-limit-factor, maximum-am-resource-percent &#x2026; - Refer <b>Queue Properties</b> section </td></tr>
</tbody>
</table>
<p>Example:</p>
<div>
<div>
<pre class="source"> &lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.root.parent1.auto-create-child-queue.enabled&lt;/name&gt;
&lt;value&gt;true&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.root.parent1.leaf-queue-template.capacity&lt;/name&gt;
&lt;value&gt;5&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.root.parent1.leaf-queue-template.maximum-capacity&lt;/name&gt;
&lt;value&gt;100&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.root.parent1.leaf-queue-template.user-limit-factor&lt;/name&gt;
&lt;value&gt;3.0&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.root.parent1.leaf-queue-template.ordering-policy&lt;/name&gt;
&lt;value&gt;fair&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.root.parent1.GPU.capacity&lt;/name&gt;
&lt;value&gt;50&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.root.parent1.accessible-node-labels&lt;/name&gt;
&lt;value&gt;GPU,SSD&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.root.parent1.leaf-queue-template.accessible-node-labels&lt;/name&gt;
&lt;value&gt;GPU&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;yarn.scheduler.capacity.root.parent1.leaf-queue-template.accessible-node-labels.GPU.capacity&lt;/name&gt;
&lt;value&gt;5&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>
<ul>
<li>Scheduling Edit Policy configuration for auto-created queue management</li>
</ul>
<p>Admins need to add an additional scheduling edit policy, <tt>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueManagementDynamicEditPolicy</tt>, to the comma-separated list of current scheduling edit policies in the <tt>yarn.resourcemanager.scheduler.monitor.policies</tt> configuration. For more details, refer to the <tt>Capacity Scheduler container preemption</tt> section above.</p>
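<p>For example, a yarn-site.xml sketch that combines this policy with the preemption policy might look as follows. The fully qualified class name of <tt>ProportionalCapacityPreemptionPolicy</tt> is assumed here; only <tt>QueueManagementDynamicEditPolicy</tt>'s name is taken verbatim from this document.</p>
<div>
<div>
<pre class="source"> &lt;property&gt;
   &lt;name&gt;yarn.resourcemanager.scheduler.monitor.enable&lt;/name&gt;
   &lt;value&gt;true&lt;/value&gt;
 &lt;/property&gt;
 &lt;property&gt;
   &lt;name&gt;yarn.resourcemanager.scheduler.monitor.policies&lt;/name&gt;
   &lt;!-- Comma-separated list; the package of ProportionalCapacityPreemptionPolicy is assumed. --&gt;
   &lt;value&gt;org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy,org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.QueueManagementDynamicEditPolicy&lt;/value&gt;
 &lt;/property&gt;
</pre></div></div>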
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <tt>yarn.resourcemanager.monitor.capacity.queue-management.monitoring-interval</tt> </td>
<td align="left"> Time in milliseconds between invocations of this QueueManagementDynamicEditPolicy policy. Default value is 1500 </td></tr>
</tbody>
</table></div>
<div class="section">
<h3><a name="Other_Properties"></a>Other Properties</h3>
<ul>
<li>Resource Calculator</li>
</ul>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <tt>yarn.scheduler.capacity.resource-calculator</tt> </td>
<td align="left"> The ResourceCalculator implementation to be used to compare Resources in the scheduler. The default i.e. org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator only uses Memory while DominantResourceCalculator uses Dominant-resource to compare multi-dimensional resources such as Memory, CPU etc. A Java ResourceCalculator class name is expected. </td></tr>
</tbody>
</table>
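<p>For example, a capacity-scheduler.xml sketch switching to the <tt>DominantResourceCalculator</tt> might look like the following; the class is assumed to live in the same package as the default calculator.</p>
<div>
<div>
<pre class="source"> &lt;property&gt;
   &lt;name&gt;yarn.scheduler.capacity.resource-calculator&lt;/name&gt;
   &lt;!-- Package assumed to match the default calculator's package. --&gt;
   &lt;value&gt;org.apache.hadoop.yarn.util.resource.DominantResourceCalculator&lt;/value&gt;
 &lt;/property&gt;
</pre></div></div>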
<ul>
<li>Data Locality</li>
</ul>
<p>Capacity Scheduler leverages <tt>Delay Scheduling</tt> to honor task locality constraints. There are 3 levels of locality constraint: node-local, rack-local and off-switch. The scheduler counts the number of missed opportunities when the locality cannot be satisfied, and waits for this count to reach a threshold before relaxing the locality constraint to the next level. The threshold can be configured with the following properties:</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <tt>yarn.scheduler.capacity.node-locality-delay</tt> </td>
<td align="left"> Number of missed scheduling opportunities after which the CapacityScheduler attempts to schedule rack-local containers. Typically, this should be set to number of nodes in the cluster. By default is setting approximately number of nodes in one rack which is 40. Positive integer value is expected. </td></tr>
<tr class="a">
<td align="left"> <tt>yarn.scheduler.capacity.rack-locality-additional-delay</tt> </td>
<td align="left"> Number of additional missed scheduling opportunities over the node-locality-delay ones, after which the CapacityScheduler attempts to schedule off-switch containers. By default this value is set to -1, in this case, the number of missed opportunities for assigning off-switch containers is calculated based on the formula <tt>L * C / N</tt>, where <tt>L</tt> is number of locations (nodes or racks) specified in the resource request, <tt>C</tt> is the number of requested containers, and <tt>N</tt> is the size of the cluster. </td></tr>
</tbody>
</table>
<p>Note that this feature should be disabled if YARN is deployed separately from the file system, as locality is meaningless in that case. This can be done by setting <tt>yarn.scheduler.capacity.node-locality-delay</tt> to <tt>-1</tt>; in this case, the request&#x2019;s locality constraints are ignored.</p>
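<p>As a sketch, a capacity-scheduler.xml snippet for a hypothetical 100-node cluster might look like this (the cluster size is purely illustrative):</p>
<div>
<div>
<pre class="source"> &lt;!-- Roughly the number of nodes in the (hypothetical) 100-node cluster.
      Use -1 instead to ignore locality constraints entirely, e.g. when YARN
      is deployed separately from the file system. --&gt;
 &lt;property&gt;
   &lt;name&gt;yarn.scheduler.capacity.node-locality-delay&lt;/name&gt;
   &lt;value&gt;100&lt;/value&gt;
 &lt;/property&gt;
</pre></div></div>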
<ul>
<li>Container Allocation per NodeManager Heartbeat</li>
</ul>
<p>The <tt>CapacityScheduler</tt> supports the following parameters to control how many containers can be allocated in each NodeManager heartbeat.</p>
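<p>For instance, a capacity-scheduler.xml sketch capping assignments per heartbeat, using the properties described in the table below, might look like this (the limits are illustrative and should be tuned per cluster):</p>
<div>
<div>
<pre class="source"> &lt;property&gt;
   &lt;name&gt;yarn.scheduler.capacity.per-node-heartbeat.multiple-assignments-enabled&lt;/name&gt;
   &lt;value&gt;true&lt;/value&gt;
 &lt;/property&gt;
 &lt;!-- Illustrative limits; tune for your cluster. --&gt;
 &lt;property&gt;
   &lt;name&gt;yarn.scheduler.capacity.per-node-heartbeat.maximum-container-assignments&lt;/name&gt;
   &lt;value&gt;50&lt;/value&gt;
 &lt;/property&gt;
 &lt;property&gt;
   &lt;name&gt;yarn.scheduler.capacity.per-node-heartbeat.maximum-offswitch-assignments&lt;/name&gt;
   &lt;value&gt;1&lt;/value&gt;
 &lt;/property&gt;
</pre></div></div>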
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <tt>yarn.scheduler.capacity.per-node-heartbeat.multiple-assignments-enabled</tt> </td>
<td align="left"> Whether to allow multiple container assignments in one NodeManager heartbeat. Defaults to true. </td></tr>
<tr class="a">
<td align="left"> <tt>yarn.scheduler.capacity.per-node-heartbeat.maximum-container-assignments</tt> </td>
<td align="left"> If <tt>multiple-assignments-enabled</tt> is true, the maximum amount of containers that can be assigned in one NodeManager heartbeat. Default value is 100, which limits the maximum number of container assignments per heartbeat to 100. Set this value to -1 will disable this limit. </td></tr>
<tr class="b">
<td align="left"> <tt>yarn.scheduler.capacity.per-node-heartbeat.maximum-offswitch-assignments</tt> </td>
<td align="left"> If <tt>multiple-assignments-enabled</tt> is true, the maximum amount of off-switch containers that can be assigned in one NodeManager heartbeat. Defaults to 1, which represents only one off-switch allocation allowed in one heartbeat. </td></tr>
</tbody>
</table></div>
<div class="section">
<h3><a name="Reviewing_the_configuration_of_the_CapacityScheduler"></a>Reviewing the configuration of the CapacityScheduler</h3>
<p>Once installation and configuration are complete, you can review the setup from the web UI after starting the YARN cluster.</p>
<ul>
<li>
<p>Start the YARN cluster in the normal manner.</p>
</li>
<li>
<p>Open the <tt>ResourceManager</tt> web UI.</p>
</li>
<li>
<p>The <i>/scheduler</i> web-page should show the resource usages of individual queues.</p>
</li>
</ul></div></div>
<div class="section">
<h2><a name="Changing_Queue_Configuration"></a>Changing Queue Configuration</h2>
<p>Changing queue/scheduler properties and adding/removing queues can be done in two ways, via file or via API. This behavior can be changed via <tt>yarn.scheduler.configuration.store.class</tt> in yarn-site.xml. Possible values are <i>file</i>, which allows modifying properties via file; <i>memory</i>, which allows modifying properties via API, but does not persist changes across restart; <i>leveldb</i>, which allows modifying properties via API and stores changes in leveldb backing store; and <i>zk</i>, which allows modifying properties via API and stores changes in zookeeper backing store. The default value is <i>file</i>.</p>
<div class="section">
<h3><a name="Changing_queue_configuration_via_file"></a>Changing queue configuration via file</h3>
<p>To edit by file, you need to edit <b>conf/capacity-scheduler.xml</b> and run <i>yarn rmadmin -refreshQueues</i>.</p>
<div>
<div>
<pre class="source">$ vi $HADOOP_CONF_DIR/capacity-scheduler.xml
$ $HADOOP_YARN_HOME/bin/yarn rmadmin -refreshQueues
</pre></div></div>
<div class="section">
<h4><a name="Deleting_queue_via_file"></a>Deleting queue via file</h4>
<p>Step 1: Stop the queue</p>
<p>Before deleting a leaf queue, the leaf queue must not have any running/pending apps and has to be STOPPED by changing <tt>yarn.scheduler.capacity.&lt;queue-path&gt;.state</tt>. See the <a href="CapacityScheduler.html#Queue_Properties">Queue Administration &amp; Permissions</a> section. Before deleting a parent queue, all of its child queues must not have any running/pending apps and have to be STOPPED. The parent queue also needs to be STOPPED.</p>
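<p>As a sketch, assuming a leaf queue named <tt>root.alpha</tt> (a hypothetical name) that is to be deleted, the following capacity-scheduler.xml snippet stops the queue; run <i>yarn rmadmin -refreshQueues</i> afterwards as described above.</p>
<div>
<div>
<pre class="source"> &lt;!-- root.alpha is a hypothetical leaf queue to be deleted. --&gt;
 &lt;property&gt;
   &lt;name&gt;yarn.scheduler.capacity.root.alpha.state&lt;/name&gt;
   &lt;value&gt;STOPPED&lt;/value&gt;
 &lt;/property&gt;
</pre></div></div>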
<p>Step 2: Delete the queue</p>
<p>Remove the queue&#x2019;s configuration from the file and run a refresh as described above.</p></div></div>
<div class="section">
<h3><a name="Changing_queue_configuration_via_API"></a>Changing queue configuration via API</h3>
<p>Editing by API uses a backing store for the scheduler configuration. To enable this, the following parameters can be configured in yarn-site.xml.</p>
<p><b>Note:</b> This feature is in alpha phase and is subject to change.</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <tt>yarn.scheduler.configuration.store.class</tt> </td>
<td align="left"> The type of backing store to use, as described <a href="CapacityScheduler.html#Changing_Queue_Configuration">above</a>. </td></tr>
<tr class="a">
<td align="left"> <tt>yarn.scheduler.configuration.mutation.acl-policy.class</tt> </td>
<td align="left"> An ACL policy can be configured to restrict which users can modify which queues. Default value is <i>org.apache.hadoop.yarn.server.resourcemanager.scheduler.DefaultConfigurationMutationACLPolicy</i>, which only allows YARN admins to make any configuration modifications. Another value is <i>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.QueueAdminConfigurationMutationACLPolicy</i>, which only allows queue modifications if the caller is an admin of the queue. </td></tr>
<tr class="b">
<td align="left"> <tt>yarn.scheduler.configuration.store.max-logs</tt> </td>
<td align="left"> Configuration changes are audit logged in the backing store, if using leveldb or zookeeper. This configuration controls the maximum number of audit logs to store, dropping the oldest logs when exceeded. Default is 1000. </td></tr>
<tr class="a">
<td align="left"> <tt>yarn.scheduler.configuration.leveldb-store.path</tt> </td>
<td align="left"> The storage path of the configuration store when using leveldb. Default value is <i>${hadoop.tmp.dir}/yarn/system/confstore</i>. </td></tr>
<tr class="b">
<td align="left"> <tt>yarn.scheduler.configuration.leveldb-store.compaction-interval-secs</tt> </td>
<td align="left"> The interval for compacting the configuration store in seconds, when using leveldb. Default value is 86400, or one day. </td></tr>
<tr class="a">
<td align="left"> <tt>yarn.scheduler.configuration.zk-store.parent-path</tt> </td>
<td align="left"> The zookeeper root node path for configuration store related information, when using zookeeper. Default value is <i>/confstore</i>. </td></tr>
</tbody>
</table>
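<p>For example, a minimal yarn-site.xml sketch enabling API-based configuration mutation with the leveldb backing store might look like:</p>
<div>
<div>
<pre class="source"> &lt;property&gt;
   &lt;name&gt;yarn.scheduler.configuration.store.class&lt;/name&gt;
   &lt;value&gt;leveldb&lt;/value&gt;
 &lt;/property&gt;
 &lt;!-- Optional: storage path for the leveldb store (shown with its documented default). --&gt;
 &lt;property&gt;
   &lt;name&gt;yarn.scheduler.configuration.leveldb-store.path&lt;/name&gt;
   &lt;value&gt;${hadoop.tmp.dir}/yarn/system/confstore&lt;/value&gt;
 &lt;/property&gt;
</pre></div></div>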
<p><b>Note:</b> When enabling scheduler configuration mutations via <tt>yarn.scheduler.configuration.store.class</tt>, <i>yarn rmadmin -refreshQueues</i> will be disabled, i.e. it will no longer be possible to update configuration via file.</p>
<p>See the <a href="ResourceManagerRest.html#Scheduler_Configuration_Mutation_API">YARN Resource Manager REST API</a> for examples on how to change scheduler configuration via REST, and <a href="YarnCommands.html#schedulerconf">YARN Commands Reference</a> for examples on how to change scheduler configuration via command line.</p></div></div>
<div class="section">
<h2><a name="Updating_a_Container_.28Experimental_-_API_may_change_in_the_future.29"></a>Updating a Container (Experimental - API may change in the future)</h2>
<p>Once an Application Master has received a Container from the Resource Manager, it may request the Resource Manager to update certain attributes of the container.</p>
<p>Currently only two types of container updates are supported:</p>
<ul>
<li><b>Resource Update</b>: The AM can request the RM to update the resource size of the container, for example changing a 2GB, 2 vcore container into a 4GB, 2 vcore container.</li>
<li><b>ExecutionType Update</b>: The AM can request the RM to update the ExecutionType of the container, for example changing the execution type from <i>GUARANTEED</i> to <i>OPPORTUNISTIC</i> or vice versa.</li>
</ul>
<p>This is facilitated by the AM populating the <b>updated_containers</b> field, which is a list of type <b>UpdateContainerRequestProto</b>, in <b>AllocateRequestProto.</b> The AM can make multiple container update requests in the same allocate call.</p>
<p>The schema of the <b>UpdateContainerRequestProto</b> is as follows:</p>
<div>
<div>
<pre class="source">message UpdateContainerRequestProto {
required int32 container_version = 1;
required ContainerIdProto container_id = 2;
required ContainerUpdateTypeProto update_type = 3;
optional ResourceProto capability = 4;
optional ExecutionTypeProto execution_type = 5;
}
</pre></div></div>
<p>The <b>ContainerUpdateTypeProto</b> is an enum:</p>
<div>
<div>
<pre class="source">enum ContainerUpdateTypeProto {
INCREASE_RESOURCE = 0;
DECREASE_RESOURCE = 1;
PROMOTE_EXECUTION_TYPE = 2;
DEMOTE_EXECUTION_TYPE = 3;
}
</pre></div></div>
<p>As constrained by the above enum, the scheduler currently supports changing either the resource OR the executionType of a container in a single update request.</p>
<p>The AM must also provide the latest <b>ContainerProto</b> it received from the RM. This is the container which the RM will attempt to update.</p>
<p>If the RM is able to update the requested container, the updated container will be returned, in the <b>updated_containers</b> list field of type <b>UpdatedContainerProto</b> in the <b>AllocateResponseProto</b> return value of either the same allocate call or in one of the subsequent calls.</p>
<p>The schema of the <b>UpdatedContainerProto</b> is as follows:</p>
<div>
<div>
<pre class="source">message UpdatedContainerProto {
required ContainerUpdateTypeProto update_type = 1;
required ContainerProto container = 2;
}
</pre></div></div>
<p>It specifies the type of container update that was performed on the Container and the updated Container object, which contains an updated token.</p>
<p>The container token can then be used by the AM to ask the corresponding NM to either start the container (if it has not already been started) or update the container using the updated token.</p>
<p>The <b>DECREASE_RESOURCE</b> and <b>DEMOTE_EXECUTION_TYPE</b> container updates are automatic - the AM does not explicitly have to ask the NM to decrease the resources of the container. The other update types require the AM to explicitly ask the NM to update the container.</p>
<p>If the <b>yarn.resourcemanager.auto-update.containers</b> configuration parameter is set to <b>true</b> (false by default), the RM will ensure that all container updates are automatic.</p></div>
<div class="section">
<h2><a name="Activities"></a>Activities</h2>
<p>Scheduling activities are activity messages used for debugging on critical scheduling paths. They can be recorded and exposed via a RESTful API with minor impact on scheduler performance. Currently, two types of activities are supported: <b>scheduler activities</b> and <b>application activities</b>.</p>
<div class="section">
<h3><a name="Scheduler_Activities"></a>Scheduler Activities</h3>
<p>Scheduler activities include useful scheduling info in a scheduling cycle, which illustrates how the scheduler allocates a container. The scheduler activities REST API (<tt>http://rm-http-address:port/ws/v1/cluster/scheduler/activities</tt>) provides a way to enable recording scheduler activities and fetch them from the cache. To eliminate the performance impact, the scheduler automatically disables recording activities at the end of a scheduling cycle; you can query the RESTful API again to get the latest scheduler activities.</p>
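<p>For example, recording and then fetching scheduler activities can be as simple as querying the endpoint twice; replace <tt>rm-http-address:port</tt> with your ResourceManager&#x2019;s HTTP address.</p>
<div>
<div>
<pre class="source"># First call enables recording for the next scheduling cycle.
$ curl http://rm-http-address:port/ws/v1/cluster/scheduler/activities
# Query again to fetch the recorded activities from the cache.
$ curl http://rm-http-address:port/ws/v1/cluster/scheduler/activities
</pre></div></div>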
<p>See the <a href="ResourceManagerRest.html#Scheduler_Activities_API">YARN Resource Manager REST API</a> for query parameters, output structure and examples about scheduler activities.</p></div>
<div class="section">
<h3><a name="Application_Activities"></a>Application Activities</h3>
<p>Application activities include useful scheduling info for a specified application, which illustrates how its requirements are satisfied or skipped. The application activities REST API (<tt>http://rm-http-address:port/ws/v1/cluster/scheduler/app-activities/{appid}</tt>) provides a way to enable recording application activities for a specified application for a few seconds, or to fetch historical application activities from the cache. The available actions, which include &#x201c;refresh&#x201d; and &#x201c;get&#x201d;, can be specified via the &#x201c;actions&#x201d; parameter:</p>
<ul>
<li>Query with parameter &#x201c;actions=refresh&#x201d; will enable recording application activities for the specified application for a certain time (defaults to 3 seconds) and get a simple response like: {&#x201c;appActivities&#x201d;:{&#x201c;applicationId&#x201d;:&#x201c;application_1562308866454_0001&#x201d;,&#x201c;diagnostic&#x201d;:&#x201c;Successfully received action: refresh&#x201d;,&#x201c;timestamp&#x201d;:1562308869253,&#x201c;dateTime&#x201d;:&#x201c;Fri Jul 05 14:41:09 CST 2019&#x201d;}}.</li>
<li>Query with parameter &#x201c;actions=get&#x201d; will not enable recording but directly get historical application activities from cache.</li>
<li>If no actions parameter is specified, default actions are &#x201c;refresh,get&#x201d;, which means both &#x201c;refresh&#x201d; and &#x201c;get&#x201d; will be performed.</li>
</ul>
<p>See the <a href="ResourceManagerRest.html#Application_Activities_API">YARN Resource Manager REST API</a> for query parameters, output structure and examples about application activities.</p></div>
<div class="section">
<h3><a name="Configuration"></a>Configuration</h3>
<p>The CapacityScheduler supports the following parameters to control the cache size and the expiration of scheduler/application activities.</p>
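<p>As an illustrative sketch (these are ResourceManager properties, typically set in yarn-site.xml), the following snippet shortens the time-to-live of application activities from the documented default of 600000 ms to 5 minutes; the value is purely illustrative.</p>
<div>
<div>
<pre class="source"> &lt;property&gt;
   &lt;name&gt;yarn.resourcemanager.activities-manager.app-activities.ttl-ms&lt;/name&gt;
   &lt;!-- Illustrative: keep application activities for 5 minutes instead of the default 10. --&gt;
   &lt;value&gt;300000&lt;/value&gt;
 &lt;/property&gt;
</pre></div></div>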
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <tt>yarn.resourcemanager.activities-manager.cleanup-interval-ms</tt> </td>
<td align="left"> The cleanup interval for activities in milliseconds. Defaults to 5000. </td></tr>
<tr class="a">
<td align="left"> <tt>yarn.resourcemanager.activities-manager.scheduler-activities.ttl-ms</tt> </td>
<td align="left"> Time to live for scheduler activities in milliseconds. Defaults to 600000. </td></tr>
<tr class="b">
<td align="left"> <tt>yarn.resourcemanager.activities-manager.app-activities.ttl-ms</tt> </td>
<td align="left"> Time to live for application activities in milliseconds. Defaults to 600000. </td></tr>
<tr class="a">
<td align="left"> <tt>yarn.resourcemanager.activities-manager.app-activities.max-queue-length</tt> </td>
<td align="left"> Max queue length for app activities. Defaults to 100. </td></tr>
</tbody>
</table></div>
<div class="section">
<h3><a name="Web_UI"></a>Web UI</h3>
<p>Activities info is available on the application attempt page of the RM Web UI, where outstanding requests are aggregated and displayed. Simply click the refresh button to get the latest activities info.</p></div></div>
</div>
</div>
<div class="clear">
<hr/>
</div>
<div id="footer">
<div class="xright">
&#169; 2008-2021
Apache Software Foundation
- <a href="http://maven.apache.org/privacy-policy.html">Privacy Policy</a>.
Apache Maven, Maven, Apache, the Apache feather logo, and the Apache Maven project logos are trademarks of The Apache Software Foundation.
</div>
<div class="clear">
<hr/>
</div>
</div>
</body>
</html>