
# Authorization use cases

YuniKorn offers a range of features, including advanced capabilities like hierarchical resource queues, access control lists, resource limits, preemption, priority, and placement rules for managing your cluster. This page presents real-world scenarios that demonstrate the practical application of these features.

The following use cases are covered in this article:

- Prerequisites
- Access control with ACL
- Placement of different users
- Limit usable resources on a queue level
- Preemption and priority scheduling with fencing

## Prerequisites

Before configuring `yunikorn-configs.yaml`, we need to create users using Kubernetes Authentication and RBAC.

To create the users needed for the examples, run `./create-user.sh`.
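For reference, here is a minimal sketch of what such a script typically does, using certificate-based authentication; the user and group names, file paths, cluster name, and RBAC scope are assumptions, and the actual `create-user.sh` may differ:

```sh
# Generate a private key and a CSR for user "sue" in group "group-a".
openssl genrsa -out sue.key 2048
openssl req -new -key sue.key -out sue.csr -subj "/CN=sue/O=group-a"

# Sign the CSR with the cluster CA (paths are assumptions).
openssl x509 -req -in sue.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out sue.crt -days 365

# Register the credentials and a context for the new user.
kubectl config set-credentials sue --client-certificate=sue.crt --client-key=sue.key
kubectl config set-context sue-context --cluster=kubernetes --user=sue

# Grant the user permission to manage pods via RBAC.
kubectl create rolebinding sue-edit --clusterrole=edit --user=sue --namespace=default
```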

After a user is created, confirm that the setup works by listing pods with the new user's context:

```sh
kubectl --context=sue-context get pod
```

In our use cases, we frequently simulate different users deploying YAML files. To accomplish this, we use the `--context` flag to select the appropriate user for each deployment:

```sh
kubectl --context=sue-context apply -f ./acl/nginx-1.yaml
```

When you are done testing, run `./remove-user.sh` to delete all users.

## Access control with ACL

In `yunikorn-configs.yaml`, we use `adminacl` to restrict access to a queue to authorized users. Note the leading space in the ACL values below: an ACL is a comma-separated user list followed by a space and a comma-separated group list, so `" admin"` grants access to the group `admin` rather than to a user named `admin`.

See the documentation on User & Group Resolution or ACLs for more information.

```yaml
queues:
  - name: root
    queues:
      - name: system
        adminacl: " admin"
      - name: tenants
        queues:
          - name: group-a
            adminacl: " group-a"
          - name: group-b
            adminacl: " group-b"
```

In the test case, users specify the queue they want to use, and the scheduler checks whether the user's application is permitted to be deployed to that queue.
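An application selects its queue through pod labels. For illustration, here is a minimal sketch of what a deployment such as `nginx-1.yaml` might contain; the label values follow YuniKorn's conventions, and the actual file may differ:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-1
  template:
    metadata:
      labels:
        app: nginx-1
        applicationId: nginx-1-app   # groups the pods into one YuniKorn application (assumed id)
        queue: root.tenants.group-a  # the queue the user asks to run in
    spec:
      schedulerName: yunikorn        # hand scheduling over to YuniKorn
      containers:
        - name: nginx
          image: nginx:latest
```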

The following example illustrates this scenario, along with the expected test results:

| user, group | assigned queue | result | YAML filename |
| --- | --- | --- | --- |
| sue, group-a | root.tenants.group-a | created | nginx-1 |
| sue, group-a | root.tenants.group-b | blocked | nginx-1 |
| kim, group-b | root.tenants.group-a | blocked | nginx-2 |
| kim, group-b | root.tenants.group-b | created | nginx-2 |
| anonymous, anonymous | root.tenants.group-a | blocked | nginx-3 |
| anonymous, anonymous | root.tenants.group-b | blocked | nginx-3 |
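For example, the kim rows can be reproduced by deploying as kim, assuming `create-user.sh` created a `kim-context`:

```sh
kubectl --context=kim-context apply -f ./acl/nginx-2.yaml
```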

## Placement of different users

In `yunikorn-configs.yaml`, we use `placementrules` to let the scheduler dynamically assign applications to queues, creating new queues if needed.

See the documentation on App Placement Rules for more information.

```yaml
placementrules:
  - name: provided
    create: true
    filter:
      type: allow
      users:
        - admin
      groups:
        - admin
    parent:
      name: fixed
      value: root.system
```
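The table below also exercises a user-name rule and a `tag` rule that the snippet above does not show. Here is a sketch of how those might be configured, based on YuniKorn's `user` and `tag` placement-rule types; the filters and fixed parents are assumptions:

```yaml
placementrules:
  - name: user                # queue named after the submitting user
    create: true
    filter:
      type: allow
      groups:
        - group-a
    parent:
      name: fixed
      value: root.tenants.group-a
  - name: tag                 # queue named after a pod tag, here the namespace
    value: namespace
    create: true
    filter:
      type: allow
      groups:
        - group-b
    parent:
      name: fixed
      value: root.tenants.group-b
```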

In the test case, the user doesn't need to specify the queue for their application. Instead, the scheduler will utilize the placement rules to assign the application to the appropriate queue. If needed, the scheduler will create new queues.

The following example illustrates this scenario, along with the expected test results:

| placement rule | user, group | provided queue | namespace | expected to be placed on | YAML filename |
| --- | --- | --- | --- | --- | --- |
| provided | admin, admin | root.system.high-priority | | root.system.high-priority | nginx-admin.yaml |
| provided | admin, admin | root.system.low-priority | | root.system.low-priority | nginx-admin.yaml |
| username | sue, group-a | | | root.tenants.group-a.sue | nginx-sue.yaml |
| tag (value: namespace) | kim, group-b | | dev | root.tenants.group-b.dev | nginx-kim.yaml |
| tag (value: namespace) | kim, group-b | | test | root.tenants.group-b.test | nginx-kim.yaml |
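To reproduce the kim rows, the `dev` and `test` namespaces must exist; assuming they are not created by the manifest itself:

```sh
kubectl create namespace dev
kubectl create namespace test
kubectl --context=kim-context apply -f ./placement-rules/nginx-kim.yaml
```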

## Limit usable resources on a queue level

In `yunikorn-configs.yaml`, we use `resources` to limit and reserve the amount of resources per queue: `guaranteed` reserves a minimum amount for the queue, while `max` caps its total usage.

See the documentation on Partition and Queue Configuration #Resources for more information.

```yaml
queues:
  - name: system
    adminacl: " admin"
    resources:
      guaranteed:
        {memory: 2G, vcore: 2}
      max:
        {memory: 6G, vcore: 6}
```

In the test case, users may request more resources than the queue allows; the scheduler blocks the replicas that would exceed the queue's limits.
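For illustration, a sketch of how one replica's resource request might look in a deployment such as `nginx-sue.yaml`; the labels and application id are assumptions, modeled on the sue row below:

```yaml
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: nginx-sue
        applicationId: nginx-sue-app   # assumed application id
        queue: root.tenants.group-a
    spec:
      schedulerName: yunikorn
      containers:
        - name: nginx
          image: nginx:latest
          resources:
            requests:
              memory: "512M"
              cpu: "500m"              # vcore: 500m in YuniKorn terms
```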

The following example illustrates this scenario, along with the expected test results:

| user, group | resource limits for destination queue | requested resources per replica | replicas | result | YAML filename |
| --- | --- | --- | --- | --- | --- |
| admin, admin | {memory: 6G, vcore: 6} | {memory: 512M, vcore: 250m} | 1 | all replicas run | nginx-admin.yaml |
| sue, group-a | {memory: 2G, vcore: 4} | {memory: 512M, vcore: 500m} | 5 | 3 replicas run (a 4th would exceed the resource limit) | nginx-sue.yaml |

In the sue case, each replica requests 512M of memory, so a fourth replica would bring the total to 2048M, just over the queue's 2G maximum; only three replicas are scheduled.
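One way to observe the sue result, assuming the manifest lives under `./resource-limits/`:

```sh
kubectl --context=sue-context apply -f ./resource-limits/nginx-sue.yaml
# Expect three pods Running; the remaining two stay Pending.
kubectl --context=sue-context get pods
```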

## Preemption and priority scheduling with fencing

In `yunikorn-configs.yaml`, we use `priority.offset` and `priority.policy` to configure the priority of a queue.

See the documentation on App & Queue Priorities for more information.

```yaml
- name: tenants
  properties:
    priority.policy: "fence"
  queues:
    - name: group-a
      adminacl: " group-a"
      properties:
        priority.offset: "20"
```
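Case 2 below also involves a `group-b` queue with its own fenced offset. A sketch of how that queue might extend the `queues` list above, with the offset value taken from the case 2 table:

```yaml
    - name: group-b
      adminacl: " group-b"
      properties:
        priority.offset: "5"
```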

In a resource-constrained environment, we deploy applications to three queues simultaneously, each with a different priority. The scheduler then admits applications according to the priority of their queue.

In the following tests, we run the environment with a node resource limit of {memory: 16GB, vcore: 16}. Note that results will vary with the environment; you can modify the provided YAML files to achieve similar results.

The following example illustrates this scenario, along with the expected test results:

### Case 1

| queue | offset | # of deployed apps | # of apps accepted by YuniKorn | YAML filename |
| --- | --- | --- | --- | --- |
| root.system.low-priority | 1000 | 8 | 8 | system.yaml |
| root.system.normal-priority | 0 | 8 | 5 | system.yaml |
| root.system.high-priority | -1000 | 8 | 0 | system.yaml |

### Case 2

NOTE: You will need to deploy all of the following YAML files simultaneously. Because `root.tenants` sets `priority.policy: "fence"`, the offsets of `group-a` and `group-b` are compared only inside the fence and are not propagated outside it, while `root.system.normal-priority` uses an unfenced, global offset.

| queue | offset | # of deployed apps | # of apps accepted by YuniKorn | YAML filename |
| --- | --- | --- | --- | --- |
| root.system.normal-priority | 0 (global) | 7 | 7 | nginx-admin.yaml |
| root.tenants.group-a | 20 (fenced) | 7 | 6 | nginx-sue.yaml |
| root.tenants.group-b | 5 (fenced) | 7 | 0 | nginx-kim.yaml |
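For example, assuming the manifests live under `./priority/` and the contexts were created by `create-user.sh`:

```sh
kubectl --context=admin-context apply -f ./priority/nginx-admin.yaml
kubectl --context=sue-context apply -f ./priority/nginx-sue.yaml
kubectl --context=kim-context apply -f ./priority/nginx-kim.yaml
```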