Apache Spark K8s Operator

Apache Spark™ K8s Operator is a subproject of Apache Spark that extends the Kubernetes resource manager to manage Apache Spark applications via the Operator pattern.

Install Helm Chart

Apache Spark provides a Helm Chart.

helm repo add spark https://apache.github.io/spark-kubernetes-operator
helm repo update
helm install spark spark/spark-kubernetes-operator
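
To confirm that the release installed cleanly, standard Helm and kubectl checks are enough; the operator pod name depends on the chart values, so the commands below are deliberately generic:

# Inspect the Helm release and look for the operator pod (its name varies with chart values)
helm status spark
kubectl get pods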

Building Spark K8s Operator

Spark K8s Operator is built with Gradle. To build without running the tests:

./gradlew build -x test
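
Gradle can also list the other tasks the build exposes; this is a standard Gradle invocation rather than anything specific to this project:

# List available Gradle tasks
./gradlew tasks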

Running Tests

./gradlew build
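
To iterate on a single test class, Gradle's standard --tests filter can be used; the class name below is only a placeholder, not an actual class in this repository:

# Run a single test class (placeholder name)
./gradlew test --tests "org.example.SomeOperatorTest"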

Build Docker Image

./gradlew buildDockerImage
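
Once the task finishes, the image can be listed with Docker; the exact repository name and tag are determined by the Gradle build, so the pattern below is only a guess and the tag printed by the task is authoritative:

# List locally built images (the grep pattern is an assumption)
docker images | grep -i spark-kubernetes-operator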

Install Helm Chart from the source code

helm install spark -f build-tools/helm/spark-kubernetes-operator/values.yaml build-tools/helm/spark-kubernetes-operator/

Run Spark Pi App

$ kubectl apply -f examples/pi.yaml

$ kubectl get sparkapp
NAME   CURRENT STATE      AGE
pi     ResourceReleased   4m10s
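
While the application is running (or afterwards, if its pods are retained), the custom resource and the driver pod can be inspected directly; the pod name below assumes the <name>-<attempt>-driver naming visible in the YuniKorn example further down:

$ kubectl describe sparkapp pi

$ kubectl logs pi-0-driver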

$ kubectl delete sparkapp/pi

Run Spark Cluster

$ kubectl apply -f examples/prod-cluster-with-three-workers.yaml

$ kubectl get sparkcluster
NAME   CURRENT STATE    AGE
prod   RunningHealthy   10s

$ kubectl port-forward prod-master-0 6066 &

$ ./examples/submit-pi-to-prod.sh
{
  "action" : "CreateSubmissionResponse",
  "message" : "Driver successfully submitted as driver-20260110030233-0000",
  "serverSparkVersion" : "4.1.1",
  "submissionId" : "driver-20260110030233-0000",
  "success" : true
}

$ curl http://localhost:6066/v1/submissions/status/driver-20260110030233-0000/
{
  "action" : "SubmissionStatusResponse",
  "driverState" : "FINISHED",
  "serverSparkVersion" : "4.1.1",
  "submissionId" : "driver-20260110030233-0000",
  "success" : true,
  "workerHostPort" : "10.1.1.172:44233",
  "workerId" : "worker-20260110030145-10.1.1.172-44233"
}
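
For reference, the submit script is a thin wrapper around Spark's standalone-mode REST submission endpoint on port 6066. A hand-rolled equivalent is sketched below; the jar path, application arguments, versions, and spark.master URL are illustrative assumptions, and examples/submit-pi-to-prod.sh remains the authoritative reference:

# Illustrative only: submit SparkPi through the standalone REST API.
# The field values (jar path, master URL, versions) are assumptions, not taken from the repo.
curl -X POST http://localhost:6066/v1/submissions/create \
  --header "Content-Type: application/json" \
  --data '{
    "action": "CreateSubmissionRequest",
    "appResource": "local:///opt/spark/examples/jars/spark-examples.jar",
    "mainClass": "org.apache.spark.examples.SparkPi",
    "appArgs": ["1000"],
    "clientSparkVersion": "4.1.1",
    "environmentVariables": {"SPARK_ENV_LOADED": "1"},
    "sparkProperties": {
      "spark.app.name": "SparkPi",
      "spark.submit.deployMode": "cluster",
      "spark.master": "spark://prod-master-0:7077"
    }
  }'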

$ kubectl delete sparkcluster prod
sparkcluster.spark.apache.org "prod" deleted

Run Spark Pi App on Apache YuniKorn scheduler

If you have not yet done so, follow the YuniKorn documentation to install the latest version:

helm repo add yunikorn https://apache.github.io/yunikorn-release

helm repo update

helm install yunikorn yunikorn/yunikorn --namespace yunikorn --version 1.7.0 --create-namespace --set embedAdmissionController=false
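
Before submitting anything, it is worth confirming that the scheduler pod is up; this is a plain kubectl check against the namespace created above:

kubectl get pods -n yunikorn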

Submit a Spark app to the YuniKorn-enabled cluster:

$ kubectl apply -f examples/pi-on-yunikorn.yaml

$ kubectl describe pod pi-on-yunikorn-0-driver
...
Events:
  Type    Reason             Age   From      Message
  ----    ------             ----  ----      -------
  Normal  Scheduling         1s    yunikorn  default/pi-on-yunikorn-0-driver is queued and waiting for allocation
  Normal  Scheduled          1s    yunikorn  Successfully assigned default/pi-on-yunikorn-0-driver to node docker-desktop
  Normal  PodBindSuccessful  1s    yunikorn  Pod default/pi-on-yunikorn-0-driver is successfully bound to node docker-desktop
  Normal  Pulled             0s    kubelet   Container image "apache/spark:4.1.1-scala" already present on machine
  Normal  Created            0s    kubelet   Created container: spark-kubernetes-driver
  Normal  Started            0s    kubelet   Started container spark-kubernetes-driver

$ kubectl delete sparkapp pi-on-yunikorn
sparkapplication.spark.apache.org "pi-on-yunikorn" deleted from default namespace

Clean Up

Check for existing Spark applications and clusters. If any exist, delete them.

$ kubectl get sparkapp
No resources found in default namespace.

$ kubectl get sparkcluster
No resources found in default namespace.

Remove the Helm chart and the CRDs.

helm uninstall spark

kubectl delete crd sparkapplications.spark.apache.org

kubectl delete crd sparkclusters.spark.apache.org
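
To confirm that nothing from the operator is left behind, list any remaining CRDs in the spark.apache.org group:

kubectl get crd | grep spark.apache.org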

Contributing

Please review the Contribution to Spark guide for information on how to get started contributing to the project.