commit bb809ec2fcef7301611ff394ca64c65bff0e17ec
author    Dongjoon Hyun <dongjoon@apache.org>  Fri Oct 04 16:31:03 2024 -0700
committer Dongjoon Hyun <dongjoon@apache.org>  Fri Oct 04 16:31:03 2024 -0700
tree      ccfd98ac08ff687422e1247834026ee8a63a2046
parent    0b9d17feaa218b07904fbeed1f6037ec970d15ff

[SPARK-49778][FOLLOWUP] Fix `cluster-with-template.yaml` by removing `master` prefix

### What changes were proposed in this pull request?

This is a follow-up of the following to fix `cluster-with-template.yaml` by removing the `master` prefix.

- #136

### Why are the changes needed?

To make it valid.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Manual tests.

```
$ kubectl apply -f examples/cluster-with-template.yaml
sparkcluster.spark.apache.org/cluster-with-template created
```

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #143 from dongjoon-hyun/SPARK-49778-2.

Authored-by: Dongjoon Hyun <dongjoon@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
Apache Spark™ K8s Operator is a subproject of Apache Spark that aims to extend the Kubernetes resource manager to manage Apache Spark applications via the operator pattern.
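With the operator pattern, applications are declared as custom resources and reconciled by the operator. A minimal manifest might look like the sketch below; the `kind` and API group match the CRDs installed by the Helm chart, but the `apiVersion` suffix and all `spec` fields are illustrative assumptions only, so consult `examples/pi.yaml` in the repository for the authoritative schema.

```yaml
# Hypothetical sketch of a SparkApplication custom resource.
# The API group matches the sparkapplications.spark.apache.org CRD;
# the apiVersion suffix and every spec field below are assumptions.
apiVersion: spark.apache.org/v1alpha1   # version suffix is an assumption
kind: SparkApplication
metadata:
  name: pi
spec:
  mainClass: org.apache.spark.examples.SparkPi   # illustrative field name
  sparkConf:
    spark.executor.instances: "2"                # illustrative field name
```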
Spark K8s Operator is built using Gradle.

To build without running tests:

```
$ ./gradlew build -x test
```

To build with tests:

```
$ ./gradlew build
```

To build the Docker image:

```
$ ./gradlew buildDockerImage
```
```
$ ./gradlew spark-operator-api:relocateGeneratedCRD

$ helm install spark-kubernetes-operator --create-namespace \
    -f build-tools/helm/spark-kubernetes-operator/values.yaml \
    build-tools/helm/spark-kubernetes-operator/
```
```
$ kubectl apply -f examples/pi.yaml

$ kubectl get sparkapp
NAME   CURRENT STATE      AGE
pi     ResourceReleased   4m10s

$ kubectl delete sparkapp/pi
```
```
$ kubectl apply -f examples/prod-cluster-with-three-workers.yaml

$ kubectl get sparkcluster
NAME   CURRENT STATE    AGE
prod   RunningHealthy   10s

$ kubectl port-forward prod-master-0 6066 &

$ ./examples/submit-pi-to-prod.sh
{
  "action" : "CreateSubmissionResponse",
  "message" : "Driver successfully submitted as driver-20240821181327-0000",
  "serverSparkVersion" : "4.0.0-preview2",
  "submissionId" : "driver-20240821181327-0000",
  "success" : true
}

$ curl http://localhost:6066/v1/submissions/status/driver-20240821181327-0000/
{
  "action" : "SubmissionStatusResponse",
  "driverState" : "FINISHED",
  "serverSparkVersion" : "4.0.0-preview2",
  "submissionId" : "driver-20240821181327-0000",
  "success" : true,
  "workerHostPort" : "10.1.5.188:42099",
  "workerId" : "worker-20240821181236-10.1.5.188-42099"
}

$ kubectl delete sparkcluster prod
sparkcluster.spark.apache.org "prod" deleted
```
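To script the submit-then-poll flow above, the `submissionId` can be extracted from the `CreateSubmissionResponse` JSON and fed into the status endpoint. A minimal sketch, using the sample response shown above as input (in practice you would capture the output of `./examples/submit-pi-to-prod.sh` instead), and plain `sed` to avoid a `jq` dependency:

```shell
# Sample CreateSubmissionResponse, copied from the run shown above.
response='{
  "action" : "CreateSubmissionResponse",
  "message" : "Driver successfully submitted as driver-20240821181327-0000",
  "serverSparkVersion" : "4.0.0-preview2",
  "submissionId" : "driver-20240821181327-0000",
  "success" : true
}'

# Pull the submissionId field out of the JSON with sed.
submission_id=$(printf '%s\n' "$response" \
  | sed -n 's/.*"submissionId" : "\([^"]*\)".*/\1/p')
echo "$submission_id"

# The id can then be used against the status endpoint, e.g.:
#   curl http://localhost:6066/v1/submissions/status/$submission_id/
```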
```
$ helm install yunikorn yunikorn/yunikorn --namespace yunikorn \
    --version 1.5.2 --create-namespace --set embedAdmissionController=false

$ kubectl apply -f examples/pi-on-yunikorn.yaml

$ kubectl describe pod pi-on-yunikorn-0-driver
...
Events:
  Type    Reason             Age   From      Message
  ----    ------             ----  ----      -------
  Normal  Scheduling         14s   yunikorn  default/pi-on-yunikorn-0-driver is queued and waiting for allocation
  Normal  Scheduled          14s   yunikorn  Successfully assigned default/pi-on-yunikorn-0-driver to node docker-desktop
  Normal  PodBindSuccessful  14s   yunikorn  Pod default/pi-on-yunikorn-0-driver is successfully bound to node docker-desktop
  Normal  TaskCompleted      6s    yunikorn  Task default/pi-on-yunikorn-0-driver is completed
  Normal  Pulled             13s   kubelet   Container image "apache/spark:4.0.0-preview2" already present on machine
  Normal  Created            13s   kubelet   Created container spark-kubernetes-driver
  Normal  Started            13s   kubelet   Started container spark-kubernetes-driver

$ kubectl delete sparkapp pi-on-yunikorn
sparkapplication.spark.apache.org "pi-on-yunikorn" deleted
```
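As the events show, the driver pod is scheduled by YuniKorn rather than the default scheduler. In plain Kubernetes terms, a pod opts into YuniKorn via the standard `schedulerName` field; how `examples/pi-on-yunikorn.yaml` wires this through the operator is defined by that file, not by this sketch, so the fragment below is a generic, hypothetical illustration only.

```yaml
# Generic Kubernetes illustration (not the operator's schema): a pod
# handed to the YuniKorn scheduler via the standard schedulerName field.
apiVersion: v1
kind: Pod
metadata:
  name: scheduled-by-yunikorn    # hypothetical name
  labels:
    applicationId: my-app-0001   # YuniKorn groups pods by application id
spec:
  schedulerName: yunikorn        # route scheduling decisions to YuniKorn
  containers:
    - name: main
      image: apache/spark:4.0.0-preview2
```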
You can currently try the spark-kubernetes-operator nightly version as follows.
```
$ helm install spark-kubernetes-operator \
    https://nightlies.apache.org/spark/charts/spark-kubernetes-operator-0.1.0-SNAPSHOT.tgz
```
Check for existing Spark applications and clusters. If any exist, delete them.
```
$ kubectl get sparkapp
No resources found in default namespace.

$ kubectl get sparkcluster
No resources found in default namespace.
```
Remove the Helm chart and the CRDs.
```
$ helm uninstall spark-kubernetes-operator

$ kubectl delete crd sparkapplications.spark.apache.org
$ kubectl delete crd sparkclusters.spark.apache.org
```
If you installed a nightly build, also remove the snapshot image.
```
$ docker rmi apache/spark-kubernetes-operator:main-snapshot
```
Please review the Contribution to Spark guide for information on how to get started contributing to the project.