The Apache Flink community is pleased to announce the preview release of the Apache Flink Kubernetes Operator (0.1.0).
The Flink Kubernetes Operator allows users to easily manage their Flink deployment lifecycle using native Kubernetes tooling.
The operator takes care of submitting, savepointing, upgrading and generally managing Flink jobs using the built-in Flink Kubernetes integration. This way users do not have to use the Flink clients (e.g. the CLI) or interact with the Flink jobs manually; they only have to declare the desired deployment specification and the operator takes care of the rest. It also makes it easier to integrate Flink job management with CI/CD tooling.
Core Features
For a detailed [getting started guide]({{< param DocsBaseUrl >}}flink-kubernetes-operator-docs-release-0.1/docs/try-flink-kubernetes-operator/quick-start/) please check the documentation site.
When using the operator, users create FlinkDeployment objects to describe their Flink application and session cluster deployments.
A minimal application deployment YAML would look like this:
{{< highlight yaml >}}
apiVersion: flink.apache.org/v1alpha1
kind: FlinkDeployment
metadata:
  namespace: default
  name: basic-example
spec:
  image: flink:1.14
  flinkVersion: v1_14
  flinkConfiguration:
    taskmanager.numberOfTaskSlots: "2"
  serviceAccount: flink
  jobManager:
    replicas: 1
    resource:
      memory: "2048m"
      cpu: 1
  taskManager:
    resource:
      memory: "2048m"
      cpu: 1
  job:
    jarURI: local:///opt/flink/examples/streaming/StateMachineExample.jar
    parallelism: 2
    upgradeMode: stateless
{{< / highlight >}}
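A session cluster deployment looks very similar: its spec simply omits the `job` section, and jobs can then be submitted to the running session cluster separately. As a sketch (the name and resource values here are illustrative, not from the release):

{{< highlight yaml >}}
# Hypothetical session cluster example: note the absence of a job section.
apiVersion: flink.apache.org/v1alpha1
kind: FlinkDeployment
metadata:
  namespace: default
  name: session-example
spec:
  image: flink:1.14
  flinkVersion: v1_14
  flinkConfiguration:
    taskmanager.numberOfTaskSlots: "2"
  serviceAccount: flink
  jobManager:
    replicas: 1
    resource:
      memory: "2048m"
      cpu: 1
  taskManager:
    resource:
      memory: "2048m"
      cpu: 1
{{< / highlight >}}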
Once applied to the cluster using `kubectl apply -f your-deployment.yaml`, the operator will spin up the application cluster for you. If you would like to upgrade or make changes to your application, you can simply modify the YAML and submit it again; the operator will execute the necessary steps (savepoint, shutdown, redeploy, etc.) to upgrade your application.
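For example, to scale the job you could bump the parallelism in the spec and re-apply the file (a hypothetical edit; the field names match the application example above):

{{< highlight yaml >}}
  # In your-deployment.yaml, change only the field you want to update, e.g.:
  job:
    jarURI: local:///opt/flink/examples/streaming/StateMachineExample.jar
    parallelism: 4   # was 2
    upgradeMode: stateless
{{< / highlight >}}

After another `kubectl apply -f your-deployment.yaml`, the operator detects the spec change and performs the upgrade for you.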
To stop and delete your application cluster, you can simply run `kubectl delete -f your-deployment.yaml`.
You can read more about the [job management features]({{< param DocsBaseUrl >}}flink-kubernetes-operator-docs-release-0.1/docs/custom-resource/job-management/) on the documentation site.
The community is currently working on hardening the core operator logic, stabilizing the APIs and adding the remaining pieces to make the Flink Kubernetes Operator production-ready.
In the upcoming 1.0.0 release you can expect (at least) the following additional features:
In the medium term you can also expect:
Please give the preview release a try, share your feedback on the Flink mailing list and contribute to the project!
The source artifacts and Helm chart are now available on the updated Downloads page of the Flink website.
The official 0.1.0 release archive doubles as a Helm repository that you can easily register locally:
{{< highlight bash >}}
$ helm repo add flink-kubernetes-operator-0.1.0 https://archive.apache.org/dist/flink/flink-kubernetes-operator-0.1.0/
$ helm install flink-kubernetes-operator flink-kubernetes-operator-0.1.0/flink-kubernetes-operator --set webhook.create=false
{{< / highlight >}}
You can also find official Kubernetes Operator Docker images of the new version on Docker Hub.
For more details, check the [updated documentation]({{< param DocsBaseUrl >}}flink-kubernetes-operator-docs-release-0.1/) and the release notes. We encourage you to download the release and share your feedback with the community through the Flink mailing lists or JIRA.
The Apache Flink community would like to thank each and every one of the contributors that have made this release possible:
Aitozi, Biao Geng, Gyula Fora, Hao Xin, Jaegu Kim, Jaganathan Asokan, Junfan Zhang, Marton Balassi, Matyas Orhidi, Nicholas Jiang, Sandor Kelemen, Thomas Weise, Yang Wang, 愚鲤