This page provides instructions for deploying a Fluss cluster on Kubernetes using the Fluss Helm chart. The chart deploys a distributed streaming storage system consisting of CoordinatorServer and TabletServer components.
Before installing the Fluss Helm chart, ensure you have:
:::note
A Fluss cluster deployment requires a running ZooKeeper ensemble. To provide flexibility in deployment and enable reuse of existing infrastructure, the Fluss Helm chart does not include a bundled ZooKeeper cluster. If you don't already have a ZooKeeper ensemble running, the installation steps below include instructions for deploying one using Bitnami's Helm chart.
:::
| Component | Minimum Version | Recommended Version |
|---|---|---|
| Kubernetes | v1.19+ | v1.25+ |
| Helm | v3.8.0+ | v3.18.6+ |
| ZooKeeper | v3.6+ | v3.8+ |
| Apache Fluss (Container Image) | $FLUSS_VERSION$ | $FLUSS_VERSION$ |
| Minikube (Local Development) | v1.25+ | v1.32+ |
| Docker (Local Development) | v20.10+ | v24.0+ |
For local testing, development, and learning, you can deploy Fluss on Minikube.
```bash
# Start Minikube with recommended settings for Fluss
minikube start

# Verify cluster is ready
kubectl cluster-info
```
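If the default VM size is too small for your workload, Minikube also accepts explicit resource flags. A minimal sketch (the CPU and memory values are illustrative, not official sizing guidance):

```bash
# Start Minikube with extra resources (illustrative values, adjust to your machine)
minikube start --cpus=4 --memory=8192
```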
To build images directly in Minikube, you need to configure the Docker CLI to use Minikube's internal Docker daemon:
```bash
# Configure shell to use Minikube's Docker daemon
eval $(minikube docker-env)
```
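To confirm that the Docker CLI now talks to Minikube's daemon rather than your local one, list the running containers; you should see Kubernetes system containers:

```bash
# Containers listed here run inside the Minikube node, not on your host
docker ps
```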
To build custom images, please refer to Custom Container Images.
The following installation steps work both on a distributed Kubernetes cluster and on a local Minikube setup.
To start ZooKeeper, use Bitnami's chart or your own deployment. If you already have a ZooKeeper cluster running, you can skip this step. Example using Bitnami's chart:
```bash
# Add Bitnami repository
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Deploy ZooKeeper
helm install zk bitnami/zookeeper
```
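Before installing Fluss, it helps to confirm that the ZooKeeper pods are ready. The label selector below assumes the standard labels set by the Bitnami chart:

```bash
# Wait until the ZooKeeper pods report Running/Ready (label assumes Bitnami chart defaults)
kubectl get pods -l app.kubernetes.io/name=zookeeper
```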
To install Fluss from the Apache Helm chart repository:

```bash
helm repo add fluss https://downloads.apache.org/incubator/fluss/helm-chart
helm repo update
helm install fluss fluss/fluss
```
Alternatively, install from a local checkout of the chart source:

```bash
helm install fluss ./helm
```
You can customize the installation by providing your own values.yaml file or setting individual parameters via the --set flag. Using a custom values file:
```bash
helm install fluss ./helm -f my-values.yaml
```
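As an illustrative sketch of what such a file might contain (parameter names are taken from the reference tables below; the values themselves are placeholders):

```yaml
# my-values.yaml (illustrative values only)
persistence:
  enabled: true
  size: 10Gi
configurationOverrides:
  zookeeper.address: "zk-zookeeper.default.svc.cluster.local:2181"
  default.replication.factor: 3
```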
Or, for example, to change the ZooKeeper address via the --set flag:
```bash
helm install fluss ./helm \
  --set configurationOverrides.zookeeper.address=<my-zk-cluster>:2181
```
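To check which overrides a release was installed with, Helm can print the user-supplied values:

```bash
# Show the values supplied for the fluss release
helm get values fluss
```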
```bash
# Uninstall Fluss
helm uninstall fluss

# Uninstall ZooKeeper
helm uninstall zk

# Delete PVCs
kubectl delete pvc -l app.kubernetes.io/name=fluss

# Stop Minikube
minikube stop

# Delete Minikube cluster
minikube delete
```
The Fluss Helm chart deploys the following Kubernetes resources:

- StatefulSets for the CoordinatorServer and TabletServer components
- Services for internal cluster communication and external client access
- A ConfigMap containing the server.yaml settings
- PersistentVolumeClaims when persistence.enabled=true

To verify the deployment:

```bash
# Check pod status
kubectl get pods -l app.kubernetes.io/name=fluss

# Check services
kubectl get svc -l app.kubernetes.io/name=fluss

# View logs
kubectl logs -l app.kubernetes.io/component=coordinator
kubectl logs -l app.kubernetes.io/component=tablet
```
The following tables list the configurable parameters of the Fluss chart and their default values.
General chart parameters:

| Parameter | Description | Default |
|---|---|---|
| nameOverride | Override the name of the chart | "" |
| fullnameOverride | Override the full name of the resources | "" |
Image parameters:

| Parameter | Description | Default |
|---|---|---|
| image.registry | Container image registry | "" |
| image.repository | Container image repository | fluss |
| image.tag | Container image tag | $FLUSS_VERSION$ |
| image.pullPolicy | Container image pull policy | IfNotPresent |
| image.pullSecrets | Container image pull secrets | [] |
Application ports:

| Parameter | Description | Default |
|---|---|---|
| appConfig.internalPort | Internal communication port | 9123 |
| appConfig.externalPort | External client port | 9124 |
Configuration overrides:

| Parameter | Description | Default |
|---|---|---|
| configurationOverrides.default.bucket.number | Default number of buckets for tables | 3 |
| configurationOverrides.default.replication.factor | Default replication factor | 3 |
| configurationOverrides.zookeeper.path.root | ZooKeeper root path for Fluss | /fluss |
| configurationOverrides.zookeeper.address | ZooKeeper ensemble address | zk-zookeeper.{{ .Release.Namespace }}.svc.cluster.local:2181 |
| configurationOverrides.remote.data.dir | Remote data directory for snapshots | /tmp/fluss/remote-data |
| configurationOverrides.data.dir | Local data directory | /tmp/fluss/data |
| configurationOverrides.internal.listener.name | Internal listener name | INTERNAL |
Persistence:

| Parameter | Description | Default |
|---|---|---|
| persistence.enabled | Enable persistent volume claims | false |
| persistence.size | Persistent volume size | 1Gi |
| persistence.storageClass | Storage class name | nil (uses default) |
Resources:

| Parameter | Description | Default |
|---|---|---|
| resources.coordinatorServer.requests.cpu | CPU requests for coordinator | Not set |
| resources.coordinatorServer.requests.memory | Memory requests for coordinator | Not set |
| resources.coordinatorServer.limits.cpu | CPU limits for coordinator | Not set |
| resources.coordinatorServer.limits.memory | Memory limits for coordinator | Not set |
| resources.tabletServer.requests.cpu | CPU requests for tablet servers | Not set |
| resources.tabletServer.requests.memory | Memory requests for tablet servers | Not set |
| resources.tabletServer.limits.cpu | CPU limits for tablet servers | Not set |
| resources.tabletServer.limits.memory | Memory limits for tablet servers | Not set |
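Resource requests and limits are unset by default. A minimal sketch of how the parameter paths above nest in a values file (the amounts are illustrative only):

```yaml
resources:
  coordinatorServer:
    requests:
      cpu: "500m"
      memory: 1Gi
    limits:
      cpu: "1"
      memory: 2Gi
  tabletServer:
    requests:
      cpu: "1"
      memory: 2Gi
    limits:
      cpu: "2"
      memory: 4Gi
```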
For external ZooKeeper clusters:
```yaml
configurationOverrides:
  zookeeper.address: "zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181"
  zookeeper.path.root: "/my-fluss-cluster"
```
The chart automatically configures listeners for internal cluster communication and external client access.
Custom listener configuration:
```yaml
appConfig:
  internalPort: 9123
  externalPort: 9124
configurationOverrides:
  bind.listeners: "INTERNAL://0.0.0.0:9123,CLIENT://0.0.0.0:9124"
  advertised.listeners: "CLIENT://my-cluster.example.com:9124"
```
Configure different storage backends:
```yaml
configurationOverrides:
  data.dir: "/data/fluss"
  remote.data.dir: "s3://my-bucket/fluss-data"
```
```bash
# Upgrade to a newer chart version
helm upgrade fluss ./helm

# Upgrade with new configuration
helm upgrade fluss ./helm -f values-new.yaml
```
The StatefulSets support rolling updates. When you update the configuration, pods will be restarted one by one to maintain availability.
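To follow a rolling restart as it progresses, kubectl can watch the StatefulSets. The names below are an assumption based on the pod names used in the logging examples later on this page (coordinator-server-0, tablet-server-0):

```bash
# Watch the rolling update complete (StatefulSet names assumed from the pod names)
kubectl rollout status statefulset/coordinator-server
kubectl rollout status statefulset/tablet-server
```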
To build and use custom Fluss images:
```bash
mvn clean package -DskipTests
```
```bash
# Copy build artifacts
cp -r build-target/* docker/fluss/build-target

# Build image
cd docker
docker build -t my-registry/fluss:custom-tag .
```
```yaml
image:
  registry: my-registry
  repository: fluss
  tag: custom-tag
  pullPolicy: Always
```
The chart includes liveness and readiness probes:
```yaml
livenessProbe:
  tcpSocket:
    port: 9124
  initialDelaySeconds: 10
  periodSeconds: 3
  failureThreshold: 100
readinessProbe:
  tcpSocket:
    port: 9124
  initialDelaySeconds: 10
  periodSeconds: 3
  failureThreshold: 100
```
Access logs from different components:
```bash
# Coordinator logs
kubectl logs -l app.kubernetes.io/component=coordinator -f

# Tablet server logs
kubectl logs -l app.kubernetes.io/component=tablet -f

# Specific pod logs
kubectl logs coordinator-server-0 -f
kubectl logs tablet-server-0 -f
```
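If a container has restarted, the logs of the previous run are usually more informative than the current ones:

```bash
# Logs from the previous (crashed) container instance
kubectl logs coordinator-server-0 --previous
```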
Symptoms: Pods stuck in Pending or CrashLoopBackOff state
Solutions:
```bash
# Check pod events
kubectl describe pod <pod-name>

# Check resource availability
kubectl describe nodes

# Verify ZooKeeper connectivity
kubectl exec -it <fluss-pod> -- nc -zv <zookeeper-host> 2181
```
Symptoms: ImagePullBackOff or ErrImagePull
Solutions:
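Typical checks are to read the exact pull error from the pod events and to verify the image reference and pull secrets configured for the release; the secret name below is a placeholder:

```bash
# Read the pull error reported in the pod events
kubectl describe pod <pod-name>

# Confirm the image registry, repository, and tag the release is using
helm get values fluss

# If the registry is private, make sure the pull secret exists
kubectl get secret <my-registry-secret>
```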
Symptoms: Clients cannot connect to Fluss cluster
Solutions:
```bash
# Check service endpoints
kubectl get endpoints

# Test network connectivity
kubectl exec -it <client-pod> -- nc -zv <fluss-service> 9124

# Verify DNS resolution
kubectl exec -it <client-pod> -- nslookup <fluss-service>
```
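For a quick check from your workstation, you can also forward the external client port locally; the service name is a placeholder for whatever the chart created in your namespace:

```bash
# Forward the external client port to localhost (service name is a placeholder)
kubectl port-forward svc/<fluss-service> 9124:9124
```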
```bash
# Get all resources
kubectl get all -l app.kubernetes.io/name=fluss

# Check configuration
kubectl get configmap fluss-conf-file -o yaml

# Get detailed pod information
kubectl get pods -o wide -l app.kubernetes.io/name=fluss
```