This project provides Helm Charts for installing Apache Pulsar on Kubernetes.
Read Deploying Pulsar on Kubernetes for more details.
:warning: This Helm chart is updated outside of the regular Pulsar release cycle and might lag behind a bit. It currently supports only basic Kubernetes features, so treat it as no more than a template and starting point for a Kubernetes deployment; in many cases it will require customization.
This Helm chart's default configuration DOES NOT meet production security requirements. Users MUST review and customize security settings for their specific environment.
IMPORTANT: This Helm chart provides a starting point for Pulsar deployments, but it requires significant security customization before use in production environments. We strongly recommend implementing the measures described in the sections below.
As an open source project, we welcome contributions to improve security features. Please consider submitting pull requests to address security gaps or enhance existing security implementations.
The Pulsar Proxy documentation explicitly states that the Pulsar proxy is not designed for exposure to the public internet. The design assumes that deployments will be protected by network perimeter security measures. It is crucial to understand that relying solely on the default configuration can expose your deployment to significant security vulnerabilities.
This version introduces OpenID authentication. Setting `auth.authentication.provider` is no longer supported; you need to enable the provider with `auth.authentication.<provider>.enabled`.
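For illustration, a minimal sketch of what enabling a provider might look like in `values.yaml` (the provider key names such as `openid` and `jwt` are examples; verify the available keys and any provider-specific settings against the chart's values.yaml for your version):

```yaml
auth:
  authentication:
    enabled: true
    # enable exactly the provider(s) you need; key names below are examples,
    # verify them against the chart's values.yaml
    openid:
      enabled: true
    jwt:
      enabled: false
```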
The default service type for the Pulsar proxy has changed from `LoadBalancer` to `ClusterIP` for security reasons. This limits access to within the Kubernetes environment by default.
If you need to expose the Pulsar Proxy outside the cluster:
1. USE INTERNAL LOAD BALANCERS ONLY

   If you must expose the proxy through a `LoadBalancer` service, mark it as internal using the annotation for your cloud provider (AWS, Azure, and GKE, respectively):

   ```yaml
   service.beta.kubernetes.io/aws-load-balancer-internal: "true"
   service.beta.kubernetes.io/azure-load-balancer-internal: "true"
   networking.gke.io/load-balancer-type: "Internal"
   ```
2. IMPLEMENT AUTHENTICATION AND AUTHORIZATION
3. USE TLS FOR ALL CONNECTIONS
4. NETWORK SECURITY
5. CLIENT IP ADDRESS BASED ACCESS RESTRICTIONS

   Configure `proxy.service.loadBalancerSourceRanges` in your `values.yaml`:

   ```yaml
   proxy:
     service:
       loadBalancerSourceRanges:
         - 10.0.0.0/8      # Private network range
         - 172.16.0.0/12   # Private network range
         - 192.168.0.0/16  # Private network range
   ```

   Be aware of the limitations of the `loadBalancerSourceRanges` parameter (see the disclaimer below).

As an alternative method for external access, Pulsar has support for SNI proxy routing. SNI proxy routing is supported with proxy servers such as Apache Traffic Server, HAProxy, and Nginx.
Note: This option isn't currently implemented in the Apache Pulsar Helm chart.
IMPORTANT: The Pulsar binary protocol cannot be exposed outside of the Kubernetes cluster using Kubernetes Ingress. Kubernetes Ingress works for the Admin REST API and topic lookups, but clients would be connecting to the advertised listener addresses returned by the brokers, so it would only work when clients can connect directly to the brokers. This is not a supported secure option for exposing Pulsar to untrusted networks.
`loadBalancerSourceRanges` may be employed to restrict access to specified IP addresses or ranges. This, however, should not be considered a substitute for comprehensive security measures in production environments.

The user assumes full responsibility for the security and integrity of their deployment. This includes, but is not limited to, the proper configuration of security features and adherence to best practices for securing network access. The providers of this Helm chart disclaim all warranties, whether express or implied, including any warranties of merchantability, fitness for a particular purpose, and non-infringement of third-party rights.
The providers of this Helm chart make no guarantees regarding the security of the chart under any circumstances. It is the user's responsibility to ensure that their deployment is secure and complies with all relevant security standards and regulations.
By using this Helm chart, the user acknowledges the risks associated with its default configuration and the necessity for proper security customization. The user further agrees that the providers of the Helm chart shall not be liable for any security breaches or incidents resulting from the use of the chart.
This Helm Chart includes all the components of Apache Pulsar for a complete experience.

It includes support for TLS; note that the ZooKeeper TLS configuration requires the `AdditionalCertificateOutputFormats=true` feature gate to be enabled in the cert-manager deployment when using cert-manager versions below 1.15.0.

In order to use this chart to deploy Apache Pulsar on Kubernetes, the following are required.
Before proceeding to deploy Pulsar, you need to prepare your environment. `helm` and `kubectl` need to be installed on your computer.
To add this chart to your local Helm repository:
```bash
helm repo add apachepulsar https://pulsar.apache.org/charts
helm repo update
```
You need a Kubernetes cluster whose version is 1.25 or higher in order to use this chart, due to the usage of certain Kubernetes features.
We provide some instructions to guide you through the preparation: http://pulsar.apache.org/docs/helm-prepare/
Configure your values file. The best way to know which values are available is to read the values.yaml. A best practice is to start with an empty values file and only set the keys that differ from the default configuration.
Anti-affinity rules for the ZooKeeper and Bookie components require at least one node per replica. For Kubernetes clusters with fewer than 3 nodes, you must disable this feature by adding this to your initial values.yaml file:
```yaml
affinity:
  anti_affinity: false
```
Install the chart:
helm install -n <namespace> --create-namespace <release-name> -f your-values.yaml apachepulsar/pulsar
Observe the deployment progress
Watching events to view progress of deployment:
kubectl get -n <namespace> events -o wide --watch
Watching state of deployed Kubernetes objects, updated every 2 seconds:
watch kubectl get -n <namespace> all
Waiting until Pulsar Proxy is available:
kubectl wait --timeout=600s --for=condition=ready pod -n <namespace> -l component=proxy
Watching state with k9s (https://k9scli.io/topics/install/):
k9s -n <namespace>
Access the Pulsar cluster
The default values will create a `ClusterIP` service for the proxy, which you can use to interact with the cluster. To find the IP address of the proxy, use:
kubectl get service -n <k8s-namespace>
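Once you have the service address, a client inside the cluster can connect through the proxy. A minimal sketch, assuming the default ports (6650 for the Pulsar binary protocol, 8080 for HTTP) and an in-cluster service name of the form `<release-name>-proxy` (both are assumptions; check the `kubectl get service` output for your deployment), run from any pod that has the Pulsar CLI available:

```bash
# produce a test message through the proxy from inside the cluster
bin/pulsar-client \
  --url pulsar://<release-name>-proxy.<namespace>.svc.cluster.local:6650 \
  produce my-topic -m "hello pulsar"
```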
For more information, please follow our detailed quick start guide.
We provide a detailed guideline for you to customize the Helm Chart for a production-ready deployment.
You can also check out the example values files for different deployments.
In order to disable the victoria-metrics-k8s-stack, you can add the following to your `values.yaml`. Victoria Metrics components can also be disabled and enabled individually if you only need specific monitoring features.
```yaml
# disable VictoriaMetrics and related components
victoria-metrics-k8s-stack:
  enabled: false
  victoria-metrics-operator:
    enabled: false
  vmsingle:
    enabled: false
  vmagent:
    enabled: false
  kube-state-metrics:
    enabled: false
  prometheus-node-exporter:
    enabled: false
  grafana:
    enabled: false
```

Additionally, you'll need to set each component's `podMonitor` property to `false`.

```yaml
# disable pod monitors
autorecovery:
  podMonitor:
    enabled: false
bookkeeper:
  podMonitor:
    enabled: false
oxia:
  server:
    podMonitor:
      enabled: false
  coordinator:
    podMonitor:
      enabled: false
broker:
  podMonitor:
    enabled: false
proxy:
  podMonitor:
    enabled: false
zookeeper:
  podMonitor:
    enabled: false
```
An example of this is shown in examples/values-disable-monitoring.yaml.
The Pulsar Manager can be deployed alongside the Pulsar cluster instance. Depending on the given settings, it either uses an existing Secret within the given namespace or creates a new one with random passwords for both the UI and the internal database.
To port-forward the UI, use the following (this assumes you did not change the namespace):
kubectl port-forward $(kubectl get pods -l component=pulsar-manager -o jsonpath='{.items[0].metadata.name}') 9527:9527
Then open the browser at http://localhost:9527
The default user is `pulsar` and you can find out the password with this command:
kubectl get secret -l component=pulsar-manager -o=jsonpath="{.items[0].data.UI_PASSWORD}" | base64 --decode
The Apache Pulsar Helm Chart uses the `victoria-metrics-k8s-stack` Helm Chart to deploy Grafana.
There are several ways to configure Grafana dashboards. The default `values.yaml` comes with examples of Pulsar dashboards which get downloaded from the Apache-2.0 licensed lhotari/pulsar-grafana-dashboards OSS project by URL.

Dashboards can be configured in `values.yaml` or by adding `ConfigMap` items with the label `grafana_dashboard: "1"`. In `values.yaml`, it's possible to include dashboards by URL or by grafana.com dashboard id (`gnetId` and `revision`). Please see the Grafana Helm chart documentation for importing dashboards.
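For illustration, a minimal sketch of adding dashboards in `values.yaml`, one by URL and one by grafana.com id. The nesting under `victoria-metrics-k8s-stack.grafana`, the provider name `default`, and the dashboard entries are assumptions for this example, and a dashboard provider must already be configured (as the chart's default `values.yaml` does for its example dashboards); see the Grafana Helm chart documentation for the authoritative format:

```yaml
victoria-metrics-k8s-stack:
  grafana:
    dashboards:
      default:
        # hypothetical dashboard loaded from a URL
        my-pulsar-dashboard:
          url: https://example.com/dashboards/pulsar-overview.json
        # hypothetical dashboard pulled from grafana.com by id and revision
        my-gnet-dashboard:
          gnetId: 1860
          revision: 37
```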
You can connect to Grafana by forwarding port 3000:
kubectl port-forward $(kubectl get pods -l app.kubernetes.io/name=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000
Then open the browser at http://localhost:3000 . The default user is `admin`.
You can find out the password with this command:
kubectl get secret -l app.kubernetes.io/name=grafana -o=jsonpath="{.items[0].data.admin-password}" | base64 --decode
The apache/pulsar GitHub repo contains some Grafana dashboards here. Note: if you have third-party dashboards that you would like included in this list, please open a pull request.
Once your Pulsar Chart is installed, configuration changes and chart updates should be done using `helm upgrade`.
```bash
helm repo add apachepulsar https://pulsar.apache.org/charts
helm repo update
# If you are using the provided victoria-metrics-k8s-stack for monitoring, this installs or upgrades the required CRDs
./scripts/victoria-metrics-k8s-stack/upgrade_vm_operator_crds.sh
# get the existing values.yaml used for the most recent deployment
helm get values -n <namespace> <pulsar-release-name> > values.yaml
# upgrade the deployment
helm upgrade -n <namespace> -f values.yaml <pulsar-release-name> apachepulsar/pulsar
```
For more detailed information, see our Upgrading guide.
The TLS configuration for ZooKeeper has been changed to fix certificate and private key expiration issues. This change impacts configurations that have `tls.enabled` and `tls.zookeeper.enabled` set in `values.yaml`. The revised solution requires the `AdditionalCertificateOutputFormats=true` feature gate to be enabled in the cert-manager deployment when using cert-manager versions below 1.15.0. If you installed cert-manager using `./scripts/cert-manager/install-cert-manager.sh`, you can re-run the updated script to set the feature gate. The script currently installs or upgrades cert-manager LTS version 1.12.17, where the feature gate must be explicitly enabled.
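If you manage cert-manager yourself rather than using the script, a hedged sketch of enabling the feature gate via the cert-manager Helm chart is shown below; the value keys (`featureGates`, `webhook.featureGates`) are assumptions and should be verified against the values of your cert-manager chart version:

```bash
# enable the feature gate on both the controller and the webhook
# (value keys are assumptions; CRD installation is handled separately)
helm upgrade cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --reuse-values \
  --set featureGates="AdditionalCertificateOutputFormats=true" \
  --set webhook.featureGates="AdditionalCertificateOutputFormats=true"
```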
The default service type for the Pulsar proxy has changed from `LoadBalancer` to `ClusterIP`. Please check the section "External Access Recommendations" for guidance and also check the security advisory section. You will need to configure keys under `proxy.service` in your `values.yaml` to preserve existing functionality since the default has been changed.
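For example, a minimal sketch of restoring a `LoadBalancer` service for the proxy while keeping it internal and restricted; the AWS annotation and IP range are placeholders, so adapt them to your environment and verify the `proxy.service` keys against the chart's values.yaml:

```yaml
proxy:
  service:
    type: LoadBalancer
    annotations:
      # internal load balancer annotation for AWS; use your cloud provider's equivalent
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    loadBalancerSourceRanges:
      - 10.0.0.0/8  # restrict client access to a private network range
```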
The `kube-prometheus-stack` was replaced with `victoria-metrics-k8s-stack` in Pulsar Helm chart version 4.0.0. The trigger for the change was incompatibilities discovered in testing with the most recent `kube-prometheus-stack` and Prometheus 3.2.1, which failed to scrape Pulsar metrics in certain cases without providing proper error messages or debug information at debug-level logging.

Victoria Metrics is Apache-2.0 licensed OSS and a fast, efficient, fully compatible drop-in replacement for Prometheus.
Before upgrading to Pulsar Helm Chart version 4.0.0, it is recommended to disable kube-prometheus-stack in the original Helm chart version that is used:
```bash
# get the existing values.yaml used for the most recent deployment
helm get values -n <namespace> <pulsar-release-name> > values.yaml
# disable kube-prometheus-stack in the currently used version before upgrading to Pulsar Helm chart 4.0.0
helm upgrade -n <namespace> -f values.yaml --version <your-current-chart-version> --set kube-prometheus-stack.enabled=false <pulsar-release-name> apachepulsar/pulsar
```
After this, you can proceed with `helm upgrade`.
The 2.10.0+ Apache Pulsar Docker image is a non-root container by default. That complicates an upgrade to 2.10.0 because the existing files are owned by the root user but are not writable by the root group. In order to leverage this new security feature, the BookKeeper and ZooKeeper StatefulSet securityContexts are configurable in the `values.yaml`. They default to:
```yaml
securityContext:
  fsGroup: 0
  fsGroupChangePolicy: "OnRootMismatch"
```
This configuration is ideal for regular Kubernetes clusters where the UID is stable across restarts. If the process UID is subject to change (like it is in OpenShift), you'll need to set `fsGroupChangePolicy: "Always"`.
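A minimal sketch of such an override in `values.yaml`, assuming the securityContext is set per component under the `zookeeper` and `bookkeeper` keys (the exact placement is an assumption; check the chart's values.yaml):

```yaml
# for platforms such as OpenShift where the process UID can change across restarts
zookeeper:
  securityContext:
    fsGroup: 0
    fsGroupChangePolicy: "Always"
bookkeeper:
  securityContext:
    fsGroup: 0
    fsGroupChangePolicy: "Always"
```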
The official docker image assumes that it is run as a member of the root group.
If you upgrade to the latest version of the Helm chart before upgrading to Pulsar 2.10.0, then when you perform your first upgrade to version >= 2.10.0, you will need to set `fsGroupChangePolicy: "Always"` on the first upgrade and then set it back to `fsGroupChangePolicy: "OnRootMismatch"` on subsequent upgrades. This is because the volume's root directory won't mismatch permissions, but the RocksDB lock file will. If you have direct access to the persistent volumes, you can alternatively run `chmod -R g+w /pulsar/data` before upgrading.
Here is a sample error you can expect if the RocksDB lock file is not correctly owned by the root group:
```
2022-05-14T03:45:06,903+0000 ERROR org.apache.bookkeeper.server.Main - Failed to build bookie server
java.io.IOException: Error open RocksDB database
    at org.apache.bookkeeper.bookie.storage.ldb.KeyValueStorageRocksDB.<init>(KeyValueStorageRocksDB.java:199) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.bookie.storage.ldb.KeyValueStorageRocksDB.<init>(KeyValueStorageRocksDB.java:88) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.bookie.storage.ldb.KeyValueStorageRocksDB.lambda$static$0(KeyValueStorageRocksDB.java:62) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.bookie.storage.ldb.LedgerMetadataIndex.<init>(LedgerMetadataIndex.java:68) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.bookie.storage.ldb.SingleDirectoryDbLedgerStorage.<init>(SingleDirectoryDbLedgerStorage.java:169) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage.newSingleDirectoryDbLedgerStorage(DbLedgerStorage.java:150) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage.initialize(DbLedgerStorage.java:129) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.bookie.Bookie.<init>(Bookie.java:818) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.proto.BookieServer.newBookie(BookieServer.java:152) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.proto.BookieServer.<init>(BookieServer.java:120) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.server.service.BookieService.<init>(BookieService.java:52) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.server.Main.buildBookieServer(Main.java:304) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.server.Main.doMain(Main.java:226) [org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.server.Main.main(Main.java:208) [org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
Caused by: org.rocksdb.RocksDBException: while open a file for lock: /pulsar/data/bookkeeper/ledgers/current/ledgers/LOCK: Permission denied
    at org.rocksdb.RocksDB.open(Native Method) ~[org.rocksdb-rocksdbjni-6.10.2.jar:?]
    at org.rocksdb.RocksDB.open(RocksDB.java:239) ~[org.rocksdb-rocksdbjni-6.10.2.jar:?]
    at org.apache.bookkeeper.bookie.storage.ldb.KeyValueStorageRocksDB.<init>(KeyValueStorageRocksDB.java:196) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    ... 13 more
```
`helm upgrade` error "unable to build kubernetes objects from current release manifest"

Example of the error message:
```
Error: UPGRADE FAILED: unable to build kubernetes objects from current release manifest: [resource mapping not found for name: "pulsar-bookie" namespace: "pulsar" from "": no matches for kind "PodDisruptionBudget" in version "policy/v1beta1" ensure CRDs are installed first, resource mapping not found for name: "pulsar-broker" namespace: "pulsar" from "": no matches for kind "PodDisruptionBudget" in version "policy/v1beta1" ensure CRDs are installed first, resource mapping not found for name: "pulsar-zookeeper" namespace: "pulsar" from "": no matches for kind "PodDisruptionBudget" in version "policy/v1beta1" ensure CRDs are installed first]
```
Helm documentation explains issues with managing releases deployed using outdated APIs when the Kubernetes cluster has been upgraded to a version where these APIs are removed. This happens regardless of whether the chart in the upgrade includes supported API versions. In this case, you can use the following workaround:
Install the Helm mapkubeapis plugin:
helm plugin install https://github.com/helm/helm-mapkubeapis
Run the helm mapkubeapis
command with the appropriate namespace and release name. In this example, we use the namespace “pulsar” and release name “pulsar”:
helm mapkubeapis --namespace pulsar pulsar
This workaround addresses the issue by updating the Helm release metadata in place, replacing deprecated or removed Kubernetes APIs with supported ones, which should allow a successful Helm upgrade.
To uninstall the Pulsar Chart, run the following command:
helm uninstall <pulsar-release-name>
For the purposes of continuity, these charts have some Kubernetes objects that are not removed when performing `helm uninstall`. We require you to consciously remove these items, as they affect re-deployment should you choose to redeploy.
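For example, persistent volume claims created for the stateful components (and therefore the stored data) are typically left behind. A hedged sketch for reviewing and removing them follows; the `release` label selector is an assumption, and deleting PVCs destroys data, so double-check before running the delete:

```bash
# list persistent volume claims left behind in the release namespace
kubectl get pvc -n <namespace>
# delete them only if you are certain the data is no longer needed
kubectl delete pvc -n <namespace> -l release=<pulsar-release-name>
```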
We've done our best to make these charts as seamless as possible, but occasionally troubles do surface outside of our control. We've collected tips and tricks for troubleshooting common issues. Please examine these first before raising an issue, and feel free to add to them by raising a pull request!
The example commands use the Kubernetes namespace `pulsar`; replace it with your deployment's namespace.
Connecting to the `vmsingle` pod for the web UI:
kubectl port-forward -n pulsar $(kubectl get pods -n pulsar -l app.kubernetes.io/name=vmsingle -o jsonpath='{.items[0].metadata.name}') 8429:8429
Now you can access the UI at http://localhost:8429 and http://localhost:8429/vmui (for a UI similar to Prometheus).
Connecting to the `vmagent` pod for debugging scrape targets:
kubectl port-forward -n pulsar $(kubectl get pods -n pulsar -l app.kubernetes.io/name=vmagent -o jsonpath='{.items[0].metadata.name}') 8429:8429
Now you can access the UI at http://localhost:8429
- Active Targets UI
- Scraping Configuration
See RELEASE.md