
CouchDB

Version: 4.5.0 AppVersion: 3.3.2

Apache CouchDB is a database featuring seamless multi-master sync that scales from big data to mobile, offers an intuitive HTTP/JSON API, and is designed for reliability.

This chart deploys a CouchDB cluster as a StatefulSet. By default it creates a ClusterIP Service in front of the StatefulSet for load balancing, but it can also be configured to deploy other Service types or an Ingress. The default persistence mechanism is simply the ephemeral local filesystem; production deployments should set persistentVolume.enabled to true to attach a storage volume to each Pod in the StatefulSet.

TL;DR

$ helm repo add couchdb https://apache.github.io/couchdb-helm
$ helm install couchdb/couchdb \
  --generate-name \
  --version=4.5.0 \
  --set allowAdminParty=true \
  --set couchdbConfig.couchdb.uuid=$(curl https://www.uuidgenerator.net/api/version4 2>/dev/null | tr -d -)
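
The commands above fetch a UUID from an external web service; a locally generated UUID works just as well. A minimal sketch, assuming uuidgen is available (on Linux, reading /proc/sys/kernel/random/uuid is an alternative):

$ UUID=$(uuidgen | tr -d '-' | tr '[:upper:]' '[:lower:]')
$ helm install couchdb/couchdb \
  --generate-name \
  --version=4.5.0 \
  --set allowAdminParty=true \
  --set couchdbConfig.couchdb.uuid=$UUID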

Prerequisites

  • Kubernetes 1.9+ with Beta APIs enabled
  • Ingress requires Kubernetes 1.19+
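
To confirm what the target cluster is running before installing, you can query the API server version (the exact output format varies between kubectl releases):

$ kubectl version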

Installing the Chart

To install the chart with the release name my-release:

Add the CouchDB Helm repository:

$ helm repo add couchdb https://apache.github.io/couchdb-helm

Afterwards, install the chart, replacing the UUID decafbaddecafbaddecafbaddecafbad with a custom one:

$ helm install my-release couchdb/couchdb \
  --version=4.5.0 \
  --set couchdbConfig.couchdb.uuid=decafbaddecafbaddecafbaddecafbad

This will create a Secret containing the admin credentials for the cluster. Those credentials can be retrieved as follows:

$ kubectl get secret my-release-couchdb -o go-template='{{ .data.adminPassword }}' | base64 --decode
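
The auto-generated admin username can be read back from the same Secret in the same way:

$ kubectl get secret my-release-couchdb -o go-template='{{ .data.adminUsername }}' | base64 --decode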

If you prefer to configure the admin credentials directly, you can create a Secret containing the adminUsername, adminPassword and cookieAuthSecret keys:

$ kubectl create secret generic my-release-couchdb --from-literal=adminUsername=foo --from-literal=adminPassword=bar --from-literal=cookieAuthSecret=baz

If you want to set the adminHash directly to achieve consistent salts between different nodes, you need to add it to the Secret:

$ kubectl create secret generic my-release-couchdb \
   --from-literal=adminUsername=foo \
   --from-literal=cookieAuthSecret=baz \
   --from-literal=adminHash=-pbkdf2-d4b887da....

and then install the chart while overriding the createAdminSecret setting:

$ helm install my-release couchdb/couchdb \
  --version=4.5.0 \
  --set createAdminSecret=false \
  --set couchdbConfig.couchdb.uuid=decafbaddecafbaddecafbaddecafbad

This Helm chart deploys CouchDB on the Kubernetes cluster in a default configuration. The configuration section lists the parameters that can be configured during installation.

Tip: List all releases using helm list
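
Once the Pods are ready, a quick way to sanity-check the cluster is a port-forward plus a couple of HTTP requests. The sketch below assumes the Service keeps the default <release name>-couchdb name and port 5984; replace <password> with the admin password retrieved above:

$ kubectl port-forward svc/my-release-couchdb 5984:5984 &
$ curl -s http://localhost:5984/_up
$ curl -s 'http://admin:<password>@localhost:5984/_membership'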

Uninstalling the Chart

To uninstall/delete the my-release Deployment:

$ helm delete my-release

The command removes all the Kubernetes components associated with the chart and deletes the release.
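
Note that if persistence was enabled, the PersistentVolumeClaims created from the StatefulSet's volume claim template are normally left in place by helm delete and have to be removed separately. A sketch, assuming the claims carry the chart's app=couchdb and release=my-release labels:

$ kubectl get pvc -l app=couchdb,release=my-release
$ kubectl delete pvc -l app=couchdb,release=my-release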

Upgrading an existing Release to a new major version

A major chart version change (like v0.2.3 -> v1.0.0) indicates that there is an incompatible breaking change needing manual actions.

Upgrade to 3.0.0

Since version 3.0.0 setting the CouchDB server instance UUID is mandatory. Therefore, you need to generate a UUID and supply it as a value during the upgrade as follows:

$ helm upgrade <release-name> \
  --version=3.6.4 \
  --reuse-values \
  --set couchdbConfig.couchdb.uuid=<UUID> \
  couchdb/couchdb
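
When upgrading a cluster that already holds data, it is usually best to keep the UUID the server is already using rather than minting a new one. A sketch that reads it back through the configuration API, assuming you can reach a node on port 5984 (e.g. via kubectl port-forward) and have the admin credentials:

$ curl -s 'http://admin:<password>@localhost:5984/_node/_local/_config/couchdb/uuid'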

Upgrade to 4.0.0

The breaking change between v3 and v4 is that the Secret no longer stores a password.ini; it now contains only the adminHash. Make sure to update your Secret accordingly if you provide your own.
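
If you manage the Secret yourself, one way to apply that change is to regenerate it with the adminHash (plus the other keys you still need) and apply it over the existing object. A sketch reusing the placeholder hash from above:

$ kubectl create secret generic my-release-couchdb \
   --from-literal=adminUsername=foo \
   --from-literal=cookieAuthSecret=baz \
   --from-literal=adminHash=-pbkdf2-d4b887da.... \
   --dry-run=client -o yaml | kubectl apply -f -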

Migrating from stable/couchdb

This chart replaces the stable/couchdb chart previously hosted by Helm and continues the version semantics. You can upgrade directly from stable/couchdb to this chart using:

$ helm repo add couchdb https://apache.github.io/couchdb-helm
$ helm upgrade my-release --version=4.5.0 couchdb/couchdb
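
Before upgrading, it can help to capture the values the existing release was installed with, so they can be reviewed and passed along explicitly (helm upgrade --reuse-values is the alternative):

$ helm get values my-release -o yaml > my-release-values.yaml
$ helm upgrade my-release couchdb/couchdb --version=4.5.0 -f my-release-values.yaml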

Configuration

The following table lists the most commonly configured parameters of the CouchDB chart and their default values:

Key | Type | Default | Description
allowAdminParty | bool | false | If allowAdminParty is enabled the cluster will start up without any database administrator account; i.e., all users will be granted administrative access. Otherwise, the system will look for a Secret called <release name>-couchdb containing adminUsername, adminPassword and cookieAuthSecret keys. See the createAdminSecret flag. ref: https://kubernetes.io/docs/concepts/configuration/secret/
clusterSize | int | 3 | The initial number of nodes in the CouchDB cluster.
couchdbConfig | object | {"chttpd":{"bind_address":"any","require_valid_user":false}} | couchdbConfig will override default CouchDB configuration settings. The contents of this map are reformatted into a .ini file laid down by a ConfigMap object. ref: http://docs.couchdb.org/en/latest/config/index.html
createAdminSecret | bool | true | If createAdminSecret is enabled a Secret called <release name>-couchdb will be created containing auto-generated credentials. Users who prefer to set these values themselves have a couple of options: 1) adminUsername, adminPassword, adminHash, and cookieAuthSecret can be defined directly in the chart's values. Note that all of a chart's values are currently stored in plaintext in a ConfigMap in the Tiller namespace. 2) This flag can be disabled and a Secret with the required keys can be created ahead of time.
enableSearch | bool | false | Flip this flag to include the Search container in each Pod.
erlangFlags | object | {"name":"couchdb"} | erlangFlags is a map that is passed to the Erlang VM as flags using the ERL_FLAGS env. The name flag is required to establish connectivity between cluster nodes. ref: http://erlang.org/doc/man/erl.html#init_flags
persistentVolume | object | {"accessModes":["ReadWriteOnce"],"enabled":false,"size":"10Gi"} | The storage volume used by each Pod in the StatefulSet. If a persistentVolume is not enabled, the Pods will use emptyDir ephemeral local storage. Setting the storageClass attribute to "-" disables dynamic provisioning of Persistent Volumes; leaving it unset will invoke the default provisioner.
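
For example, persistence can be switched on at install time with two of the keys above. The sketch reuses the placeholder UUID from earlier, makes the 10Gi default size explicit, and leaves storageClass unset so the cluster's default provisioner is used:

$ helm install my-release couchdb/couchdb \
  --version=4.5.0 \
  --set couchdbConfig.couchdb.uuid=decafbaddecafbaddecafbaddecafbad \
  --set persistentVolume.enabled=true \
  --set persistentVolume.size=10Gi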

You can set the values of the couchdbConfig map according to the official configuration documentation. The following table shows the map's default values and the required options:

Parameter | Description | Default
couchdb.uuid | UUID for this CouchDB server instance (required in a cluster) |
chttpd.bind_address | Listens on all interfaces when set to any | any
chttpd.require_valid_user | Disables all anonymous requests to port 5984 when true | false
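
As an illustration of how the map translates into ini settings, a sketch that overrides a couple of keys at install time (max_document_size is only an example of an additional upstream setting, not something this chart requires):

$ helm install my-release couchdb/couchdb \
  --version=4.5.0 \
  --set couchdbConfig.couchdb.uuid=decafbaddecafbaddecafbaddecafbad \
  --set couchdbConfig.chttpd.require_valid_user=true \
  --set couchdbConfig.couchdb.max_document_size=8000000

Each top-level key of couchdbConfig becomes an ini section ([couchdb], [chttpd], ...) in the generated ConfigMap, and each nested key becomes a setting within that section.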

A variety of other parameters are also configurable. See the comments in the values.yaml file for further details:

Parameter | Default
adminUsername | admin
adminPassword | auto-generated
adminHash |
cookieAuthSecret | auto-generated
image.repository | couchdb
image.tag | 3.3.2
image.pullPolicy | IfNotPresent
searchImage.repository | kocolosk/couchdb-search
searchImage.tag | 0.1.0
searchImage.pullPolicy | IfNotPresent
initImage.repository | busybox
initImage.tag | latest
initImage.pullPolicy | Always
ingress.enabled | false
ingress.className |
ingress.hosts | chart-example.local
ingress.annotations |
ingress.path | /
ingress.tls |
persistentVolume.accessModes | ReadWriteOnce
persistentVolume.storageClass | Default for the Kube cluster
persistentVolume.annotations | {}
persistentVolume.existingClaims | [] (a list of existing PV/PVC volume value objects with volumeName, claimName, persistentVolumeName and volumeSource defined)
persistentVolume.volumeName |
persistentVolume.claimName |
persistentVolume.volumeSource |
podDisruptionBudget.enabled | false
podDisruptionBudget.minAvailable | nil
podDisruptionBudget.maxUnavailable | 1
podManagementPolicy | Parallel
affinity |
topologySpreadConstraints |
labels |
annotations |
tolerations |
resources |
initResources |
autoSetup.enabled | false (if set to true, service.enabled must also be true and adminPassword must be correct; deploy with the --wait flag to avoid the first jobs failing)
autoSetup.image.repository | curlimages/curl
autoSetup.image.tag | latest
autoSetup.image.pullPolicy | Always
autoSetup.defaultDatabases | [_global_changes]
service.annotations |
service.enabled | true
service.type | ClusterIP
service.externalPort | 5984
service.targetPort | 5984
dns.clusterDomainSuffix | cluster.local
networkPolicy.enabled | true
serviceAccount.enabled | true
serviceAccount.create | true
serviceAccount.imagePullSecrets |
sidecars | {}
livenessProbe.enabled | true
livenessProbe.failureThreshold | 3
livenessProbe.initialDelaySeconds | 0
livenessProbe.periodSeconds | 10
livenessProbe.successThreshold | 1
livenessProbe.timeoutSeconds | 1
readinessProbe.enabled | true
readinessProbe.failureThreshold | 3
readinessProbe.initialDelaySeconds | 0
readinessProbe.periodSeconds | 10
readinessProbe.successThreshold | 1
readinessProbe.timeoutSeconds | 1
prometheusPort.enabled | false
prometheusPort.port | 17896
prometheusPort.bind_address | 0.0.0.0
placementConfig.enabled | false
placementConfig.image.repository | caligrafix/couchdb-autoscaler-placement-manager
placementConfig.image.tag | 0.1.0
podSecurityContext |
containerSecurityContext |
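
Tying several of these together, a sketch of a small custom values file installed with -f; the host name, resource figures, and storage size are placeholders rather than recommendations, and the layout of each block follows the defaults listed above:

$ cat > couchdb-values.yaml <<'EOF'
clusterSize: 3
persistentVolume:
  enabled: true
  size: 20Gi
ingress:
  enabled: true
  hosts:
    - couchdb.example.com
resources:
  requests:
    cpu: 500m
    memory: 1Gi
  limits:
    cpu: "2"
    memory: 4Gi
couchdbConfig:
  couchdb:
    uuid: decafbaddecafbaddecafbaddecafbad
EOF
$ helm install my-release couchdb/couchdb --version=4.5.0 -f couchdb-values.yaml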

Feedback, Issues, Contributing

General feedback is welcome at our user or developer mailing lists.

Apache CouchDB has a CONTRIBUTING file with details on how to get started with issue reporting or contributing to the upkeep of this project. In short, use GitHub Issues; do not report anything on Docker's website.

Non-Apache CouchDB Development Team Contributors