infrastructure-provisioning/terraform/gcp/ssn-gke/main/modules/helm_charts/mongodb-chart/README.md

MongoDB

MongoDB is a cross-platform document-oriented database. Classified as a NoSQL database, MongoDB eschews the traditional table-based relational database structure in favor of JSON-like documents with dynamic schemas, making the integration of data in certain types of applications easier and faster.

This Helm chart is deprecated

Given the stable deprecation timeline, the Bitnami-maintained MongoDB Helm chart is now located at bitnami/charts.

The Bitnami repository is already included in the Hubs and we will continue providing the same cadence of updates, support, etc. that we have been providing here over the years. Installation instructions are very similar: just add the bitnami repo and use it during the installation (bitnami/<chart> instead of stable/<chart>).

$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install my-release bitnami/<chart>           # Helm 3
$ helm install --name my-release bitnami/<chart>    # Helm 2

To update an existing stable deployment with a chart hosted in the bitnami repository, you can execute:

$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm upgrade my-release bitnami/<chart>

Issues and PRs related to the chart itself will be redirected to the bitnami/charts GitHub repository. In the same way, we'll be happy to answer questions related to this migration process in this issue, which was created as a common place for discussion.

TL;DR

$ helm install my-release stable/mongodb

Introduction

This chart bootstraps a MongoDB deployment on a Kubernetes cluster using the Helm package manager.

Bitnami charts can be used with Kubeapps for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the BKPR.

Prerequisites

  • Kubernetes 1.12+
  • Helm 2.11+ or Helm 3.0-beta3+
  • PV provisioner support in the underlying infrastructure
  • ReadWriteMany volumes for deployment scaling

Installing the Chart

To install the chart with the release name my-release:

$ helm install my-release stable/mongodb

The command deploys MongoDB on the Kubernetes cluster in the default configuration. The Parameters section lists the parameters that can be configured during installation.

Tip: List all releases using helm list

Uninstalling the Chart

To uninstall/delete the my-release deployment:

$ helm delete my-release

The command removes all the Kubernetes components associated with the chart and deletes the release.

Parameters

The following table lists the configurable parameters of the MongoDB chart and their default values.

| Parameter | Description | Default |
|-----------|-------------|---------|
| global.imageRegistry | Global Docker image registry | nil |
| global.imagePullSecrets | Global Docker registry secret names as an array | [] (does not add image pull secrets to deployed pods) |
| global.storageClass | Global storage class for dynamic provisioning | nil |
| image.registry | MongoDB image registry | docker.io |
| image.repository | MongoDB image name | bitnami/mongodb |
| image.tag | MongoDB image tag | {TAG_NAME} |
| image.pullPolicy | Image pull policy | IfNotPresent |
| image.pullSecrets | Specify docker-registry secret names as an array | [] (does not add image pull secrets to deployed pods) |
| image.debug | Specify if debug logs should be enabled | false |
| nameOverride | String to partially override mongodb.fullname template with a string (will prepend the release name) | nil |
| fullnameOverride | String to fully override mongodb.fullname template with a string | nil |
| volumePermissions.enabled | Enable init container that changes volume permissions in the data directory (for cases where the default k8s runAsUser and fsUser values do not work) | false |
| volumePermissions.image.registry | Init container volume-permissions image registry | docker.io |
| volumePermissions.image.repository | Init container volume-permissions image name | bitnami/minideb |
| volumePermissions.image.tag | Init container volume-permissions image tag | buster |
| volumePermissions.image.pullPolicy | Init container volume-permissions image pull policy | Always |
| volumePermissions.resources | Init container resource requests/limit | nil |
| clusterDomain | Default Kubernetes cluster domain | cluster.local |
| usePassword | Enable password authentication | true |
| existingSecret | Existing secret with MongoDB credentials | nil |
| mongodbRootPassword | MongoDB admin password | random alphanumeric string (10) |
| mongodbUsername | MongoDB custom user (mandatory if mongodbDatabase is set) | nil |
| mongodbPassword | MongoDB custom user password | random alphanumeric string (10) |
| mongodbDatabase | Database to create | nil |
| mongodbEnableIPv6 | Switch to enable/disable IPv6 on MongoDB | false |
| mongodbDirectoryPerDB | Switch to enable/disable DirectoryPerDB on MongoDB | false |
| mongodbSystemLogVerbosity | MongoDB system log verbosity level | 0 |
| mongodbDisableSystemLog | Whether to disable MongoDB system log or not | false |
| mongodbExtraFlags | MongoDB additional command line flags | [] |
| service.name | Kubernetes service name | nil |
| service.annotations | Kubernetes service annotations, evaluated as a template | {} |
| service.type | Kubernetes Service type | ClusterIP |
| service.clusterIP | Static clusterIP or None for headless services | nil |
| service.port | MongoDB service port | 27017 |
| service.nodePort | Port to bind to for NodePort service type | nil |
| service.loadBalancerIP | Static IP address to use for LoadBalancer service type | nil |
| service.externalIPs | External IP list to use with ClusterIP service type | [] |
| service.loadBalancerSourceRanges | List of IP ranges allowed access to load balancer (if supported) | [] (does not add IP range restrictions to the service) |
| replicaSet.enabled | Switch to enable/disable replica set configuration | false |
| replicaSet.name | Name of the replica set | rs0 |
| replicaSet.useHostnames | Enable DNS hostnames in the replica set config | true |
| replicaSet.key | Key used for authentication in the replica set | random alphanumeric string (10) |
| replicaSet.replicas.secondary | Number of secondary nodes in the replica set | 1 |
| replicaSet.replicas.arbiter | Number of arbiter nodes in the replica set | 1 |
| replicaSet.pdb.enabled | Switch to enable/disable Pod Disruption Budget | true |
| replicaSet.pdb.minAvailable.secondary | PDB (min available) for the MongoDB Secondary nodes | 1 |
| replicaSet.pdb.minAvailable.arbiter | PDB (min available) for the MongoDB Arbiter nodes | 1 |
| replicaSet.pdb.maxUnavailable.secondary | PDB (max unavailable) for the MongoDB Secondary nodes | nil |
| replicaSet.pdb.maxUnavailable.arbiter | PDB (max unavailable) for the MongoDB Arbiter nodes | nil |
| annotations | Annotations to be added to the deployment or statefulsets | {} |
| labels | Additional labels for the deployment or statefulsets | {} |
| podAnnotations | Annotations to be added to pods | {} |
| podLabels | Additional labels for the pod(s) | {} |
| resources | Pod resources | {} |
| resourcesArbiter | Pod resources for arbiter when replica set is enabled | {} |
| priorityClassName | Pod priority class name | `` |
| extraEnvVars | Array containing extra env vars to be added to all pods in the cluster (evaluated as a template) | nil |
| nodeSelector | Node labels for pod assignment | {} |
| affinity | Affinity for pod assignment | {} |
| affinityArbiter | Affinity for arbiter pod assignment | {} |
| tolerations | Toleration labels for pod assignment | {} |
| updateStrategy | Statefulsets update strategy policy | RollingUpdate |
| securityContext.enabled | Enable security context | true |
| securityContext.fsGroup | Group ID for the container | 1001 |
| securityContext.runAsUser | User ID for the container | 1001 |
| schedulerName | Name of the k8s scheduler (other than default) | nil |
| sidecars | Add additional containers to pod | [] |
| extraVolumes | Add additional volumes to deployment | [] |
| extraVolumeMounts | Add additional volume mounts to pod | [] |
| sidecarsArbiter | Add additional containers to arbiter pod | [] |
| extraVolumesArbiter | Add additional volumes to arbiter deployment | [] |
| extraVolumeMountsArbiter | Add additional volume mounts to arbiter pod | [] |
| persistence.enabled | Use a PVC to persist data | true |
| persistence.mountPath | Path to mount the volume at | /bitnami/mongodb |
| persistence.subPath | Subdirectory of the volume to mount at | "" |
| persistence.storageClass | Storage class of backing PVC | nil (uses alpha storage class annotation) |
| persistence.accessModes | Use volume as ReadOnly or ReadWrite | [ReadWriteOnce] |
| persistence.size | Size of data volume | 8Gi |
| persistence.annotations | Persistent Volume annotations | {} |
| persistence.existingClaim | Name of an existing PVC to use (avoids creating one if this is given) | nil |
| useStatefulSet | Set to true to use StatefulSet instead of Deployment even when replicaSet.enabled=false | nil |
| extraInitContainers | Additional init containers as a string to be passed to the tpl function | {} |
| livenessProbe.enabled | Enable/disable the Liveness probe | true |
| livenessProbe.initialDelaySeconds | Delay before liveness probe is initiated | 30 |
| livenessProbe.periodSeconds | How often to perform the probe | 10 |
| livenessProbe.timeoutSeconds | When the probe times out | 5 |
| livenessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed | 1 |
| livenessProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 6 |
| readinessProbe.enabled | Enable/disable the Readiness probe | true |
| readinessProbe.initialDelaySeconds | Delay before readiness probe is initiated | 5 |
| readinessProbe.periodSeconds | How often to perform the probe | 10 |
| readinessProbe.timeoutSeconds | When the probe times out | 5 |
| readinessProbe.failureThreshold | Minimum consecutive failures for the probe to be considered failed after having succeeded | 6 |
| readinessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed | 1 |
| initConfigMap.name | Custom config map with init scripts | nil |
| configmap | MongoDB configuration file to be used | nil |
| ingress.enabled | Enable ingress controller resource | false |
| ingress.certManager | Add annotations for cert-manager | false |
| ingress.annotations | Ingress annotations | [] |
| ingress.hosts[0].name | Hostname to your MongoDB installation | mongodb.local |
| ingress.hosts[0].path | Path within the url structure | / |
| ingress.tls[0].hosts[0] | TLS hosts | mongodb.local |
| ingress.tls[0].secretName | TLS Secret (certificates) | mongodb.local-tls |
| ingress.secrets[0].name | TLS Secret Name | nil |
| ingress.secrets[0].certificate | TLS Secret Certificate | nil |
| ingress.secrets[0].key | TLS Secret Key | nil |
| metrics.enabled | Start a side-car prometheus exporter | false |
| metrics.image.registry | MongoDB exporter image registry | docker.io |
| metrics.image.repository | MongoDB exporter image name | bitnami/mongodb-exporter |
| metrics.image.tag | MongoDB exporter image tag | {TAG_NAME} |
| metrics.image.pullPolicy | Image pull policy | Always |
| metrics.image.pullSecrets | Specify docker-registry secret names as an array | [] (does not add image pull secrets to deployed pods) |
| metrics.podAnnotations.prometheus.io/scrape | Additional annotations for Metrics exporter pod | true |
| metrics.podAnnotations.prometheus.io/port | Additional annotations for Metrics exporter pod | "9216" |
| metrics.extraArgs | String with extra arguments for the MongoDB Exporter | `` |
| metrics.resources | Exporter resource requests/limit | {} |
| metrics.serviceMonitor.enabled | Create ServiceMonitor Resource for scraping metrics using PrometheusOperator | false |
| metrics.serviceMonitor.namespace | Optional namespace which Prometheus is running in | nil |
| metrics.serviceMonitor.additionalLabels | Used to pass Labels that are required by the installed Prometheus Operator | {} |
| metrics.serviceMonitor.relabellings | Specify Metric Relabellings to add to the scrape endpoint | nil |
| metrics.serviceMonitor.alerting.rules | Define individual alerting rules as required | {} |
| metrics.serviceMonitor.alerting.additionalLabels | Used to pass Labels that are required by the installed Prometheus Operator | {} |
| metrics.livenessProbe.enabled | Enable/disable the Liveness Check of Prometheus metrics exporter | false |
| metrics.livenessProbe.initialDelaySeconds | Initial Delay for Liveness Check of Prometheus metrics exporter | 15 |
| metrics.livenessProbe.periodSeconds | How often to perform Liveness Check of Prometheus metrics exporter | 5 |
| metrics.livenessProbe.timeoutSeconds | Timeout for Liveness Check of Prometheus metrics exporter | 5 |
| metrics.livenessProbe.failureThreshold | Failure Threshold for Liveness Check of Prometheus metrics exporter | 3 |
| metrics.livenessProbe.successThreshold | Success Threshold for Liveness Check of Prometheus metrics exporter | 1 |
| metrics.readinessProbe.enabled | Enable/disable the Readiness Check of Prometheus metrics exporter | false |
| metrics.readinessProbe.initialDelaySeconds | Initial Delay for Readiness Check of Prometheus metrics exporter | 5 |
| metrics.readinessProbe.periodSeconds | How often to perform Readiness Check of Prometheus metrics exporter | 5 |
| metrics.readinessProbe.timeoutSeconds | Timeout for Readiness Check of Prometheus metrics exporter | 1 |
| metrics.readinessProbe.failureThreshold | Failure Threshold for Readiness Check of Prometheus metrics exporter | 3 |
| metrics.readinessProbe.successThreshold | Success Threshold for Readiness Check of Prometheus metrics exporter | 1 |

Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,

$ helm install my-release \
  --set mongodbRootPassword=secretpassword,mongodbUsername=my-user,mongodbPassword=my-password,mongodbDatabase=my-database \
    stable/mongodb

The above command sets the MongoDB root account password to secretpassword. Additionally, it creates a standard database user named my-user, with the password my-password, who has access to a database named my-database.

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

$ helm install my-release -f values.yaml stable/mongodb

Tip: You can use the default values.yaml

Configuration and installation details

Rolling VS Immutable tags

It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.
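As an illustration, an immutable tag can be pinned at install time. The tag below is only an example, not a real recommendation; use a tag that actually exists for the bitnami/mongodb image:

```shell
# Pin a specific, immutable image tag instead of a rolling tag such as "latest"
# (the exact tag shown here is illustrative)
$ helm install my-release stable/mongodb \
    --set image.tag=4.2.3-debian-10-r0
```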

Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist.

Production configuration and horizontal scaling

This chart includes a values-production.yaml file where you can find some parameters oriented to production configuration in comparison to the regular values.yaml. You can use this file instead of the default one.
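For example, assuming the chart has been fetched locally so that values-production.yaml is available on disk, it can be passed with the -f flag:

```shell
# Fetch and unpack the chart, then install with the production values file
$ helm fetch --untar stable/mongodb
$ helm install my-release -f mongodb/values-production.yaml stable/mongodb
```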

  • Switch to enable/disable replica set configuration:
- replicaSet.enabled: false
+ replicaSet.enabled: true
  • Start a side-car prometheus exporter:
- metrics.enabled: false
+ metrics.enabled: true
  • Enable/disable the Liveness Check of Prometheus metrics exporter:
- metrics.livenessProbe.enabled: false
+ metrics.livenessProbe.enabled: true
  • Enable/disable the Readiness Check of Prometheus metrics exporter:
- metrics.readinessProbe.enabled: false
+ metrics.readinessProbe.enabled: true

To horizontally scale this chart, you can modify the replicaSet.replicas.secondary parameter (or use kubectl scale with the --replicas flag on the secondary StatefulSet) to change the number of secondary nodes in your MongoDB replica set.
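As a sketch, assuming replica set mode is already enabled, the number of secondary nodes could be increased with a helm upgrade:

```shell
# Scale the replica set to three secondary nodes (values are illustrative)
$ helm upgrade my-release stable/mongodb \
    --set replicaSet.enabled=true \
    --set replicaSet.replicas.secondary=3
```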

Replication

You can start the MongoDB chart in replica set mode with the following parameter: replicaSet.enabled=true

Some characteristics of this chart are:

  • Each of the participants in the replication has a fixed stateful set so you always know where to find the primary, secondary or arbiter nodes.
  • The number of secondary and arbiter nodes can be scaled out independently.
  • Easy to move an application from using a standalone MongoDB server to use a replica set.
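For example, replica set mode can be enabled at installation time:

```shell
# Deploy MongoDB as a replica set (primary + secondary + arbiter)
$ helm install my-release --set replicaSet.enabled=true stable/mongodb
```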

Initialize a fresh instance

The Bitnami MongoDB image allows you to use your custom scripts to initialize a fresh instance. In order to execute the scripts, they must be located inside the chart folder files/docker-entrypoint-initdb.d so they can be consumed as a ConfigMap. Alternatively, you can create a custom ConfigMap and provide it via initConfigMap.name (check the Parameters section for more details).

The allowed extensions are .sh and .js.
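As a sketch of the custom-ConfigMap route, a ConfigMap holding an init script could look like the following (the ConfigMap name, script name, and database/collection names are all illustrative):

```yaml
# Hypothetical ConfigMap with a MongoDB init script;
# keys ending in .sh or .js are executed on first start.
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-init-scripts
data:
  create-index.js: |
    db = db.getSiblingDB('my-database');
    db.mycollection.createIndex({ createdAt: 1 });
```

It would then be referenced at install time with --set initConfigMap.name=mongodb-init-scripts.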

Persistence

The Bitnami MongoDB image stores the MongoDB data and configurations at the /bitnami/mongodb path of the container.

The chart mounts a Persistent Volume at this location. The volume is created using dynamic volume provisioning.
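If dynamic provisioning is not available, an existing PVC can be reused instead of having the chart create one (the claim name below is illustrative):

```shell
# Reuse a pre-created PersistentVolumeClaim for the MongoDB data
$ helm install my-release --set persistence.existingClaim=my-mongodb-pvc stable/mongodb
```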

Adjust permissions of persistent volume mountpoint

As the image runs as non-root by default, it is necessary to adjust the ownership of the persistent volume so that the container can write data to it.

By default, the chart is configured to use Kubernetes Security Context to automatically change the ownership of the volume. However, this feature does not work in all Kubernetes distributions. As an alternative, this chart supports using an initContainer to change the ownership of the volume before mounting it in the final destination.

You can enable this initContainer by setting volumePermissions.enabled to true.
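For example, to enable the ownership-fixing init container at install time:

```shell
# Run an init container that chowns the data volume before MongoDB starts
$ helm install my-release --set volumePermissions.enabled=true stable/mongodb
```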

Upgrading

To 7.0.0

From this version, the way of setting the ingress rules has changed. Instead of using ingress.paths and ingress.hosts as separate objects, you should now define the rules as objects inside the ingress.hosts value, for example:

ingress:
  hosts:
  - name: mongodb.local
    path: /

To 6.0.0

From this version, mongodbEnableIPv6 is set to false by default in order to work properly in most k8s clusters. If you want to use IPv6 support, you need to set this variable to true by adding --set mongodbEnableIPv6=true to your helm command. You can find more information in the bitnami/mongodb image README.
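For example:

```shell
# Re-enable IPv6 support when upgrading to 6.0.0 or later
$ helm upgrade my-release --set mongodbEnableIPv6=true stable/mongodb
```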

To 5.0.0

When enabling replicaset configuration, backwards compatibility is not guaranteed unless you modify the labels used on the chart's statefulsets. Use the workaround below to upgrade from versions previous to 5.0.0. The following example assumes that the release name is my-release:

$ kubectl delete statefulset my-release-mongodb-arbiter my-release-mongodb-primary my-release-mongodb-secondary --cascade=false

Configure Ingress

MongoDB can be exposed externally using an Ingress controller. To do so, it's necessary to configure the Ingress controller to proxy MongoDB's TCP port.

For instance, if you installed the MongoDB chart in the default namespace, you can install the stable/nginx-ingress chart setting the “tcp” parameter in the values.yaml used to install the chart as shown below:

...

tcp:
  27017: "default/mongodb:27017"
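The equivalent setting can also be passed on the command line when installing the stable/nginx-ingress chart (the release name is illustrative):

```shell
# Forward TCP port 27017 on the ingress controller to the MongoDB service
$ helm install nginx-ingress stable/nginx-ingress \
    --set tcp.27017="default/mongodb:27017"
```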