charts/immich/charts/redis/README.md

Bitnami package for Redis(R)

Redis(R) is an open source, advanced key-value store. It is often referred to as a data structure server since keys can contain strings, hashes, lists, sets and sorted sets.

Overview of Redis®

Disclaimer: Redis is a registered trademark of Redis Ltd. Any rights therein are reserved to Redis Ltd. Any use by Bitnami is for referential purposes only and does not indicate any sponsorship, endorsement, or affiliation between Redis Ltd. and Bitnami.

TL;DR

helm install my-release oci://registry-1.docker.io/bitnamicharts/redis

Looking to use Redis® in production? Try VMware Tanzu Application Catalog, the commercial edition of the Bitnami catalog.

Introduction

This chart bootstraps a Redis® deployment on a Kubernetes cluster using the Helm package manager.

Bitnami charts can be used with Kubeapps for deployment and management of Helm Charts in clusters.

Choose between Redis® Helm Chart and Redis® Cluster Helm Chart

You can choose either of the two Redis® Helm charts for deploying a Redis® cluster.

  1. Redis® Helm Chart will deploy a master-replica cluster, with the option of enabling Redis® Sentinel.
  2. Redis® Cluster Helm Chart will deploy a Redis® Cluster topology with sharding.

The main features of each chart are the following:

Redis® | Redis® Cluster
Supports multiple databases | Supports only one database. Better if you have a big dataset
Single write point (single master) | Multiple write points (multiple masters)
Redis® Topology | Redis® Cluster Topology

Prerequisites

  • Kubernetes 1.23+
  • Helm 3.8.0+
  • PV provisioner support in the underlying infrastructure

Installing the Chart

To install the chart with the release name my-release:

helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/redis

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

The command deploys Redis® on the Kubernetes cluster in the default configuration. The Parameters section lists the parameters that can be configured during installation.

Tip: List all releases using helm list

Configuration and installation details

Resource requests and limits

Bitnami charts allow setting resource requests and limits for all containers inside the chart deployment. These are inside the resources value (check parameter table). Setting requests is essential for production workloads and these should be adapted to your specific use case.

To make this process easier, the chart contains the resourcesPreset value, which automatically sets the resources section according to different presets. Check these presets in the bitnami/common chart. However, using resourcesPreset is discouraged for production workloads, as it may not fully adapt to your specific needs. Find more information on container resource management in the official Kubernetes documentation.
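As an illustrative sketch, explicit resources for the master pods can be set in values.yaml as follows. The sizes below are placeholders, not recommendations, and must be adapted to your workload:

```yaml
master:
  resourcesPreset: none   # disable the preset so the explicit values below apply
  resources:
    requests:
      cpu: 250m           # placeholder values: adapt to your use case
      memory: 256Mi
    limits:
      cpu: "1"
      memory: 1Gi
```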

Prometheus metrics

This chart can be integrated with Prometheus by setting metrics.enabled to true. This will deploy a sidecar container with redis_exporter in all pods and a metrics service, which can be configured under the metrics.service section. This metrics service will have the necessary annotations to be automatically scraped by Prometheus.

Prometheus requirements

It is necessary to have a working installation of Prometheus or Prometheus Operator for the integration to work. Install the Bitnami Prometheus helm chart or the Bitnami Kube Prometheus helm chart to easily have a working Prometheus in your cluster.

Integration with Prometheus Operator

The chart can deploy ServiceMonitor objects for integration with Prometheus Operator installations. To do so, set the value metrics.serviceMonitor.enabled=true. Ensure that the Prometheus Operator CustomResourceDefinitions are installed in the cluster or it will fail with the following error:

no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"

Install the Bitnami Kube Prometheus helm chart for having the necessary CRDs and the Prometheus Operator.
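For example, a sketch of an install with both the metrics sidecar and the ServiceMonitor enabled (assuming the Prometheus Operator CRDs are already present, and substituting the registry placeholders as described above):

```shell
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/redis \
  --set metrics.enabled=true \
  --set metrics.serviceMonitor.enabled=true
```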

Rolling VS Immutable tags

It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.

Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist.

Use a different Redis® version

To modify the application version used in this chart, specify a different version of the image using the image.tag parameter and/or a different repository using the image.repository parameter.
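For instance, a sketch of pinning a specific image tag at install time (the tag below is hypothetical; check the repository for the tags that actually exist):

```shell
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/redis \
  --set image.tag=7.4.1-debian-12-r0
```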

Bootstrapping with an External Cluster

This chart can bring online a set of Pods that connect to an existing Redis deployment outside of Kubernetes. This creates a hybrid deployment in which both Kubernetes Pods and external instances, such as virtual machines, take part in a single Redis deployment. This is helpful, for example, when migrating Redis from virtual machines into Kubernetes. To take advantage of this, use the following as an example configuration:

replica:
  externalMaster:
    enabled: true
    host: external-redis-0.internal
sentinel:
  externalMaster:
    enabled: true
    host: external-redis-0.internal

:warning: This is currently limited to clusters in which Sentinel and Redis run on the same node! :warning:

Please also note that the external sentinel must be listening on port 26379, and this is currently not configurable.

Once the Kubernetes Redis Deployment is online and confirmed to be working with the existing cluster, the configuration can then be removed and the cluster will remain connected.

External DNS

This chart is equipped to leverage the ExternalDNS project. Doing so enables ExternalDNS to publish the FQDN for each instance, in the format <pod-name>.<release-name>.<dns-suffix>. For example, when using the following configuration:

useExternalDNS:
  enabled: true
  suffix: prod.example.org
  additionalAnnotations:
    ttl: 10

On a cluster where the name of the Helm release is a, the hostname of a Pod is generated as: a-redis-node-0.a-redis.prod.example.org. The IP of that FQDN will match that of the associated Pod. This modifies the following parameters of the Redis/Sentinel configuration using this new FQDN:

  • replica-announce-ip
  • known-sentinel
  • known-replica
  • announce-ip

:warning: This requires a working installation of external-dns to be fully functional. :warning:

See the official ExternalDNS documentation for additional configuration options.

Cluster topologies

Default: Master-Replicas

When installing the chart with architecture=replication, it will deploy a Redis® master StatefulSet and a Redis® replicas StatefulSet. The replicas will be read-replicas of the master. Two services will be exposed:

  • Redis® Master service: Points to the master, where read-write operations can be performed
  • Redis® Replicas service: Points to the replicas, where only read operations are allowed by default.

In case the master crashes, the replicas will wait until the master node is respawned by the Kubernetes Controller Manager.

Standalone

When installing the chart with architecture=standalone, it will deploy a standalone Redis® StatefulSet. A single service will be exposed:

  • Redis® Master service: Points to the master, where read-write operations can be performed

Master-Replicas with Sentinel

When installing the chart with architecture=replication and sentinel.enabled=true, it will deploy a Redis® master StatefulSet (only one master allowed) and a Redis® replicas StatefulSet. In this case, the pods will contain an extra container with Redis® Sentinel. This container will form a cluster of Redis® Sentinel nodes, which will promote a new master in case the actual one fails.

On graceful termination of the Redis® master pod, a failover of the master is initiated to promote a new master. The Redis® Sentinel container in this pod will wait for the failover to occur before terminating. If sentinel.redisShutdownWaitFailover=true is set (the default), the Redis® container will wait for the failover as well before terminating. This increases availability for reads during failover, but may cause stale reads until all clients have switched to the new master.

In addition to this, only one service is exposed:

  • Redis® service: Exposes port 6379 for Redis® read-only operations and port 26379 for accessing Redis® Sentinel.

For read-only operations, access the service using port 6379. For write operations, it's necessary to access the Redis® Sentinel cluster and query the current master using the command below (using redis-cli or similar):

SENTINEL get-master-addr-by-name <name of your MasterSet. e.g: mymaster>

This command will return the address of the current master, which can be accessed from inside the cluster.
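For instance, assuming the release is named my-release (so pods are named my-release-redis-node-N) and the default MasterSet name mymaster, the current master can be queried from inside the cluster with:

```shell
kubectl exec -it my-release-redis-node-0 -c sentinel -- \
  redis-cli -p 26379 SENTINEL get-master-addr-by-name mymaster
```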

In case the current master crashes, the Sentinel containers will elect a new master node.

master.count greater than 1 is not designed for use when sentinel.enabled=true.

Multiple masters (experimental)

When master.count is greater than 1, special care must be taken to create a consistent setup.

An example of use case is the creation of a redundant set of standalone masters or master-replicas per Kubernetes node where you must ensure:

  • No more than 1 master can be deployed per Kubernetes node
  • Replicas and writers can only see the single master of their own Kubernetes node

One way of achieving this is by setting master.service.internalTrafficPolicy=Local in combination with a master.affinity.podAntiAffinity spec to never schedule more than one master per Kubernetes node.

It's recommended to only change master.count if you know what you are doing. master.count greater than 1 is not designed for use when sentinel.enabled=true.

Update credentials

The Bitnami Redis chart, when upgrading, reuses the secret previously rendered by the chart or the one specified in auth.existingSecret. To update credentials, use one of the following:

  • Run helm upgrade specifying a new password in auth.password
  • Run helm upgrade specifying a new secret in auth.existingSecret
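For example, a sketch of rotating the password in place (registry placeholders as described above; newPassword is a hypothetical value):

```shell
helm upgrade my-release oci://REGISTRY_NAME/REPOSITORY_NAME/redis \
  --set auth.password=newPassword
```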

Using a password file

To use a password file for Redis® you need to create a secret containing the password and then deploy the chart using that secret. Follow these instructions:

  • Create the secret with the password. The file containing the password must be called redis-password.
kubectl create secret generic redis-password-secret --from-file=redis-password
  • Deploy the Helm Chart using the secret name as parameter:
usePassword=true
usePasswordFiles=true
existingSecret=redis-password-secret
sentinels.enabled=true
metrics.enabled=true
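Putting the parameters listed above together, a sketch of the corresponding install command (registry placeholders as described above):

```shell
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/redis \
  --set usePassword=true \
  --set usePasswordFiles=true \
  --set existingSecret=redis-password-secret \
  --set sentinels.enabled=true \
  --set metrics.enabled=true
```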

Securing traffic using TLS

TLS support can be enabled in the chart by specifying the tls.* parameters while creating a release. The following parameters should be configured to properly enable TLS support in the cluster:

  • tls.enabled: Enable TLS support. Defaults to false
  • tls.existingSecret: Name of the secret that contains the certificates. No defaults.
  • tls.certFilename: Certificate filename. No defaults.
  • tls.certKeyFilename: Certificate key filename. No defaults.
  • tls.certCAFilename: CA Certificate filename. No defaults.

For example:

First, create the secret with the certificates files:

kubectl create secret generic certificates-tls-secret --from-file=./cert.pem --from-file=./cert.key --from-file=./ca.pem

Then, use the following parameters:

tls.enabled="true"
tls.existingSecret="certificates-tls-secret"
tls.certFilename="cert.pem"
tls.certKeyFilename="cert.key"
tls.certCAFilename="ca.pem"

Metrics

The chart can optionally start a metrics exporter for Prometheus. The metrics endpoint (port 9121) is exposed in the service. Metrics can be scraped from within the cluster using a configuration similar to the example Prometheus scrape configuration. If metrics are to be scraped from outside the cluster, the Kubernetes API proxy can be utilized to access the endpoint.

If you have enabled TLS by specifying tls.enabled=true, you also need to pass TLS options to the metrics exporter. You can do that via metrics.extraArgs. You can find the metrics exporter CLI flags for TLS here.

You can either specify metrics.extraArgs.skip-tls-verification=true to skip TLS verification, or provide the following values under metrics.extraArgs for TLS client authentication:

tls-client-key-file
tls-client-cert-file
tls-ca-cert-file
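A sketch of the client-authentication variant in values.yaml. The certificate paths below are hypothetical and must point to files actually mounted into the metrics container (e.g. via metrics.extraVolumes and metrics.extraVolumeMounts):

```yaml
metrics:
  extraArgs:
    tls-client-key-file: /certs/tls.key    # hypothetical mount paths
    tls-client-cert-file: /certs/tls.crt
    tls-ca-cert-file: /certs/ca.crt
```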

Deploy a custom metrics script in the sidecar

A custom Lua script can be added to the redis-exporter sidecar by way of the metrics.extraArgs.script parameter. The pathname of the script must exist on the container, or the redis_exporter process (and therefore the whole pod) will refuse to start. The script can be provided to the sidecar containers via the metrics.extraVolumes and metrics.extraVolumeMounts parameters:

metrics:
  extraVolumeMounts:
    - name: '{{ printf "%s-metrics-script-file" (include "common.names.fullname" .) }}'
      mountPath: '{{ printf "/mnt/%s/" (include "common.names.name" .) }}'
      readOnly: true
  extraVolumes:
    - name: '{{ printf "%s-metrics-script-file" (include "common.names.fullname" .) }}'
      configMap:
        name: '{{ printf "%s-metrics-script" (include "common.names.fullname" .) }}'
  extraArgs:
    script: '{{ printf "/mnt/%s/my_custom_metrics.lua" (include "common.names.name" .) }}'

Then deploy the script into the correct location via extraDeploy:

extraDeploy:
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: '{{ printf "%s-metrics-script" (include "common.names.fullname" .) }}'
    data:
      my_custom_metrics.lua: |
        -- LUA SCRIPT CODE HERE, e.g.,
        return {'bitnami_makes_the_best_charts', '1'}

Host Kernel Settings

Redis® may require some changes in the kernel of the host machine to work as expected, in particular increasing the somaxconn value and disabling transparent huge pages. To do so, you can set up a privileged initContainer with the sysctlImage config values, for example:

sysctlImage:
  enabled: true
  mountHostSys: true
  command:
    - /bin/sh
    - -c
    - |-
      install_packages procps
      sysctl -w net.core.somaxconn=10000
      echo never > /host-sys/kernel/mm/transparent_hugepage/enabled

Alternatively, for Kubernetes 1.12+ you can set securityContext.sysctls, which will configure sysctls for master and replica pods. Example:

securityContext:
  sysctls:
  - name: net.core.somaxconn
    value: "10000"

Note that this will not disable transparent huge pages.

Backup and restore

To back up and restore Redis deployments on Kubernetes, you will need to create a snapshot of the data in the source cluster, and later restore it in a new cluster with the new parameters. Follow the instructions below:

Step 1: Backup the deployment

  • Connect to one of the nodes and start the Redis CLI tool. Then, run the commands below:

    $ kubectl exec -it my-release-master-0 -- bash
    $ redis-cli
    127.0.0.1:6379> auth your_current_redis_password
    OK
    127.0.0.1:6379> save
    OK
    
  • Copy the dump file from the Redis node:

    kubectl cp my-release-master-0:/data/dump.rdb dump.rdb -c redis
    

Step 2: Restore the data on the destination cluster

To restore the data in a new cluster, you will need to create a PVC and then upload the dump.rdb file to the new volume.

Follow the following steps:

  • In the values.yaml file set the appendonly parameter to no. You can skip this step if it is already configured as no

    commonConfiguration: |-
       # Enable AOF https://redis.io/topics/persistence#append-only-file
       appendonly no
       # Disable RDB persistence, AOF persistence already enabled.
       save ""
    

    Note that the Enable AOF comment belongs to the original config file and what you're actually doing is disabling it. This change is only necessary for the temporary cluster you're creating to upload the dump.

  • Start the new cluster to create the PVCs. Use the command below as an example:

    helm install new-redis -f values.yaml . --set architecture=replication --set replica.replicaCount=3
    
  • Now that the PVCs have been created, stop the release and copy the dump.rdb file onto the persisted data by using a helper pod.

    $ helm delete new-redis
    
    $ kubectl run -i --rm --tty volpod --overrides='
    {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {
            "name": "redisvolpod"
        },
        "spec": {
            "containers": [{
               "command": [
                    "tail",
                    "-f",
                    "/dev/null"
               ],
               "image": "bitnami/minideb",
               "name": "mycontainer",
               "volumeMounts": [{
                   "mountPath": "/mnt",
                   "name": "redisdata"
                }]
            }],
            "restartPolicy": "Never",
            "volumes": [{
                "name": "redisdata",
                "persistentVolumeClaim": {
                    "claimName": "redis-data-new-redis-master-0"
                }
            }]
        }
    }' --image="bitnami/minideb"
    
    $ kubectl cp dump.rdb redisvolpod:/mnt/dump.rdb
    $ kubectl delete pod redisvolpod
    
  • Restart the cluster:

    INFO: The appendonly parameter can be safely restored to your desired value.

    helm install new-redis -f values.yaml . --set architecture=replication --set replica.replicaCount=3
    

NetworkPolicy

To enable network policy for Redis®, install a networking plugin that implements the Kubernetes NetworkPolicy spec, and set networkPolicy.enabled to true.

With NetworkPolicy enabled, only pods with the generated client label will be able to connect to Redis. This label will be displayed in the output after a successful install.

With networkPolicy.ingressNSMatchLabels, pods from other namespaces can connect to Redis. Set networkPolicy.ingressNSPodMatchLabels to match pod labels in the matched namespace. For example, for a namespace labeled redis=external and pods in that namespace labeled redis-client=true, the fields should be set:

networkPolicy:
  enabled: true
  ingressNSMatchLabels:
    redis: external
  ingressNSPodMatchLabels:
    redis-client: true

Setting Pod's affinity

This chart allows you to set your custom affinity using the XXX.affinity parameter(s). Find more information about Pod's affinity in the Kubernetes documentation.

As an alternative, you can use one of the preset configurations for pod affinity, pod anti-affinity, and node affinity available in the bitnami/common chart. To do so, set the XXX.podAffinityPreset, XXX.podAntiAffinityPreset, or XXX.nodeAffinityPreset parameters.
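For instance, a sketch using the hard anti-affinity preset so no two pods of the same role land on one node (assuming the replica section mirrors the master parameters shown in the table below):

```yaml
master:
  podAntiAffinityPreset: hard   # never co-schedule two master pods on the same node
replica:
  podAntiAffinityPreset: hard
```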

Persistence

By default, the chart mounts a Persistent Volume at the /data path. The volume is created using dynamic volume provisioning. If a Persistent Volume Claim already exists, specify it during installation.

Existing PersistentVolumeClaim

  1. Create the PersistentVolume
  2. Create the PersistentVolumeClaim
  3. Install the chart
helm install my-release --set master.persistence.existingClaim=PVC_NAME oci://REGISTRY_NAME/REPOSITORY_NAME/redis

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

Parameters

Global parameters

Name | Description | Value
global.imageRegistry | Global Docker image registry | ""
global.imagePullSecrets | Global Docker registry secret names as an array | []
global.defaultStorageClass | Global default StorageClass for Persistent Volume(s) | ""
global.storageClass | DEPRECATED: use global.defaultStorageClass instead | ""
global.redis.password | Global Redis® password (overrides auth.password) | ""
global.security.allowInsecureImages | Allows skipping image verification | false
global.compatibility.openshift.adaptSecurityContext | Adapt the securityContext sections of the deployment to make them compatible with Openshift restricted-v2 SCC: remove runAsUser, runAsGroup and fsGroup and let the platform use their allowed default IDs. Possible values: auto (apply if the detected running cluster is Openshift), force (perform the adaptation always), disabled (do not perform adaptation) | auto

Common parameters

Name | Description | Value
kubeVersion | Override Kubernetes version | ""
nameOverride | String to partially override common.names.fullname | ""
fullnameOverride | String to fully override common.names.fullname | ""
namespaceOverride | String to fully override common.names.namespace | ""
commonLabels | Labels to add to all deployed objects | {}
commonAnnotations | Annotations to add to all deployed objects | {}
configmapChecksumAnnotations | Enable checksum annotations used to trigger rolling updates when ConfigMap(s) change | true
secretChecksumAnnotations | Enable checksum annotations used to trigger rolling updates when Secret(s) change | true
secretAnnotations | Annotations to add to secret | {}
clusterDomain | Kubernetes cluster domain name | cluster.local
extraDeploy | Array of extra objects to deploy with the release | []
useHostnames | Use hostnames internally when announcing replication. If false, the hostname will be resolved to an IP address | true
nameResolutionThreshold | Failure threshold for internal hostnames resolution | 5
nameResolutionTimeout | Timeout seconds between probes for internal hostnames resolution | 5
diagnosticMode.enabled | Enable diagnostic mode (all probes will be disabled and the command will be overridden) | false
diagnosticMode.command | Command to override all containers in the deployment | ["sleep"]
diagnosticMode.args | Args to override all containers in the deployment | ["infinity"]

Redis® Image parameters

Name | Description | Value
image.registry | Redis® image registry | REGISTRY_NAME
image.repository | Redis® image repository | REPOSITORY_NAME/redis
image.digest | Redis® image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | ""
image.pullPolicy | Redis® image pull policy | IfNotPresent
image.pullSecrets | Redis® image pull secrets | []
image.debug | Enable image debug mode | false

Redis® common configuration parameters

Name | Description | Value
architecture | Redis® architecture. Allowed values: standalone or replication | replication
auth.enabled | Enable password authentication | true
auth.sentinel | Enable authentication on sentinels too | true
auth.password | Redis® password | ""
auth.existingSecret | The name of an existing secret with Redis® credentials | ""
auth.existingSecretPasswordKey | Password key to be retrieved from existing secret | ""
auth.usePasswordFiles | Mount credentials as files instead of using an environment variable | true
auth.usePasswordFileFromSecret | Mount password file from secret | true
auth.acl.enabled | Enables the support of the Redis ACL system | false
auth.acl.sentinel | Enables the support of the Redis ACL system for Sentinel nodes | false
auth.acl.users | A list of the configured users in the Redis ACL system | []
auth.acl.userSecret | Name of the Secret containing user credentials for ACL users. Keys must match usernames. | ""
commonConfiguration | Common configuration to be added into the ConfigMap | ""
existingConfigmap | The name of an existing ConfigMap with your custom configuration for Redis® nodes | ""

Redis® master configuration parameters

Name | Description | Value
master.count | Number of Redis® master instances to deploy (experimental, requires additional configuration) | 1
master.revisionHistoryLimit | The number of old history to retain to allow rollback | 10
master.configuration | Configuration for Redis® master nodes | ""
master.disableCommands | Array with Redis® commands to disable on master nodes | ["FLUSHDB","FLUSHALL"]
master.command | Override default container command (useful when using custom images) | []
master.args | Override default container args (useful when using custom images) | []
master.enableServiceLinks | Whether information about services should be injected into pod's environment variable | true
master.preExecCmds | Additional commands to run prior to starting Redis® master | []
master.extraFlags | Array with additional command line flags for Redis® master | []
master.extraEnvVars | Array with extra environment variables to add to Redis® master nodes | []
master.extraEnvVarsCM | Name of existing ConfigMap containing extra env vars for Redis® master nodes | ""
master.extraEnvVarsSecret | Name of existing Secret containing extra env vars for Redis® master nodes | ""
master.containerPorts.redis | Container port to open on Redis® master nodes | 6379
master.startupProbe.enabled | Enable startupProbe on Redis® master nodes | false
master.startupProbe.initialDelaySeconds | Initial delay seconds for startupProbe | 20
master.startupProbe.periodSeconds | Period seconds for startupProbe | 5
master.startupProbe.timeoutSeconds | Timeout seconds for startupProbe | 5
master.startupProbe.failureThreshold | Failure threshold for startupProbe | 5
master.startupProbe.successThreshold | Success threshold for startupProbe | 1
master.livenessProbe.enabled | Enable livenessProbe on Redis® master nodes | true
master.livenessProbe.initialDelaySeconds | Initial delay seconds for livenessProbe | 20
master.livenessProbe.periodSeconds | Period seconds for livenessProbe | 5
master.livenessProbe.timeoutSeconds | Timeout seconds for livenessProbe | 5
master.livenessProbe.failureThreshold | Failure threshold for livenessProbe | 5
master.livenessProbe.successThreshold | Success threshold for livenessProbe | 1
master.readinessProbe.enabled | Enable readinessProbe on Redis® master nodes | true
master.readinessProbe.initialDelaySeconds | Initial delay seconds for readinessProbe | 20
master.readinessProbe.periodSeconds | Period seconds for readinessProbe | 5
master.readinessProbe.timeoutSeconds | Timeout seconds for readinessProbe | 1
master.readinessProbe.failureThreshold | Failure threshold for readinessProbe | 5
master.readinessProbe.successThreshold | Success threshold for readinessProbe | 1
master.customStartupProbe | Custom startupProbe that overrides the default one | {}
master.customLivenessProbe | Custom livenessProbe that overrides the default one | {}
master.customReadinessProbe | Custom readinessProbe that overrides the default one | {}
master.resourcesPreset | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if master.resources is set (master.resources is recommended for production). | nano
master.resources | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | {}
master.podSecurityContext.enabled | Enable Redis® master pods' Security Context | true
master.podSecurityContext.fsGroupChangePolicy | Set filesystem group change policy | Always
master.podSecurityContext.sysctls | Set kernel settings using the sysctl interface | []
master.podSecurityContext.supplementalGroups | Set filesystem extra groups | []
master.podSecurityContext.fsGroup | Set Redis® master pod's Security Context fsGroup | 1001
master.containerSecurityContext.enabled | Enable Redis® master containers' Security Context | true
master.containerSecurityContext.seLinuxOptions | Set SELinux options in container | {}
master.containerSecurityContext.runAsUser | Set Redis® master containers' Security Context runAsUser | 1001
master.containerSecurityContext.runAsGroup | Set Redis® master containers' Security Context runAsGroup | 1001
master.containerSecurityContext.runAsNonRoot | Set Redis® master containers' Security Context runAsNonRoot | true
master.containerSecurityContext.allowPrivilegeEscalation | Is it possible to escalate Redis® pod(s) privileges | false
master.containerSecurityContext.readOnlyRootFilesystem | Set container's Security Context read-only root filesystem | true
master.containerSecurityContext.seccompProfile.type | Set Redis® master containers' Security Context seccompProfile | RuntimeDefault
master.containerSecurityContext.capabilities.drop | Set Redis® master containers' Security Context capabilities to drop | ["ALL"]
master.kind | Use either Deployment, StatefulSet (default) or DaemonSet | StatefulSet
master.schedulerName | Alternate scheduler for Redis® master pods | ""
master.updateStrategy.type | Redis® master statefulset strategy type | RollingUpdate
master.minReadySeconds | How many seconds a pod needs to be ready before killing the next, during update | 0
master.priorityClassName | Redis® master pods' priorityClassName | ""
master.automountServiceAccountToken | Mount Service Account token in pod | false
master.hostAliases | Redis® master pods host aliases | []
master.podLabels | Extra labels for Redis® master pods | {}
master.podAnnotations | Annotations for Redis® master pods | {}
master.shareProcessNamespace | Share a single process namespace between all of the containers in Redis® master pods | false
master.podAffinityPreset | Pod affinity preset. Ignored if master.affinity is set. Allowed values: soft or hard | ""
master.podAntiAffinityPreset | Pod anti-affinity preset. Ignored if master.affinity is set. Allowed values: soft or hard | soft
master.nodeAffinityPreset.type | Node affinity preset type. Ignored if master.affinity is set. Allowed values: soft or hard | ""
master.nodeAffinityPreset.key | Node label key to match. Ignored if master.affinity is set | ""
master.nodeAffinityPreset.values | Node label values to match. Ignored if master.affinity is set | []
master.affinity | Affinity for Redis® master pods assignment | {}
master.nodeSelector | Node labels for Redis® master pods assignment | {}
master.tolerations | Tolerations for Redis® master pods assignment | []
master.topologySpreadConstraints | Spread Constraints for Redis® master pod assignment | []
master.dnsPolicy | DNS Policy for Redis® master pod | ""
master.dnsConfig | DNS Configuration for Redis® master pod | {}
master.lifecycleHooks | Lifecycle hooks for the Redis® master container(s) to automate configuration before or after startup | {}
master.extraVolumes | Optionally specify extra list of additional volumes for the Redis® master pod(s) | []
master.extraVolumeMounts | Optionally specify extra list of additional volumeMounts for the Redis® master container(s) | []
master.sidecars | Add additional sidecar containers to the Redis® master pod(s) | []
master.initContainers | Add additional init containers to the Redis® master pod(s) | []
master.persistence.enabled | Enable persistence on Redis® master nodes using Persistent Volume Claims | true
master.persistence.medium | Provide a medium for emptyDir volumes | ""
master.persistence.sizeLimit | Set this to enable a size limit for emptyDir volumes | ""
master.persistence.path | The path the volume will be mounted at on Redis® master containers | /data
master.persistence.subPath | The subdirectory of the volume to mount on Redis® master containers | ""
master.persistence.subPathExpr | Used to construct the subPath subdirectory of the volume to mount on Redis® master containers | ""
master.persistence.storageClass | Persistent Volume storage class | ""
master.persistence.accessModes | Persistent Volume access modes | ["ReadWriteOnce"]
master.persistence.size | Persistent Volume size | 8Gi
master.persistence.annotations | Additional custom annotations for the PVC | {}
master.persistence.labels | Additional custom labels for the PVC | {}
master.persistence.selector | Additional labels to match for the PVC | {}
master.persistence.dataSource | Custom PVC data source | {}
master.persistence.existingClaim | Use an existing PVC which must be created manually before bound | ""
master.persistentVolumeClaimRetentionPolicy.enabled | Controls if and how PVCs are deleted during the lifecycle of a StatefulSet | false
master.persistentVolumeClaimRetentionPolicy.whenScaled | Volume retention behavior when the replica count of the StatefulSet is reduced | Retain
master.persistentVolumeClaimRetentionPolicy.whenDeleted | Volume retention behavior that applies when the StatefulSet is deleted | Retain
master.service.type | Redis® master service type | ClusterIP
master.service.portNames.redis | Redis® master service port name | tcp-redis
master.service.ports.redis | Redis® master service port | 6379
master.service.nodePorts.redis | Node port for Redis® master | ""
master.service.externalTrafficPolicy | Redis® master service external traffic policy | Cluster
master.service.extraPorts | Extra ports to expose (normally used with the sidecar value) | []
master.service.internalTrafficPolicy | Redis® master service internal traffic policy (requires Kubernetes v1.22 or greater to be usable) | Cluster
master.service.clusterIP | Redis® master service Cluster IP | ""
master.service.loadBalancerIP | Redis® master service Load Balancer IP | ""
master.service.loadBalancerClass | Redis® master service Load Balancer class if service type is LoadBalancer (optional, cloud specific) | ""
master.service.loadBalancerSourceRanges | Redis® master service Load Balancer sources | []
master.service.externalIPs | Redis® master service External IPs | []
master.service.annotations | Additional custom annotations for Redis® master service | {}
master.service.sessionAffinity | Session Affinity for Kubernetes service, can be "None" or "ClientIP" | None
master.service.sessionAffinityConfig | Additional settings for the sessionAffinity | {}
master.terminationGracePeriodSeconds | Integer setting the termination grace period for the redis-master pods | 30
master.serviceAccount.create | Specifies whether a ServiceAccount should be created | true
master.serviceAccount.name | The name of the ServiceAccount to use | ""
master.serviceAccount.automountServiceAccountToken | Whether to auto mount the service account token | false
master.serviceAccount.annotations | Additional custom annotations for the ServiceAccount | {}
master.pdb.create | Enable/disable a Pod Disruption Budget creation | true
master.pdb.minAvailable | Minimum number/percentage of pods that should remain scheduled | {}
master.pdb.maxUnavailable | Maximum number/percentage of pods that may be made unavailable. Defaults to 1 if both master.pdb.minAvailable and master.pdb.maxUnavailable are empty. | {}
master.extraPodSpec | Optionally specify extra PodSpec for the Redis® master pod(s) | {}
master.annotationsAdditional custom annotations for Redis® Master resource{}
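As an illustration, the master persistence and Pod Disruption Budget parameters above could be combined in a values override (a sketch only; the storage class name and size are assumptions to adapt to your cluster):

```yaml
# values-master.yaml -- hypothetical override for the master parameters above
master:
  persistence:
    enabled: true
    storageClass: "standard"   # assumption: replace with a StorageClass available in your cluster
    size: 16Gi
    accessModes:
      - ReadWriteOnce
  pdb:
    create: true
    maxUnavailable: 1
```

It could then be applied with `helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/redis -f values-master.yaml`.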

### Redis® replicas configuration parameters

| Name | Description | Value |
| ---- | ----------- | ----- |
| `replica.kind` | Use either DaemonSet or StatefulSet (default) | `StatefulSet` |
| `replica.replicaCount` | Number of Redis® replicas to deploy | `3` |
| `replica.revisionHistoryLimit` | The number of old revisions to retain to allow rollback | `10` |
| `replica.configuration` | Configuration for Redis® replicas nodes | `""` |
| `replica.disableCommands` | Array with Redis® commands to disable on replicas nodes | `["FLUSHDB","FLUSHALL"]` |
| `replica.command` | Override default container command (useful when using custom images) | `[]` |
| `replica.args` | Override default container args (useful when using custom images) | `[]` |
| `replica.enableServiceLinks` | Whether information about services should be injected into the pod's environment variables | `true` |
| `replica.preExecCmds` | Additional commands to run prior to starting Redis® replicas | `[]` |
| `replica.extraFlags` | Array with additional command line flags for Redis® replicas | `[]` |
| `replica.extraEnvVars` | Array with extra environment variables to add to Redis® replicas nodes | `[]` |
| `replica.extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars for Redis® replicas nodes | `""` |
| `replica.extraEnvVarsSecret` | Name of existing Secret containing extra env vars for Redis® replicas nodes | `""` |
| `replica.externalMaster.enabled` | Use external master for bootstrapping | `false` |
| `replica.externalMaster.host` | External master host to bootstrap from | `""` |
| `replica.externalMaster.port` | Port of the external master Redis service | `6379` |
| `replica.containerPorts.redis` | Container port to open on Redis® replicas nodes | `6379` |
| `replica.startupProbe.enabled` | Enable startupProbe on Redis® replicas nodes | `true` |
| `replica.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `10` |
| `replica.startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
| `replica.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `5` |
| `replica.startupProbe.failureThreshold` | Failure threshold for startupProbe | `22` |
| `replica.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `replica.livenessProbe.enabled` | Enable livenessProbe on Redis® replicas nodes | `true` |
| `replica.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `20` |
| `replica.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `5` |
| `replica.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` |
| `replica.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `5` |
| `replica.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `replica.readinessProbe.enabled` | Enable readinessProbe on Redis® replicas nodes | `true` |
| `replica.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `20` |
| `replica.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `5` |
| `replica.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `1` |
| `replica.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `5` |
| `replica.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `replica.customStartupProbe` | Custom startupProbe that overrides the default one | `{}` |
| `replica.customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` |
| `replica.customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` |
| `replica.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if `replica.resources` is set (`replica.resources` is recommended for production). | `nano` |
| `replica.resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `replica.podSecurityContext.enabled` | Enable Redis® replicas pods' Security Context | `true` |
| `replica.podSecurityContext.fsGroupChangePolicy` | Set filesystem group change policy | `Always` |
| `replica.podSecurityContext.sysctls` | Set kernel settings using the sysctl interface | `[]` |
| `replica.podSecurityContext.supplementalGroups` | Set filesystem extra groups | `[]` |
| `replica.podSecurityContext.fsGroup` | Set Redis® replicas pod's Security Context fsGroup | `1001` |
| `replica.containerSecurityContext.enabled` | Enable Redis® replicas containers' Security Context | `true` |
| `replica.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `{}` |
| `replica.containerSecurityContext.runAsUser` | Set Redis® replicas containers' Security Context runAsUser | `1001` |
| `replica.containerSecurityContext.runAsGroup` | Set Redis® replicas containers' Security Context runAsGroup | `1001` |
| `replica.containerSecurityContext.runAsNonRoot` | Set Redis® replicas containers' Security Context runAsNonRoot | `true` |
| `replica.containerSecurityContext.allowPrivilegeEscalation` | Set Redis® replicas pod's Security Context allowPrivilegeEscalation | `false` |
| `replica.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context read-only root filesystem | `true` |
| `replica.containerSecurityContext.seccompProfile.type` | Set Redis® replicas containers' Security Context seccompProfile | `RuntimeDefault` |
| `replica.containerSecurityContext.capabilities.drop` | Set Redis® replicas containers' Security Context capabilities to drop | `["ALL"]` |
| `replica.schedulerName` | Alternate scheduler for Redis® replicas pods | `""` |
| `replica.updateStrategy.type` | Redis® replicas statefulset strategy type | `RollingUpdate` |
| `replica.minReadySeconds` | How many seconds a pod needs to be ready before killing the next, during update | `0` |
| `replica.priorityClassName` | Redis® replicas pods' priorityClassName | `""` |
| `replica.podManagementPolicy` | podManagementPolicy to manage scaling operation of Redis® replicas pods | `""` |
| `replica.automountServiceAccountToken` | Mount Service Account token in pod | `false` |
| `replica.hostAliases` | Redis® replicas pods host aliases | `[]` |
| `replica.podLabels` | Extra labels for Redis® replicas pods | `{}` |
| `replica.podAnnotations` | Annotations for Redis® replicas pods | `{}` |
| `replica.shareProcessNamespace` | Share a single process namespace between all of the containers in Redis® replicas pods | `false` |
| `replica.podAffinityPreset` | Pod affinity preset. Ignored if `replica.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `replica.podAntiAffinityPreset` | Pod anti-affinity preset. Ignored if `replica.affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `replica.nodeAffinityPreset.type` | Node affinity preset type. Ignored if `replica.affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `replica.nodeAffinityPreset.key` | Node label key to match. Ignored if `replica.affinity` is set | `""` |
| `replica.nodeAffinityPreset.values` | Node label values to match. Ignored if `replica.affinity` is set | `[]` |
| `replica.affinity` | Affinity for Redis® replicas pods assignment | `{}` |
| `replica.nodeSelector` | Node labels for Redis® replicas pods assignment | `{}` |
| `replica.tolerations` | Tolerations for Redis® replicas pods assignment | `[]` |
| `replica.topologySpreadConstraints` | Spread Constraints for Redis® replicas pod assignment | `[]` |
| `replica.dnsPolicy` | DNS Policy for Redis® replica pods | `""` |
| `replica.dnsConfig` | DNS Configuration for Redis® replica pods | `{}` |
| `replica.lifecycleHooks` | for the Redis® replica container(s) to automate configuration before or after startup | `{}` |
| `replica.extraVolumes` | Optionally specify extra list of additional volumes for the Redis® replicas pod(s) | `[]` |
| `replica.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the Redis® replicas container(s) | `[]` |
| `replica.sidecars` | Add additional sidecar containers to the Redis® replicas pod(s) | `[]` |
| `replica.initContainers` | Add additional init containers to the Redis® replicas pod(s) | `[]` |
| `replica.persistence.enabled` | Enable persistence on Redis® replicas nodes using Persistent Volume Claims | `true` |
| `replica.persistence.medium` | Provide a medium for `emptyDir` volumes. | `""` |
| `replica.persistence.sizeLimit` | Set this to enable a size limit for `emptyDir` volumes. | `""` |
| `replica.persistence.path` | The path the volume will be mounted at on Redis® replicas containers | `/data` |
| `replica.persistence.subPath` | The subdirectory of the volume to mount on Redis® replicas containers | `""` |
| `replica.persistence.subPathExpr` | Used to construct the subPath subdirectory of the volume to mount on Redis® replicas containers | `""` |
| `replica.persistence.storageClass` | Persistent Volume storage class | `""` |
| `replica.persistence.accessModes` | Persistent Volume access modes | `["ReadWriteOnce"]` |
| `replica.persistence.size` | Persistent Volume size | `8Gi` |
| `replica.persistence.annotations` | Additional custom annotations for the PVC | `{}` |
| `replica.persistence.labels` | Additional custom labels for the PVC | `{}` |
| `replica.persistence.selector` | Additional labels to match for the PVC | `{}` |
| `replica.persistence.dataSource` | Custom PVC data source | `{}` |
| `replica.persistence.existingClaim` | Use an existing PVC, which must be created manually before being bound | `""` |
| `replica.persistentVolumeClaimRetentionPolicy.enabled` | Controls if and how PVCs are deleted during the lifecycle of a StatefulSet | `false` |
| `replica.persistentVolumeClaimRetentionPolicy.whenScaled` | Volume retention behavior when the replica count of the StatefulSet is reduced | `Retain` |
| `replica.persistentVolumeClaimRetentionPolicy.whenDeleted` | Volume retention behavior that applies when the StatefulSet is deleted | `Retain` |
| `replica.service.type` | Redis® replicas service type | `ClusterIP` |
| `replica.service.ports.redis` | Redis® replicas service port | `6379` |
| `replica.service.nodePorts.redis` | Node port for Redis® replicas | `""` |
| `replica.service.externalTrafficPolicy` | Redis® replicas service external traffic policy | `Cluster` |
| `replica.service.internalTrafficPolicy` | Redis® replicas service internal traffic policy (requires Kubernetes v1.22 or greater to be usable) | `Cluster` |
| `replica.service.extraPorts` | Extra ports to expose (normally used with the `sidecar` value) | `[]` |
| `replica.service.clusterIP` | Redis® replicas service Cluster IP | `""` |
| `replica.service.loadBalancerIP` | Redis® replicas service Load Balancer IP | `""` |
| `replica.service.loadBalancerClass` | replicas service Load Balancer class if service type is `LoadBalancer` (optional, cloud specific) | `""` |
| `replica.service.loadBalancerSourceRanges` | Redis® replicas service Load Balancer sources | `[]` |
| `replica.service.annotations` | Additional custom annotations for Redis® replicas service | `{}` |
| `replica.service.sessionAffinity` | Session Affinity for Kubernetes service, can be "None" or "ClientIP" | `None` |
| `replica.service.sessionAffinityConfig` | Additional settings for the sessionAffinity | `{}` |
| `replica.terminationGracePeriodSeconds` | Integer setting the termination grace period for the redis-replicas pods | `30` |
| `replica.autoscaling.enabled` | Enable replica autoscaling settings | `false` |
| `replica.autoscaling.minReplicas` | Minimum replicas for the pod autoscaling | `1` |
| `replica.autoscaling.maxReplicas` | Maximum replicas for the pod autoscaling | `11` |
| `replica.autoscaling.targetCPU` | Percentage of CPU to consider when autoscaling | `""` |
| `replica.autoscaling.targetMemory` | Percentage of Memory to consider when autoscaling | `""` |
| `replica.serviceAccount.create` | Specifies whether a ServiceAccount should be created | `true` |
| `replica.serviceAccount.name` | The name of the ServiceAccount to use. | `""` |
| `replica.serviceAccount.automountServiceAccountToken` | Whether to auto mount the service account token | `false` |
| `replica.serviceAccount.annotations` | Additional custom annotations for the ServiceAccount | `{}` |
| `replica.pdb.create` | Enable/disable a Pod Disruption Budget creation | `true` |
| `replica.pdb.minAvailable` | Minimum number/percentage of pods that should remain scheduled | `{}` |
| `replica.pdb.maxUnavailable` | Maximum number/percentage of pods that may be made unavailable. Defaults to `1` if both `replica.pdb.minAvailable` and `replica.pdb.maxUnavailable` are empty. | `{}` |
| `replica.extraPodSpec` | Optionally specify extra PodSpec for the Redis® replicas pod(s) | `{}` |
| `replica.annotations` | Additional custom annotations for Redis® replicas resource | `{}` |
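For example, the replica count and autoscaling parameters might be combined as follows (a sketch; the target CPU percentage shown is an illustrative placeholder, not a recommendation):

```yaml
# values-replica.yaml -- hypothetical override for the replica parameters above
replica:
  replicaCount: 3
  autoscaling:
    enabled: true
    minReplicas: 3
    maxReplicas: 6
    targetCPU: "60"   # placeholder: target CPU utilization percentage for the HPA
```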

### Redis® Sentinel configuration parameters

| Name | Description | Value |
| ---- | ----------- | ----- |
| `sentinel.enabled` | Use Redis® Sentinel on Redis® pods. | `false` |
| `sentinel.image.registry` | Redis® Sentinel image registry | `REGISTRY_NAME` |
| `sentinel.image.repository` | Redis® Sentinel image repository | `REPOSITORY_NAME/redis-sentinel` |
| `sentinel.image.digest` | Redis® Sentinel image digest in the format sha256:aa.... Please note this parameter, if set, will override the tag | `""` |
| `sentinel.image.pullPolicy` | Redis® Sentinel image pull policy | `IfNotPresent` |
| `sentinel.image.pullSecrets` | Redis® Sentinel image pull secrets | `[]` |
| `sentinel.image.debug` | Enable image debug mode | `false` |
| `sentinel.annotations` | Additional custom annotations for Redis® Sentinel resource | `{}` |
| `sentinel.masterSet` | Master set name | `mymaster` |
| `sentinel.quorum` | Sentinel Quorum | `2` |
| `sentinel.getMasterTimeout` | Amount of time to allow before `get_sentinel_master_info()` times out. | `90` |
| `sentinel.automateClusterRecovery` | Automate cluster recovery in cases where the last replica is not considered a good replica and Sentinel won't automatically failover to it. | `false` |
| `sentinel.redisShutdownWaitFailover` | Whether the Redis® master container waits for the failover at shutdown (in addition to the Redis® Sentinel container). | `true` |
| `sentinel.downAfterMilliseconds` | Timeout for detecting that a Redis® node is down | `60000` |
| `sentinel.failoverTimeout` | Timeout for performing a failover election | `180000` |
| `sentinel.parallelSyncs` | Number of replicas that can be reconfigured in parallel to use the new master after a failover | `1` |
| `sentinel.configuration` | Configuration for Redis® Sentinel nodes | `""` |
| `sentinel.command` | Override default container command (useful when using custom images) | `[]` |
| `sentinel.args` | Override default container args (useful when using custom images) | `[]` |
| `sentinel.enableServiceLinks` | Whether information about services should be injected into the pod's environment variables | `true` |
| `sentinel.preExecCmds` | Additional commands to run prior to starting Redis® Sentinel | `[]` |
| `sentinel.extraEnvVars` | Array with extra environment variables to add to Redis® Sentinel nodes | `[]` |
| `sentinel.extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars for Redis® Sentinel nodes | `""` |
| `sentinel.extraEnvVarsSecret` | Name of existing Secret containing extra env vars for Redis® Sentinel nodes | `""` |
| `sentinel.externalMaster.enabled` | Use external master for bootstrapping | `false` |
| `sentinel.externalMaster.host` | External master host to bootstrap from | `""` |
| `sentinel.externalMaster.port` | Port of the external master Redis service | `6379` |
| `sentinel.containerPorts.sentinel` | Container port to open on Redis® Sentinel nodes | `26379` |
| `sentinel.startupProbe.enabled` | Enable startupProbe on Redis® Sentinel nodes | `true` |
| `sentinel.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `10` |
| `sentinel.startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
| `sentinel.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `5` |
| `sentinel.startupProbe.failureThreshold` | Failure threshold for startupProbe | `22` |
| `sentinel.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `sentinel.livenessProbe.enabled` | Enable livenessProbe on Redis® Sentinel nodes | `true` |
| `sentinel.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `20` |
| `sentinel.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
| `sentinel.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` |
| `sentinel.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `sentinel.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `sentinel.readinessProbe.enabled` | Enable readinessProbe on Redis® Sentinel nodes | `true` |
| `sentinel.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `20` |
| `sentinel.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `5` |
| `sentinel.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `1` |
| `sentinel.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `sentinel.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `sentinel.customStartupProbe` | Custom startupProbe that overrides the default one | `{}` |
| `sentinel.customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` |
| `sentinel.customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` |
| `sentinel.persistence.enabled` | Enable persistence on Redis® sentinel nodes using Persistent Volume Claims (Experimental) | `false` |
| `sentinel.persistence.storageClass` | Persistent Volume storage class | `""` |
| `sentinel.persistence.accessModes` | Persistent Volume access modes | `["ReadWriteOnce"]` |
| `sentinel.persistence.size` | Persistent Volume size | `100Mi` |
| `sentinel.persistence.annotations` | Additional custom annotations for the PVC | `{}` |
| `sentinel.persistence.labels` | Additional custom labels for the PVC | `{}` |
| `sentinel.persistence.selector` | Additional labels to match for the PVC | `{}` |
| `sentinel.persistence.dataSource` | Custom PVC data source | `{}` |
| `sentinel.persistence.medium` | Provide a medium for `emptyDir` volumes. | `""` |
| `sentinel.persistence.sizeLimit` | Set this to enable a size limit for `emptyDir` volumes. | `""` |
| `sentinel.persistentVolumeClaimRetentionPolicy.enabled` | Controls if and how PVCs are deleted during the lifecycle of a StatefulSet | `false` |
| `sentinel.persistentVolumeClaimRetentionPolicy.whenScaled` | Volume retention behavior when the replica count of the StatefulSet is reduced | `Retain` |
| `sentinel.persistentVolumeClaimRetentionPolicy.whenDeleted` | Volume retention behavior that applies when the StatefulSet is deleted | `Retain` |
| `sentinel.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if `sentinel.resources` is set (`sentinel.resources` is recommended for production). | `nano` |
| `sentinel.resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `sentinel.containerSecurityContext.enabled` | Enable Redis® Sentinel containers' Security Context | `true` |
| `sentinel.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `{}` |
| `sentinel.containerSecurityContext.runAsUser` | Set Redis® Sentinel containers' Security Context runAsUser | `1001` |
| `sentinel.containerSecurityContext.runAsGroup` | Set Redis® Sentinel containers' Security Context runAsGroup | `1001` |
| `sentinel.containerSecurityContext.runAsNonRoot` | Set Redis® Sentinel containers' Security Context runAsNonRoot | `true` |
| `sentinel.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context read-only root filesystem | `true` |
| `sentinel.containerSecurityContext.allowPrivilegeEscalation` | Set Redis® Sentinel containers' Security Context allowPrivilegeEscalation | `false` |
| `sentinel.containerSecurityContext.seccompProfile.type` | Set Redis® Sentinel containers' Security Context seccompProfile | `RuntimeDefault` |
| `sentinel.containerSecurityContext.capabilities.drop` | Set Redis® Sentinel containers' Security Context capabilities to drop | `["ALL"]` |
| `sentinel.lifecycleHooks` | for the Redis® sentinel container(s) to automate configuration before or after startup | `{}` |
| `sentinel.extraVolumes` | Optionally specify extra list of additional volumes for the Redis® Sentinel | `[]` |
| `sentinel.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the Redis® Sentinel container(s) | `[]` |
| `sentinel.service.type` | Redis® Sentinel service type | `ClusterIP` |
| `sentinel.service.ports.redis` | Redis® service port for Redis® | `6379` |
| `sentinel.service.ports.sentinel` | Redis® service port for Redis® Sentinel | `26379` |
| `sentinel.service.nodePorts.redis` | Node port for Redis® | `""` |
| `sentinel.service.nodePorts.sentinel` | Node port for Sentinel | `""` |
| `sentinel.service.externalTrafficPolicy` | Redis® Sentinel service external traffic policy | `Cluster` |
| `sentinel.service.extraPorts` | Extra ports to expose (normally used with the `sidecar` value) | `[]` |
| `sentinel.service.clusterIP` | Redis® Sentinel service Cluster IP | `""` |
| `sentinel.service.createMaster` | Enable master service pointing to the current master (experimental) | `false` |
| `sentinel.service.loadBalancerIP` | Redis® Sentinel service Load Balancer IP | `""` |
| `sentinel.service.loadBalancerClass` | sentinel service Load Balancer class if service type is `LoadBalancer` (optional, cloud specific) | `""` |
| `sentinel.service.loadBalancerSourceRanges` | Redis® Sentinel service Load Balancer sources | `[]` |
| `sentinel.service.annotations` | Additional custom annotations for Redis® Sentinel service | `{}` |
| `sentinel.service.sessionAffinity` | Session Affinity for Kubernetes service, can be "None" or "ClientIP" | `None` |
| `sentinel.service.sessionAffinityConfig` | Additional settings for the sessionAffinity | `{}` |
| `sentinel.service.headless.annotations` | Annotations for the headless service. | `{}` |
| `sentinel.service.headless.extraPorts` | Optionally specify extra ports to expose for the headless service. | `[]` |
| `sentinel.masterService.enabled` | Enable master service pointing to the current master (experimental) | `false` |
| `sentinel.masterService.type` | Redis® Sentinel master service type | `ClusterIP` |
| `sentinel.masterService.ports.redis` | Redis® service port for Redis® | `6379` |
| `sentinel.masterService.nodePorts.redis` | Node port for Redis® | `""` |
| `sentinel.masterService.externalTrafficPolicy` | Redis® master service external traffic policy | `""` |
| `sentinel.masterService.extraPorts` | Extra ports to expose (normally used with the `sidecar` value) | `[]` |
| `sentinel.masterService.clusterIP` | Redis® master service Cluster IP | `""` |
| `sentinel.masterService.loadBalancerIP` | Redis® master service Load Balancer IP | `""` |
| `sentinel.masterService.loadBalancerClass` | master service Load Balancer class if service type is `LoadBalancer` (optional, cloud specific) | `""` |
| `sentinel.masterService.loadBalancerSourceRanges` | Redis® master service Load Balancer sources | `[]` |
| `sentinel.masterService.annotations` | Additional custom annotations for Redis® master service | `{}` |
| `sentinel.masterService.sessionAffinity` | Session Affinity for Kubernetes service, can be "None" or "ClientIP" | `None` |
| `sentinel.masterService.sessionAffinityConfig` | Additional settings for the sessionAffinity | `{}` |
| `sentinel.terminationGracePeriodSeconds` | Integer setting the termination grace period for the redis-node pods | `30` |
| `sentinel.extraPodSpec` | Optionally specify extra PodSpec for the Redis® Sentinel pod(s) | `{}` |
| `sentinel.externalAccess.enabled` | Enable external access to Redis® | `false` |
| `sentinel.externalAccess.service.loadBalancerIPAnnotaion` | Name of the annotation used to specify a fixed IP for the service | `""` |
| `sentinel.externalAccess.service.type` | Type for the services used to expose every Pod | `LoadBalancer` |
| `sentinel.externalAccess.service.redisPort` | Port for the services used to expose redis-server | `6379` |
| `sentinel.externalAccess.service.sentinelPort` | Port for the services used to expose redis-sentinel | `26379` |
| `sentinel.externalAccess.service.loadBalancerIP` | Array of load balancer IPs for each Redis® node. Length must be the same as `sentinel.replicaCount` | `[]` |
| `sentinel.externalAccess.service.loadBalancerClass` | Load Balancer class if service type is `LoadBalancer` (optional, cloud specific) | `""` |
| `sentinel.externalAccess.service.loadBalancerSourceRanges` | Service Load Balancer sources | `[]` |
| `sentinel.externalAccess.service.annotations` | Annotations to add to the services used to expose every Pod of the Redis® Cluster | `{}` |
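A minimal Sentinel-enabled deployment using the parameters above might look like this (a sketch; the quorum, replica count, and down-detection timeout shown are illustrative, not tuned recommendations):

```yaml
# values-sentinel.yaml -- hypothetical Sentinel topology override
sentinel:
  enabled: true
  masterSet: mymaster
  quorum: 2                     # majority of 3 Sentinels must agree the master is down
  downAfterMilliseconds: 60000  # how long a node must be unreachable before it is flagged down
replica:
  replicaCount: 3               # one Sentinel runs alongside each Redis node
```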

### Other Parameters

| Name | Description | Value |
| ---- | ----------- | ----- |
| `serviceBindings.enabled` | Create secret for service binding (Experimental) | `false` |
| `networkPolicy.enabled` | Enable creation of NetworkPolicy resources | `true` |
| `networkPolicy.allowExternal` | Don't require client label for connections | `true` |
| `networkPolicy.allowExternalEgress` | Allow the pod to access any range of ports and all destinations. | `true` |
| `networkPolicy.extraIngress` | Add extra ingress rules to the NetworkPolicy | `[]` |
| `networkPolicy.extraEgress` | Add extra egress rules to the NetworkPolicy | `[]` |
| `networkPolicy.ingressNSMatchLabels` | Labels to match to allow traffic from other namespaces | `{}` |
| `networkPolicy.ingressNSPodMatchLabels` | Pod labels to match to allow traffic from other namespaces | `{}` |
| `networkPolicy.metrics.allowExternal` | Don't require client label for connections for metrics endpoint | `true` |
| `networkPolicy.metrics.ingressNSMatchLabels` | Labels to match to allow traffic from other namespaces to metrics endpoint | `{}` |
| `networkPolicy.metrics.ingressNSPodMatchLabels` | Pod labels to match to allow traffic from other namespaces to metrics endpoint | `{}` |
| `podSecurityPolicy.create` | Whether to create a PodSecurityPolicy. WARNING: PodSecurityPolicy is deprecated in Kubernetes v1.21 or later, and unavailable in v1.25 or later | `false` |
| `podSecurityPolicy.enabled` | Enable PodSecurityPolicy's RBAC rules | `false` |
| `rbac.create` | Specifies whether RBAC resources should be created | `false` |
| `rbac.rules` | Custom RBAC rules to set | `[]` |
| `serviceAccount.create` | Specifies whether a ServiceAccount should be created | `true` |
| `serviceAccount.name` | The name of the ServiceAccount to use. | `""` |
| `serviceAccount.automountServiceAccountToken` | Whether to auto mount the service account token | `false` |
| `serviceAccount.annotations` | Additional custom annotations for the ServiceAccount | `{}` |
| `pdb` | DEPRECATED. Please use `master.pdb` and `replica.pdb` values instead | `{}` |
| `tls.enabled` | Enable TLS traffic | `false` |
| `tls.authClients` | Require clients to authenticate | `true` |
| `tls.autoGenerated` | Enable autogenerated certificates | `false` |
| `tls.existingSecret` | The name of the existing secret that contains the TLS certificates | `""` |
| `tls.certificatesSecret` | DEPRECATED. Use `tls.existingSecret` instead. | `""` |
| `tls.certFilename` | Certificate filename | `""` |
| `tls.certKeyFilename` | Certificate Key filename | `""` |
| `tls.certCAFilename` | CA Certificate filename | `""` |
| `tls.dhParamsFilename` | File containing DH params (in order to support DH based ciphers) | `""` |
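For instance, TLS could be enabled with a pre-created certificate Secret as below (a sketch; the Secret name and filenames are assumptions and must match the keys of a Secret you create yourself):

```yaml
# values-tls.yaml -- hypothetical TLS override
tls:
  enabled: true
  authClients: true
  existingSecret: redis-tls   # assumption: a Secret created beforehand containing the files below
  certFilename: tls.crt
  certKeyFilename: tls.key
  certCAFilename: ca.crt
```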

### Metrics Parameters

| Name | Description | Value |
| ---- | ----------- | ----- |
| `metrics.enabled` | Start a sidecar prometheus exporter to expose Redis® metrics | `false` |
| `metrics.image.registry` | Redis® Exporter image registry | `REGISTRY_NAME` |
| `metrics.image.repository` | Redis® Exporter image repository | `REPOSITORY_NAME/redis-exporter` |
| `metrics.image.digest` | Redis® Exporter image digest in the format sha256:aa.... Please note this parameter, if set, will override the tag | `""` |
| `metrics.image.pullPolicy` | Redis® Exporter image pull policy | `IfNotPresent` |
| `metrics.image.pullSecrets` | Redis® Exporter image pull secrets | `[]` |
| `metrics.containerPorts.http` | Metrics HTTP container port | `9121` |
| `metrics.startupProbe.enabled` | Enable startupProbe on Redis® replicas nodes | `false` |
| `metrics.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `10` |
| `metrics.startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
| `metrics.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `5` |
| `metrics.startupProbe.failureThreshold` | Failure threshold for startupProbe | `5` |
| `metrics.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `metrics.livenessProbe.enabled` | Enable livenessProbe on Redis® replicas nodes | `true` |
| `metrics.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `10` |
| `metrics.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
| `metrics.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` |
| `metrics.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `5` |
| `metrics.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `metrics.readinessProbe.enabled` | Enable readinessProbe on Redis® replicas nodes | `true` |
| `metrics.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `5` |
| `metrics.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
| `metrics.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `1` |
| `metrics.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `3` |
| `metrics.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `metrics.customStartupProbe` | Custom startupProbe that overrides the default one | `{}` |
| `metrics.customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` |
| `metrics.customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` |
| `metrics.command` | Override default metrics container init command (useful when using custom images) | `[]` |
| `metrics.redisTargetHost` | A way to specify an alternative Redis® hostname | `localhost` |
| `metrics.extraArgs` | Extra arguments for Redis® exporter | `{}` |
| `metrics.extraEnvVars` | Array with extra environment variables to add to Redis® exporter | `[]` |
| `metrics.containerSecurityContext.enabled` | Enable Redis® exporter containers' Security Context | `true` |
| `metrics.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `{}` |
| `metrics.containerSecurityContext.runAsUser` | Set Redis® exporter containers' Security Context runAsUser | `1001` |
| `metrics.containerSecurityContext.runAsGroup` | Set Redis® exporter containers' Security Context runAsGroup | `1001` |
| `metrics.containerSecurityContext.runAsNonRoot` | Set Redis® exporter containers' Security Context runAsNonRoot | `true` |
| `metrics.containerSecurityContext.allowPrivilegeEscalation` | Set Redis® exporter containers' Security Context allowPrivilegeEscalation | `false` |
| `metrics.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context read-only root filesystem | `true` |
| `metrics.containerSecurityContext.seccompProfile.type` | Set Redis® exporter containers' Security Context seccompProfile | `RuntimeDefault` |
| `metrics.containerSecurityContext.capabilities.drop` | Set Redis® exporter containers' Security Context capabilities to drop | `["ALL"]` |
| `metrics.extraVolumes` | Optionally specify extra list of additional volumes for the Redis® metrics sidecar | `[]` |
| `metrics.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the Redis® metrics sidecar | `[]` |
| `metrics.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if `metrics.resources` is set (`metrics.resources` is recommended for production). | `nano` |
| `metrics.resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `metrics.podLabels` | Extra labels for Redis® exporter pods | `{}` |
| `metrics.podAnnotations` | Annotations for Redis® exporter pods | `{}` |
| `metrics.service.enabled` | Create Service resource(s) for scraping metrics using PrometheusOperator ServiceMonitor, can be disabled when using a PodMonitor | `true` |
| `metrics.service.type` | Redis® exporter service type | `ClusterIP` |
| `metrics.service.ports.http` | Redis® exporter service port | `9121` |
| `metrics.service.externalTrafficPolicy` | Redis® exporter service external traffic policy | `Cluster` |
| `metrics.service.extraPorts` | Extra ports to expose (normally used with the `sidecar` value) | `[]` |
| `metrics.service.loadBalancerIP` | Redis® exporter service Load Balancer IP | `""` |
| `metrics.service.loadBalancerClass` | exporter service Load Balancer class if service type is `LoadBalancer` (optional, cloud specific) | `""` |
| `metrics.service.loadBalancerSourceRanges` | Redis® exporter service Load Balancer sources | `[]` |
| `metrics.service.annotations` | Additional custom annotations for Redis® exporter service | `{}` |
| `metrics.service.clusterIP` | Redis® exporter service Cluster IP | `""` |
| `metrics.serviceMonitor.port` | The service port to scrape metrics from | `http-metrics` |
| `metrics.serviceMonitor.enabled` | Create ServiceMonitor resource(s) for scraping metrics using PrometheusOperator | `false` |
| `metrics.serviceMonitor.namespace` | The namespace in which the ServiceMonitor will be created | `""` |
| `metrics.serviceMonitor.interval` | The interval at which metrics should be scraped | `30s` |
| `metrics.serviceMonitor.scrapeTimeout` | The timeout after which the scrape is ended | `""` |
| `metrics.serviceMonitor.relabelings` | Metrics RelabelConfigs to apply to samples before scraping. | `[]` |
| `metrics.serviceMonitor.metricRelabelings` | Metrics RelabelConfigs to apply to samples before ingestion. | `[]` |
| `metrics.serviceMonitor.honorLabels` | Specify honorLabels parameter to add the scrape endpoint | `false` |
| `metrics.serviceMonitor.additionalLabels` | Additional labels that can be used so ServiceMonitor resource(s) can be discovered by Prometheus | `{}` |
| `metrics.serviceMonitor.podTargetLabels` | Labels from the Kubernetes pod to be transferred to the created metrics | `[]` |
| `metrics.serviceMonitor.sampleLimit` | Limit of how many samples should be scraped from every Pod | `false` |
| `metrics.serviceMonitor.targetLimit` | Limit of how many targets should be scraped | `false` |
| `metrics.serviceMonitor.additionalEndpoints` | Additional endpoints to scrape (e.g. sentinel) | `[]` |
| `metrics.podMonitor.port` | The pod port to scrape metrics from | `metrics` |
| `metrics.podMonitor.enabled` | Create PodMonitor resource(s) for scraping metrics using PrometheusOperator | `false` |
| `metrics.podMonitor.namespace` | The namespace in which the PodMonitor will be created | `""` |
| `metrics.podMonitor.interval` | The interval at which metrics should be scraped | `30s` |
| `metrics.podMonitor.scrapeTimeout` | The timeout after which the scrape is ended | `""` |
| `metrics.podMonitor.relabelings` | Metrics RelabelConfigs to apply to samples before scraping. | `[]` |
| `metrics.podMonitor.metricRelabelings` | Metrics RelabelConfigs to apply to samples before ingestion. | `[]` |
| `metrics.podMonitor.honorLabels` | Specify honorLabels parameter to add the scrape endpoint | `false` |
| `metrics.podMonitor.additionalLabels` | Additional labels that can be used so PodMonitor resource(s) can be discovered by Prometheus | `{}` |
| `metrics.podMonitor.podTargetLabels` | Labels from the Kubernetes pod to be transferred to the created metrics | `[]` |
| `metrics.podMonitor.sampleLimit` | Limit of how many samples should be scraped from every Pod | `false` |
| `metrics.podMonitor.targetLimit` | Limit of how many targets should be scraped | `false` |
| `metrics.podMonitor.additionalEndpoints` | Additional endpoints to scrape (e.g. sentinel) | `[]` |
| `metrics.prometheusRule.enabled` | Create a custom prometheusRule Resource for scraping metrics using PrometheusOperator | `false` |
metrics.prometheusRule.namespaceThe namespace in which the prometheusRule will be created""
metrics.prometheusRule.additionalLabelsAdditional labels for the prometheusRule{}
metrics.prometheusRule.rulesCustom Prometheus rules[]
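
Putting a few of these parameters together, a minimal values override that enables the exporter and a Prometheus Operator ServiceMonitor could look like the sketch below (the `release: prometheus` label is an assumption that depends on how your Prometheus instance discovers monitors):

```yaml
# Sketch of a metrics section for values.yaml -- illustrative, not exhaustive.
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    interval: 30s
    additionalLabels:
      release: prometheus   # hypothetical label matched by your Prometheus
```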

Init Container Parameters

| Name | Description | Value |
| ---- | ----------- | ----- |
| `volumePermissions.enabled` | Enable init container that changes the owner/group of the PV mount point to `runAsUser:fsGroup` | `false` |
| `volumePermissions.image.registry` | OS Shell + Utility image registry | `REGISTRY_NAME` |
| `volumePermissions.image.repository` | OS Shell + Utility image repository | `REPOSITORY_NAME/os-shell` |
| `volumePermissions.image.digest` | OS Shell + Utility image digest in the way `sha256:aa...`. Please note this parameter, if set, will override the tag | `""` |
| `volumePermissions.image.pullPolicy` | OS Shell + Utility image pull policy | `IfNotPresent` |
| `volumePermissions.image.pullSecrets` | OS Shell + Utility image pull secrets | `[]` |
| `volumePermissions.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if `volumePermissions.resources` is set (`volumePermissions.resources` is recommended for production). | `nano` |
| `volumePermissions.resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
| `volumePermissions.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `{}` |
| `volumePermissions.containerSecurityContext.runAsUser` | Set init container's Security Context runAsUser | `0` |
| `volumePermissions.extraEnvVars` | Array with extra environment variables to add to volume permissions init container | `[]` |
| `kubectl.image.registry` | Kubectl image registry | `REGISTRY_NAME` |
| `kubectl.image.repository` | Kubectl image repository | `REPOSITORY_NAME/kubectl` |
| `kubectl.image.digest` | Kubectl image digest in the way `sha256:aa...`. Please note this parameter, if set, will override the tag | `""` |
| `kubectl.image.pullPolicy` | Kubectl image pull policy | `IfNotPresent` |
| `kubectl.image.pullSecrets` | Kubectl pull secrets | `[]` |
| `kubectl.command` | kubectl command to execute | `["/opt/bitnami/scripts/kubectl-scripts/update-master-label.sh"]` |
| `kubectl.containerSecurityContext.enabled` | Enabled kubectl containers' Security Context | `true` |
| `kubectl.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `{}` |
| `kubectl.containerSecurityContext.runAsUser` | Set kubectl containers' Security Context runAsUser | `1001` |
| `kubectl.containerSecurityContext.runAsGroup` | Set kubectl containers' Security Context runAsGroup | `1001` |
| `kubectl.containerSecurityContext.runAsNonRoot` | Set kubectl containers' Security Context runAsNonRoot | `true` |
| `kubectl.containerSecurityContext.allowPrivilegeEscalation` | Set kubectl containers' Security Context allowPrivilegeEscalation | `false` |
| `kubectl.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context read-only root filesystem | `true` |
| `kubectl.containerSecurityContext.seccompProfile.type` | Set kubectl containers' Security Context seccompProfile | `RuntimeDefault` |
| `kubectl.containerSecurityContext.capabilities.drop` | Set kubectl containers' Security Context capabilities to drop | `["ALL"]` |
| `kubectl.resources.limits` | The resources limits for the kubectl containers | `{}` |
| `kubectl.resources.requests` | The requested resources for the kubectl containers | `{}` |
| `sysctl.enabled` | Enable init container to modify Kernel settings | `false` |
| `sysctl.image.registry` | OS Shell + Utility image registry | `REGISTRY_NAME` |
| `sysctl.image.repository` | OS Shell + Utility image repository | `REPOSITORY_NAME/os-shell` |
| `sysctl.image.digest` | OS Shell + Utility image digest in the way `sha256:aa...`. Please note this parameter, if set, will override the tag | `""` |
| `sysctl.image.pullPolicy` | OS Shell + Utility image pull policy | `IfNotPresent` |
| `sysctl.image.pullSecrets` | OS Shell + Utility image pull secrets | `[]` |
| `sysctl.command` | Override default init-sysctl container command (useful when using custom images) | `[]` |
| `sysctl.mountHostSys` | Mount the host `/sys` folder to `/host-sys` | `false` |
| `sysctl.resourcesPreset` | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if `sysctl.resources` is set (`sysctl.resources` is recommended for production). | `nano` |
| `sysctl.resources` | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | `{}` |
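
As a hedged sketch, enabling the sysctl init container to tune kernel settings commonly adjusted for Redis® might look like this (the exact sysctl values are illustrative, not recommendations):

```yaml
# Illustrative sysctl override; runs as a privileged init container.
sysctl:
  enabled: true
  mountHostSys: true
  command:
    - /bin/sh
    - -c
    - |-
      sysctl -w net.core.somaxconn=10000
      echo never > /host-sys/kernel/mm/transparent_hugepage/enabled
```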

useExternalDNS Parameters

| Name | Description | Value |
| ---- | ----------- | ----- |
| `useExternalDNS.enabled` | Enable various syntax that would enable external-dns to work. Note this requires a working installation of external-dns to be usable. | `false` |
| `useExternalDNS.additionalAnnotations` | Extra annotations to be utilized when external-dns is enabled | `{}` |
| `useExternalDNS.annotationKey` | The annotation key utilized when external-dns is enabled. Setting this to false will disable annotations | `external-dns.alpha.kubernetes.io/` |
| `useExternalDNS.suffix` | The DNS suffix utilized when external-dns is enabled. Note that we prepend the suffix with the full name of the release | `""` |
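
For illustration, a values fragment wiring the chart to an existing external-dns installation might look like this (the domain and TTL are hypothetical):

```yaml
# Sketch only: requires a working external-dns deployment in the cluster.
useExternalDNS:
  enabled: true
  suffix: example.com                              # hypothetical DNS zone
  additionalAnnotations:
    external-dns.alpha.kubernetes.io/ttl: "120"    # illustrative TTL
```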

Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,

helm install my-release \
  --set auth.password=secretpassword \
  oci://REGISTRY_NAME/REPOSITORY_NAME/redis

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

The above command sets the Redis® server password to secretpassword.

NOTE: Once this chart is deployed, it is not possible to change the application's access credentials, such as usernames or passwords, using Helm. To change these application credentials after deployment, delete any persistent volumes (PVs) used by the chart and re-deploy it, or use the application's built-in administrative tools if available.

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

helm install my-release -f values.yaml oci://REGISTRY_NAME/REPOSITORY_NAME/redis

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts. Tip: You can use the default values.yaml
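
For reference, a values.yaml equivalent to the earlier --set example would simply be:

```yaml
# Minimal values.yaml matching --set auth.password=secretpassword
auth:
  password: secretpassword
```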

Troubleshooting

Find more information about how to deal with common errors related to Bitnami's Helm charts in this troubleshooting guide.

Upgrading

To 20.5.0

This version introduces image verification for security purposes. To disable it, set global.security.allowInsecureImages to true. More details at GitHub issue.

A major chart version change (like v1.2.3 -> v2.0.0) indicates an incompatible breaking change that requires manual action.

RDB compatibility

It's common to see RDB format changes across Redis® releases that are backward compatible but not forward compatible. For example, v7.0 can load an RDB created by v6.2, but the opposite is not true. When that's the case, a rolling update can cause replicas to temporarily stop synchronizing while they are running a lower version than the master. For example, during a rolling update, master-0 and replica-2 are updated first from v6.2 to v7.0; replica-0 and replica-1 won't be able to start a full sync with master-0 because they are still running v6.2 and can't read the RDB format from v7.0 that the master is now using. This issue can be mitigated by splitting the upgrade into two stages: one for all replicas and another for the master.

  • Stage 1 (replicas only, as there's no master with an ordinal higher than 99): helm upgrade oci://REGISTRY_NAME/REPOSITORY_NAME/redis --set master.updateStrategy.rollingUpdate.partition=99
  • Stage 2 (anything else that is not up to date, in this case only master): helm upgrade oci://REGISTRY_NAME/REPOSITORY_NAME/redis

To 20.0.0

This major version updates the Redis® docker image version used from 7.2 to 7.4, the new stable version. There are no major changes in the chart, but we recommend checking the Redis® 7.4 release notes before upgrading.

To 19.0.0

This major bump changes the following security defaults:

  • runAsGroup is changed from 0 to 1001
  • readOnlyRootFilesystem is set to true
  • resourcesPreset is changed from none to the minimum size working in our test suites (NOTE: resourcesPreset is not meant for production usage; use resources adapted to your use case instead).
  • global.compatibility.openshift.adaptSecurityContext is changed from disabled to auto.

This could potentially break any customization or init scripts used in your deployment. If this is the case, change the default values to the previous ones.

To 18.0.0

This major version updates the Redis® docker image version used from 7.0 to 7.2, the new stable version. There are no major changes in the chart, but we recommend checking the Redis® 7.2 release notes before upgrading.

NOTE: Due to an error in our release process, versions greater than or equal to 17.15.4 already use 7.2 by default.

To 17.0.0

This major version updates the Redis® docker image version used from 6.2 to 7.0, the new stable version. There are no major changes in the chart, but we recommend checking the Redis® 7.0 release notes before upgrading.

To 16.0.0

This major release renames several values in this chart and adds missing features, in order to be in line with the rest of the assets in the Bitnami charts repository.

Affected values:

  • master.service.port renamed as master.service.ports.redis.
  • master.service.nodePort renamed as master.service.nodePorts.redis.
  • replica.service.port renamed as replica.service.ports.redis.
  • replica.service.nodePort renamed as replica.service.nodePorts.redis.
  • sentinel.service.port renamed as sentinel.service.ports.redis.
  • sentinel.service.sentinelPort renamed as sentinel.service.ports.sentinel.
  • master.containerPort renamed as master.containerPorts.redis.
  • replica.containerPort renamed as replica.containerPorts.redis.
  • sentinel.containerPort renamed as sentinel.containerPorts.sentinel.
  • master.spreadConstraints renamed as master.topologySpreadConstraints.
  • replica.spreadConstraints renamed as replica.topologySpreadConstraints.
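
To illustrate the renames above, a pre-16.0.0 service fragment and its 16.0.0 equivalent (port numbers are illustrative):

```yaml
# Before chart 16.0.0 (illustrative)
master:
  service:
    port: 6379
    nodePort: 30379
---
# From chart 16.0.0 onwards
master:
  service:
    ports:
      redis: 6379
    nodePorts:
      redis: 30379
```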

To 15.0.0

The parameter to enable the usage of StaticIDs was removed. The behavior is to always use StaticIDs.

To 14.8.0

The Redis® sentinel exporter was removed in this version because the upstream project was deprecated. The regular Redis® exporter is included in the sentinel scenario as usual.

To 14.0.0

  • Several parameters were renamed or disappeared in favor of new ones on this major version:
    • The term slave has been replaced by the term replica. Therefore, parameters prefixed with slave are now prefixed with replica.
    • Credentials parameters are reorganized under the auth parameter.
    • cluster.enabled parameter is deprecated in favor of architecture parameter that accepts two values: standalone and replication.
    • securityContext.* is deprecated in favor of XXX.podSecurityContext and XXX.containerSecurityContext.
    • sentinel.metrics.* parameters are deprecated in favor of metrics.sentinel.* ones.
  • New parameters to add custom command, environment variables, sidecars, init containers, etc. were added.
  • Chart labels were adapted to follow the Helm charts standard labels.
  • values.yaml metadata was adapted to follow the format supported by Readme Generator for Helm.
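
As a rough sketch of the reorganization (the pre-14.0.0 keys shown are illustrative of the old layout):

```yaml
# Before chart 14.0.0 (illustrative old layout)
cluster:
  enabled: true
usePassword: true
password: secretpassword
---
# From chart 14.0.0 onwards
architecture: replication
auth:
  enabled: true
  password: secretpassword
```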

Consequences:

Backwards compatibility is not guaranteed. To upgrade to 14.0.0, install a new release of the Redis® chart, and migrate the data from your previous release. You have 2 alternatives to do so:

  • Create a backup of the database, and restore it on the new release as explained in the Backup and restore section.
  • Reuse the PVC used to hold the master data on your previous release. To do so, use the master.persistence.existingClaim parameter. The following example assumes that the release name is redis:
helm install redis oci://REGISTRY_NAME/REPOSITORY_NAME/redis --set auth.password=[PASSWORD] --set master.persistence.existingClaim=[EXISTING_PVC]

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

Note: you need to substitute the placeholder [EXISTING_PVC] with the name of the PVC used on your previous release, and [PASSWORD] with the password used in your previous release.

To 13.0.0

This major version updates the Redis® docker image version used from 6.0 to 6.2, the new stable version. There are no major changes in the chart and there shouldn't be any breaking changes in it as 6.2 is basically a stricter superset of 6.0. For more information, please refer to Redis® 6.2 release notes.

To 12.3.0

This version also introduces bitnami/common, a library chart, as a dependency. More documentation about this new utility can be found here. Please make sure that you have updated the chart dependencies before executing any upgrade.

To 12.0.0

On November 13, 2020, Helm v2 support formally ended. This major version is the result of the changes required for this Helm Chart to incorporate the features added in Helm v3 and to be consistent with the Helm project itself regarding the Helm v2 EOL.

What changes were introduced in this major version?

  • Previous versions of this Helm Chart used apiVersion: v1 (installable by both Helm 2 and 3); this Helm Chart was updated to apiVersion: v2 (installable by Helm 3 only). Here you can find more information about the apiVersion field.
  • The different fields present in the Chart.yaml file have been ordered alphabetically in a homogeneous way for all the Bitnami Helm Charts.

Considerations when upgrading to this version

  • If you want to upgrade to this version from a previous one installed with Helm v3, you shouldn't face any issues.
  • Upgrading to this version using Helm v2 is not supported, as this version no longer supports Helm v2.
  • If you installed the previous version with Helm v2 and want to upgrade to this version with Helm v3, please refer to the official Helm documentation about migrating from Helm v2 to v3.


To 11.0.0

When using sentinel, a new statefulset called -node was introduced. This will break upgrading from a previous version where the statefulsets are called master and slave. Hence, the PVCs will not match the new naming and won't be reused. If you want to keep your data, you will need to perform a backup and then restore the data in this new version.

When deployed with sentinel enabled, only a single group of nodes is deployed and the master/slave role is handled within the group. To avoid breaking compatibility, the settings for these nodes are given through the slave.xxxx parameters in values.yaml.

To 10.0.0

For releases with usePassword: true, the value sentinel.usePassword controls whether the password authentication also applies to the sentinel port. This defaults to true for a secure configuration; however, it is possible to disable it to account for the following cases:

  • Using a version of redis-sentinel prior to 5.0.1 where the authentication feature was introduced.
  • Where redis clients need to be updated to support sentinel authentication.

If using a master/slave topology, or with usePassword: false, no action is required.
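
For example, a values fragment that keeps password authentication on the Redis® port while disabling it on the sentinel port could look like this sketch:

```yaml
# Sketch: useful while clients are being updated to support sentinel auth.
usePassword: true
sentinel:
  enabled: true
  usePassword: false
```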

To 9.0.0

The metrics exporter has been changed from a separate deployment to a sidecar container, due to the latest changes in the Redis® exporter code. Check the official page for more information. The metrics container image was changed from oliver006/redis_exporter to bitnami/redis-exporter (Bitnami's maintained package of oliver006/redis_exporter).

To 8.0.18

For releases with metrics.enabled: true the default tag for the exporter image is now v1.x.x. This introduces many changes including metrics names. You'll want to use this dashboard now. Please see the redis_exporter github page for more details.

To 7.0.0

This version causes a change in the Redis® Master StatefulSet definition, so the command helm upgrade would not work out of the box. As an alternative, one of the following could be done:

  • Recommended: Create a clone of the Redis® Master PVC (for example, using projects like this one). Then launch a fresh release reusing this cloned PVC.
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/redis --set persistence.existingClaim=<NEW PVC>

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

  • Alternative (not recommended, do at your own risk): helm delete --purge does not remove the PVC assigned to the Redis® Master StatefulSet. As a consequence, the following commands can be used to upgrade the release:
helm delete --purge <RELEASE>
helm install <RELEASE> oci://REGISTRY_NAME/REPOSITORY_NAME/redis

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

Previous versions of the chart did not use persistence in the slaves, so this upgrade adds it to them. Another important change is that no values are inherited from master to slaves. For example, in 6.0.0, slaves.readinessProbe.periodSeconds, if empty, would be set to master.readinessProbe.periodSeconds. This approach lacked transparency and was difficult to maintain. From now on, all the slave parameters must be configured explicitly, just as is done for the master.

Some values have changed as well:

  • master.port and slave.port have been changed to redisPort (same value for both master and slaves)
  • master.securityContext and slave.securityContext have been changed to securityContext (same values for both master and slaves)

By default, the upgrade will not change the cluster topology. In case you want to use Redis® Sentinel, you must explicitly set sentinel.enabled to true.
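
A minimal sketch of the resulting 7.0.0-era layout, with the shared port and Sentinel enabled explicitly:

```yaml
# Illustrative values for chart 7.0.0: one redisPort for master and slaves.
redisPort: 6379
sentinel:
  enabled: true
```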

To 6.0.0

Previous versions of the chart were using an init-container to change the permissions of the volumes. This was done in case the securityContext directive in the template was not enough for that (for example, with cephFS). In this new version of the chart, this container is disabled by default (which should not affect most deployments). If your installation still requires that init container, execute helm upgrade with the flag --set volumePermissions.enabled=true.

To 5.0.0

The default image in this release may be switched out for any image containing the redis-server and redis-cli binaries. If redis-server is not the default image ENTRYPOINT, master.command must be specified.

Breaking changes

  • master.args and slave.args are removed. Use master.command or slave.command instead in order to override the image entrypoint, or master.extraFlags to pass additional flags to redis-server.
  • disableCommands is now interpreted as an array of strings instead of a string of comma separated values.
  • master.persistence.path now defaults to /data.
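
The disableCommands change can be sketched as follows (the disabled commands shown are illustrative):

```yaml
# Before chart 5.0.0 (comma-separated string, illustrative)
disableCommands: "FLUSHDB,FLUSHALL"
---
# From chart 5.0.0 onwards (array of strings)
disableCommands:
  - FLUSHDB
  - FLUSHALL
```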

To 4.0.0

This version removes the chart label from the spec.selector.matchLabels which is immutable since StatefulSet apps/v1beta2. It has been inadvertently added, causing any subsequent upgrade to fail. See https://github.com/helm/charts/issues/7726.

It also fixes https://github.com/helm/charts/issues/7726 where a deployment extensions/v1beta1 cannot be upgraded if spec.selector is not explicitly set.

Finally, it fixes https://github.com/helm/charts/issues/7803 by removing mutable labels in spec.VolumeClaimTemplate.metadata.labels so that it is upgradable.

In order to upgrade, delete the Redis® StatefulSet before upgrading:

kubectl delete statefulsets.apps --cascade=false my-release-redis-master

And edit the Redis® slave (and metrics if enabled) deployment:

kubectl patch deployments my-release-redis-slave --type=json -p='[{"op": "remove", "path": "/spec/selector/matchLabels/chart"}]'
kubectl patch deployments my-release-redis-metrics --type=json -p='[{"op": "remove", "path": "/spec/selector/matchLabels/chart"}]'

License

Copyright © 2025 Broadcom. The term "Broadcom" refers to Broadcom Inc. and/or its subsidiaries.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.