Bitnami package for NGINX Open Source

NGINX Open Source is a web server that can also be used as a reverse proxy, load balancer, and HTTP cache. It is recommended for high-traffic sites because of its ability to serve content quickly.

Overview of NGINX Open Source

Trademarks: This software listing is packaged by Bitnami. The respective trademarks mentioned in the offering are owned by the respective companies, and use of them does not imply any affiliation or endorsement.

TL;DR

helm install my-release oci://registry-1.docker.io/bitnamicharts/nginx

Tip: Did you know that this app is also available as a Kubernetes App on the Azure Marketplace? Kubernetes Apps are the easiest way to deploy Bitnami on AKS. Click here to see the listing on Azure Marketplace.

Looking to use NGINX Open Source in production? Try VMware Tanzu Application Catalog, the commercial edition of the Bitnami catalog.

⚠️ Important Notice: Upcoming changes to the Bitnami Catalog

Beginning August 28th, 2025, Bitnami will evolve its public catalog to offer a curated set of hardened, security-focused images under the new Bitnami Secure Images initiative. As part of this transition:

  • Community users will gain access, for the first time, to security-optimized versions of popular container images.
  • Bitnami will begin deprecating support for non-hardened, Debian-based software images in its free tier and will gradually remove non-latest tags from the public catalog. As a result, community users will have access to a reduced number of hardened images. These images are published only under the “latest” tag and are intended for development purposes.
  • Starting August 28th, over a two-week window, all existing container images, including older or versioned tags (e.g., 2.50.0, 10.6), will be migrated from the public catalog (docker.io/bitnami) to the “Bitnami Legacy” repository (docker.io/bitnamilegacy), where they will no longer receive updates.
  • For production workloads and long-term support, users are encouraged to adopt Bitnami Secure Images, which include hardened containers, smaller attack surfaces, CVE transparency (via VEX/KEV), SBOMs, and enterprise support.

These changes aim to improve the security posture of all Bitnami users by promoting best practices for software supply chain integrity and up-to-date deployments. For more details, visit the Bitnami Secure Images announcement.

Introduction

Bitnami charts for Helm are carefully engineered and actively maintained, and they are the quickest and easiest way to deploy production-ready containers on a Kubernetes cluster.

This chart bootstraps a NGINX Open Source deployment on a Kubernetes cluster using the Helm package manager.

Prerequisites

  • Kubernetes 1.23+
  • Helm 3.8.0+

Installing the Chart

To install the chart with the release name my-release:

helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/nginx

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

This command deploys NGINX Open Source on the Kubernetes cluster with the default configuration.

Tip: List all releases using helm list

Configuration and installation details

Resource requests and limits

Bitnami charts allow setting resource requests and limits for all containers inside the chart deployment. These are configured using the resources value (check the parameters table). Setting requests is essential for production workloads, and they should be adapted to your specific use case.

To make this process easier, the chart contains the resourcesPreset value, which automatically sets the resources section according to different presets. Check these presets in the bitnami/common chart. However, using resourcesPreset is discouraged for production workloads, as it may not fully adapt to your specific needs. Find more information on container resource management in the official Kubernetes documentation.
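For example, a minimal values fragment setting explicit requests and limits could look like the following (the figures are illustrative only and should be adapted to your workload):

```yaml
# Illustrative sizing only -- adjust to your workload before using in production
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi
```

When resources is set like this, the resourcesPreset value is ignored.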

Prometheus metrics

This chart can be integrated with Prometheus by setting metrics.enabled to true. This will deploy a sidecar container with nginx-prometheus-exporter in all pods and will expose it via the NGINX service. This service will have the necessary annotations to be automatically scraped by Prometheus.

Prometheus requirements

It is necessary to have a working installation of Prometheus or Prometheus Operator for the integration to work. Install the Bitnami Prometheus helm chart or the Bitnami Kube Prometheus helm chart to easily have a working Prometheus in your cluster.

Integration with Prometheus Operator

The chart can deploy ServiceMonitor objects for integration with Prometheus Operator installations. To do so, set the value metrics.serviceMonitor.enabled=true. Ensure that the Prometheus Operator CustomResourceDefinitions are installed in the cluster; otherwise the installation will fail with the following error:

no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"

Install the Bitnami Kube Prometheus helm chart to obtain the necessary CRDs and the Prometheus Operator.
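As a sketch, the following values fragment enables the exporter sidecar together with a ServiceMonitor. The namespace shown is an assumption that depends on where your Prometheus instance runs:

```yaml
metrics:
  enabled: true            # deploy the nginx-prometheus-exporter sidecar
  serviceMonitor:
    enabled: true          # requires the Prometheus Operator CRDs
    namespace: monitoring  # assumption: Prometheus runs in the "monitoring" namespace
    interval: 30s
```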

Securing traffic using TLS

Nginx can encrypt communications by setting tls.enabled=true. The chart allows two configuration options:

  • Provide your own secret using the tls.existingSecret value. Also set the correct names of the certificate files using the tls.certFilename, tls.certKeyFilename and tls.certCAFilename values.
  • Have the chart auto-generate the certificates using tls.autoGenerated=true.
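For instance, to use your own certificates stored in an existing secret, a values fragment could look like the following (my-tls-secret is a hypothetical secret name):

```yaml
tls:
  enabled: true
  existingSecret: my-tls-secret  # hypothetical secret containing the certificates
  certFilename: tls.crt          # key of the certificate file inside the secret
  certKeyFilename: tls.key       # key of the certificate key file inside the secret
```

Alternatively, setting tls.autoGenerated=true instead of tls.existingSecret lets the chart generate self-signed certificates for you.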

Rolling vs immutable tags

It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.

Bitnami will release a new chart that updates its containers whenever there is a new version of the main container, a significant change, or a critical vulnerability.

Use a different NGINX version

To modify the application version used in this chart, specify a different version of the image using the image.tag parameter and/or a different repository using the image.repository parameter.
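For example, a values fragment pinning a specific image could look like the following (the tag shown is a placeholder; check the tags available in your registry and prefer an immutable one):

```yaml
image:
  registry: REGISTRY_NAME
  repository: REPOSITORY_NAME/nginx
  tag: <desired-version>   # placeholder: replace with an existing immutable tag
```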

Backup and restore

To back up and restore Helm chart deployments on Kubernetes, you need to back up the persistent volumes from the source deployment and attach them to a new deployment using Velero, a Kubernetes backup/restore tool. Find the instructions for using Velero in this guide.

Deploying your custom web application

The NGINX chart allows you to deploy a custom web application using one of the following methods:

  • Cloning from a git repository: Set cloneStaticSiteFromGit.enabled to true and set the repository and branch using the cloneStaticSiteFromGit.repository and cloneStaticSiteFromGit.branch parameters. A sidecar will also pull the latest changes at an interval set by cloneStaticSiteFromGit.interval.
  • Providing a ConfigMap: Set the staticSiteConfigmap value to mount a ConfigMap in the NGINX html folder.
  • Using an existing PVC: Set the staticSitePVC value to mount a PersistentVolumeClaim with the static site content.

You can deploy an example web application from a Git repository by deploying the chart with the following parameters:

cloneStaticSiteFromGit.enabled=true
cloneStaticSiteFromGit.repository=https://github.com/mdn/beginner-html-site-styled.git
cloneStaticSiteFromGit.branch=master

Providing a custom server block

This Helm chart supports providing a custom server block for NGINX to use.

You can use the serverBlock or streamServerBlock value to provide a custom server block for NGINX to use. To do this, create a values file with your server block and install the chart using it:

serverBlock: |-
  server {
    listen 0.0.0.0:8080;
    location / {
      return 200 "hello!";
    }
  }

Warning: The above example is not compatible with enabling Prometheus metrics since it affects the /status endpoint.

In addition, you can also set an external ConfigMap with the configuration file. This is done by setting the existingServerBlockConfigmap parameter. Note that this will override the previous option.
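As a sketch, the ConfigMap could be created from a manifest like the following (nginx-server-block is a hypothetical name):

```yaml
# ConfigMap with the server block; create it before installing the chart
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-server-block   # hypothetical ConfigMap name
data:
  server-block.conf: |
    server {
      listen 0.0.0.0:8080;
      location / {
        return 200 "hello!";
      }
    }
```

Then install the chart with --set existingServerBlockConfigmap=nginx-server-block.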

Adding extra environment variables

In case you want to add extra environment variables (useful for advanced operations like custom init scripts), you can use the extraEnvVars property.

extraEnvVars:
  - name: LOG_LEVEL
    value: error

Alternatively, you can use a ConfigMap or a Secret with the environment variables. To do so, use the extraEnvVarsCM or the extraEnvVarsSecret values.
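A sketch of the ConfigMap approach, assuming a hypothetical ConfigMap named nginx-extra-env created beforehand:

```yaml
# ConfigMap holding the extra environment variables; create it before installing the chart
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-extra-env   # hypothetical ConfigMap name
data:
  LOG_LEVEL: error
```

Then set extraEnvVarsCM=nginx-extra-env when installing the chart; extraEnvVarsSecret works the same way with a Secret.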

Setting Pod's affinity

This chart allows you to set your custom affinity using the affinity parameter. Find more information about Pod's affinity in the kubernetes documentation.

As an alternative, you can use one of the preset configurations for pod affinity, pod anti-affinity, and node affinity available in the bitnami/common chart. To do so, set the podAffinityPreset, podAntiAffinityPreset, or nodeAffinityPreset parameters.
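For example, a values fragment using the presets could look like the following (the node label and value are assumptions for illustration):

```yaml
podAntiAffinityPreset: hard   # force replicas onto different nodes
nodeAffinityPreset:
  type: soft
  key: kubernetes.io/arch     # assumption: prefer nodes with this label
  values:
    - amd64
```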

Deploying extra resources

There are cases where you may want to deploy extra objects, such as a ConfigMap containing your app's configuration, or an extra deployment running a microservice used by your app. To cover this case, the chart allows adding the full specification of other objects using the extraDeploy parameter.
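As a sketch, an extraDeploy entry adding a ConfigMap alongside the release (the data content is hypothetical; the common.names.fullname helper comes from the bitnami/common dependency):

```yaml
extraDeploy:
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      # extraDeploy entries are evaluated as templates
      name: '{{ include "common.names.fullname" . }}-extra'
    data:
      app.conf: |
        # hypothetical app configuration
        key=value
```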

Ingress

This chart provides support for ingress resources. If you have an ingress controller installed on your cluster, such as nginx-ingress-controller or contour, you can utilize the ingress controller to serve your application.

To enable ingress integration, please set ingress.enabled to true.

Hosts

Most likely you will only want to have one hostname that maps to this NGINX installation. If that's your case, the property ingress.hostname will set it. However, it is possible to have more than one host. To facilitate this, the ingress.extraHosts object can be specified as an array. You can also use ingress.extraTLS to add the TLS configuration for extra hosts.

For each host indicated at ingress.extraHosts, please indicate a name, path, and any annotations that you may want the ingress controller to know about.
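For example, a values fragment adding an extra host with its own TLS configuration could look like the following (the hostname and secret name are hypothetical):

```yaml
ingress:
  enabled: true
  hostname: nginx.local
  extraHosts:
    - name: www.example.com            # hypothetical additional host
      path: /
  extraTls:
    - hosts:
        - www.example.com
      secretName: www.example.com-tls  # hypothetical TLS secret for the extra host
```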

For annotations, please see this document. Not all annotations are supported by all ingress controllers, but this document does a good job of indicating which annotation is supported by many popular ingress controllers.

Parameters

Global parameters

| Name | Description | Value |
| --- | --- | --- |
| global.imageRegistry | Global Docker image registry | "" |
| global.imagePullSecrets | Global Docker registry secret names as an array | [] |
| global.security.allowInsecureImages | Allows skipping image verification | false |
| global.compatibility.openshift.adaptSecurityContext | Adapt the securityContext sections of the deployment to make them compatible with Openshift restricted-v2 SCC: remove runAsUser, runAsGroup and fsGroup and let the platform use their allowed default IDs. Possible values: auto (apply if the detected running cluster is Openshift), force (perform the adaptation always), disabled (do not perform adaptation) | auto |

Common parameters

| Name | Description | Value |
| --- | --- | --- |
| nameOverride | String to partially override nginx.fullname template (will maintain the release name) | "" |
| fullnameOverride | String to fully override nginx.fullname template | "" |
| namespaceOverride | String to fully override common.names.namespace | "" |
| kubeVersion | Force target Kubernetes version (using Helm capabilities if not set) | "" |
| clusterDomain | Kubernetes Cluster Domain | cluster.local |
| extraDeploy | Extra objects to deploy (value evaluated as a template) | [] |
| commonLabels | Add labels to all the deployed resources | {} |
| commonAnnotations | Add annotations to all the deployed resources | {} |
| diagnosticMode.enabled | Enable diagnostic mode (all probes will be disabled and the command will be overridden) | false |
| diagnosticMode.command | Command to override all containers in the deployment(s)/statefulset(s) | ["sleep"] |
| diagnosticMode.args | Args to override all containers in the deployment(s)/statefulset(s) | ["infinity"] |

NGINX parameters

| Name | Description | Value |
| --- | --- | --- |
| image.registry | NGINX image registry | REGISTRY_NAME |
| image.repository | NGINX image repository | REPOSITORY_NAME/nginx |
| image.digest | NGINX image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | "" |
| image.pullPolicy | NGINX image pull policy | IfNotPresent |
| image.pullSecrets | Specify docker-registry secret names as an array | [] |
| image.debug | Set to true if you would like to see extra information on logs | false |
| enableDefaultInitContainers | If set to false, disable all init containers except user-defined ones at initContainers | true |
| automountServiceAccountToken | Mount Service Account token in pod | false |
| hostAliases | Deployment pod host aliases | [] |
| command | Override default container command (useful when using custom images) | [] |
| args | Override default container args (useful when using custom images) | [] |
| extraEnvVars | Extra environment variables to be set on NGINX containers | [] |
| extraEnvVarsCM | ConfigMap with extra environment variables | "" |
| extraEnvVarsSecret | Secret with extra environment variables | "" |

NGINX deployment parameters

| Name | Description | Value |
| --- | --- | --- |
| replicaCount | Number of NGINX replicas to deploy | 1 |
| revisionHistoryLimit | The number of old history to retain to allow rollback | 10 |
| updateStrategy.type | NGINX deployment strategy type | RollingUpdate |
| updateStrategy.rollingUpdate | NGINX deployment rolling update configuration parameters | {} |
| podLabels | Additional labels for NGINX pods | {} |
| podAnnotations | Annotations for NGINX pods | {} |
| podAffinityPreset | Pod affinity preset. Ignored if affinity is set. Allowed values: soft or hard | "" |
| podAntiAffinityPreset | Pod anti-affinity preset. Ignored if affinity is set. Allowed values: soft or hard | soft |
| nodeAffinityPreset.type | Node affinity preset type. Ignored if affinity is set. Allowed values: soft or hard | "" |
| nodeAffinityPreset.key | Node label key to match. Ignored if affinity is set. | "" |
| nodeAffinityPreset.values | Node label values to match. Ignored if affinity is set. | [] |
| affinity | Affinity for pod assignment | {} |
| hostNetwork | Specify if host network should be enabled for NGINX pod | false |
| hostIPC | Specify if host IPC should be enabled for NGINX pod | false |
| dnsPolicy | Specifies the DNS policy for the NGINX pod | "" |
| dnsConfig | Allows users more control on the DNS settings for a Pod. Required if dnsPolicy is set to None | {} |
| nodeSelector | Node labels for pod assignment. Evaluated as a template. | {} |
| tolerations | Tolerations for pod assignment. Evaluated as a template. | [] |
| priorityClassName | NGINX pods' priorityClassName | "" |
| schedulerName | Name of the k8s scheduler (other than default) | "" |
| terminationGracePeriodSeconds | In seconds, time given to the NGINX pod to terminate gracefully | "" |
| topologySpreadConstraints | Topology Spread Constraints for pod assignment | [] |
| tls.enabled | Enable TLS transport | true |
| tls.autoGenerated | Auto-generate self-signed certificates | true |
| tls.existingSecret | Name of a secret containing the certificates | "" |
| tls.certFilename | Path of the certificate file when mounted as a secret | tls.crt |
| tls.certKeyFilename | Path of the certificate key file when mounted as a secret | tls.key |
| tls.certCAFilename | Path of the certificate CA file when mounted as a secret | ca.crt |
| tls.cert | Content of the certificate to be added to the secret | "" |
| tls.key | Content of the certificate key to be added to the secret | "" |
| tls.ca | Content of the certificate CA to be added to the secret | "" |
| podSecurityContext.enabled | Enable NGINX pods' Security Context | true |
| podSecurityContext.fsGroupChangePolicy | Set filesystem group change policy | Always |
| podSecurityContext.supplementalGroups | Set filesystem extra groups | [] |
| podSecurityContext.fsGroup | Set NGINX pod's Security Context fsGroup | 1001 |
| podSecurityContext.sysctls | sysctl settings of the NGINX pods | [] |
| containerSecurityContext.enabled | Enable containers' Security Context | true |
| containerSecurityContext.seLinuxOptions | Set SELinux options in container | {} |
| containerSecurityContext.runAsUser | Set containers' Security Context runAsUser | 1001 |
| containerSecurityContext.runAsGroup | Set containers' Security Context runAsGroup | 1001 |
| containerSecurityContext.runAsNonRoot | Set container's Security Context runAsNonRoot | true |
| containerSecurityContext.privileged | Set container's Security Context privileged | false |
| containerSecurityContext.readOnlyRootFilesystem | Set container's Security Context readOnlyRootFilesystem | true |
| containerSecurityContext.allowPrivilegeEscalation | Set container's Security Context allowPrivilegeEscalation | false |
| containerSecurityContext.capabilities.drop | List of capabilities to be dropped | ["ALL"] |
| containerSecurityContext.seccompProfile.type | Set container's Security Context seccomp profile | RuntimeDefault |
| containerPorts.http | Sets http port inside NGINX container | 8080 |
| containerPorts.https | Sets https port inside NGINX container | 8443 |
| extraContainerPorts | Array of additional container ports for the Nginx container | [] |
| resourcesPreset | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | nano |
| resources | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | {} |
| lifecycleHooks | Optional lifecycleHooks for the NGINX container | {} |
| startupProbe.enabled | Enable startupProbe | false |
| startupProbe.initialDelaySeconds | Initial delay seconds for startupProbe | 30 |
| startupProbe.periodSeconds | Period seconds for startupProbe | 10 |
| startupProbe.timeoutSeconds | Timeout seconds for startupProbe | 5 |
| startupProbe.failureThreshold | Failure threshold for startupProbe | 6 |
| startupProbe.successThreshold | Success threshold for startupProbe | 1 |
| livenessProbe.enabled | Enable livenessProbe | true |
| livenessProbe.initialDelaySeconds | Initial delay seconds for livenessProbe | 30 |
| livenessProbe.periodSeconds | Period seconds for livenessProbe | 10 |
| livenessProbe.timeoutSeconds | Timeout seconds for livenessProbe | 5 |
| livenessProbe.failureThreshold | Failure threshold for livenessProbe | 6 |
| livenessProbe.successThreshold | Success threshold for livenessProbe | 1 |
| readinessProbe.enabled | Enable readinessProbe | true |
| readinessProbe.path | Request path for readinessProbe | / |
| readinessProbe.initialDelaySeconds | Initial delay seconds for readinessProbe | 5 |
| readinessProbe.periodSeconds | Period seconds for readinessProbe | 5 |
| readinessProbe.timeoutSeconds | Timeout seconds for readinessProbe | 3 |
| readinessProbe.failureThreshold | Failure threshold for readinessProbe | 3 |
| readinessProbe.successThreshold | Success threshold for readinessProbe | 1 |
| customStartupProbe | Custom startup probe for the Web component | {} |
| customLivenessProbe | Override default liveness probe | {} |
| customReadinessProbe | Override default readiness probe | {} |
| autoscaling.enabled | Enable autoscaling for NGINX deployment | false |
| autoscaling.minReplicas | Minimum number of replicas to scale back | "" |
| autoscaling.maxReplicas | Maximum number of replicas to scale out | "" |
| autoscaling.targetCPU | Target CPU utilization percentage | "" |
| autoscaling.targetMemory | Target Memory utilization percentage | "" |
| extraVolumes | Array to add extra volumes | [] |
| extraVolumeMounts | Array to add extra mounts | [] |
| serviceAccount.create | Enable creation of ServiceAccount for nginx pod | true |
| serviceAccount.name | The name of the ServiceAccount to use | "" |
| serviceAccount.annotations | Annotations for service account. Evaluated as a template. | {} |
| serviceAccount.automountServiceAccountToken | Auto-mount the service account token in the pod | false |
| sidecars | Sidecar parameters | [] |
| sidecarSingleProcessNamespace | Enable sharing the process namespace with sidecars | false |
| initContainers | Extra init containers | [] |
| pdb.create | Create a PodDisruptionBudget | true |
| pdb.minAvailable | Min number of pods that must still be available after the eviction | "" |
| pdb.maxUnavailable | Max number of pods that can be unavailable after the eviction | "" |

Custom NGINX application parameters

| Name | Description | Value |
| --- | --- | --- |
| cloneStaticSiteFromGit.enabled | Get the server static content from a Git repository | false |
| cloneStaticSiteFromGit.image.registry | Git image registry | REGISTRY_NAME |
| cloneStaticSiteFromGit.image.repository | Git image repository | REPOSITORY_NAME/git |
| cloneStaticSiteFromGit.image.digest | Git image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | "" |
| cloneStaticSiteFromGit.image.pullPolicy | Git image pull policy | IfNotPresent |
| cloneStaticSiteFromGit.image.pullSecrets | Specify docker-registry secret names as an array | [] |
| cloneStaticSiteFromGit.repository | Git repository to clone static content from | "" |
| cloneStaticSiteFromGit.branch | Git branch to checkout | "" |
| cloneStaticSiteFromGit.interval | Interval for the sidecar container to pull from the Git repository | 60 |
| cloneStaticSiteFromGit.gitClone.command | Override default container command for git-clone-repository | [] |
| cloneStaticSiteFromGit.gitClone.args | Override default container args for git-clone-repository | [] |
| cloneStaticSiteFromGit.gitSync.command | Override default container command for git-repo-syncer | [] |
| cloneStaticSiteFromGit.gitSync.args | Override default container args for git-repo-syncer | [] |
| cloneStaticSiteFromGit.gitSync.resourcesPreset | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if cloneStaticSiteFromGit.gitSync.resources is set (cloneStaticSiteFromGit.gitSync.resources is recommended for production). | nano |
| cloneStaticSiteFromGit.gitSync.resources | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | {} |
| cloneStaticSiteFromGit.extraEnvVars | Additional environment variables to set in the containers that clone the static site from Git | [] |
| cloneStaticSiteFromGit.extraEnvVarsSecret | Secret with extra environment variables | "" |
| cloneStaticSiteFromGit.extraVolumeMounts | Add extra volume mounts for the Git containers | [] |
| serverBlock | Custom server block to be added to NGINX configuration | "" |
| streamServerBlock | Custom stream server block to be added to NGINX configuration | "" |
| existingServerBlockConfigmap | ConfigMap with custom server block to be added to NGINX configuration | "" |
| existingStreamServerBlockConfigmap | ConfigMap with custom stream server block to be added to NGINX configuration | "" |
| staticSiteConfigmap | Name of existing ConfigMap with the server static site content | "" |
| staticSitePVC | Name of existing PVC with the server static site content | "" |

Traffic Exposure parameters

| Name | Description | Value |
| --- | --- | --- |
| service.type | Service type | LoadBalancer |
| service.ports.http | Service HTTP port | 80 |
| service.ports.https | Service HTTPS port | 443 |
| service.nodePorts | Specify the nodePort(s) value(s) for the LoadBalancer and NodePort service types | {} |
| service.targetPort | Target port reference value for the LoadBalancer service types can be specified explicitly | {} |
| service.clusterIP | NGINX service Cluster IP | "" |
| service.loadBalancerIP | LoadBalancer service IP address | "" |
| service.loadBalancerSourceRanges | NGINX service Load Balancer sources | [] |
| service.loadBalancerClass | Service Load Balancer class if service type is LoadBalancer (optional, cloud specific) | "" |
| service.extraPorts | Extra ports to expose (normally used with the sidecar value) | [] |
| service.sessionAffinity | Session Affinity for Kubernetes service, can be "None" or "ClientIP" | None |
| service.sessionAffinityConfig | Additional settings for the sessionAffinity | {} |
| service.annotations | Service annotations | {} |
| service.externalTrafficPolicy | Enable client source IP preservation | Cluster |
| networkPolicy.enabled | Specifies whether a NetworkPolicy should be created | true |
| networkPolicy.allowExternal | Don't require server label for connections | true |
| networkPolicy.allowExternalEgress | Allow the pod to access any range of port and all destinations | true |
| networkPolicy.extraIngress | Add extra ingress rules to the NetworkPolicy | [] |
| networkPolicy.extraEgress | Add extra egress rules to the NetworkPolicy (ignored if allowExternalEgress=true) | [] |
| networkPolicy.ingressNSMatchLabels | Labels to match to allow traffic from other namespaces | {} |
| networkPolicy.ingressNSPodMatchLabels | Pod labels to match to allow traffic from other namespaces | {} |
| ingress.enabled | Set to true to enable ingress record generation | false |
| ingress.selfSigned | Create a TLS secret for this ingress record using self-signed certificates generated by Helm | false |
| ingress.pathType | Ingress path type | ImplementationSpecific |
| ingress.apiVersion | Force Ingress API version (automatically detected if not set) | "" |
| ingress.hostname | Default host for the ingress resource | nginx.local |
| ingress.path | The Path to Nginx. You may need to set this to '/*' in order to use this with ALB ingress controllers. | / |
| ingress.annotations | Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations. | {} |
| ingress.ingressClassName | Set the ingressClassName on the ingress record for k8s 1.18+ | "" |
| ingress.tls | Create TLS Secret | false |
| ingress.tlsWwwPrefix | Adds www subdomain to default cert | false |
| ingress.extraHosts | The list of additional hostnames to be covered with this ingress record | [] |
| ingress.extraPaths | Any additional arbitrary paths that may need to be added to the ingress under the main host | [] |
| ingress.extraTls | The tls configuration for additional hostnames to be covered with this ingress record | [] |
| ingress.secrets | If you're providing your own certificates, please use this to add the certificates as secrets | [] |
| ingress.extraRules | The list of additional rules to be added to this ingress record. Evaluated as a template | [] |
| healthIngress.enabled | Set to true to enable health ingress record generation | false |
| healthIngress.selfSigned | Create a TLS secret for this ingress record using self-signed certificates generated by Helm | false |
| healthIngress.pathType | Ingress path type | ImplementationSpecific |
| healthIngress.hostname | When the health ingress is enabled, a host pointing to this will be created | example.local |
| healthIngress.path | Default path for the ingress record | / |
| healthIngress.annotations | Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations. | {} |
| healthIngress.tls | Enable TLS configuration for the hostname defined at healthIngress.hostname parameter | false |
| healthIngress.extraHosts | An array with additional hostname(s) to be covered with the ingress record | [] |
| healthIngress.extraPaths | An array with additional arbitrary paths that may need to be added to the ingress under the main host | [] |
| healthIngress.extraTls | TLS configuration for additional hostnames to be covered | [] |
| healthIngress.secrets | TLS Secret configuration | [] |
| healthIngress.ingressClassName | IngressClass that will be used to implement the Ingress (Kubernetes 1.18+) | "" |
| healthIngress.extraRules | The list of additional rules to be added to this ingress record. Evaluated as a template | [] |

Metrics parameters

| Name | Description | Value |
| --- | --- | --- |
| metrics.enabled | Start a Prometheus exporter sidecar container | false |
| metrics.image.registry | NGINX Prometheus exporter image registry | REGISTRY_NAME |
| metrics.image.repository | NGINX Prometheus exporter image repository | REPOSITORY_NAME/nginx-exporter |
| metrics.image.digest | NGINX Prometheus exporter image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | "" |
| metrics.image.pullPolicy | NGINX Prometheus exporter image pull policy | IfNotPresent |
| metrics.image.pullSecrets | Specify docker-registry secret names as an array | [] |
| metrics.port | NGINX Container Status Port scraped by Prometheus Exporter | "" |
| metrics.extraArgs | Extra arguments for Prometheus exporter | [] |
| metrics.containerPorts.metrics | Prometheus exporter container port | 9113 |
| metrics.podAnnotations | Additional annotations for NGINX Prometheus exporter pod(s) | {} |
| metrics.securityContext.enabled | Enable NGINX Exporter containers' Security Context | false |
| metrics.securityContext.seLinuxOptions | Set SELinux options in container | {} |
| metrics.securityContext.runAsUser | Set NGINX Exporter container's Security Context runAsUser | 1001 |
| metrics.service.port | NGINX Prometheus exporter service port | 9113 |
| metrics.service.annotations | Annotations for the Prometheus exporter service | {} |
| metrics.resourcesPreset | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if metrics.resources is set (metrics.resources is recommended for production). | nano |
| metrics.resources | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | {} |
| metrics.serviceMonitor.enabled | Creates a Prometheus Operator ServiceMonitor (also requires metrics.enabled to be true) | false |
| metrics.serviceMonitor.namespace | Namespace in which Prometheus is running | "" |
| metrics.serviceMonitor.tlsConfig | TLS configuration used for scrape endpoints used by Prometheus | {} |
| metrics.serviceMonitor.jobLabel | The name of the label on the target service to use as the job name in Prometheus | "" |
| metrics.serviceMonitor.interval | Interval at which metrics should be scraped | "" |
| metrics.serviceMonitor.scrapeTimeout | Timeout after which the scrape is ended | "" |
| metrics.serviceMonitor.selector | Prometheus instance selector labels | {} |
| metrics.serviceMonitor.labels | Additional labels that can be used so the ServiceMonitor will be discovered by Prometheus | {} |
| metrics.serviceMonitor.relabelings | RelabelConfigs to apply to samples before scraping | [] |
| metrics.serviceMonitor.metricRelabelings | MetricRelabelConfigs to apply to samples before ingestion | [] |
| metrics.serviceMonitor.honorLabels | honorLabels chooses the metric's labels on collisions with target labels | false |
| metrics.prometheusRule.enabled | If true, creates a Prometheus Operator PrometheusRule (also requires metrics.enabled to be true and metrics.prometheusRule.rules) | false |
| metrics.prometheusRule.namespace | Namespace for the PrometheusRule Resource (defaults to the Release Namespace) | "" |
| metrics.prometheusRule.additionalLabels | Additional labels that can be used so the PrometheusRule will be discovered by Prometheus | {} |
| metrics.prometheusRule.rules | Prometheus Rule definitions | [] |

Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,

helm install my-release \
  --set image.pullPolicy=Always \
    oci://REGISTRY_NAME/REPOSITORY_NAME/nginx

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

The above command sets image.pullPolicy to Always.

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

helm install my-release -f values.yaml oci://REGISTRY_NAME/REPOSITORY_NAME/nginx

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

Tip: You can use the default values.yaml

Troubleshooting

Find more information about how to deal with common errors related to Bitnami's Helm charts in this troubleshooting guide.

Upgrading

To 19.0.0

The ngx_http_dav_module module (WebDAV protocol) has been converted into a dynamic module located under the /opt/bitnami/nginx/modules directory. To enable its functionality, include the directive load_module /opt/bitnami/nginx/modules/ngx_http_dav_module.so; in your configuration.

To 18.3.0

This version introduces image verification for security purposes. To disable it, set global.security.allowInsecureImages to true. More details at GitHub issue.

To 16.0.0

This major bump changes the following security defaults:

  • runAsGroup is changed from 0 to 1001
  • readOnlyRootFilesystem is set to true
  • resourcesPreset is changed from none to the minimum size working in our test suites (NOTE: resourcesPreset is not meant for production usage; use resources adapted to your use case instead).
  • global.compatibility.openshift.adaptSecurityContext is changed from disabled to auto.

This could potentially break any customization or init scripts used in your deployment. If this is the case, change the default values to the previous ones.

To 11.0.0

This major release renames several values in this chart and adds missing features, in order to be aligned with the rest of the assets in the Bitnami charts repository.

Affected values:

  • service.port was renamed to service.ports.http.
  • service.httpsPort was deprecated. We recommend using service.ports.https.
  • serviceAccount.autoMount was renamed to serviceAccount.automountServiceAccountToken.
  • metrics.serviceMonitor.additionalLabels was renamed to metrics.serviceMonitor.labels.
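As an illustration, here is a pre-11.0.0 values snippet next to its 11.0.0 equivalent. The port numbers and boolean values are placeholders, not chart defaults:

```yaml
# Before 11.0.0 (placeholder values)
service:
  port: 80
  httpsPort: 443
serviceAccount:
  autoMount: false
metrics:
  serviceMonitor:
    additionalLabels: {}

# From 11.0.0 onwards
service:
  ports:
    http: 80
    https: 443
serviceAccount:
  automountServiceAccountToken: false
metrics:
  serviceMonitor:
    labels: {}
```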

To 10.0.0

This major release no longer uses the bitnami/nginx-ldap-auth-daemon container as a dependency since its upstream project is not actively maintained.

2022-04-12 edit:

Bitnami's nginx-ldap-auth-daemon container was based on the NGINX LDAP reference implementation. On 9 April 2022, security vulnerabilities in that reference implementation were publicly shared. Although the deprecation of this container from the Bitnami catalog was not related to this security issue, here you can find more information from the Bitnami security team.

To 8.0.0

On November 13, 2020, Helm v2 support formally ended. This major version is the result of the changes required for the Helm Chart to incorporate the different features added in Helm v3 and to be consistent with the Helm project itself regarding the Helm v2 EOL.

What changes were introduced in this major version?

  • Previous versions of this Helm Chart used apiVersion: v1 (installable by both Helm 2 and 3); this Helm Chart was updated to apiVersion: v2 (installable by Helm 3 only). Here you can find more information about the apiVersion field.
  • Dependency information was moved from requirements.yaml to Chart.yaml.
  • After running helm dependency update, a Chart.lock file is generated containing the same structure used in the previous requirements.lock.
  • The different fields present in the Chart.yaml file have been ordered alphabetically in a homogeneous way for all the Bitnami Helm Charts.

Considerations when upgrading to this version

  • If you want to upgrade to this version from a previous one installed with Helm v3, you shouldn't face any issues.
  • Upgrading to this version using Helm v2 is not supported, as this version no longer supports Helm v2.
  • If you installed the previous version with Helm v2 and want to upgrade to this version with Helm v3, please refer to the official Helm documentation about migrating from Helm v2 to v3.

To 7.0.0

  • This version also introduces bitnami/common, a library chart, as a dependency. More documentation about this new utility can be found here. Please make sure that you have updated the chart dependencies before executing any upgrade.
  • Ingress configuration was also adapted to follow the Helm charts best practices.

Note: There is no backwards compatibility due to the above-mentioned changes. It is necessary to install a new release of the chart and migrate your existing application to the new NGINX instances.

To 5.6.0

Added support for the use of LDAP.

To 5.0.0

Backwards compatibility is not guaranteed unless you modify the labels used on the chart's deployments. Use the workaround below to upgrade from versions previous to 5.0.0. The following example assumes that the release name is nginx (on kubectl 1.20+, --cascade=orphan is the equivalent of --cascade=false):

kubectl delete deployment nginx --cascade=false
helm upgrade nginx oci://REGISTRY_NAME/REPOSITORY_NAME/nginx

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

To 1.0.0

Backwards compatibility is not guaranteed unless you modify the labels used on the chart's deployments. Use the workaround below to upgrade from versions previous to 1.0.0. The following example assumes that the release name is nginx:

kubectl patch deployment nginx --type=json -p='[{"op": "remove", "path": "/spec/selector/matchLabels/chart"}]'

License

Copyright © 2025 Broadcom. The term "Broadcom" refers to Broadcom Inc. and/or its subsidiaries.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.