{{ template "chart.header" . }}
[ingress-nginx](https://github.com/kubernetes/ingress-nginx) Ingress controller for Kubernetes using NGINX as a reverse proxy and load balancer

{{ template "chart.versionBadge" . }}{{ template "chart.typeBadge" . }}{{ template "chart.appVersionBadge" . }}

To use, add the `ingressClassName: nginx` spec field or the `kubernetes.io/ingress.class: nginx` annotation to your Ingress resources.

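For example, a minimal Ingress resource selecting this controller via `ingressClassName` might look like the following (the resource name, host, and backend Service are hypothetical placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example                      # hypothetical name
spec:
  ingressClassName: nginx            # selects the ingress-nginx controller
  rules:
    - host: app.example.com          # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service  # hypothetical backend Service
                port:
                  number: 80
```
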
This chart bootstraps an ingress-nginx deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.

{{ template "chart.requirementsSection" . }}

## Get Repo Info

```console
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
```

## Install Chart

**Important:** only Helm 3 is supported.

```console
helm install [RELEASE_NAME] ingress-nginx/ingress-nginx
```

The command deploys ingress-nginx on the Kubernetes cluster with the default configuration.

_See [configuration](#configuration) below._

_See [helm install](https://helm.sh/docs/helm/helm_install/) for command documentation._

## Uninstall Chart

```console
helm uninstall [RELEASE_NAME]
```

This removes all the Kubernetes components associated with the chart and deletes the release.

_See [helm uninstall](https://helm.sh/docs/helm/helm_uninstall/) for command documentation._

## Upgrading Chart

```console
helm upgrade [RELEASE_NAME] [CHART] --install
```

_See [helm upgrade](https://helm.sh/docs/helm/helm_upgrade/) for command documentation._

### Migrating from stable/nginx-ingress

There are two main ways to migrate a release from the `stable/nginx-ingress` chart to the `ingress-nginx/ingress-nginx` chart:

1. For Nginx Ingress controllers used for non-critical services, the easiest method is to [uninstall](#uninstall-chart) the old release and [install](#install-chart) the new one
1. For critical services in production that require zero downtime, you will want to:
   1. [Install](#install-chart) a second Ingress controller
   1. Redirect your DNS traffic from the old controller to the new controller
   1. Log traffic from both controllers during this changeover
   1. [Uninstall](#uninstall-chart) the old controller once traffic has fully drained from it

Note that there are some different and upgraded configurations between the two charts, described by Rimas Mocevicius from JFrog in the "Upgrading to ingress-nginx Helm chart" section of [Migrating from Helm chart nginx-ingress to ingress-nginx](https://rimusz.net/migrating-to-ingress-nginx). As the `ingress-nginx/ingress-nginx` chart continues to update, you will want to check current differences by running the [configuration](#configuration) commands on both charts.

## Configuration

See [Customizing the Chart Before Installing](https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing). To see all configurable options with detailed comments, visit the chart's [values.yaml](./values.yaml), or run this configuration command:

```console
helm show values ingress-nginx/ingress-nginx
```

### PodDisruptionBudget

Note that the PodDisruptionBudget resource is created only if the replicaCount is greater than one; otherwise it would make it impossible to evacuate a node. See [gh issue #7127](https://github.com/helm/charts/issues/7127) for more info.

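As a sketch, a values snippet that results in a PodDisruptionBudget being rendered might look like this (field names are taken from this chart's `values.yaml`; `minAvailable` is an assumed setting whose default may differ between chart versions):

```yaml
controller:
  # More than one replica is required for the PDB to be rendered
  replicaCount: 2
  # Assumed setting; keep it below replicaCount so a node can still be drained
  minAvailable: 1
```
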
### Prometheus Metrics

The Nginx ingress controller can export Prometheus metrics; set `controller.metrics.enabled` to `true` to enable this.

You can add Prometheus annotations to the metrics service using `controller.metrics.service.annotations`.
Alternatively, if you use the Prometheus Operator, you can enable ServiceMonitor creation using `controller.metrics.serviceMonitor.enabled` and set `controller.metrics.serviceMonitor.additionalLabels.release="prometheus"`. The `release=prometheus` label should match the label configured in the Prometheus ServiceMonitor selector (see `kubectl get servicemonitor prometheus-kube-prom-prometheus -oyaml -n prometheus`).

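Put together, a values snippet enabling metrics export and ServiceMonitor creation might look like the following (the `release: prometheus` label is an example and must match your own Prometheus Operator's selector):

```yaml
controller:
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true
      additionalLabels:
        # Must match the serviceMonitorSelector of your Prometheus resource
        release: prometheus
```
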
### ingress-nginx nginx\_status page/stats server

Previous versions of this chart had a `controller.stats.*` configuration block, which is now obsolete due to the following changes in the nginx ingress controller:

- In [0.16.1](https://github.com/kubernetes/ingress-nginx/blob/main/Changelog.md#0161), the vts (virtual host traffic status) dashboard was removed
- In [0.23.0](https://github.com/kubernetes/ingress-nginx/blob/main/Changelog.md#0230), the status page at port 18080 was replaced by a Unix socket webserver that is only available on localhost.
  You can use `curl --unix-socket /tmp/nginx-status-server.sock http://localhost/nginx_status` inside the controller container to access it locally, or use the snippet from the [nginx-ingress changelog](https://github.com/kubernetes/ingress-nginx/blob/main/Changelog.md#0230) to re-enable the http server

### ExternalDNS Service Configuration

Add an [ExternalDNS](https://github.com/kubernetes-sigs/external-dns) annotation to the LoadBalancer service:

```yaml
controller:
  service:
    annotations:
      external-dns.alpha.kubernetes.io/hostname: kubernetes-example.com.
```

### AWS L7 ELB with SSL Termination

Annotate the controller as shown in the [nginx-ingress l7 patch](https://github.com/kubernetes/ingress-nginx/blob/ab3a789caae65eec4ad6e3b46b19750b481b6bce/deploy/aws/l7/service-l7.yaml):

```yaml
controller:
  service:
    targetPorts:
      http: http
      https: http
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:XX-XXXX-X:XXXXXXXXX:certificate/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXX
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
```

### Additional Internal Load Balancer

This setup is useful when you need both external and internal load balancers but don't want to have multiple ingress controllers and multiple ingress objects per application.

By default, the ingress object will point to the external load balancer address, but if correctly configured, you can make use of the internal one if the URL you are looking up resolves to the internal load balancer's URL.

You'll need to set both of the following values:

- `controller.service.internal.enabled`
- `controller.service.internal.annotations`

If either of them is missing, the internal load balancer will not be deployed. For example, if you set `controller.service.internal.enabled=true` but provide no annotations, no action will be taken.

`controller.service.internal.annotations` varies with the cloud service you're using.

Example for AWS:

```yaml
controller:
  service:
    internal:
      enabled: true
      annotations:
        # Create internal ELB
        service.beta.kubernetes.io/aws-load-balancer-internal: "true"
        # Any other annotation can be declared here.
```

Example for GCE:

```yaml
controller:
  service:
    internal:
      enabled: true
      annotations:
        # Create internal LB. More information: https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing
        # For GKE versions 1.17 and later
        networking.gke.io/load-balancer-type: "Internal"
        # For earlier versions
        # cloud.google.com/load-balancer-type: "Internal"

        # Any other annotation can be declared here.
```

Example for Azure:

```yaml
controller:
  service:
    internal:
      enabled: true
      annotations:
        # Create internal LB
        service.beta.kubernetes.io/azure-load-balancer-internal: "true"
        # Any other annotation can be declared here.
```

Example for Oracle Cloud Infrastructure:

```yaml
controller:
  service:
    internal:
      enabled: true
      annotations:
        # Create internal LB
        service.beta.kubernetes.io/oci-load-balancer-internal: "true"
        # Any other annotation can be declared here.
```

A use case for this scenario is a split-view DNS setup, where the public zone CNAME records point to the external balancer URL while the private zone CNAME records point to the internal balancer URL. This way, you only need one Ingress object in Kubernetes.

Optionally, you can set `controller.service.loadBalancerIP` if you need a static IP for the resulting `LoadBalancer`.

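For instance (the address below is a documentation placeholder; it must be a static IP that your cloud provider can actually assign to the load balancer):

```yaml
controller:
  service:
    # Placeholder address; substitute an IP reserved with your cloud provider
    loadBalancerIP: 203.0.113.10
```
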
### Ingress Admission Webhooks

With nginx-ingress-controller version 0.25+, the nginx ingress controller pod exposes an endpoint that integrates with the `validatingwebhookconfiguration` Kubernetes feature to prevent bad ingress resources from being added to the cluster.
**This feature is enabled by default since 0.31.0.**

nginx-ingress-controller version 0.25.* works only with Kubernetes 1.14+; version 0.26 fixed [this issue](https://github.com/kubernetes/ingress-nginx/pull/4521).

#### How the Chart Configures the Hooks

A validating webhook configuration requires the endpoint to which the request is sent to use TLS. It is possible to set up custom certificates to do this, but in most cases, a self-signed certificate is enough. The setup of this component requires some more complex orchestration when using helm. The steps are created to be idempotent and to allow turning the feature on and off without running into helm quirks.

1. A pre-install hook provisions a certificate into the same namespace using a format compatible with provisioning using end-user certificates. If the certificate already exists, the hook exits.
2. The ingress nginx controller pod is configured to use a TLS proxy container, which will load that certificate.
3. Validating and Mutating webhook configurations are created in the cluster.
4. A post-install hook reads the CA from the secret created by step 1 and patches the Validating and Mutating webhook configurations. This process allows a custom CA provisioned by some other process to also be patched into the webhook configurations. The chosen failure policy is also patched into the webhook configurations.

#### Alternatives

It should be possible to use [cert-manager/cert-manager](https://github.com/cert-manager/cert-manager) if a more complete solution is required.

You can enable automatic self-signed TLS certificate provisioning via cert-manager by setting the `controller.admissionWebhooks.certManager.enabled` value to `true`.

Please ensure that cert-manager is correctly installed and configured.

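A minimal values sketch for this integration (assuming cert-manager is already installed in the cluster) would be:

```yaml
controller:
  admissionWebhooks:
    certManager:
      # Let cert-manager provision the admission webhook certificate
      enabled: true
```
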
### Helm Error When Upgrading: spec.clusterIP: Invalid value: ""

If you are upgrading this chart from a version between 0.31.0 and 1.2.2, then you may get an error like this:

```console
Error: UPGRADE FAILED: Service "?????-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable
```

Details of how and why are in [this issue](https://github.com/helm/charts/pull/13646), but to resolve this you can set `xxxx.service.omitClusterIP` to `true`, where `xxxx` is the service referenced in the error.

As of version `1.26.0` of this chart, `clusterIP: ""` is no longer rendered when no clusterIP value is provided, so the `invalid: spec.clusterIP: Invalid value: "": field is immutable` error will no longer occur.

{{ template "chart.valuesSection" . }}