Sticky Sessions - NGINX Ingress Controller
Sticky sessions This example demonstrates how to achieve session affinity using cookies.
Deployment Session affinity can be configured using the following annotations:
nginx.ingress.kubernetes.io/affinity: Type of the affinity. Set this to cookie to enable session affinity. Value: string (NGINX only supports cookie).
nginx.ingress.kubernetes.io/affinity-mode: The affinity mode defines how sticky a session is. Use balanced to redistribute some sessions when scaling pods, or persistent for maximum stickiness. Value: balanced (default) or persistent.
nginx.ingress.kubernetes.io/affinity-canary-behavior: Defines the session affinity behavior of canaries. By default the behavior is sticky, and canaries respect session affinity configuration. Set this to legacy to restore the original canary behavior, when session affinity parameters were not respected. Value: sticky (default) or legacy.
nginx.ingress.kubernetes.io/session-cookie-name: Name of the cookie that will be created. Value: string (defaults to INGRESSCOOKIE).
nginx.ingress.kubernetes.io/session-cookie-secure: Set the cookie as secure regardless of the protocol of the incoming request. Value: "true" or "false".
nginx.ingress.kubernetes.io/session-cookie-path: Path that will be set on the cookie (required if your Ingress paths use regular expressions). Value: string (defaults to the currently matched path).
nginx.ingress.kubernetes.io/session-cookie-samesite: SameSite attribute to apply to the cookie. Value: browser-accepted values are None, Lax, and Strict.
nginx.ingress.kubernetes.io/session-cookie-conditional-samesite-none: Omits the SameSite=None attribute for older browsers which reject the more-recently defined SameSite=None value. Value: "true" or "false".
nginx.ingress.kubernetes.io/session-cookie-max-age: Time until the cookie expires; corresponds to the Max-Age cookie directive. Value: number of seconds.
nginx.ingress.kubernetes.io/session-cookie-expires: Legacy version of the previous annotation for compatibility with older browsers; generates an Expires cookie directive by adding the seconds to the current date. Value: number of seconds.
nginx.ingress.kubernetes.io/session-cookie-change-on-failure: When set to false, nginx ingress will send requests to the upstream pointed to by the sticky cookie even if the previous attempt failed. When set to true and the previous attempt failed, the sticky cookie will be changed to point to another upstream. Value: true or false (defaults to false).
You can create the session affinity example Ingress to test this:
kubectl create -f ingress.yaml
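As a reference, a minimal sketch of what such an ingress.yaml could look like; the hostname, backend service and cookie settings below are illustrative, not taken verbatim from the example files:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-test
  annotations:
    # enable cookie-based session affinity
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  ingressClassName: nginx
  rules:
  - host: stickyingress.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: http-svc
            port:
              number: 80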
Validation You can confirm that the Ingress works:
$ kubectl describe ing nginx-test
Name: nginx-test
Namespace: default
Last-Modified: Tue, 24 Jan 2017 14:02:19 GMT
ETag: "58875e6b-264"
Accept-Ranges: bytes
In the example above, you can see that the response contains a Set-Cookie header with the settings we have defined. This cookie is created by NGINX; it contains a randomly generated key corresponding to the upstream used for that request (selected using consistent hashing) and has an Expires directive. If the user changes this cookie, NGINX creates a new one and redirects the user to another upstream.
If the backend pool grows, NGINX will keep sending requests through the same server that handled the first request, even if it is overloaded.
When the backend server is removed, the requests are re-routed to another upstream server. This does not require the cookie to be updated because the key's consistent hash will change.
When a Service is referenced by more than one Ingress, and only one of them contains affinity configuration, the first created Ingress will be used. This means you can end up in a situation where you have configured session affinity on one Ingress and it does not work because the Service is also referenced by another Ingress that does not configure it.
Basic Authentication - NGINX Ingress Controller
Basic Authentication This example shows how to add authentication to an Ingress rule using a secret that contains a file generated with htpasswd. It's important that the generated file is named auth (actually, that the secret has a key data.auth), otherwise the ingress-controller returns a 503.
Create htpasswd file $ htpasswd -c auth foo
New password: <bar>
New password:
Re-type new password:
Adding password for user foo
Convert htpasswd into a secret $ kubectl create secret generic basic-auth --from-file=auth
secret "basic-auth" created
Examine secret $ kubectl get secret basic-auth -o yaml
apiVersion: v1
data:
auth: Zm9vOiRhcHIxJE9GRzNYeWJwJGNrTDBGSERBa29YWUlsSDkuY3lzVDAK
name: basic-auth
namespace: default
type: Opaque
Using kubectl, create an ingress tied to the basic-auth secret $ echo "
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
port:
number: 80
" | kubectl create -f -
Use curl to confirm authorization is required by the ingress $ curl -v http://10.2.29.4/ -H 'Host: foo.bar.com'
* Trying 10.2.29.4...
* Connected to 10.2.29.4 (10.2.29.4) port 80 (#0)
> GET / HTTP/1.1
</body>
</html>
* Connection #0 to host 10.2.29.4 left intact
Use curl with the correct credentials to connect to the ingress $ curl -v http://10.2.29.4/ -H 'Host: foo.bar.com' -u 'foo:bar'
* Trying 10.2.29.4...
* Connected to 10.2.29.4 (10.2.29.4) port 80 (#0)
* Server auth using Basic with user 'foo'
Client Certificate Authentication - NGINX Ingress Controller
Client Certificate Authentication It is possible to enable Client-Certificate Authentication by adding additional annotations to your Ingress Resource.
Before getting started you must have the following Certificates configured:
CA certificate and Key (Intermediate Certs need to be in CA)
Server Certificate (Signed by CA) and Key (CN should be equal to the hostname you will use)
Client Certificate (Signed by CA) and Key
For more details on the generation process, check out the Prerequisite docs.
You can have as many certificates as you want. If they're in the binary DER format, you can convert them as follows:
openssl x509 -in certificate.der -inform der -out certificate.crt -outform pem
Then, you can concatenate them all into one file named 'ca.crt', as follows:
cat certificate1.crt certificate2.crt certificate3.crt >> ca.crt
Note: Make sure that the Key Size is greater than 1024 bits and the Hashing Algorithm (Digest) is something better than md5 for each certificate generated. Otherwise you will receive an error.
Creating Certificate Secrets There are many different ways of configuring your secrets to enable Client-Certificate Authentication to work properly.
You can create a secret containing just the CA certificate and another Secret containing the Server Certificate which is Signed by the CA.
kubectl create secret generic ca-secret --from-file=ca.crt=ca.crt
kubectl create secret generic tls-secret --from-file=tls.crt=server.crt --from-file=tls.key=server.key
You can create a secret containing CA certificate along with the Server Certificate that can be used for both TLS and Client Auth.
kubectl create secret generic ca-secret --from-file=tls.crt=server.crt --from-file=tls.key=server.key --from-file=ca.crt=ca.crt
If you also want to enable Certificate Revocation List verification, you can create the secret containing the CRL file in PEM format as well:
kubectl create secret generic ca-secret --from-file=ca.crt=ca.crt --from-file=ca.crl=ca.crl
Note: The CA Certificate must contain the trusted certificate authority chain to verify client certificates.
Setup Instructions Add the annotations as provided in the ingress.yaml example to your own ingress resources as required. Test by performing a curl against the Ingress Path without the Client Cert and expect a Status Code 400. Test by performing a curl against the Ingress Path with the Client Cert and expect a Status Code 200.
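For reference, a hedged sketch of the annotations that such an ingress.yaml typically carries for client-certificate authentication; the hostname and backend service are illustrative, while the secret names match the ones created above:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: client-cert-auth
  annotations:
    # enable verification of client certificates
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    # secret (namespace/name) containing the trusted CA certificates
    nginx.ingress.kubernetes.io/auth-tls-secret: "default/ca-secret"
    # verification depth in the client certificate chain
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
    # pass the certificate to the upstream server
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - mydomain.com
    secretName: tls-secret
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: http-svc
            port:
              number: 80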
External Basic Authentication - NGINX Ingress Controller
External Basic Authentication Example 1 Use an external service (Basic Auth) located in https://httpbin.org
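A hedged sketch of what the ingress.yaml used here might contain; the key piece is the auth-url annotation pointing at the external Basic Auth endpoint (the backend service name is illustrative, the host matches the curl tests below):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: external-auth
  annotations:
    # delegate authentication of every request to httpbin's basic-auth endpoint
    nginx.ingress.kubernetes.io/auth-url: https://httpbin.org/basic-auth/user/passwd
spec:
  ingressClassName: nginx
  rules:
  - host: external-auth-01.sample.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: http-svc
            port:
              number: 80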
$ kubectl create -f ingress.yaml
ingress "external-auth" created
$ kubectl get ing external-auth
ingress:
- ip: 172.17.4.99
$
Test 1: no username/password (expect code 401) $ curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com'
* Rebuilt URL to: http://172.17.4.99/
* Trying 172.17.4.99...
* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)
</body>
</html>
* Connection #0 to host 172.17.4.99 left intact
Test 2: valid username/password (expect code 200) $ curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com' -u 'user:passwd'
* Rebuilt URL to: http://172.17.4.99/
* Trying 172.17.4.99...
* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)
x-real-ip=10.2.60.1
BODY:
* Connection #0 to host 172.17.4.99 left intact
-no body in request-
Test 3: invalid username/password (expect code 401) $ curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com' -u 'user:user'
* Rebuilt URL to: http://172.17.4.99/
* Trying 172.17.4.99...
* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)
</body>
</html>
* Connection #0 to host 172.17.4.99 left intact
External OAUTH Authentication - NGINX Ingress Controller
External OAUTH Authentication Overview The auth-url
and auth-signin
annotations allow you to use an external authentication provider to protect your Ingress resources.
Important
This annotation requires ingress-nginx-controller v0.9.0
or greater.
Key Detail This functionality is enabled by deploying multiple Ingress objects for a single host. One Ingress object has no special annotations and handles authentication.
Other Ingress objects can then be annotated in such a way that requires the user to authenticate against the first Ingress's endpoint, and can redirect 401s to the same endpoint.
Sample:
...
metadata:
  name: application
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
...
Example: OAuth2 Proxy + Kubernetes-Dashboard This example will show you how to deploy oauth2_proxy into a Kubernetes cluster and use it to protect the Kubernetes Dashboard using GitHub as the OAuth2 provider.
Prepare Install the kubernetes dashboard kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.10.1.yaml
Create a custom GitHub OAuth application
Homepage URL is the FQDN in the Ingress rule, like https://foo.bar.com
Authorization callback URL is the same as the base FQDN plus /oauth2/callback
, like https://foo.bar.com/oauth2/callback
Configure oauth2_proxy values in the file oauth2-proxy.yaml with the values below (a sketch of the resulting env section follows this list):
OAUTH2_PROXY_CLIENT_ID with the github <Client ID>
OAUTH2_PROXY_CLIENT_SECRET with the github <Client Secret>
OAUTH2_PROXY_COOKIE_SECRET with value of python -c 'import os,base64; print(base64.b64encode(os.urandom(16)).decode("ascii"))'
Customize the contents of the file dashboard-ingress.yaml
:
Replace __INGRESS_HOST__
with a valid FQDN and __INGRESS_SECRET__
with a Secret with a valid SSL certificate.
Deploy the oauth2 proxy and the ingress rules by running: $ kubectl create -f oauth2-proxy.yaml,dashboard-ingress.yaml
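As a hedged illustration of the configuration step above, the env section of oauth2-proxy.yaml might be filled in roughly like this; the values shown are placeholders, not real credentials:
env:
  # values from the GitHub OAuth application created earlier
  - name: OAUTH2_PROXY_CLIENT_ID
    value: <Client ID>
  - name: OAUTH2_PROXY_CLIENT_SECRET
    value: <Client Secret>
  # output of: python -c 'import os,base64; print(base64.b64encode(os.urandom(16)).decode("ascii"))'
  - name: OAUTH2_PROXY_COOKIE_SECRET
    value: <Cookie Secret>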
Test the OAuth integration by accessing the configured URL, e.g. https://foo.bar.com
Configuration Snippets Ingress The Ingress in this example adds a custom header to the Nginx configuration that only applies to that specific Ingress. If you want to add headers that apply globally to all Ingresses, please have a look at an example of specifying custom headers.
kubectl apply -f ingress.yaml
Test Check if the contents of the annotation are present in the nginx.conf file using:
kubectl exec ingress-nginx-controller-873061567-4n3k2 -n kube-system -- cat /etc/nginx/nginx.conf
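A hedged sketch of how such an ingress.yaml typically uses the configuration-snippet annotation to add a custom header; the host, backend service and the exact header are illustrative:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-configuration-snippet
  annotations:
    # arbitrary nginx configuration appended to the location block for this Ingress
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "Request-Id: $req_id";
spec:
  ingressClassName: nginx
  rules:
  - host: custom.configuration.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: http-svc
            port:
              number: 80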
Custom Headers - NGINX Ingress Controller
Caveats Changes to the custom header config maps do not force a reload of the ingress-nginx-controllers.
Workaround To work around this limitation, perform a rolling restart of the deployment.
Example This example demonstrates configuration of the nginx ingress controller via a ConfigMap to pass a custom list of headers to the upstream server.
custom-headers.yaml defines a ConfigMap in the ingress-nginx
namespace named custom-headers
, holding several custom X-prefixed HTTP headers.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/customization/custom-headers/custom-headers.yaml
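For reference, a hedged sketch of what a ConfigMap like custom-headers.yaml contains; the specific header names and values here are illustrative:
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-headers
  namespace: ingress-nginx
data:
  # each key/value pair becomes a header sent to the upstream
  X-Different-Name: "true"
  X-Request-Start: "t=${msec}"
  X-Using-Nginx-Controller: "true"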
configmap.yaml defines a ConfigMap in the ingress-nginx
namespace named ingress-nginx-controller
. This controls the global configuration of the ingress controller, and already exists in a standard installation. The key proxy-set-headers
is set to cite the previously-created ingress-nginx/custom-headers
ConfigMap.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/customization/custom-headers/configmap.yaml
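Likewise, a hedged sketch of the relevant part of configmap.yaml that points the controller at that ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # namespace/name of the ConfigMap holding the headers to pass to upstreams
  proxy-set-headers: "ingress-nginx/custom-headers"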
The nginx ingress controller will read the ingress-nginx/ingress-nginx-controller
ConfigMap, find the proxy-set-headers
key, read HTTP headers from the ingress-nginx/custom-headers
ConfigMap, and include those HTTP headers in all requests flowing from nginx to the backends.
The above example was for passing a custom list of headers to the upstream server. To pass the custom headers before sending response traffic to the client, use the add-headers key:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/customization/custom-headers/configmap-client-response.yaml
Test Check the contents of the ConfigMaps are present in the nginx.conf file using: kubectl exec ingress-nginx-controller-873061567-4n3k2 -n ingress-nginx -- cat /etc/nginx/nginx.conf
External authentication - NGINX Ingress Controller
This example demonstrates propagation of selected authentication service response headers to a backend service.
Sample configuration includes:
Sample authentication service producing several response headers. Authentication logic is based on an HTTP header: requests with the header User containing the string internal are considered authenticated. After successful authentication the service generates the response headers UserID and UserRole.
Sample echo service displaying header information.
Two ingress objects pointing to the echo service: Public, which allows access from unauthenticated users, and Private, which allows access from authenticated users only.
You can deploy the controller as follows:
$ kubectl create -f deploy/
deployment "demo-auth-service" created
service "demo-auth-service" created
ingress "demo-auth-service" created
NAME HOSTS ADDRESS PORTS AGE
public-demo-echo-service public-demo-echo-service.kube.local 80 1m
secure-demo-echo-service secure-demo-echo-service.kube.local 80 1m
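The manifests in deploy/ are not reproduced here. As a hedged sketch, the private ingress typically combines auth-url with auth-response-headers so that the listed headers from the authentication service are forwarded to the backend (the auth service URL and backend service name below are illustrative):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-demo-echo-service
  annotations:
    # authenticate every request against the demo auth service
    nginx.ingress.kubernetes.io/auth-url: http://demo-auth-service.default.svc.cluster.local
    # propagate these response headers from the auth service to the echo backend
    nginx.ingress.kubernetes.io/auth-response-headers: UserID, UserRole
spec:
  ingressClassName: nginx
  rules:
  - host: secure-demo-echo-service.kube.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-echo-service
            port:
              number: 80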
Test 1: public service with no auth header
$ curl -H 'Host: public-demo-echo-service.kube.local' -v 192.168.99.100
* Rebuilt URL to: 192.168.99.100/
* Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)
<
* Connection #0 to host 192.168.99.100 left intact
UserID: , UserRole:
Test 2: secure service with no auth header
$ curl -H 'Host: secure-demo-echo-service.kube.local' -v 192.168.99.100
* Rebuilt URL to: 192.168.99.100/
* Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)
</body>
</html>
* Connection #0 to host 192.168.99.100 left intact
Test 3: public service with valid auth header
$ curl -H 'Host: public-demo-echo-service.kube.local' -H 'User:internal' -v 192.168.99.100
* Rebuilt URL to: 192.168.99.100/
* Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)
<
* Connection #0 to host 192.168.99.100 left intact
UserID: 1443635317331776148, UserRole: admin
Test 4: secure service with valid auth header
$ curl -H 'Host: secure-demo-echo-service.kube.local' -H 'User:internal' -v 192.168.99.100
* Rebuilt URL to: 192.168.99.100/
* Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)
Custom DH parameters for perfect forward secrecy - NGINX Ingress Controller
Custom DH parameters for perfect forward secrecy This example aims to demonstrate the deployment of an nginx ingress controller and use a ConfigMap to configure a custom Diffie-Hellman parameters file to help with "Perfect Forward Secrecy".
Custom configuration $ cat configmap.yaml
apiVersion: v1
data:
ssl-dh-param: "ingress-nginx/lb-dhparam"
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
$ kubectl create -f configmap.yaml
Custom DH parameters secret $ openssl dhparam 4096 2> /dev/null | base64
LS0tLS1CRUdJTiBESCBQQVJBTUVURVJ...
$ cat ssl-dh-param.yaml
apiVersion: v1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
$ kubectl create -f ssl-dh-param.yaml
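The ssl-dh-param.yaml shown above is abbreviated. As a hedged sketch, the full Secret might look roughly like this, assuming the controller expects the DH parameters under a dhparam.pem key; the data value is the base64 output generated above, truncated here:
apiVersion: v1
kind: Secret
metadata:
  name: lb-dhparam
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  # base64-encoded DH parameters generated with openssl dhparam
  dhparam.pem: "LS0tLS1CRUdJTiBESCBQQVJBTUVURVJ..."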
Test Check that the contents of the configmap are present in the nginx.conf file using: kubectl exec ingress-nginx-controller-873061567-4n3k2 -n kube-system -- cat /etc/nginx/nginx.conf
Sysctl tuning - NGINX Ingress Controller
Sysctl tuning This example aims to demonstrate the use of an Init Container to adjust sysctl default values using kubectl patch
.
kubectl patch deployment -n ingress-nginx ingress-nginx-controller \
--patch="$(curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/customization/sysctl/patch.json)"
Changes:
Backlog Queue setting net.core.somaxconn from 128 to 32768
Ephemeral Ports setting net.ipv4.ip_local_port_range from 32768 60999 to 1024 65000
In a post from the NGINX blog, it is possible to see an explanation for the changes.
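As a hedged sketch of what that patch does, the init container approach applies the sysctls before the controller starts; the image and command below are illustrative, the real values live in patch.json:
spec:
  template:
    spec:
      initContainers:
      - name: sysctl
        image: alpine:3
        securityContext:
          privileged: true
        command:
        - sh
        - -c
        # apply the backlog queue and ephemeral port changes listed above
        - sysctl -w net.core.somaxconn=32768; sysctl -w net.ipv4.ip_local_port_range="1024 65000"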
Docker registry - NGINX Ingress Controller
Docker registry This example demonstrates how to deploy a docker registry in the cluster and configure Ingress to enable access from the Internet.
Deployment First we deploy the docker registry in the cluster:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/docker-registry/deployment.yaml
Important
DO NOT RUN THIS IN PRODUCTION
This deployment uses emptyDir
in the volumeMount
which means the contents of the registry will be deleted when the pod dies.
The next required step is the creation of the ingress rules. To do this we have two options: with and without TLS.
Without TLS Download and edit the yaml deployment replacing registry.<your domain>
with a valid DNS name pointing to the ingress controller:
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/docker-registry/ingress-without-tls.yaml
Important
Running a docker registry without TLS requires we configure our local docker daemon with the insecure registry flag.
Please check deploy a plain http registry
With TLS Download and edit the yaml deployment replacing registry.<your domain>
with a valid DNS name pointing to the ingress controller:
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/docker-registry/ingress-with-tls.yaml
Deploy kube-lego to use Let's Encrypt certificates, or edit the ingress rule to use a secret with an existing SSL certificate.
Testing To test the registry is working correctly we download a known image from docker hub, create a tag pointing to the new registry and upload the image:
docker pull ubuntu:16.04
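The remaining commands of this test are not shown above. As a hedged sketch, tagging and pushing would look roughly like this, where registry.<your domain> is the DNS name you configured in the ingress rule:
# tag the image so it points at the newly deployed registry
docker tag ubuntu:16.04 registry.<your domain>/ubuntu:16.04
# push it through the ingress to the in-cluster registry
docker push registry.<your domain>/ubuntu:16.04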
gRPC - NGINX Ingress Controller
gRPC This example demonstrates how to route traffic to a gRPC service through the Ingress-NGINX controller.
Prerequisites You have a kubernetes cluster running. You have a domain name such as example.com
that is configured to route traffic to the Ingress-NGINX controller. You have the ingress-nginx-controller installed as per docs. You have a backend application running a gRPC server listening for TCP traffic. If you want, you can use https://github.com/grpc/grpc-go/blob/91e0aeb192456225adf27966d04ada4cf8599915/examples/features/reflection/server/main.go as an example. You're also responsible for provisioning an SSL certificate for the ingress. So you need to have a valid SSL certificate, deployed as a Kubernetes secret of type tls
, in the same namespace as the gRPC application. Step 1: Create a Kubernetes Deployment
for gRPC app Make sure your gRPC application pod is running and listening for connections. For example you can try a kubectl command like this below: $ kubectl get po -A -o wide | grep go-grpc-greeter-server
If you have a gRPC app deployed in your cluster, then skip further notes in this Step 1, and continue from Step 2 below.
As an example gRPC application, we can use this app https://github.com/grpc/grpc-go/blob/91e0aeb192456225adf27966d04ada4cf8599915/examples/features/reflection/server/main.go .
To create a container image for this app, you can use this Dockerfile .
If you use the Dockerfile mentioned above to create an image, then you can use the following example Kubernetes manifest to create a deployment resource that uses that image. If necessary, edit this manifest to suit your needs.
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
ports:
- containerPort: 50051
EOF
Step 2: Create the Kubernetes Service
for the gRPC app You can use the following example manifest to create a service of type ClusterIP. Edit the name/namespace/label/port to match your deployment/pod. cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
app: go-grpc-greeter-server
type: ClusterIP
EOF
You can save the above example manifest to a file with name service.go-grpc-greeter-server.yaml
and edit it to match your deployment/pod, if required. You can create the service resource with a kubectl command like this: $ kubectl create -f service.go-grpc-greeter-server.yaml
Step 3: Create the Kubernetes Ingress
resource for the gRPC app Use the following example manifest of an ingress resource to create an ingress for your grpc app. If required, edit it to match your app's details like name, namespace, service, secret etc. Make sure you have the required SSL-Certificate, existing in your Kubernetes cluster in the same namespace where the gRPC app is. The certificate must be available as a kubernetes secret resource of type "kubernetes.io/tls" https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets. This is because we are terminating TLS on the ingress. cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
hosts:
- grpctest.dev.mydomain.com
EOF
If you save the above example manifest as a file named ingress.go-grpc-greeter-server.yaml
and edit it to match your deployment and service, you can create the ingress like this: $ kubectl create -f ingress.go-grpc-greeter-server.yaml
The takeaway is that we are not doing any TLS configuration on the server (as we are terminating TLS at the ingress level, gRPC traffic will travel unencrypted inside the cluster and arrive "insecure").
For your own application you may or may not want to do this. If you prefer to forward encrypted traffic to your POD and terminate TLS at the gRPC server itself, add the ingress annotation nginx.ingress.kubernetes.io/backend-protocol: "GRPCS"
.
A few more things to note:
We've tagged the ingress with the annotation nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
. This is the magic ingredient that sets up the appropriate nginx configuration to route http/2 traffic to our service.
We're terminating TLS at the ingress and have configured an SSL certificate wildcard.dev.mydomain.com
. The ingress matches traffic arriving as https://grpctest.dev.mydomain.com:443
and routes unencrypted messages to the backend Kubernetes service.
Step 4: test the connection Once we've applied our configuration to Kubernetes, it's time to test that we can actually talk to the backend. To do this, we'll use the grpcurl utility: $ grpcurl grpctest.dev.mydomain.com:443 helloworld.Greeter/SayHello
{
- "message": "
+ "message": "Hello "
}
Debugging Hints Obviously, watch the logs on your app. Watch the logs for the ingress-nginx-controller (increasing verbosity as needed). Double-check your address and ports. Set the GODEBUG=http2debug=2
environment variable to get detailed http/2 logging on the client and/or server. Study RFC 7540 (http/2) https://tools.ietf.org/html/rfc7540 . If you are developing public gRPC endpoints, check out https://proto.stack.build, a protocol buffer / gRPC build service that you can use to help make it easier for your users to consume your API.
See also the specific GRPC settings of NGINX: https://nginx.org/en/docs/http/ngx_http_grpc_module.html
Notes on using response/request streams If your server does only response streaming and you expect a stream to be open longer than 60 seconds, you will have to change the grpc_read_timeout
to accommodate for this. If your service does only request streaming and you expect a stream to be open longer than 60 seconds, you have to change the grpc_send_timeout
and the client_body_timeout
. If you do both response and request streaming with an open stream longer than 60 seconds, you have to change all three timeouts: grpc_read_timeout
, grpc_send_timeout
and client_body_timeout
. Values for the timeouts must be specified as e.g. "1200s"
.
On the most recent versions of ingress-nginx, changing these timeouts requires using the nginx.ingress.kubernetes.io/server-snippet
annotation. There are plans for future releases to allow using the Kubernetes annotations to define each timeout separately.
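As a hedged sketch of that approach, the timeouts could be raised for a particular ingress with the server-snippet annotation; the 1200s value mirrors the format mentioned above and should be adjusted to your streaming needs:
metadata:
  annotations:
    # raise gRPC/body timeouts for long-lived streams on this server block
    nginx.ingress.kubernetes.io/server-snippet: |
      grpc_read_timeout 1200s;
      grpc_send_timeout 1200s;
      client_body_timeout 1200s;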
Multi TLS certificate termination - NGINX Ingress Controller
Multi TLS certificate termination This example uses 2 different certificates to terminate SSL for 2 hostnames.
Create tls secrets for foo.bar.com and bar.baz.com as indicated in the yaml (a sketch of these commands follows the configuration excerpt below). Create multi-tls.yaml. This should generate a segment like:
$ kubectl exec -it ingress-nginx-controller-6vwd1 -- cat /etc/nginx/nginx.conf | grep "foo.bar.com" -B 7 -A 35
server {
listen 80;
listen 443 ssl http2;
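For the secret-creation step above, a minimal hedged sketch using kubectl; the secret names and certificate/key file names are hypothetical and must match what multi-tls.yaml references:
# one tls secret per hostname, referenced by name from multi-tls.yaml
kubectl create secret tls foobar-ssl --cert=foo.bar.com.crt --key=foo.bar.com.key
kubectl create secret tls barbaz-ssl --cert=bar.baz.com.crt --key=bar.baz.com.key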
Pod Security Policy (PSP) - NGINX Ingress Controller
Pod Security Policy (PSP) In most clusters today, by default, all resources (e.g. Deployments
and ReplicaSets
) have permissions to create pods. Kubernetes however provides a more fine-grained authorization policy called Pod Security Policy (PSP) .
PSP allows the cluster owner to define the permission of each object, for example creating a pod. If you have PSP enabled on the cluster, and you deploy ingress-nginx, you will need to provide the Deployment
with the permissions to create pods.
Before applying any objects, first apply the PSP permissions by running:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/psp/psp.yaml
Note: PSP permissions must be granted before the creation of the Deployment
and the ReplicaSet
.
Rewrite - NGINX Ingress Controller
Rewrite This example demonstrates how to use Rewrite
annotations.
Prerequisites You will need to make sure your Ingress targets exactly one Ingress controller by specifying the ingress.class annotation , and that you have an ingress controller running in your cluster.
Deployment Rewriting can be controlled using the following annotations:
nginx.ingress.kubernetes.io/rewrite-target: Target URI where the traffic must be redirected. Value: string.
nginx.ingress.kubernetes.io/ssl-redirect: Indicates if the location section is only accessible via SSL (defaults to True when the Ingress contains a Certificate). Value: bool.
nginx.ingress.kubernetes.io/force-ssl-redirect: Forces the redirection to HTTPS even if the Ingress is not TLS enabled. Value: bool.
nginx.ingress.kubernetes.io/app-root: Defines the Application Root that the Controller must redirect to if it's in the / context. Value: string.
nginx.ingress.kubernetes.io/use-regex: Indicates if the paths defined on an Ingress use regular expressions. Value: bool.
Examples Rewrite Target Attention
Starting in Version 0.22.0, ingress definitions using the annotation nginx.ingress.kubernetes.io/rewrite-target
are not backwards compatible with previous versions. In Version 0.22.0 and beyond, any substrings within the request URI that need to be passed to the rewritten path must explicitly be defined in a capture group .
Note
Captured groups are saved in numbered placeholders, chronologically, in the form $1
, $2
... $n
. These placeholders can be used as parameters in the rewrite-target
annotation.
Create an Ingress rule with a rewrite annotation:
$ echo '
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
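The manifest above is shown only in part here. As a hedged sketch, a complete rewrite-target ingress consistent with the capture-group note could look like this; the host, path regex and backend service are illustrative:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite
  annotations:
    # $2 is the second capture group from the path regex below
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: rewrite.bar.com
    http:
      paths:
      - path: /something(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: http-svc
            port:
              number: 80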
Static IPs - NGINX Ingress Controller
Static IPs This example demonstrates how to assign a static IP to an Ingress through the Ingress-NGINX controller.
Prerequisites You need a TLS cert and a test HTTP service for this example. You will also need to make sure your Ingress targets exactly one Ingress controller by specifying the ingress.class annotation , and that you have an ingress controller running in your cluster.
Acquiring an IP Since instances of the ingress nginx controller actually run on nodes in your cluster, by default nginx Ingresses will only get static IPs if your cloudprovider supports static IP assignments to nodes. On GKE/GCE for example, even though nodes get static IPs, the IPs are not retained across upgrades.
To acquire a static IP for the ingress-nginx-controller, simply put it behind a Service of Type=LoadBalancer
.
First, create a loadbalancer Service and wait for it to acquire an IP:
$ kubectl create -f static-ip-svc.yaml
service "ingress-nginx-lb" created
$ kubectl get svc ingress-nginx-lb
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-lb 10.0.138.113 104.154.109.191 80:31457/TCP,443:32240/TCP 15m
Then, update the ingress controller so it adopts the static IP of the Service by passing the --publish-service
flag (the example yaml used in the next step already has it set to "ingress-nginx-lb").
$ kubectl create -f ingress-nginx-controller.yaml
deployment "ingress-nginx-controller" created
Assigning the IP to an Ingress From here on every Ingress created with the ingress.class
annotation set to nginx
will get the IP allocated in the previous step.
$ kubectl create -f ingress-nginx.yaml
ingress "ingress-nginx" created
$ kubectl get ing ingress-nginx
request_version=1.1
request_uri=http://104.154.109.191:8080/
...
Retaining the IP You can test retention by deleting the Ingress:
$ kubectl delete ing ingress-nginx
ingress "ingress-nginx" deleted
$ kubectl create -f ingress-nginx.yaml
$ kubectl get ing ingress-nginx
NAME HOSTS ADDRESS PORTS AGE
ingress-nginx * 104.154.109.191 80, 443 13m
Note that unlike the GCE Ingress, the same loadbalancer IP is shared amongst all Ingresses, because all requests are proxied through the same set of nginx controllers.
To promote the allocated IP to static, you can update the Service manifest:
$ kubectl patch svc ingress-nginx-lb -p '{"spec": {"loadBalancerIP": "104.154.109.191"}}'
"ingress-nginx-lb" patched
... and promote the IP to static (promotion works differently for cloudproviders, provided example is for GKE/GCE):
$ gcloud compute addresses create ingress-nginx-lb --addresses 104.154.109.191 --region us-central1
Created [https://www.googleapis.com/compute/v1/projects/kubernetesdev/regions/us-central1/addresses/ingress-nginx-lb].
---
address: 104.154.109.191
status: IN_USE
users:
- us-central1/forwardingRules/a09f6913ae80e11e6a8c542010af0000
Now even if the Service is deleted, the IP will persist, so you can recreate the Service with spec.loadBalancerIP
set to 104.154.109.191
.
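A hedged sketch of the relevant part of that recreated Service; other fields are omitted and the ports mirror the LoadBalancer created earlier, while the IP is the one promoted above:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-lb
spec:
  type: LoadBalancer
  # request the previously promoted static IP
  loadBalancerIP: 104.154.109.191
  ports:
  - name: http
    port: 80
  - name: https
    port: 443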
How it works The objective of this document is to explain how the Ingress-NGINX controller works, in particular how the NGINX model is built and why we need one.
NGINX configuration The goal of this Ingress controller is the assembly of a configuration file (nginx.conf). The main implication of this requirement is the need to reload NGINX after any change in the configuration file. Though it is important to note that we don't reload Nginx on changes that impact only an upstream
configuration (i.e. Endpoints change when you deploy your app). We use lua-nginx-module to achieve this. Check below to learn more about how it's done.
NGINX model Usually, a Kubernetes Controller utilizes the synchronization loop pattern to check if the desired state in the controller is updated or a change is required. To this purpose, we need to build a model using different objects from the cluster, in particular (in no special order) Ingresses, Services, Endpoints, Secrets, and Configmaps to generate a point in time configuration file that reflects the state of the cluster.
To get these objects from the cluster, we use Kubernetes Informers, in particular FilteredSharedInformer. These informers allow reacting to changes using callbacks when an individual object is added, modified or removed. Unfortunately, there is no way to know if a particular change is going to affect the final configuration file. Therefore on every change, we have to rebuild a new model from scratch based on the state of the cluster and compare it to the current model. If the new model equals the current one, then we avoid generating a new NGINX configuration and triggering a reload. Otherwise, we check if the difference is only about Endpoints. If so, we then send the new list of Endpoints to a Lua handler running inside Nginx using an HTTP POST request and again avoid generating a new NGINX configuration and triggering a reload. If the difference between the running and new model is about more than just Endpoints, we create a new NGINX configuration based on the new model, replace the current model and trigger a reload.
One of the uses of the model is to avoid unnecessary reloads when there's no change in the state and to detect conflicts in definitions.
The final representation of the NGINX configuration is generated from a Go template using the new model as input for the variables required by the template.
Building the NGINX model Building a model is an expensive operation; for this reason, the use of the synchronization loop is a must. By using a work queue it is possible to not lose changes and to remove the use of sync.Mutex to force a single execution of the sync loop; additionally, it is possible to create a time window between the start and end of the sync loop that allows us to discard unnecessary updates. It is important to understand that any change in the cluster could generate events that the informer will send to the controller, and this is one of the reasons for the work queue.
Operations to build the model:
Order Ingress rules by the CreationTimestamp field, i.e., old rules first.
If the same path for the same host is defined in more than one Ingress, the oldest rule wins.
If more than one Ingress contains a TLS section for the same host, the oldest rule wins. If multiple Ingresses define an annotation that affects the configuration of the Server block, the oldest rule wins.
Create a list of NGINX Servers (per hostname).
Create a list of NGINX Upstreams.
If multiple Ingresses define different paths for the same host, the ingress controller will merge the definitions. Annotations are applied to all the paths in the Ingress. Multiple Ingresses can define different annotations; these definitions are not shared between Ingresses. A minimal sketch of the age-based ordering appears below.
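A minimal sketch of the age-based ordering, assuming the Ingresses have already been fetched from the informers' store; the helper and the sample objects are illustrative only:

package main

import (
	"fmt"
	"sort"
	"time"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	older := &networkingv1.Ingress{ObjectMeta: metav1.ObjectMeta{Name: "old", CreationTimestamp: metav1.NewTime(time.Now().Add(-time.Hour))}}
	newer := &networkingv1.Ingress{ObjectMeta: metav1.ObjectMeta{Name: "new", CreationTimestamp: metav1.NewTime(time.Now())}}
	ings := []*networkingv1.Ingress{newer, older}

	// Oldest first: when the same host/path or TLS section is defined twice,
	// the first (oldest) definition encountered wins and later ones are skipped.
	sort.SliceStable(ings, func(i, j int) bool {
		return ings[i].CreationTimestamp.Before(&ings[j].CreationTimestamp)
	})
	fmt.Println(ings[0].Name) // "old"
}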
When a reload is required The next list describes the scenarios when a reload is required:
A new Ingress resource is created.
A TLS section is added to an existing Ingress.
A change in Ingress annotations that impacts more than just the upstream configuration. For instance, the load-balance annotation does not require a reload.
A path is added to or removed from an Ingress.
An Ingress, Service, or Secret is removed.
A previously missing object referenced by the Ingress becomes available, like a Service or Secret.
A Secret is updated.
Avoiding reloads In some cases it is possible to avoid reloads, in particular when there is a change in the endpoints, i.e., a pod is started or replaced. It is out of the scope of this Ingress controller to remove reloads completely. This would require an incredible amount of work and at some point makes no sense. This can change only if NGINX changes the way new configurations are read, basically so that new changes do not require replacing worker processes.
Avoiding reloads on Endpoints changes On every endpoint change the controller fetches endpoints from all the services it sees and generates corresponding Backend objects. It then sends these objects to a Lua handler running inside Nginx. The Lua code in turn stores those backends in a shared memory zone. Then, for every request, Lua code running in the balancer_by_lua context detects which endpoints it should choose an upstream peer from and applies the configured load-balancing algorithm to choose the peer. Nginx then takes care of the rest. This way we avoid reloading Nginx on endpoint changes. Note that this also covers annotation changes that affect only the upstream configuration in Nginx.
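A hedged sketch of the push step follows. The endpoint URL, port, and payload shape are assumptions made for illustration; the real controller and its Lua handler define their own interface:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

type Backend struct {
	Name      string   `json:"name"`
	Endpoints []string `json:"endpoints"`
}

// postBackends pushes a freshly built backend list to the Lua handler inside NGINX
// over HTTP instead of triggering a reload (illustrative only).
func postBackends(backends []Backend) error {
	body, err := json.Marshal(backends)
	if err != nil {
		return err
	}
	// Hypothetical local endpoint served by the Lua configuration handler.
	resp, err := http.Post("http://127.0.0.1:10246/configuration/backends", "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusCreated {
		return fmt.Errorf("unexpected status updating backends: %s", resp.Status)
	}
	return nil
}

func main() {
	_ = postBackends([]Backend{{Name: "default-echo-80", Endpoints: []string{"10.0.0.12:8080"}}})
}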
In a relatively big cluster with frequently deployed apps, this feature saves a significant number of Nginx reloads, which can otherwise affect response latency, load-balancing quality (after every reload Nginx resets the state of load balancing), and so on.
Avoiding outage from wrong configuration Because the ingress controller works using the synchronization loop pattern, it applies the configuration for all matching objects. In case some Ingress objects have a broken configuration, for example a syntax error in the nginx.ingress.kubernetes.io/configuration-snippet annotation, the generated configuration becomes invalid, does not reload, and hence no more Ingresses will be taken into account.
To prevent this situation from happening, the nginx ingress controller optionally exposes a validating admission webhook server to ensure the validity of incoming Ingress objects. This webhook appends the incoming Ingress objects to the list of ingresses, generates the configuration, and calls nginx to ensure the configuration has no syntax errors.
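The following is a rough sketch of that idea, not the controller's actual webhook: it decodes an AdmissionReview, renders a candidate configuration with a hypothetical helper, and runs nginx -t on it to accept or reject the Ingress:

package main

import (
	"encoding/json"
	"net/http"
	"os"
	"os/exec"

	admissionv1 "k8s.io/api/admission/v1"
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// renderWithCandidate is a hypothetical helper: append the candidate Ingress to the
// current model and render a full nginx.conf for syntax checking.
func renderWithCandidate(ing *networkingv1.Ingress) ([]byte, error) {
	return []byte("events {}\nhttp {}\n"), nil
}

func validate(w http.ResponseWriter, r *http.Request) {
	var review admissionv1.AdmissionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	var ing networkingv1.Ingress
	_ = json.Unmarshal(review.Request.Object.Raw, &ing)

	resp := &admissionv1.AdmissionResponse{UID: review.Request.UID, Allowed: true}
	if cfg, err := renderWithCandidate(&ing); err == nil {
		tmp, _ := os.CreateTemp("", "nginx-*.conf")
		defer os.Remove(tmp.Name())
		tmp.Write(cfg)
		tmp.Close()
		// nginx -t only checks syntax; a failure means the candidate Ingress is rejected.
		if out, err := exec.Command("nginx", "-t", "-c", tmp.Name()).CombinedOutput(); err != nil {
			resp.Allowed = false
			resp.Result = &metav1.Status{Message: string(out)}
		}
	}
	review.Response = resp
	json.NewEncoder(w).Encode(review)
}

func main() {
	http.HandleFunc("/networking/v1/ingresses", validate)
	http.ListenAndServe(":8443", nil) // the real webhook listens over TLS
}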
+ Welcome - NGINX Ingress Controller
Overview This is the documentation for the Ingress NGINX Controller.
It is built around the Kubernetes Ingress resource , using a ConfigMap to store the controller configuration.
You can learn more about using Ingress in the official Kubernetes documentation .
Getting Started See Deployment for a whirlwind tour that will get you started.
FAQ - Migration to apiVersion networking.k8s.io/v1
If you are using Ingress objects in your cluster (running Kubernetes older than v1.22), and you plan to upgrade to Kubernetes v1.22, this section is relevant to you.
What is an IngressClass and why is it important for users of Ingress-NGINX controller now ? IngressClass is a Kubernetes resource. See the description below. It's important because until now, a default install of the Ingress-NGINX controller did not require any IngressClass object. From version 1.0.0 of the Ingress-NGINX Controller, an IngressClass object is required.
On clusters with more than one instance of the Ingress-NGINX controller, all instances of the controllers must be aware of which Ingress objects they serve. The ingressClassName
field of an Ingress is the way to let the controller know about that.
kubectl explain ingressclass
KIND: IngressClass
VERSION: networking.k8s.io/v1
FIELDS:
spec <Object>
Spec is the desired state of the IngressClass. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
What has caused this change in behaviour ? There are 2 reasons primarily.
Reason #1 Until K8s version 1.21, it was possible to create an Ingress resource using deprecated versions of the Ingress API, such as:
extensions/v1beta1
networking.k8s.io/v1beta1
You would get a message about deprecation, but the Ingress resource would get created.
From K8s version 1.22 onwards, you can only access the Ingress API via the stable, networking.k8s.io/v1
API. The reason is explained in the official blog on deprecated ingress API versions .
Reason #2 If you are already using the Ingress-NGINX controller and then upgrade to K8s version v1.22 , there are several scenarios where your existing Ingress objects will not work how you expect. Read this FAQ to check which scenario matches your use case.
What is ingressClassName field ? ingressClassName
is a field in the specs of an Ingress object.
kubectl explain ingress.spec.ingressClassName
KIND: Ingress
VERSION: networking.k8s.io/v1
FIELD: ingressClassName <string>
DESCRIPTION:
  IngressClassName is the name of the IngressClass cluster resource. The
  associated IngressClass defines which controller will implement the
  resource. This replaces the deprecated `kubernetes.io/ingress.class`
  annotation. For backwards compatibility, when that annotation is set, it
  must be given precedence over this field. The controller may emit a warning
  if the field and annotation have different values. Implementations of this
  API should ignore Ingresses without a class specified. An IngressClass
  resource may be marked as default, which can be used to set a default value
  for this field. For more information, refer to the IngressClass
  documentation.
The .spec.ingressClassName
behavior has precedence over the deprecated kubernetes.io/ingress.class
annotation.
I have only one instance of the Ingress-NGINX controller in my cluster. What should I do ? If you have only one instance of the Ingress-NGINX controller running in your cluster, and you still want to use IngressClass, you should add the annotation ingressclass.kubernetes.io/is-default-class to your IngressClass, so that any new Ingress objects will have this one as their default IngressClass. In this case, you need to make your controller aware of the objects. If you have any Ingress objects that don't yet have either the .spec.ingressClassName field set in their manifest, or the ingress annotation (kubernetes.io/ingress.class), then you should start your Ingress-NGINX controller with the flag --watch-ingress-without-class=true.
You can configure your Helm chart installation's values file with .controller.watchIngressWithoutClass: true.
We recommend that you create the IngressClass as shown below:
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    app.kubernetes.io/component: controller
  name: nginx
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx
And add the value "spec.ingressClassName=nginx" in your Ingress objects I have multiple ingress objects in my cluster. What should I do ? If you have lot of ingress objects without ingressClass configuration, you can run the ingress-controller with the flag --watch-ingress-without-class=true
. What is the flag '--watch-ingress-without-class' ? Its a flag that is passed,as an argument, to the nginx-ingress-controller
executable. In the configuration, it looks like this ; And add the value spec.ingressClassName=nginx
in your Ingress objects.
I have multiple ingress objects in my cluster. What should I do ? If you have lot of ingress objects without ingressClass configuration, you can run the ingress-controller with the flag --watch-ingress-without-class=true
. What is the flag '--watch-ingress-without-class' ? Its a flag that is passed,as an argument, to the nginx-ingress-controller
executable. In the configuration, it looks like this: ...
...
args:
- /nginx-ingress-controller
- --watch-ingress-without-class=true
- --publish-service=$(POD_NAMESPACE)/ingress-nginx-dev-v1-test-controller
- --election-id=ingress-controller-leader
- --controller-class=k8s.io/ingress-nginx
- --configmap=$(POD_NAMESPACE)/ingress-nginx-dev-v1-test-controller
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
...
...
I have more than one controller in my cluster and already use the annotation ? No problem. This should still keep working, but we highly recommend that you test!
Even though kubernetes.io/ingress.class
is deprecated, the Ingress-NGINX controller still understands that annotation. If you want to follow good practice, you should consider migrating to use IngressClass and .spec.ingressClassName
.
I have more than one controller running in my cluster, and I want to use the new API ? In this scenario, you need to create multiple IngressClasses (see example one). But be aware that IngressClass works in a very specific way: you will need to change the .spec.controller
value in your IngressClass and configure the controller to expect the exact same value.
Let's look at an example, supposing that you have three IngressClasses:
IngressClass ingress-nginx-one
, with .spec.controller
equal to example.com/ingress-nginx1
IngressClass ingress-nginx-two
, with .spec.controller
equal to example.com/ingress-nginx2
IngressClass ingress-nginx-three
, with .spec.controller
equal to example.com/ingress-nginx1
(for private use, you can also use a controller name that doesn't contain a /
; for example: ingress-nginx1
)
When deploying your ingress controllers, you will have to change the --controller-class
field as follows:
Ingress-Nginx A, configured to use controller class name example.com/ingress-nginx1
Ingress-Nginx B, configured to use controller class name example.com/ingress-nginx2
Then, when you create an Ingress object with its ingressClassName
set to ingress-nginx-two
, only controllers looking for the example.com/ingress-nginx2
controller class pay attention to the new object. Given that Ingress-Nginx B is set up that way, it will serve that object, whereas Ingress-Nginx A ignores the new Ingress.
Bear in mind that, if you start Ingress-Nginx B with the command line argument --watch-ingress-without-class=true
, then it will serve:
Ingresses without any ingressClassName
set Ingresses where the deprecated annotation (kubernetes.io/ingress.class
) matches the value set in the command line argument --ingress-class
Ingresses that refer to any IngressClass that has the same spec.controller
as configured in --controller-class
If you start Ingress-Nginx B with the command line argument --watch-ingress-without-class=true
and you run Ingress-Nginx A with the command line argument --watch-ingress-without-class=false
then this is a supported configuration. If you have two Ingress-NGINX controllers for the same cluster, both running with --watch-ingress-without-class=true
then there is likely to be a conflict.
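The three bullets above can be summarized in a small illustrative sketch for the --watch-ingress-without-class=true case; the function and its parameters are hypothetical and only mirror the rules listed here, not the controller's real code:

package main

import (
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// servedWithWatchWithoutClass sketches what a controller started with
// --watch-ingress-without-class=true would serve (illustrative only).
func servedWithWatchWithoutClass(ing *networkingv1.Ingress, classControllers map[string]string, controllerClass, ingressClass string) bool {
	// Ingresses that refer to an IngressClass whose spec.controller matches --controller-class.
	if name := ing.Spec.IngressClassName; name != nil {
		return classControllers[*name] == controllerClass
	}
	// Ingresses where the deprecated annotation matches --ingress-class.
	if v, ok := ing.Annotations["kubernetes.io/ingress.class"]; ok {
		return v == ingressClass
	}
	// Ingresses without any class information at all.
	return true
}

func main() {
	classControllers := map[string]string{"ingress-nginx-two": "example.com/ingress-nginx2"}
	className := "ingress-nginx-two"
	ing := &networkingv1.Ingress{
		ObjectMeta: metav1.ObjectMeta{Name: "demo"},
		Spec:       networkingv1.IngressSpec{IngressClassName: &className},
	}
	// Ingress-Nginx B (controller class example.com/ingress-nginx2) serves it; Ingress-Nginx A does not.
	fmt.Println(servedWithWatchWithoutClass(ing, classControllers, "example.com/ingress-nginx2", "nginx")) // true
	fmt.Println(servedWithWatchWithoutClass(ing, classControllers, "example.com/ingress-nginx1", "nginx")) // false
}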
I am seeing this error message in the logs of the Ingress-NGINX controller: "ingress class annotation is not equal to the expected by Ingress Controller". Why ? It is highly likely that you will also see the name of the ingress resource in the same error message. This error message has been observed when the deprecated annotation (kubernetes.io/ingress.class) is used in an Ingress resource manifest. It is recommended to use the .spec.ingressClassName field of the Ingress resource to specify the name of the IngressClass of the Ingress you are defining. How to easily install multiple instances of the ingress-NGINX controller in the same cluster ? Create a new namespace kubectl create namespace ingress-nginx-2
Use Helm to install the additional instance of the ingress controller Ensure you have Helm working (refer to the Helm documentation ) We have to assume that you have the helm repo for the ingress-NGINX controller already added to your Helm config. But, if you have not added the helm repo then you can do this to add the repo to your helm config; helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
Make sure you have updated the helm repo data: helm repo update
Now, install an additional instance of the ingress-NGINX controller like this: helm install ingress-nginx-2 ingress-nginx/ingress-nginx \
--namespace ingress-nginx-2 \
--set controller.ingressClassResource.name=nginx-two \
--set controller.ingressClassResource.controllerValue="example.com/ingress-nginx-2" \
--set controller.ingressClassResource.enabled=true \
--set controller.ingressClassByName=true
If you need to install yet another instance, then repeat the procedure to create a new namespace and change the values such as names & namespaces (for example from "-2" to "-3"), or anything else that meets your needs.
info Shows the internal and external IP/CNAMES for an ingress-nginx
service.
$ kubectl ingress-nginx info -n ingress-nginx
Service cluster IP address: 10.187.253.31
LoadBalancer IP|CNAME: 35.123.123.123
Use the --service <service>
flag if your ingress-nginx
LoadBalancer
service is not named ingress-nginx
.
ingresses kubectl ingress-nginx ingresses
, alternately kubectl ingress-nginx ing
, shows a more detailed view of the ingress definitions in a namespace.
Compare:
$ kubectl get ingresses --all-namespaces
NAMESPACE NAME HOSTS ADDRESS PORTS AGE
default example-ingress1 testaddr.local,testaddr2.local localhost 80 5d
default test-ingress-2 * localhost 80 5d
vs.
$ kubectl ingress-nginx ingresses --all-namespaces
NAMESPACE INGRESS NAME HOST+PATH ADDRESSES TLS SERVICE SERVICE PORT ENDPOINTS
default example-ingress1 testaddr.local/etameta localhost NO pear-service 5678 5
default example-ingress1 testaddr2.local/otherpath localhost NO apple-service 5678 1
- Uses removed config flag --enable-dynamic-certificates
Lint added for version 0.24.0
https://github.com/kubernetes/ingress-nginx/issues/3808
To show the lints added only for a particular ingress-nginx
release, use the --from-version
and --to-version
flags:
$ kubectl ingress-nginx lint --all-namespaces --verbose --from-version 0.24.0 --to-version 0.24.0
Checking ingresses...
✗ anamespace/this-nginx
- Contains the removed session-cookie-hash annotation.
-{"config":{"lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Overview \u00b6 This is the documentation for the NGINX Ingress Controller. It is built around the Kubernetes Ingress resource , using a ConfigMap to store the controller configuration. You can learn more about using Ingress in the official Kubernetes documentation . Getting Started \u00b6 See Deployment for a whirlwind tour that will get you started. FAQ - Migration to apiVersion networking.k8s.io/v1 \u00b6 If you are using Ingress objects in your cluster (running Kubernetes older than v1.22), and you plan to upgrade to Kubernetess v1.22, this section is relevant to you. Please read this official blog on deprecated Ingress API versions Please read this official documentation on the IngressClass object What is an IngressClass and why is it important for users of Ingress-NGINX controller now ? \u00b6 IngressClass is a Kubernetes resource. See the description below. Its important because until now, a default install of the Ingress-NGINX controller did not require any IngressClass object. From version 1.0.0 of the Ingress-NGINX Controller, an IngressClass object is required. On clusters with more than one instance of the Ingress-NGINX controller, all instances of the controllers must be aware of which Ingress objects they serve. The ingressClassName field of an Ingress is the way to let the controller know about that. kubectl explain ingressclass KIND: IngressClass VERSION: networking.k8s.io/v1 DESCRIPTION: IngressClass represents the class of the Ingress, referenced by the Ingress Spec. The `ingressclass.kubernetes.io/is-default-class` annotation can be used to indicate that an IngressClass should be considered default. When a single IngressClass resource has this annotation set to true, new Ingress resources without a class specified will be assigned this default class. FIELDS: apiVersion APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec Spec is the desired state of the IngressClass. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status` What has caused this change in behaviour ? \u00b6 There are 2 reasons primarily. (Reason #1) Until K8s version 1.21, it was possible to create an Ingress resource using deprecated versions of the Ingress API, such as: extensions/v1beta1 networking.k8s.io/v1beta1 You would get a message about deprecation, but the Ingress resource would get created. From K8s version 1.22 onwards, you can only access the Ingress API via the stable, networking.k8s.io/v1 API. The reason is explained in the official blog on deprecated ingress API versions . 
(Reason #2) if you are already using the Ingress-NGINX controller and then upgrade to K8s version v1.22 , there are several scenarios where your existing Ingress objects will not work how you expect. Read this FAQ to check which scenario matches your use case. What is ingressClassName field ? \u00b6 ingressClassName is a field in the specs of an Ingress object. kubectl explain ingress.spec.ingressClassName KIND: Ingress VERSION: networking.k8s.io/v1 FIELD: ingressClassName DESCRIPTION: IngressClassName is the name of the IngressClass cluster resource. The associated IngressClass defines which controller will implement the resource. This replaces the deprecated `kubernetes.io/ingress.class` annotation. For backwards compatibility, when that annotation is set, it must be given precedence over this field. The controller may emit a warning if the field and annotation have different values. Implementations of this API should ignore Ingresses without a class specified. An IngressClass resource may be marked as default, which can be used to set a default value for this field. For more information, refer to the IngressClass documentation. The .spec.ingressClassName behavior has precedence over the deprecated kubernetes.io/ingress.class annotation. I have only one instance of the Ingress-NGINX controller in my cluster. What should I do ? \u00b6 If you have only one instance of the Ingress-NGINX controller running in your cluster, and you still want to use IngressClass, you should add the annotation ingressclass.kubernetes.io/is-default-class in your IngressClass, so that any new Ingress objects will have this one as default IngressClass. In this case, you need to make your controller aware of the objects. If you have any Ingress objects that don't yet have either the .spec.ingressClassName field set in their manifest, or the ingress annotation ( kubernetes.io/ingress.class ), then you should start your Ingress-NGINX controller with the flag --watch-ingress-without-class=true . You can configure your Helm chart installation's values file with .controller.watchIngressWithoutClass: true . We recommend that you create the IngressClass as shown below: --- apiVersion: networking.k8s.io/v1 kind: IngressClass metadata: labels: app.kubernetes.io/component: controller name: nginx annotations: ingressclass.kubernetes.io/is-default-class: \"true\" spec: controller: k8s.io/ingress-nginx And add the value \"spec.ingressClassName=nginx\" in your Ingress objects I have multiple ingress objects in my cluster. What should I do ? \u00b6 If you have lot of ingress objects without ingressClass configuration, you can run the ingress-controller with the flag --watch-ingress-without-class=true . What is the flag '--watch-ingress-without-class' ? \u00b6 Its a flag that is passed,as an argument, to the nginx-ingress-controller executable. In the configuration, it looks like this ; ... ... args: - /nginx-ingress-controller - --watch-ingress-without-class=true - --publish-service=$(POD_NAMESPACE)/ingress-nginx-dev-v1-test-controller - --election-id=ingress-controller-leader - --controller-class=k8s.io/ingress-nginx - --configmap=$(POD_NAMESPACE)/ingress-nginx-dev-v1-test-controller - --validating-webhook=:8443 - --validating-webhook-certificate=/usr/local/certificates/cert - --validating-webhook-key=/usr/local/certificates/key ... ... I have more than one controller in my cluster and already use the annotation ? \u00b6 No problem. This should still keep working, but we highly recommend you to test! 
Even though kubernetes.io/ingress.class is deprecated, the Ingress-NGINX controller still understands that annotation. If you want to follow good practice, you should consider migrating to use IngressClass and .spec.ingressClassName . I have more than one controller running in my cluster, and I want to use the new API ? \u00b6 In this scenario, you need to create multiple IngressClasses (see example one). But be aware that IngressClass works in a very specific way: you will need to change the .spec.controller value in your IngressClass and configure the controller to expect the exact same value. Let's see some example, supposing that you have three IngressClasses: IngressClass ingress-nginx-one , with .spec.controller equal to example.com/ingress-nginx1 IngressClass ingress-nginx-two , with .spec.controller equal to example.com/ingress-nginx2 IngressClass ingress-nginx-three , with .spec.controller equal to example.com/ingress-nginx1 (for private use, you can also use a controller name that doesn't contain a / ; for example: ingress-nginx1 ) When deploying your ingress controllers, you will have to change the --controller-class field as follows: Ingress-Nginx A, configured to use controller class name example.com/ingress-nginx1 Ingress-Nginx B, configured to use controller class name example.com/ingress-nginx2 Then, when you create an Ingress object with its ingressClassName set to ingress-nginx-two , only controllers looking for the example.com/ingress-nginx2 controller class pay attention to the new object. Given that Ingress-Nginx B is set up that way, it will serve that object, whereas Ingress-Nginx A ignores the new Ingress. Bear in mind that, if you start Ingress-Nginx B with the command line argument --watch-ingress-without-class=true , then it will serve: Ingresses without any ingressClassName set Ingresses where the the deprecated annotation ( kubernetes.io/ingress.class ) matches the value set in the command line argument --ingress-class Ingresses that refer to any IngressClass that has the same spec.controller as configured in --controller-class If you start Ingress-Nginx B with the command line argument --watch-ingress-without-class=true and you run Ingress-Nginx A with the command line argument --watch-ingress-without-class=false then this is a supported configuration. If you have two Ingress-NGINX controllers for the same cluster, both running with --watch-ingress-without-class=true then there is likely to be a conflict. I am seeing this error message in the logs of the Ingress-NGINX controller: \"ingress class annotation is not equal to the expected by Ingress Controller\". Why ? \u00b6 It is highly likely that you will also see the name of the ingress resource in the same error message. This error messsage has been observed on use the deprecated annotation ( kubernetes.io/ingress.class ) in a Ingress resource manifest. It is recommended to use the .spec.ingressClassName field of the Ingress resource, to specify the name of the IngressClass of the Ingress you are defining. How to easily install multiple instances of the ingress-NGINX controller in the same cluster ? \u00b6 Create a new namespace kubectl create namespace ingress-nginx-2 Use Helm to install the additional instance of the ingress controller Ensure you have Helm working (refer to the Helm documentation ) We have to assume that you have the helm repo for the ingress-NGINX controller already added to your Helm config. 
But, if you have not added the helm repo then you can do this to add the repo to your helm config; helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx Make sure you have updated the helm repo data; helm repo update Now, install an additional instance of the ingress-NGINX controller like this ; helm install ingress-nginx-2 ingress-nginx/ingress-nginx \\ --namespace ingress-nginx-2 \\ --set controller.ingressClassResource.name=nginx-two \\ --set controller.ingressClassResource.controllerValue=\"example.com/ingress-nginx-2\" \\ --set controller.ingressClassResource.enabled=true \\ --set controller.ingressClassByName=true If you need to install yet another instance, then repeat the procedure to create a new namespace, change the values such as names & namespaces (for example from \"-2\" to \"-3\"), or anything else that meets your needs.","title":"Welcome"},{"location":"#overview","text":"This is the documentation for the NGINX Ingress Controller. It is built around the Kubernetes Ingress resource , using a ConfigMap to store the controller configuration. You can learn more about using Ingress in the official Kubernetes documentation .","title":"Overview"},{"location":"#getting-started","text":"See Deployment for a whirlwind tour that will get you started.","title":"Getting Started"},{"location":"#faq-migration-to-apiversion-networkingk8siov1","text":"If you are using Ingress objects in your cluster (running Kubernetes older than v1.22), and you plan to upgrade to Kubernetess v1.22, this section is relevant to you. Please read this official blog on deprecated Ingress API versions Please read this official documentation on the IngressClass object","title":"FAQ - Migration to apiVersion networking.k8s.io/v1"},{"location":"#what-is-an-ingressclass-and-why-is-it-important-for-users-of-ingress-nginx-controller-now","text":"IngressClass is a Kubernetes resource. See the description below. Its important because until now, a default install of the Ingress-NGINX controller did not require any IngressClass object. From version 1.0.0 of the Ingress-NGINX Controller, an IngressClass object is required. On clusters with more than one instance of the Ingress-NGINX controller, all instances of the controllers must be aware of which Ingress objects they serve. The ingressClassName field of an Ingress is the way to let the controller know about that. kubectl explain ingressclass KIND: IngressClass VERSION: networking.k8s.io/v1 DESCRIPTION: IngressClass represents the class of the Ingress, referenced by the Ingress Spec. The `ingressclass.kubernetes.io/is-default-class` annotation can be used to indicate that an IngressClass should be considered default. When a single IngressClass resource has this annotation set to true, new Ingress resources without a class specified will be assigned this default class. FIELDS: apiVersion APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata Standard object's metadata. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec Spec is the desired state of the IngressClass. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status`","title":"What is an IngressClass and why is it important for users of Ingress-NGINX controller now ?"},{"location":"#what-has-caused-this-change-in-behaviour","text":"There are 2 reasons primarily. (Reason #1) Until K8s version 1.21, it was possible to create an Ingress resource using deprecated versions of the Ingress API, such as: extensions/v1beta1 networking.k8s.io/v1beta1 You would get a message about deprecation, but the Ingress resource would get created. From K8s version 1.22 onwards, you can only access the Ingress API via the stable, networking.k8s.io/v1 API. The reason is explained in the official blog on deprecated ingress API versions . (Reason #2) if you are already using the Ingress-NGINX controller and then upgrade to K8s version v1.22 , there are several scenarios where your existing Ingress objects will not work how you expect. Read this FAQ to check which scenario matches your use case.","title":"What has caused this change in behaviour ?"},{"location":"#what-is-ingressclassname-field","text":"ingressClassName is a field in the specs of an Ingress object. kubectl explain ingress.spec.ingressClassName KIND: Ingress VERSION: networking.k8s.io/v1 FIELD: ingressClassName DESCRIPTION: IngressClassName is the name of the IngressClass cluster resource. The associated IngressClass defines which controller will implement the resource. This replaces the deprecated `kubernetes.io/ingress.class` annotation. For backwards compatibility, when that annotation is set, it must be given precedence over this field. The controller may emit a warning if the field and annotation have different values. Implementations of this API should ignore Ingresses without a class specified. An IngressClass resource may be marked as default, which can be used to set a default value for this field. For more information, refer to the IngressClass documentation. The .spec.ingressClassName behavior has precedence over the deprecated kubernetes.io/ingress.class annotation.","title":"What is ingressClassName field ?"},{"location":"#i-have-only-one-instance-of-the-ingress-nginx-controller-in-my-cluster-what-should-i-do","text":"If you have only one instance of the Ingress-NGINX controller running in your cluster, and you still want to use IngressClass, you should add the annotation ingressclass.kubernetes.io/is-default-class in your IngressClass, so that any new Ingress objects will have this one as default IngressClass. In this case, you need to make your controller aware of the objects. If you have any Ingress objects that don't yet have either the .spec.ingressClassName field set in their manifest, or the ingress annotation ( kubernetes.io/ingress.class ), then you should start your Ingress-NGINX controller with the flag --watch-ingress-without-class=true . You can configure your Helm chart installation's values file with .controller.watchIngressWithoutClass: true . 
We recommend that you create the IngressClass as shown below: --- apiVersion: networking.k8s.io/v1 kind: IngressClass metadata: labels: app.kubernetes.io/component: controller name: nginx annotations: ingressclass.kubernetes.io/is-default-class: \"true\" spec: controller: k8s.io/ingress-nginx And add the value \"spec.ingressClassName=nginx\" in your Ingress objects","title":"I have only one instance of the Ingress-NGINX controller in my cluster. What should I do ?"},{"location":"#i-have-multiple-ingress-objects-in-my-cluster-what-should-i-do","text":"If you have lot of ingress objects without ingressClass configuration, you can run the ingress-controller with the flag --watch-ingress-without-class=true .","title":"I have multiple ingress objects in my cluster. What should I do ?"},{"location":"#what-is-the-flag-watch-ingress-without-class","text":"Its a flag that is passed,as an argument, to the nginx-ingress-controller executable. In the configuration, it looks like this ; ... ... args: - /nginx-ingress-controller - --watch-ingress-without-class=true - --publish-service=$(POD_NAMESPACE)/ingress-nginx-dev-v1-test-controller - --election-id=ingress-controller-leader - --controller-class=k8s.io/ingress-nginx - --configmap=$(POD_NAMESPACE)/ingress-nginx-dev-v1-test-controller - --validating-webhook=:8443 - --validating-webhook-certificate=/usr/local/certificates/cert - --validating-webhook-key=/usr/local/certificates/key ... ...","title":"What is the flag '--watch-ingress-without-class' ?"},{"location":"#i-have-more-than-one-controller-in-my-cluster-and-already-use-the-annotation","text":"No problem. This should still keep working, but we highly recommend you to test! Even though kubernetes.io/ingress.class is deprecated, the Ingress-NGINX controller still understands that annotation. If you want to follow good practice, you should consider migrating to use IngressClass and .spec.ingressClassName .","title":"I have more than one controller in my cluster and already use the annotation ?"},{"location":"#i-have-more-than-one-controller-running-in-my-cluster-and-i-want-to-use-the-new-api","text":"In this scenario, you need to create multiple IngressClasses (see example one). But be aware that IngressClass works in a very specific way: you will need to change the .spec.controller value in your IngressClass and configure the controller to expect the exact same value. Let's see some example, supposing that you have three IngressClasses: IngressClass ingress-nginx-one , with .spec.controller equal to example.com/ingress-nginx1 IngressClass ingress-nginx-two , with .spec.controller equal to example.com/ingress-nginx2 IngressClass ingress-nginx-three , with .spec.controller equal to example.com/ingress-nginx1 (for private use, you can also use a controller name that doesn't contain a / ; for example: ingress-nginx1 ) When deploying your ingress controllers, you will have to change the --controller-class field as follows: Ingress-Nginx A, configured to use controller class name example.com/ingress-nginx1 Ingress-Nginx B, configured to use controller class name example.com/ingress-nginx2 Then, when you create an Ingress object with its ingressClassName set to ingress-nginx-two , only controllers looking for the example.com/ingress-nginx2 controller class pay attention to the new object. Given that Ingress-Nginx B is set up that way, it will serve that object, whereas Ingress-Nginx A ignores the new Ingress. 
Bear in mind that, if you start Ingress-Nginx B with the command line argument --watch-ingress-without-class=true , then it will serve: Ingresses without any ingressClassName set Ingresses where the the deprecated annotation ( kubernetes.io/ingress.class ) matches the value set in the command line argument --ingress-class Ingresses that refer to any IngressClass that has the same spec.controller as configured in --controller-class If you start Ingress-Nginx B with the command line argument --watch-ingress-without-class=true and you run Ingress-Nginx A with the command line argument --watch-ingress-without-class=false then this is a supported configuration. If you have two Ingress-NGINX controllers for the same cluster, both running with --watch-ingress-without-class=true then there is likely to be a conflict.","title":"I have more than one controller running in my cluster, and I want to use the new API ?"},{"location":"#i-am-seeing-this-error-message-in-the-logs-of-the-ingress-nginx-controller-ingress-class-annotation-is-not-equal-to-the-expected-by-ingress-controller-why","text":"It is highly likely that you will also see the name of the ingress resource in the same error message. This error messsage has been observed on use the deprecated annotation ( kubernetes.io/ingress.class ) in a Ingress resource manifest. It is recommended to use the .spec.ingressClassName field of the Ingress resource, to specify the name of the IngressClass of the Ingress you are defining.","title":"I am seeing this error message in the logs of the Ingress-NGINX controller: \"ingress class annotation is not equal to the expected by Ingress Controller\". Why ?"},{"location":"#how-to-easily-install-multiple-instances-of-the-ingress-nginx-controller-in-the-same-cluster","text":"Create a new namespace kubectl create namespace ingress-nginx-2 Use Helm to install the additional instance of the ingress controller Ensure you have Helm working (refer to the Helm documentation ) We have to assume that you have the helm repo for the ingress-NGINX controller already added to your Helm config. 
But, if you have not added the helm repo then you can do this to add the repo to your helm config; helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx Make sure you have updated the helm repo data; helm repo update Now, install an additional instance of the ingress-NGINX controller like this ; helm install ingress-nginx-2 ingress-nginx/ingress-nginx \\ --namespace ingress-nginx-2 \\ --set controller.ingressClassResource.name=nginx-two \\ --set controller.ingressClassResource.controllerValue=\"example.com/ingress-nginx-2\" \\ --set controller.ingressClassResource.enabled=true \\ --set controller.ingressClassByName=true If you need to install yet another instance, then repeat the procedure to create a new namespace, change the values such as names & namespaces (for example from \"-2\" to \"-3\"), or anything else that meets your needs.","title":"How to easily install multiple instances of the ingress-NGINX controller in the same cluster ?"},{"location":"e2e-tests/","text":"e2e test suite for NGINX Ingress Controller \u00b6 [Default Backend] change default settings \u00b6 should apply the annotation to the default backend [Default Backend] \u00b6 should return 404 sending requests when only a default backend is running enables access logging for default backend disables access logging for default backend [Default Backend] custom service \u00b6 uses custom default backend that returns 200 as status code [Default Backend] SSL \u00b6 should return a self generated SSL certificate [TCP] tcp-services \u00b6 should expose a TCP service should expose an ExternalName TCP service auth-* \u00b6 should return status code 200 when no authentication is configured should return status code 503 when authentication is configured with an invalid secret should return status code 401 when authentication is configured but Authorization header is not configured should return status code 401 when authentication is configured and Authorization header is sent with invalid credentials should return status code 200 when authentication is configured and Authorization header is sent should return status code 200 when authentication is configured with a map and Authorization header is sent should return status code 401 when authentication is configured with invalid content and Authorization header is sent proxy_set_header My-Custom-Header 42; proxy_set_header My-Custom-Header 42; proxy_set_header 'My-Custom-Header' '42'; retains cookie set by external authentication server should return status code 200 when signed in should redirect to signin url when not signed in keeps processing new ingresses even if one of the existing ingresses is misconfigured should return status code 200 when signed in should redirect to signin url when not signed in keeps processing new ingresses even if one of the existing ingresses is misconfigured should return status code 200 when signed in after auth backend is deleted should deny login for different location on same server should deny login for different servers should redirect to signin url when not signed in affinitymode \u00b6 Balanced affinity mode should balance Check persistent affinity mode proxy-* \u00b6 should set proxy_redirect to off should set proxy_redirect to default should set proxy_redirect to hello.com goodbye.com should set proxy client-max-body-size to 8m should not set proxy client-max-body-size to incorrect value should set valid proxy timeouts should not set invalid proxy timeouts should turn on proxy-buffering should turn off 
proxy-request-buffering should build proxy next upstream should setup proxy cookies should change the default proxy HTTP version affinity session-cookie-name \u00b6 should set sticky cookie SERVERID should change cookie name on ingress definition change should set the path to /something on the generated cookie does not set the path to / on the generated cookie if there's more than one rule referring to the same backend should set cookie with expires should work with use-regex annotation and session-cookie-path should warn user when use-regex is true and session-cookie-path is not set should not set affinity across all server locations when using separate ingresses should set sticky cookie without host should work with server-alias annotation mirror-* \u00b6 should set mirror-target to http://localhost/mirror should set mirror-target to https://test.env.com/$request_uri should disable mirror-request-body canary-* \u00b6 should response with a 200 status from the mainline upstream when requests are made to the mainline ingress should return 404 status for requests to the canary if no matching ingress is found should return the correct status codes when endpoints are unavailable should route requests to the correct upstream if mainline ingress is created before the canary ingress should route requests to the correct upstream if mainline ingress is created after the canary ingress should route requests to the correct upstream if the mainline ingress is modified should route requests to the correct upstream if the canary ingress is modified should route requests to the correct upstream should route requests to the correct upstream should route requests to the correct upstream should route requests to the correct upstream should routes to mainline upstream when the given Regex causes error should route requests to the correct upstream should route requests to the correct upstream should route requests to the correct upstream should not use canary as a catch-all server should not use canary with domain as a server does not crash when canary ingress has multiple paths to the same non-matching backend limit-rate \u00b6 Check limit-rate annotation force-ssl-redirect \u00b6 should redirect to https http2-push-preload \u00b6 enable the http2-push-preload directive proxy-ssl-* \u00b6 should set valid proxy-ssl-secret should set valid proxy-ssl-secret, proxy-ssl-verify to on, proxy-ssl-verify-depth to 2, and proxy-ssl-server-name to on should set valid proxy-ssl-secret, proxy-ssl-ciphers to HIGH:!AES should set valid proxy-ssl-secret, proxy-ssl-protocols proxy-ssl-location-only flag should change the nginx config server part modsecurity owasp \u00b6 should enable modsecurity should enable modsecurity with transaction ID and OWASP rules should disable modsecurity should enable modsecurity with snippet should enable modsecurity without using 'modsecurity on;' should disable modsecurity using 'modsecurity off;' should enable modsecurity with snippet and block requests should enable modsecurity globally and with modsecurity-snippet block requests backend-protocol - GRPC \u00b6 should use grpc_pass in the configuration file should return OK for service with backend protocol GRPC should return OK for service with backend protocol GRPCS cors-* \u00b6 should enable cors should set cors methods to only allow POST, GET should set cors max-age should disable cors allow credentials should allow origin for cors should allow headers for cors should expose headers for cors influxdb-* \u00b6 should send the request 
metric to the influxdb server Annotation - limit-connections \u00b6 should limit-connections client-body-buffer-size \u00b6 should set client_body_buffer_size to 1000 should set client_body_buffer_size to 1K should set client_body_buffer_size to 1k should set client_body_buffer_size to 1m should set client_body_buffer_size to 1M should not set client_body_buffer_size to invalid 1b default-backend \u00b6 should use a custom default backend as upstream connection-proxy-header \u00b6 set connection header to keep-alive upstream-vhost \u00b6 set host to upstreamvhost.bar.com custom-http-errors \u00b6 configures Nginx correctly disable-access-log disable-http-access-log disable-stream-access-log \u00b6 disable-access-log set access_log off disable-http-access-log set access_log off server-snippet \u00b6 rewrite-target use-regex enable-rewrite-log \u00b6 should write rewrite logs should use correct longest path match should use ~* location modifier if regex annotation is present should fail to use longest match for documented warning should allow for custom rewrite parameters app-root \u00b6 should redirect to /foo whitelist-source-range \u00b6 should set valid ip whitelist range enable-access-log enable-rewrite-log \u00b6 set access_log off set rewrite_log on x-forwarded-prefix \u00b6 should set the X-Forwarded-Prefix to the annotation value should not add X-Forwarded-Prefix if the annotation value is empty configuration-snippet \u00b6 in all locations backend-protocol - FastCGI \u00b6 should use fastcgi_pass in the configuration file should add fastcgi_index in the configuration file should add fastcgi_param in the configuration file should return OK for service with backend protocol FastCGI from-to-www-redirect \u00b6 should redirect from www HTTP to HTTP should redirect from www HTTPS to HTTPS permanent-redirect permanent-redirect-code \u00b6 should respond with a standard redirect code should respond with a custom redirect code upstream-hash-by-* \u00b6 should connect to the same pod should connect to the same subset of pods annotation-global-rate-limit \u00b6 generates correct configuration backend-protocol \u00b6 should set backend protocol to https:// and use proxy_pass should set backend protocol to grpc:// and use grpc_pass should set backend protocol to grpcs:// and use grpc_pass should set backend protocol to '' and use fastcgi_pass should set backend protocol to '' and use ajp_pass satisfy \u00b6 should configure satisfy directive correctly should allow multiple auth with satisfy any server-alias \u00b6 should return status code 200 for host 'foo' and 404 for 'bar' should return status code 200 for host 'foo' and 'bar' should return status code 200 for hosts defined in two ingresses, different path with one alias ssl-ciphers \u00b6 should change ssl ciphers auth-tls-* \u00b6 should set valid auth-tls-secret should set valid auth-tls-secret, sslVerify to off, and sslVerifyDepth to 2 should set valid auth-tls-secret, pass certificate to upstream, and error page should validate auth-tls-verify-client [Status] status update \u00b6 should update status field after client-go reconnection Debug CLI \u00b6 should list the backend servers should get information for a specific backend server should produce valid JSON for /dbg general [Memory Leak] Dynamic Certificates \u00b6 should not leak memory from ingress SSL certificates or configuration updates [Ingress] [PathType] mix Exact and Prefix paths \u00b6 should choose the correct location [Ingress] definition without host \u00b6 should set 
ingress details variables for ingresses without a host should set ingress details variables for ingresses with host without IngressRuleValue, only Backend single ingress - multiple hosts \u00b6 should set the correct $service_name NGINX variable [Ingress] [PathType] exact \u00b6 should choose exact location for /exact [Ingress] [PathType] prefix checks \u00b6 should return 404 when prefix /aaa does not match request /aaaccc [Security] request smuggling \u00b6 should not return body content from error_page [SSL] [Flag] default-ssl-certificate \u00b6 uses default ssl certificate for catch-all ingress uses default ssl certificate for host based ingress when configured certificate does not match host enable-real-ip \u00b6 trusts X-Forwarded-For header only when setting is true should not trust X-Forwarded-For header when setting is false access-log \u00b6 use the default configuration use the specified configuration use the specified configuration use the specified configuration use the specified configuration [Lua] lua-shared-dicts \u00b6 configures lua shared dicts server-tokens \u00b6 should not exists Server header in the response should exists Server header in the response when is enabled use-proxy-protocol \u00b6 should respect port passed by the PROXY Protocol should respect proto passed by the PROXY Protocol server port should enable PROXY Protocol for HTTPS should enable PROXY Protocol for TCP [Flag] custom HTTP and HTTPS ports \u00b6 should set X-Forwarded-Port headers accordingly when listening on a non-default HTTP port should set X-Forwarded-Port header to 443 should set the X-Forwarded-Port header to 443 [Security] no-auth-locations \u00b6 should return status code 401 when accessing '/' unauthentication should return status code 200 when accessing '/' authentication should return status code 200 when accessing '/noauth' unauthenticated Dynamic $proxy_host \u00b6 should exist a proxy_host should exist a proxy_host using the upstream-vhost annotation value proxy-connect-timeout \u00b6 should set valid proxy timeouts using configmap values should not set invalid proxy timeouts using configmap values [Security] Pod Security Policies \u00b6 should be running with a Pod Security Policy Geoip2 \u00b6 should only allow requests from specific countries [Security] Pod Security Policies with volumes \u00b6 should be running with a Pod Security Policy enable-multi-accept \u00b6 should be enabled by default should be enabled when set to true should be disabled when set to false log-format-* \u00b6 should disable the log-format-escape-json by default should enable the log-format-escape-json should disable the log-format-escape-json log-format-escape-json enabled log-format-escape-json disabled [Flag] ingress-class \u00b6 should ignore Ingress with class should ignore Ingress with no class should delete Ingress when class is removed check scenarios for IngressClass and ingress.class annotation ssl-ciphers \u00b6 Add ssl ciphers proxy-next-upstream \u00b6 should build proxy next upstream using configmap values [Security] global-auth-url \u00b6 should return status code 401 when request any protected service should return status code 200 when request whitelisted (via no-auth-locations) service and 401 when request protected service should return status code 200 when request whitelisted (via ingress annotation) service and 401 when request protected service should still return status code 200 after auth backend is deleted using cache [Security] block-* \u00b6 should block CIDRs defined in the 
ConfigMap should block User-Agents defined in the ConfigMap should block Referers defined in the ConfigMap plugins \u00b6 should exist a x-hello-world header Configmap - limit-rate \u00b6 Check limit-rate config Configure OpenTracing \u00b6 should not exists opentracing directive should exists opentracing directive when is enabled should not exists opentracing_operation_name directive when is empty should exists opentracing_operation_name directive when is configured should not exists opentracing_location_operation_name directive when is empty should exists opentracing_location_operation_name directive when is configured should enable opentracing using zipkin should enable opentracing using jaeger should enable opentracing using jaeger with sampler host should propagate the w3c header when configured with jaeger should enable opentracing using datadog use-forwarded-headers \u00b6 should trust X-Forwarded headers when setting is true should not trust X-Forwarded headers when setting is false proxy-send-timeout \u00b6 should set valid proxy send timeouts using configmap values should not set invalid proxy send timeouts using configmap values Add no tls redirect locations \u00b6 Check no tls redirect locations config settings-global-rate-limit \u00b6 generates correct NGINX configuration add-headers \u00b6 Add a custom header Add multiple custom headers hash size \u00b6 should set server_names_hash_bucket_size should set server_names_hash_max_size should set proxy-headers-hash-bucket-size should set proxy-headers-hash-max-size should set variables-hash-bucket-size should set variables-hash-max-size should set vmap-hash-bucket-size keep-alive keep-alive-requests \u00b6 should set keepalive_timeout should set keepalive_requests should set keepalive connection to upstream server should set keep alive connection timeout to upstream server should set the request count to upstream server through one keep alive connection [Flag] disable-catch-all \u00b6 should ignore catch all Ingress should delete Ingress updated to catch-all should allow Ingress with both a default backend and rules main-snippet \u00b6 should add value of main-snippet setting to nginx config [SSL] TLS protocols, ciphers and headers) \u00b6 setting cipher suite enforcing TLS v1.0 setting max-age parameter setting includeSubDomains parameter setting preload parameter overriding what's set from the upstream should not use ports during the HTTP to HTTPS redirection should not use ports or X-Forwarded-Host during the HTTP to HTTPS redirection Configmap change \u00b6 should reload after an update in the configuration proxy-read-timeout \u00b6 should set valid proxy read timeouts using configmap values should not set invalid proxy read timeouts using configmap values [Security] modsecurity-snippet \u00b6 should add value of modsecurity-snippet setting to nginx config OCSP \u00b6 should enable OCSP and contain stapling information in the connection reuse-port \u00b6 reuse port should be enabled by default reuse port should be disabled reuse port should be enabled [Shutdown] Graceful shutdown with pending request \u00b6 should let slow requests finish before shutting down [Shutdown] ingress controller \u00b6 should shutdown in less than 60 secons without pending connections should shutdown after waiting 60 seconds for pending connections to be closed should shutdown after waiting 150 seconds for pending connections to be closed [Service] backend status code 503 \u00b6 should return 503 when backend service does not exist should return 503 
when all backend service endpoints are unavailable [Service] Type ExternalName \u00b6 works with external name set to incomplete fqdn should return 200 for service type=ExternalName without a port defined should return 200 for service type=ExternalName with a port defined should return status 502 for service type=ExternalName with an invalid host should return 200 for service type=ExternalName using a port name should return 200 for service type=ExternalName using FQDN with trailing dot should update the external name after a service update","title":"e2e test suite for [NGINX Ingress Controller](https://github.com/kubernetes/ingress-nginx/tree/main/)"},{"location":"e2e-tests/#e2e-test-suite-for-nginx-ingress-controller","text":"","title":"e2e test suite for NGINX Ingress Controller"},{"location":"e2e-tests/#default-backend-change-default-settings","text":"should apply the annotation to the default backend","title":"[Default Backend] change default settings"},{"location":"e2e-tests/#default-backend","text":"should return 404 sending requests when only a default backend is running enables access logging for default backend disables access logging for default backend","title":"[Default Backend]"},{"location":"e2e-tests/#default-backend-custom-service","text":"uses custom default backend that returns 200 as status code","title":"[Default Backend] custom service"},{"location":"e2e-tests/#default-backend-ssl","text":"should return a self generated SSL certificate","title":"[Default Backend] SSL"},{"location":"e2e-tests/#tcp-tcp-services","text":"should expose a TCP service should expose an ExternalName TCP service","title":"[TCP] tcp-services"},{"location":"e2e-tests/#auth-","text":"should return status code 200 when no authentication is configured should return status code 503 when authentication is configured with an invalid secret should return status code 401 when authentication is configured but Authorization header is not configured should return status code 401 when authentication is configured and Authorization header is sent with invalid credentials should return status code 200 when authentication is configured and Authorization header is sent should return status code 200 when authentication is configured with a map and Authorization header is sent should return status code 401 when authentication is configured with invalid content and Authorization header is sent proxy_set_header My-Custom-Header 42; proxy_set_header My-Custom-Header 42; proxy_set_header 'My-Custom-Header' '42'; retains cookie set by external authentication server should return status code 200 when signed in should redirect to signin url when not signed in keeps processing new ingresses even if one of the existing ingresses is misconfigured should return status code 200 when signed in should redirect to signin url when not signed in keeps processing new ingresses even if one of the existing ingresses is misconfigured should return status code 200 when signed in after auth backend is deleted should deny login for different location on same server should deny login for different servers should redirect to signin url when not signed in","title":"auth-*"},{"location":"e2e-tests/#affinitymode","text":"Balanced affinity mode should balance Check persistent affinity mode","title":"affinitymode"},{"location":"e2e-tests/#proxy-","text":"should set proxy_redirect to off should set proxy_redirect to default should set proxy_redirect to hello.com goodbye.com should set proxy client-max-body-size to 8m should not set proxy 
client-max-body-size to incorrect value should set valid proxy timeouts should not set invalid proxy timeouts should turn on proxy-buffering should turn off proxy-request-buffering should build proxy next upstream should setup proxy cookies should change the default proxy HTTP version","title":"proxy-*"},{"location":"e2e-tests/#affinity-session-cookie-name","text":"should set sticky cookie SERVERID should change cookie name on ingress definition change should set the path to /something on the generated cookie does not set the path to / on the generated cookie if there's more than one rule referring to the same backend should set cookie with expires should work with use-regex annotation and session-cookie-path should warn user when use-regex is true and session-cookie-path is not set should not set affinity across all server locations when using separate ingresses should set sticky cookie without host should work with server-alias annotation","title":"affinity session-cookie-name"},{"location":"e2e-tests/#mirror-","text":"should set mirror-target to http://localhost/mirror should set mirror-target to https://test.env.com/$request_uri should disable mirror-request-body","title":"mirror-*"},{"location":"e2e-tests/#canary-","text":"should response with a 200 status from the mainline upstream when requests are made to the mainline ingress should return 404 status for requests to the canary if no matching ingress is found should return the correct status codes when endpoints are unavailable should route requests to the correct upstream if mainline ingress is created before the canary ingress should route requests to the correct upstream if mainline ingress is created after the canary ingress should route requests to the correct upstream if the mainline ingress is modified should route requests to the correct upstream if the canary ingress is modified should route requests to the correct upstream should route requests to the correct upstream should route requests to the correct upstream should route requests to the correct upstream should routes to mainline upstream when the given Regex causes error should route requests to the correct upstream should route requests to the correct upstream should route requests to the correct upstream should not use canary as a catch-all server should not use canary with domain as a server does not crash when canary ingress has multiple paths to the same non-matching backend","title":"canary-*"},{"location":"e2e-tests/#limit-rate","text":"Check limit-rate annotation","title":"limit-rate"},{"location":"e2e-tests/#force-ssl-redirect","text":"should redirect to https","title":"force-ssl-redirect"},{"location":"e2e-tests/#http2-push-preload","text":"enable the http2-push-preload directive","title":"http2-push-preload"},{"location":"e2e-tests/#proxy-ssl-","text":"should set valid proxy-ssl-secret should set valid proxy-ssl-secret, proxy-ssl-verify to on, proxy-ssl-verify-depth to 2, and proxy-ssl-server-name to on should set valid proxy-ssl-secret, proxy-ssl-ciphers to HIGH:!AES should set valid proxy-ssl-secret, proxy-ssl-protocols proxy-ssl-location-only flag should change the nginx config server part","title":"proxy-ssl-*"},{"location":"e2e-tests/#modsecurity-owasp","text":"should enable modsecurity should enable modsecurity with transaction ID and OWASP rules should disable modsecurity should enable modsecurity with snippet should enable modsecurity without using 'modsecurity on;' should disable modsecurity using 'modsecurity off;' should enable modsecurity with 
snippet and block requests should enable modsecurity globally and with modsecurity-snippet block requests","title":"modsecurity owasp"},{"location":"e2e-tests/#backend-protocol-grpc","text":"should use grpc_pass in the configuration file should return OK for service with backend protocol GRPC should return OK for service with backend protocol GRPCS","title":"backend-protocol - GRPC"},{"location":"e2e-tests/#cors-","text":"should enable cors should set cors methods to only allow POST, GET should set cors max-age should disable cors allow credentials should allow origin for cors should allow headers for cors should expose headers for cors","title":"cors-*"},{"location":"e2e-tests/#influxdb-","text":"should send the request metric to the influxdb server","title":"influxdb-*"},{"location":"e2e-tests/#annotation-limit-connections","text":"should limit-connections","title":"Annotation - limit-connections"},{"location":"e2e-tests/#client-body-buffer-size","text":"should set client_body_buffer_size to 1000 should set client_body_buffer_size to 1K should set client_body_buffer_size to 1k should set client_body_buffer_size to 1m should set client_body_buffer_size to 1M should not set client_body_buffer_size to invalid 1b","title":"client-body-buffer-size"},{"location":"e2e-tests/#default-backend_1","text":"should use a custom default backend as upstream","title":"default-backend"},{"location":"e2e-tests/#connection-proxy-header","text":"set connection header to keep-alive","title":"connection-proxy-header"},{"location":"e2e-tests/#upstream-vhost","text":"set host to upstreamvhost.bar.com","title":"upstream-vhost"},{"location":"e2e-tests/#custom-http-errors","text":"configures Nginx correctly","title":"custom-http-errors"},{"location":"e2e-tests/#disable-access-log-disable-http-access-log-disable-stream-access-log","text":"disable-access-log set access_log off disable-http-access-log set access_log off","title":"disable-access-log disable-http-access-log disable-stream-access-log"},{"location":"e2e-tests/#server-snippet","text":"","title":"server-snippet"},{"location":"e2e-tests/#rewrite-target-use-regex-enable-rewrite-log","text":"should write rewrite logs should use correct longest path match should use ~* location modifier if regex annotation is present should fail to use longest match for documented warning should allow for custom rewrite parameters","title":"rewrite-target use-regex enable-rewrite-log"},{"location":"e2e-tests/#app-root","text":"should redirect to /foo","title":"app-root"},{"location":"e2e-tests/#whitelist-source-range","text":"should set valid ip whitelist range","title":"whitelist-source-range"},{"location":"e2e-tests/#enable-access-log-enable-rewrite-log","text":"set access_log off set rewrite_log on","title":"enable-access-log enable-rewrite-log"},{"location":"e2e-tests/#x-forwarded-prefix","text":"should set the X-Forwarded-Prefix to the annotation value should not add X-Forwarded-Prefix if the annotation value is empty","title":"x-forwarded-prefix"},{"location":"e2e-tests/#configuration-snippet","text":"in all locations","title":"configuration-snippet"},{"location":"e2e-tests/#backend-protocol-fastcgi","text":"should use fastcgi_pass in the configuration file should add fastcgi_index in the configuration file should add fastcgi_param in the configuration file should return OK for service with backend protocol FastCGI","title":"backend-protocol - FastCGI"},{"location":"e2e-tests/#from-to-www-redirect","text":"should redirect from www HTTP to HTTP should redirect from www 
HTTPS to HTTPS","title":"from-to-www-redirect"},{"location":"e2e-tests/#permanent-redirect-permanent-redirect-code","text":"should respond with a standard redirect code should respond with a custom redirect code","title":"permanent-redirect permanent-redirect-code"},{"location":"e2e-tests/#upstream-hash-by-","text":"should connect to the same pod should connect to the same subset of pods","title":"upstream-hash-by-*"},{"location":"e2e-tests/#annotation-global-rate-limit","text":"generates correct configuration","title":"annotation-global-rate-limit"},{"location":"e2e-tests/#backend-protocol","text":"should set backend protocol to https:// and use proxy_pass should set backend protocol to grpc:// and use grpc_pass should set backend protocol to grpcs:// and use grpc_pass should set backend protocol to '' and use fastcgi_pass should set backend protocol to '' and use ajp_pass","title":"backend-protocol"},{"location":"e2e-tests/#satisfy","text":"should configure satisfy directive correctly should allow multiple auth with satisfy any","title":"satisfy"},{"location":"e2e-tests/#server-alias","text":"should return status code 200 for host 'foo' and 404 for 'bar' should return status code 200 for host 'foo' and 'bar' should return status code 200 for hosts defined in two ingresses, different path with one alias","title":"server-alias"},{"location":"e2e-tests/#ssl-ciphers","text":"should change ssl ciphers","title":"ssl-ciphers"},{"location":"e2e-tests/#auth-tls-","text":"should set valid auth-tls-secret should set valid auth-tls-secret, sslVerify to off, and sslVerifyDepth to 2 should set valid auth-tls-secret, pass certificate to upstream, and error page should validate auth-tls-verify-client","title":"auth-tls-*"},{"location":"e2e-tests/#status-status-update","text":"should update status field after client-go reconnection","title":"[Status] status update"},{"location":"e2e-tests/#debug-cli","text":"should list the backend servers should get information for a specific backend server should produce valid JSON for /dbg general","title":"Debug CLI"},{"location":"e2e-tests/#memory-leak-dynamic-certificates","text":"should not leak memory from ingress SSL certificates or configuration updates","title":"[Memory Leak] Dynamic Certificates"},{"location":"e2e-tests/#ingress-pathtype-mix-exact-and-prefix-paths","text":"should choose the correct location","title":"[Ingress] [PathType] mix Exact and Prefix paths"},{"location":"e2e-tests/#ingress-definition-without-host","text":"should set ingress details variables for ingresses without a host should set ingress details variables for ingresses with host without IngressRuleValue, only Backend","title":"[Ingress] definition without host"},{"location":"e2e-tests/#single-ingress-multiple-hosts","text":"should set the correct $service_name NGINX variable","title":"single ingress - multiple hosts"},{"location":"e2e-tests/#ingress-pathtype-exact","text":"should choose exact location for /exact","title":"[Ingress] [PathType] exact"},{"location":"e2e-tests/#ingress-pathtype-prefix-checks","text":"should return 404 when prefix /aaa does not match request /aaaccc","title":"[Ingress] [PathType] prefix checks"},{"location":"e2e-tests/#security-request-smuggling","text":"should not return body content from error_page","title":"[Security] request smuggling"},{"location":"e2e-tests/#ssl-flag-default-ssl-certificate","text":"uses default ssl certificate for catch-all ingress uses default ssl certificate for host based ingress when configured certificate does not match 
host","title":"[SSL] [Flag] default-ssl-certificate"},{"location":"e2e-tests/#enable-real-ip","text":"trusts X-Forwarded-For header only when setting is true should not trust X-Forwarded-For header when setting is false","title":"enable-real-ip"},{"location":"e2e-tests/#access-log","text":"use the default configuration use the specified configuration use the specified configuration use the specified configuration use the specified configuration","title":"access-log"},{"location":"e2e-tests/#lua-lua-shared-dicts","text":"configures lua shared dicts","title":"[Lua] lua-shared-dicts"},{"location":"e2e-tests/#server-tokens","text":"should not exists Server header in the response should exists Server header in the response when is enabled","title":"server-tokens"},{"location":"e2e-tests/#use-proxy-protocol","text":"should respect port passed by the PROXY Protocol should respect proto passed by the PROXY Protocol server port should enable PROXY Protocol for HTTPS should enable PROXY Protocol for TCP","title":"use-proxy-protocol"},{"location":"e2e-tests/#flag-custom-http-and-https-ports","text":"should set X-Forwarded-Port headers accordingly when listening on a non-default HTTP port should set X-Forwarded-Port header to 443 should set the X-Forwarded-Port header to 443","title":"[Flag] custom HTTP and HTTPS ports"},{"location":"e2e-tests/#security-no-auth-locations","text":"should return status code 401 when accessing '/' unauthentication should return status code 200 when accessing '/' authentication should return status code 200 when accessing '/noauth' unauthenticated","title":"[Security] no-auth-locations"},{"location":"e2e-tests/#dynamic-proxy_host","text":"should exist a proxy_host should exist a proxy_host using the upstream-vhost annotation value","title":"Dynamic $proxy_host"},{"location":"e2e-tests/#proxy-connect-timeout","text":"should set valid proxy timeouts using configmap values should not set invalid proxy timeouts using configmap values","title":"proxy-connect-timeout"},{"location":"e2e-tests/#security-pod-security-policies","text":"should be running with a Pod Security Policy","title":"[Security] Pod Security Policies"},{"location":"e2e-tests/#geoip2","text":"should only allow requests from specific countries","title":"Geoip2"},{"location":"e2e-tests/#security-pod-security-policies-with-volumes","text":"should be running with a Pod Security Policy","title":"[Security] Pod Security Policies with volumes"},{"location":"e2e-tests/#enable-multi-accept","text":"should be enabled by default should be enabled when set to true should be disabled when set to false","title":"enable-multi-accept"},{"location":"e2e-tests/#log-format-","text":"should disable the log-format-escape-json by default should enable the log-format-escape-json should disable the log-format-escape-json log-format-escape-json enabled log-format-escape-json disabled","title":"log-format-*"},{"location":"e2e-tests/#flag-ingress-class","text":"should ignore Ingress with class should ignore Ingress with no class should delete Ingress when class is removed check scenarios for IngressClass and ingress.class annotation","title":"[Flag] ingress-class"},{"location":"e2e-tests/#ssl-ciphers_1","text":"Add ssl ciphers","title":"ssl-ciphers"},{"location":"e2e-tests/#proxy-next-upstream","text":"should build proxy next upstream using configmap values","title":"proxy-next-upstream"},{"location":"e2e-tests/#security-global-auth-url","text":"should return status code 401 when request any protected service should return status code 
200 when request whitelisted (via no-auth-locations) service and 401 when request protected service should return status code 200 when request whitelisted (via ingress annotation) service and 401 when request protected service should still return status code 200 after auth backend is deleted using cache","title":"[Security] global-auth-url"},{"location":"e2e-tests/#security-block-","text":"should block CIDRs defined in the ConfigMap should block User-Agents defined in the ConfigMap should block Referers defined in the ConfigMap","title":"[Security] block-*"},{"location":"e2e-tests/#plugins","text":"should exist a x-hello-world header","title":"plugins"},{"location":"e2e-tests/#configmap-limit-rate","text":"Check limit-rate config","title":"Configmap - limit-rate"},{"location":"e2e-tests/#configure-opentracing","text":"should not exists opentracing directive should exists opentracing directive when is enabled should not exists opentracing_operation_name directive when is empty should exists opentracing_operation_name directive when is configured should not exists opentracing_location_operation_name directive when is empty should exists opentracing_location_operation_name directive when is configured should enable opentracing using zipkin should enable opentracing using jaeger should enable opentracing using jaeger with sampler host should propagate the w3c header when configured with jaeger should enable opentracing using datadog","title":"Configure OpenTracing"},{"location":"e2e-tests/#use-forwarded-headers","text":"should trust X-Forwarded headers when setting is true should not trust X-Forwarded headers when setting is false","title":"use-forwarded-headers"},{"location":"e2e-tests/#proxy-send-timeout","text":"should set valid proxy send timeouts using configmap values should not set invalid proxy send timeouts using configmap values","title":"proxy-send-timeout"},{"location":"e2e-tests/#add-no-tls-redirect-locations","text":"Check no tls redirect locations config","title":"Add no tls redirect locations"},{"location":"e2e-tests/#settings-global-rate-limit","text":"generates correct NGINX configuration","title":"settings-global-rate-limit"},{"location":"e2e-tests/#add-headers","text":"Add a custom header Add multiple custom headers","title":"add-headers"},{"location":"e2e-tests/#hash-size","text":"should set server_names_hash_bucket_size should set server_names_hash_max_size should set proxy-headers-hash-bucket-size should set proxy-headers-hash-max-size should set variables-hash-bucket-size should set variables-hash-max-size should set vmap-hash-bucket-size","title":"hash size"},{"location":"e2e-tests/#keep-alive-keep-alive-requests","text":"should set keepalive_timeout should set keepalive_requests should set keepalive connection to upstream server should set keep alive connection timeout to upstream server should set the request count to upstream server through one keep alive connection","title":"keep-alive keep-alive-requests"},{"location":"e2e-tests/#flag-disable-catch-all","text":"should ignore catch all Ingress should delete Ingress updated to catch-all should allow Ingress with both a default backend and rules","title":"[Flag] disable-catch-all"},{"location":"e2e-tests/#main-snippet","text":"should add value of main-snippet setting to nginx config","title":"main-snippet"},{"location":"e2e-tests/#ssl-tls-protocols-ciphers-and-headers","text":"setting cipher suite enforcing TLS v1.0 setting max-age parameter setting includeSubDomains parameter setting preload parameter overriding 
what's set from the upstream should not use ports during the HTTP to HTTPS redirection should not use ports or X-Forwarded-Host during the HTTP to HTTPS redirection","title":"[SSL] TLS protocols, ciphers and headers)"},{"location":"e2e-tests/#configmap-change","text":"should reload after an update in the configuration","title":"Configmap change"},{"location":"e2e-tests/#proxy-read-timeout","text":"should set valid proxy read timeouts using configmap values should not set invalid proxy read timeouts using configmap values","title":"proxy-read-timeout"},{"location":"e2e-tests/#security-modsecurity-snippet","text":"should add value of modsecurity-snippet setting to nginx config","title":"[Security] modsecurity-snippet"},{"location":"e2e-tests/#ocsp","text":"should enable OCSP and contain stapling information in the connection","title":"OCSP"},{"location":"e2e-tests/#reuse-port","text":"reuse port should be enabled by default reuse port should be disabled reuse port should be enabled","title":"reuse-port"},{"location":"e2e-tests/#shutdown-graceful-shutdown-with-pending-request","text":"should let slow requests finish before shutting down","title":"[Shutdown] Graceful shutdown with pending request"},{"location":"e2e-tests/#shutdown-ingress-controller","text":"should shutdown in less than 60 secons without pending connections should shutdown after waiting 60 seconds for pending connections to be closed should shutdown after waiting 150 seconds for pending connections to be closed","title":"[Shutdown] ingress controller"},{"location":"e2e-tests/#service-backend-status-code-503","text":"should return 503 when backend service does not exist should return 503 when all backend service endpoints are unavailable","title":"[Service] backend status code 503"},{"location":"e2e-tests/#service-type-externalname","text":"works with external name set to incomplete fqdn should return 200 for service type=ExternalName without a port defined should return 200 for service type=ExternalName with a port defined should return status 502 for service type=ExternalName with an invalid host should return 200 for service type=ExternalName using a port name should return 200 for service type=ExternalName using FQDN with trailing dot should update the external name after a service update","title":"[Service] Type ExternalName"},{"location":"how-it-works/","text":"How it works \u00b6 The objective of this document is to explain how the NGINX Ingress controller works, in particular how the NGINX model is built and why we need one. NGINX configuration \u00b6 The goal of this Ingress controller is the assembly of a configuration file (nginx.conf). The main implication of this requirement is the need to reload NGINX after any change in the configuration file. Though it is important to note that we don't reload Nginx on changes that impact only an upstream configuration (i.e Endpoints change when you deploy your app) . We use lua-nginx-module to achieve this. Check below to learn more about how it's done. NGINX model \u00b6 Usually, a Kubernetes Controller utilizes the synchronization loop pattern to check if the desired state in the controller is updated or a change is required. To this purpose, we need to build a model using different objects from the cluster, in particular (in no special order) Ingresses, Services, Endpoints, Secrets, and Configmaps to generate a point in time configuration file that reflects the state of the cluster. 
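If you want to see what that assembled configuration file looks like at a given moment, it can be read directly from a running controller pod (a minimal sketch; the ingress-nginx namespace and the pod name are placeholders for your own deployment):

# Find the controller pod, then dump the nginx.conf the controller generated
kubectl get pods -n ingress-nginx
kubectl exec -n ingress-nginx <controller-pod-name> -- cat /etc/nginx/nginx.conf

The same information is also exposed per host by the kubectl ingress-nginx conf plugin subcommand described later in this document.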
To get these objects from the cluster, we use Kubernetes Informers, in particular the FilteredSharedInformer. These informers allow us to react to changes through callbacks that fire when an object is added, modified or removed. Unfortunately, there is no way to know whether a particular change is going to affect the final configuration file. Therefore, on every change we have to rebuild a new model from scratch based on the state of the cluster and compare it to the current model. If the new model equals the current one, we avoid generating a new NGINX configuration and triggering a reload. Otherwise, we check whether the difference is only about Endpoints. If so, we send the new list of Endpoints to a Lua handler running inside Nginx using an HTTP POST request and again avoid generating a new NGINX configuration and triggering a reload. If the difference between the running and new models is about more than just Endpoints, we create a new NGINX configuration based on the new model, replace the current model and trigger a reload. One of the uses of the model is to avoid unnecessary reloads when there is no change in the state and to detect conflicts in definitions. The final representation of the NGINX configuration is generated from a Go template using the new model as input for the variables required by the template. Building the NGINX model: Building a model is an expensive operation; for this reason, the use of the synchronization loop is a must. By using a work queue it is possible to avoid losing changes and to remove the use of sync.Mutex to force a single execution of the sync loop; it also makes it possible to create a time window between the start and end of the sync loop that allows us to discard unnecessary updates. It is important to understand that any change in the cluster could generate events that the informer will send to the controller; this is one of the reasons for the work queue. Operations to build the model: Order Ingress rules by the CreationTimestamp field, i.e., old rules first. If the same path for the same host is defined in more than one Ingress, the oldest rule wins. If more than one Ingress contains a TLS section for the same host, the oldest rule wins. If multiple Ingresses define an annotation that affects the configuration of the Server block, the oldest rule wins. Create a list of NGINX Servers (per hostname). Create a list of NGINX Upstreams. If multiple Ingresses define different paths for the same host, the ingress controller merges the definitions, as illustrated below. Annotations are applied to all the paths in the Ingress. Multiple Ingresses can define different annotations; these definitions are not shared between Ingresses.
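For example, the two Ingress objects below both define rules for the host cafe.com (a minimal sketch; the names cafe.com, tea-svc and coffee-svc are reused from the troubleshooting example later in this document and are illustrative only). The controller merges them into a single NGINX server block with the /tea and /coffee locations; if both objects declared the same path, the one with the older CreationTimestamp would win.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-tea            # hypothetical name
  namespace: default
spec:
  rules:
  - host: cafe.com
    http:
      paths:
      - path: /tea
        pathType: Prefix
        backend:
          service:
            name: tea-svc     # existing Service assumed
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-coffee         # hypothetical name
  namespace: default
spec:
  rules:
  - host: cafe.com
    http:
      paths:
      - path: /coffee
        pathType: Prefix
        backend:
          service:
            name: coffee-svc  # existing Service assumed
            port:
              number: 80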
When a reload is required: The next list describes the scenarios in which a reload is required: A new Ingress resource is created. A TLS section is added to an existing Ingress. A change in Ingress annotations that impacts more than just the upstream configuration; for instance, the load-balance annotation does not require a reload. A path is added to or removed from an Ingress. An Ingress, Service or Secret is removed. A missing object referenced by the Ingress becomes available, like a Service or Secret. A Secret is updated. Avoiding reloads: In some cases it is possible to avoid reloads, in particular when there is a change in the endpoints, i.e., a pod is started or replaced. It is out of the scope of this Ingress controller to remove reloads completely; that would require an incredible amount of work and at some point makes no sense. This could change only if NGINX changes the way new configurations are read, so that new configurations no longer replace the worker processes. Avoiding reloads on Endpoints changes: On every endpoint change the controller fetches endpoints from all the services it sees and generates corresponding Backend objects. It then sends these objects to a Lua handler running inside Nginx. The Lua code in turn stores those backends in a shared memory zone. Then, for every request, Lua code running in the balancer_by_lua context determines which endpoints it should choose the upstream peer from and applies the configured load balancing algorithm to pick the peer. Nginx then takes care of the rest. This way we avoid reloading Nginx on endpoint changes. Note that this also covers annotation changes that affect only the upstream configuration in Nginx. In a relatively big cluster with frequently deployed apps, this feature saves a significant number of Nginx reloads, which could otherwise affect response latency, load balancing quality (after every reload Nginx resets the state of load balancing) and so on. Avoiding outage from wrong configuration: Because the ingress controller works using the synchronization loop pattern, it applies the configuration to all matching objects. If some Ingress objects have a broken configuration, for example a syntax error in the nginx.ingress.kubernetes.io/configuration-snippet annotation, the generated configuration becomes invalid, NGINX does not reload, and no further ingresses are taken into account. To prevent this situation from happening, the nginx ingress controller optionally exposes a validating admission webhook server to ensure the validity of incoming ingress objects. This webhook appends the incoming ingress objects to the list of ingresses, generates the configuration and calls nginx to ensure the configuration has no syntax errors.","title":"How it works"},{"location":"how-it-works/#how-it-works","text":"The objective of this document is to explain how the NGINX Ingress controller works, in particular how the NGINX model is built and why we need one.","title":"How it works"},{"location":"how-it-works/#nginx-configuration","text":"The goal of this Ingress controller is the assembly of a configuration file (nginx.conf). The main implication of this requirement is the need to reload NGINX after any change in the configuration file. Though it is important to note that we don't reload Nginx on changes that impact only an upstream configuration (i.e Endpoints change when you deploy your app) . We use lua-nginx-module to achieve this. Check below to learn more about how it's done.","title":"NGINX configuration"},{"location":"how-it-works/#nginx-model","text":"Usually, a Kubernetes Controller utilizes the synchronization loop pattern to check if the desired state in the controller is updated or a change is required. To this purpose, we need to build a model using different objects from the cluster, in particular (in no special order) Ingresses, Services, Endpoints, Secrets, and Configmaps to generate a point in time configuration file that reflects the state of the cluster. To get this object from the cluster, we use Kubernetes Informers , in particular, FilteredSharedInformer . This informers allows reacting to changes in using callbacks to individual changes when a new object is added, modified or removed. Unfortunately, there is no way to know if a particular change is going to affect the final configuration file.
Therefore on every change, we have to rebuild a new model from scratch based on the state of cluster and compare it to the current model. If the new model equals to the current one, then we avoid generating a new NGINX configuration and triggering a reload. Otherwise, we check if the difference is only about Endpoints. If so we then send the new list of Endpoints to a Lua handler running inside Nginx using HTTP POST request and again avoid generating a new NGINX configuration and triggering a reload. If the difference between running and new model is about more than just Endpoints we create a new NGINX configuration based on the new model, replace the current model and trigger a reload. One of the uses of the model is to avoid unnecessary reloads when there's no change in the state and to detect conflicts in definitions. The final representation of the NGINX configuration is generated from a Go template using the new model as input for the variables required by the template.","title":"NGINX model"},{"location":"how-it-works/#building-the-nginx-model","text":"Building a model is an expensive operation, for this reason, the use of the synchronization loop is a must. By using a work queue it is possible to not lose changes and remove the use of sync.Mutex to force a single execution of the sync loop and additionally it is possible to create a time window between the start and end of the sync loop that allows us to discard unnecessary updates. It is important to understand that any change in the cluster could generate events that the informer will send to the controller and one of the reasons for the work queue . Operations to build the model: Order Ingress rules by CreationTimestamp field, i.e., old rules first. If the same path for the same host is defined in more than one Ingress, the oldest rule wins. If more than one Ingress contains a TLS section for the same host, the oldest rule wins. If multiple Ingresses define an annotation that affects the configuration of the Server block, the oldest rule wins. Create a list of NGINX Servers (per hostname) Create a list of NGINX Upstreams If multiple Ingresses define different paths for the same host, the ingress controller will merge the definitions. Annotations are applied to all the paths in the Ingress. Multiple Ingresses can define different annotations. These definitions are not shared between Ingresses.","title":"Building the NGINX model"},{"location":"how-it-works/#when-a-reload-is-required","text":"The next list describes the scenarios when a reload is required: New Ingress Resource Created. TLS section is added to existing Ingress. Change in Ingress annotations that impacts more than just upstream configuration. For instance load-balance annotation does not require a reload. A path is added/removed from an Ingress. An Ingress, Service, Secret is removed. Some missing referenced object from the Ingress is available, like a Service or Secret. A Secret is updated.","title":"When a reload is required"},{"location":"how-it-works/#avoiding-reloads","text":"In some cases, it is possible to avoid reloads, in particular when there is a change in the endpoints, i.e., a pod is started or replaced. It is out of the scope of this Ingress controller to remove reloads completely. This would require an incredible amount of work and at some point makes no sense. 
This can change only if NGINX changes the way new configurations are read, basically, new changes do not replace worker processes.","title":"Avoiding reloads"},{"location":"how-it-works/#avoiding-reloads-on-endpoints-changes","text":"On every endpoint change the controller fetches endpoints from all the services it sees and generates corresponding Backend objects. It then sends these objects to a Lua handler running inside Nginx. The Lua code in turn stores those backends in a shared memory zone. Then for every request Lua code running in balancer_by_lua context detects what endpoints it should choose upstream peer from and applies the configured load balancing algorithm to choose the peer. Then Nginx takes care of the rest. This way we avoid reloading Nginx on endpoint changes. Note that this includes annotation changes that affects only upstream configuration in Nginx as well. In a relatively big cluster with frequently deploying apps this feature saves significant number of Nginx reloads which can otherwise affect response latency, load balancing quality (after every reload Nginx resets the state of load balancing) and so on.","title":"Avoiding reloads on Endpoints changes"},{"location":"how-it-works/#avoiding-outage-from-wrong-configuration","text":"Because the ingress controller works using the synchronization loop pattern , it is applying the configuration for all matching objects. In case some Ingress objects have a broken configuration, for example a syntax error in the nginx.ingress.kubernetes.io/configuration-snippet annotation, the generated configuration becomes invalid, does not reload and hence no more ingresses will be taken into account. To prevent this situation to happen, the nginx ingress controller optionally exposes a validating admission webhook server to ensure the validity of incoming ingress objects. This webhook appends the incoming ingress objects to the list of ingresses, generates the configuration and calls nginx to ensure the configuration has no syntax errors.","title":"Avoiding outage from wrong configuration"},{"location":"kubectl-plugin/","text":"The ingress-nginx kubectl plugin \u00b6 Installation \u00b6 Install krew , then run kubectl krew install ingress-nginx to install the plugin. Then run kubectl ingress-nginx --help to make sure the plugin is properly installed and to get a list of commands: kubectl ingress-nginx --help A kubectl plugin for inspecting your ingress-nginx deployments Usage: ingress-nginx [command] Available Commands: backends Inspect the dynamic backend information of an ingress-nginx instance certs Output the certificate data stored in an ingress-nginx pod conf Inspect the generated nginx.conf exec Execute a command inside an ingress-nginx pod general Inspect the other dynamic ingress-nginx information help Help about any command info Show information about the ingress-nginx service ingresses Provide a short summary of all of the ingress definitions lint Inspect kubernetes resources for possible issues logs Get the kubernetes logs for an ingress-nginx pod ssh ssh into a running ingress-nginx pod Flags: --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. 
--cache-dir string Default HTTP cache directory (default \"/Users/alexkursell/.kube/http-cache\") --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use -h, --help help for ingress-nginx --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to the kubeconfig file to use for CLI requests. -n, --namespace string If present, the namespace scope for this CLI request --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -s, --server string The address and port of the Kubernetes API server --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use Use \"ingress-nginx [command] --help\" for more information about a command. Common Flags \u00b6 Every subcommand supports the basic kubectl configuration flags like --namespace , --context , --client-key and so on. Subcommands that act on a particular ingress-nginx pod ( backends , certs , conf , exec , general , logs , ssh ), support the --deployment and --pod flags to select either a pod from a deployment with the given name, or a pod with the given name. The --deployment flag defaults to ingress-nginx-controller . Subcommands that inspect resources ( ingresses , lint ) support the --all-namespaces flag, which causes them to inspect resources in every namespace. Subcommands \u00b6 Note that backends , general , certs , and conf require ingress-nginx version 0.23.0 or higher. backends \u00b6 Run kubectl ingress-nginx backends to get a JSON array of the backends that an ingress-nginx controller currently knows about: $ kubectl ingress-nginx backends -n ingress-nginx [ { \"name\": \"default-apple-service-5678\", \"service\": { \"metadata\": { \"creationTimestamp\": null }, \"spec\": { \"ports\": [ { \"protocol\": \"TCP\", \"port\": 5678, \"targetPort\": 5678 } ], \"selector\": { \"app\": \"apple\" }, \"clusterIP\": \"10.97.230.121\", \"type\": \"ClusterIP\", \"sessionAffinity\": \"None\" }, \"status\": { \"loadBalancer\": {} } }, \"port\": 0, \"sslPassthrough\": false, \"endpoints\": [ { \"address\": \"10.1.3.86\", \"port\": \"5678\" } ], \"sessionAffinityConfig\": { \"name\": \"\", \"cookieSessionAffinity\": { \"name\": \"\" } }, \"upstreamHashByConfig\": { \"upstream-hash-by-subset-size\": 3 }, \"noServer\": false, \"trafficShapingPolicy\": { \"weight\": 0, \"header\": \"\", \"headerValue\": \"\", \"cookie\": \"\" } }, { \"name\": \"default-echo-service-8080\", ... }, { \"name\": \"upstream-default-backend\", ... } ] Add the --list option to show only the backend names. Add the --backend option to show only the backend with the given name. certs \u00b6 Use kubectl ingress-nginx certs --host to dump the SSL cert/key information for a given host. WARNING: This command will dump sensitive private key information. Don't blindly share the output, and certainly don't log it anywhere. $ kubectl ingress-nginx certs -n ingress-nginx --host testaddr.local -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- ... 
-----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- -----END RSA PRIVATE KEY----- conf \u00b6 Use kubectl ingress-nginx conf to dump the generated nginx.conf file. Add the --host option to view only the server block for that host: kubectl ingress-nginx conf -n ingress-nginx --host testaddr.local server { server_name testaddr.local ; listen 80; set $proxy_upstream_name \"-\"; set $pass_access_scheme $scheme; set $pass_server_port $server_port; set $best_http_host $http_host; set $pass_port $pass_server_port; location / { set $namespace \"\"; set $ingress_name \"\"; set $service_name \"\"; set $service_port \"0\"; set $location_path \"/\"; ... exec \u00b6 kubectl ingress-nginx exec is exactly the same as kubectl exec , with the same command flags. It will automatically choose an ingress-nginx pod to run the command in. $ kubectl ingress-nginx exec -i -n ingress-nginx -- ls /etc/nginx fastcgi_params geoip lua mime.types modsecurity modules nginx.conf opentracing.json owasp-modsecurity-crs template info \u00b6 Shows the internal and external IP/CNAMES for an ingress-nginx service. $ kubectl ingress-nginx info -n ingress-nginx Service cluster IP address: 10.187.253.31 LoadBalancer IP|CNAME: 35.123.123.123 Use the --service flag if your ingress-nginx LoadBalancer service is not named ingress-nginx . ingresses \u00b6 kubectl ingress-nginx ingresses , alternately kubectl ingress-nginx ing , shows a more detailed view of the ingress definitions in a namespace. Compare: $ kubectl get ingresses --all-namespaces NAMESPACE NAME HOSTS ADDRESS PORTS AGE default example-ingress1 testaddr.local,testaddr2.local localhost 80 5d default test-ingress-2 * localhost 80 5d vs $ kubectl ingress-nginx ingresses --all-namespaces NAMESPACE INGRESS NAME HOST+PATH ADDRESSES TLS SERVICE SERVICE PORT ENDPOINTS default example-ingress1 testaddr.local/etameta localhost NO pear-service 5678 5 default example-ingress1 testaddr2.local/otherpath localhost NO apple-service 5678 1 default example-ingress1 testaddr2.local/otherotherpath localhost NO pear-service 5678 5 default test-ingress-2 * localhost NO echo-service 8080 2 lint \u00b6 kubectl ingress-nginx lint can check a namespace or entire cluster for potential configuration issues. This command is especially useful when upgrading between ingress-nginx versions. $ kubectl ingress-nginx lint --all-namespaces --verbose Checking ingresses... \u2717 anamespace/this-nginx - Contains the removed session-cookie-hash annotation. Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3743 \u2717 othernamespace/ingress-definition-blah - The rewrite-target annotation value does not reference a capture group Lint added for version 0.22.0 https://github.com/kubernetes/ingress-nginx/issues/3174 Checking deployments... \u2717 namespace2/ingress-nginx-controller - Uses removed config flag --sort-backends Lint added for version 0.22.0 https://github.com/kubernetes/ingress-nginx/issues/3655 - Uses removed config flag --enable-dynamic-certificates Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3808 to show the lints added only for a particular ingress-nginx release, use the --from-version and --to-version flags: $ kubectl ingress-nginx lint --all-namespaces --verbose --from-version 0 .24.0 --to-version 0 .24.0 Checking ingresses... \u2717 anamespace/this-nginx - Contains the removed session-cookie-hash annotation. Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3743 Checking deployments... 
\u2717 namespace2/ingress-nginx-controller - Uses removed config flag --enable-dynamic-certificates Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3808 logs \u00b6 kubectl ingress-nginx logs is almost the same as kubectl logs , with fewer flags. It will automatically choose an ingress-nginx pod to read logs from. $ kubectl ingress-nginx logs -n ingress-nginx ------------------------------------------------------------------------------- NGINX Ingress controller Release: dev Build: git-48dc3a867 Repository: git@github.com:kubernetes/ingress-nginx.git ------------------------------------------------------------------------------- W0405 16:53:46.061589 7 flags.go:214] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false) nginx version: nginx/1.15.9 W0405 16:53:46.070093 7 client_config.go:549] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. I0405 16:53:46.070499 7 main.go:205] Creating API client for https://10.96.0.1:443 I0405 16:53:46.077784 7 main.go:249] Running in Kubernetes cluster version v1.10 (v1.10.11) - git (clean) commit 637c7e288581ee40ab4ca210618a89a555b6e7e9 - platform linux/amd64 I0405 16:53:46.183359 7 nginx.go:265] Starting NGINX Ingress controller I0405 16:53:46.193913 7 event.go:209] Event(v1.ObjectReference{Kind:\"ConfigMap\", Namespace:\"ingress-nginx\", Name:\"udp-services\", UID:\"82258915-563e-11e9-9c52-025000000001\", APIVersion:\"v1\", ResourceVersion:\"494\", FieldPath:\"\"}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services ... ssh \u00b6 kubectl ingress-nginx ssh is exactly the same as kubectl ingress-nginx exec -it -- /bin/bash . Use it when you want to quickly be dropped into a shell inside a running ingress-nginx container. $ kubectl ingress-nginx ssh -n ingress-nginx www-data@ingress-nginx-controller-7cbf77c976-wx5pn:/etc/nginx$","title":"kubectl plugin"},{"location":"kubectl-plugin/#the-ingress-nginx-kubectl-plugin","text":"","title":"The ingress-nginx kubectl plugin"},{"location":"kubectl-plugin/#installation","text":"Install krew , then run kubectl krew install ingress-nginx to install the plugin. Then run kubectl ingress-nginx --help to make sure the plugin is properly installed and to get a list of commands: kubectl ingress-nginx --help A kubectl plugin for inspecting your ingress-nginx deployments Usage: ingress-nginx [command] Available Commands: backends Inspect the dynamic backend information of an ingress-nginx instance certs Output the certificate data stored in an ingress-nginx pod conf Inspect the generated nginx.conf exec Execute a command inside an ingress-nginx pod general Inspect the other dynamic ingress-nginx information help Help about any command info Show information about the ingress-nginx service ingresses Provide a short summary of all of the ingress definitions lint Inspect kubernetes resources for possible issues logs Get the kubernetes logs for an ingress-nginx pod ssh ssh into a running ingress-nginx pod Flags: --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. 
--cache-dir string Default HTTP cache directory (default \"/Users/alexkursell/.kube/http-cache\") --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use -h, --help help for ingress-nginx --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to the kubeconfig file to use for CLI requests. -n, --namespace string If present, the namespace scope for this CLI request --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -s, --server string The address and port of the Kubernetes API server --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use Use \"ingress-nginx [command] --help\" for more information about a command.","title":"Installation"},{"location":"kubectl-plugin/#common-flags","text":"Every subcommand supports the basic kubectl configuration flags like --namespace , --context , --client-key and so on. Subcommands that act on a particular ingress-nginx pod ( backends , certs , conf , exec , general , logs , ssh ), support the --deployment and --pod flags to select either a pod from a deployment with the given name, or a pod with the given name. The --deployment flag defaults to ingress-nginx-controller . Subcommands that inspect resources ( ingresses , lint ) support the --all-namespaces flag, which causes them to inspect resources in every namespace.","title":"Common Flags"},{"location":"kubectl-plugin/#subcommands","text":"Note that backends , general , certs , and conf require ingress-nginx version 0.23.0 or higher.","title":"Subcommands"},{"location":"kubectl-plugin/#backends","text":"Run kubectl ingress-nginx backends to get a JSON array of the backends that an ingress-nginx controller currently knows about: $ kubectl ingress-nginx backends -n ingress-nginx [ { \"name\": \"default-apple-service-5678\", \"service\": { \"metadata\": { \"creationTimestamp\": null }, \"spec\": { \"ports\": [ { \"protocol\": \"TCP\", \"port\": 5678, \"targetPort\": 5678 } ], \"selector\": { \"app\": \"apple\" }, \"clusterIP\": \"10.97.230.121\", \"type\": \"ClusterIP\", \"sessionAffinity\": \"None\" }, \"status\": { \"loadBalancer\": {} } }, \"port\": 0, \"sslPassthrough\": false, \"endpoints\": [ { \"address\": \"10.1.3.86\", \"port\": \"5678\" } ], \"sessionAffinityConfig\": { \"name\": \"\", \"cookieSessionAffinity\": { \"name\": \"\" } }, \"upstreamHashByConfig\": { \"upstream-hash-by-subset-size\": 3 }, \"noServer\": false, \"trafficShapingPolicy\": { \"weight\": 0, \"header\": \"\", \"headerValue\": \"\", \"cookie\": \"\" } }, { \"name\": \"default-echo-service-8080\", ... }, { \"name\": \"upstream-default-backend\", ... } ] Add the --list option to show only the backend names. Add the --backend option to show only the backend with the given name.","title":"backends"},{"location":"kubectl-plugin/#certs","text":"Use kubectl ingress-nginx certs --host to dump the SSL cert/key information for a given host. WARNING: This command will dump sensitive private key information. 
Don't blindly share the output, and certainly don't log it anywhere. $ kubectl ingress-nginx certs -n ingress-nginx --host testaddr.local -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- -----END RSA PRIVATE KEY-----","title":"certs"},{"location":"kubectl-plugin/#conf","text":"Use kubectl ingress-nginx conf to dump the generated nginx.conf file. Add the --host option to view only the server block for that host: kubectl ingress-nginx conf -n ingress-nginx --host testaddr.local server { server_name testaddr.local ; listen 80; set $proxy_upstream_name \"-\"; set $pass_access_scheme $scheme; set $pass_server_port $server_port; set $best_http_host $http_host; set $pass_port $pass_server_port; location / { set $namespace \"\"; set $ingress_name \"\"; set $service_name \"\"; set $service_port \"0\"; set $location_path \"/\"; ...","title":"conf"},{"location":"kubectl-plugin/#exec","text":"kubectl ingress-nginx exec is exactly the same as kubectl exec , with the same command flags. It will automatically choose an ingress-nginx pod to run the command in. $ kubectl ingress-nginx exec -i -n ingress-nginx -- ls /etc/nginx fastcgi_params geoip lua mime.types modsecurity modules nginx.conf opentracing.json owasp-modsecurity-crs template","title":"exec"},{"location":"kubectl-plugin/#info","text":"Shows the internal and external IP/CNAMES for an ingress-nginx service. $ kubectl ingress-nginx info -n ingress-nginx Service cluster IP address: 10.187.253.31 LoadBalancer IP|CNAME: 35.123.123.123 Use the --service flag if your ingress-nginx LoadBalancer service is not named ingress-nginx .","title":"info"},{"location":"kubectl-plugin/#ingresses","text":"kubectl ingress-nginx ingresses , alternately kubectl ingress-nginx ing , shows a more detailed view of the ingress definitions in a namespace. Compare: $ kubectl get ingresses --all-namespaces NAMESPACE NAME HOSTS ADDRESS PORTS AGE default example-ingress1 testaddr.local,testaddr2.local localhost 80 5d default test-ingress-2 * localhost 80 5d vs $ kubectl ingress-nginx ingresses --all-namespaces NAMESPACE INGRESS NAME HOST+PATH ADDRESSES TLS SERVICE SERVICE PORT ENDPOINTS default example-ingress1 testaddr.local/etameta localhost NO pear-service 5678 5 default example-ingress1 testaddr2.local/otherpath localhost NO apple-service 5678 1 default example-ingress1 testaddr2.local/otherotherpath localhost NO pear-service 5678 5 default test-ingress-2 * localhost NO echo-service 8080 2","title":"ingresses"},{"location":"kubectl-plugin/#lint","text":"kubectl ingress-nginx lint can check a namespace or entire cluster for potential configuration issues. This command is especially useful when upgrading between ingress-nginx versions. $ kubectl ingress-nginx lint --all-namespaces --verbose Checking ingresses... \u2717 anamespace/this-nginx - Contains the removed session-cookie-hash annotation. Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3743 \u2717 othernamespace/ingress-definition-blah - The rewrite-target annotation value does not reference a capture group Lint added for version 0.22.0 https://github.com/kubernetes/ingress-nginx/issues/3174 Checking deployments... 
\u2717 namespace2/ingress-nginx-controller - Uses removed config flag --sort-backends Lint added for version 0.22.0 https://github.com/kubernetes/ingress-nginx/issues/3655 - Uses removed config flag --enable-dynamic-certificates Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3808 to show the lints added only for a particular ingress-nginx release, use the --from-version and --to-version flags: $ kubectl ingress-nginx lint --all-namespaces --verbose --from-version 0 .24.0 --to-version 0 .24.0 Checking ingresses... \u2717 anamespace/this-nginx - Contains the removed session-cookie-hash annotation. Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3743 Checking deployments... \u2717 namespace2/ingress-nginx-controller - Uses removed config flag --enable-dynamic-certificates Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3808","title":"lint"},{"location":"kubectl-plugin/#logs","text":"kubectl ingress-nginx logs is almost the same as kubectl logs , with fewer flags. It will automatically choose an ingress-nginx pod to read logs from. $ kubectl ingress-nginx logs -n ingress-nginx ------------------------------------------------------------------------------- NGINX Ingress controller Release: dev Build: git-48dc3a867 Repository: git@github.com:kubernetes/ingress-nginx.git ------------------------------------------------------------------------------- W0405 16:53:46.061589 7 flags.go:214] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false) nginx version: nginx/1.15.9 W0405 16:53:46.070093 7 client_config.go:549] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. I0405 16:53:46.070499 7 main.go:205] Creating API client for https://10.96.0.1:443 I0405 16:53:46.077784 7 main.go:249] Running in Kubernetes cluster version v1.10 (v1.10.11) - git (clean) commit 637c7e288581ee40ab4ca210618a89a555b6e7e9 - platform linux/amd64 I0405 16:53:46.183359 7 nginx.go:265] Starting NGINX Ingress controller I0405 16:53:46.193913 7 event.go:209] Event(v1.ObjectReference{Kind:\"ConfigMap\", Namespace:\"ingress-nginx\", Name:\"udp-services\", UID:\"82258915-563e-11e9-9c52-025000000001\", APIVersion:\"v1\", ResourceVersion:\"494\", FieldPath:\"\"}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services ...","title":"logs"},{"location":"kubectl-plugin/#ssh","text":"kubectl ingress-nginx ssh is exactly the same as kubectl ingress-nginx exec -it -- /bin/bash . Use it when you want to quickly be dropped into a shell inside a running ingress-nginx container. $ kubectl ingress-nginx ssh -n ingress-nginx www-data@ingress-nginx-controller-7cbf77c976-wx5pn:/etc/nginx$","title":"ssh"},{"location":"troubleshooting/","text":"Troubleshooting \u00b6 Ingress-Controller Logs and Events \u00b6 There are many ways to troubleshoot the ingress-controller. The following are basic troubleshooting methods to obtain more information. 
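Before running the commands below you need to know which namespace the controller was installed into. If you are not sure, a label selector can locate the controller pods (a sketch; the app.kubernetes.io/name=ingress-nginx label matches common installations but may differ in yours):

# Locate ingress-nginx controller pods across all namespaces
kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx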
Check the Ingress Resource Events $ kubectl get ing -n NAME HOSTS ADDRESS PORTS AGE cafe-ingress cafe.com 10.0.2.15 80 25s $ kubectl describe ing -n Name: cafe-ingress Namespace: default Address: 10.0.2.15 Default backend: default-http-backend:80 (172.17.0.5:8080) Rules: Host Path Backends ---- ---- -------- cafe.com /tea tea-svc:80 () /coffee coffee-svc:80 () Annotations: kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"networking.k8s.io/v1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{},\"name\":\"cafe-ingress\",\"namespace\":\"default\",\"selfLink\":\"/apis/networking/v1/namespaces/default/ingresses/cafe-ingress\"},\"spec\":{\"rules\":[{\"host\":\"cafe.com\",\"http\":{\"paths\":[{\"backend\":{\"serviceName\":\"tea-svc\",\"servicePort\":80},\"path\":\"/tea\"},{\"backend\":{\"serviceName\":\"coffee-svc\",\"servicePort\":80},\"path\":\"/coffee\"}]}}]},\"status\":{\"loadBalancer\":{\"ingress\":[{\"ip\":\"169.48.142.110\"}]}}} Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal CREATE 1m ingress-nginx-controller Ingress default/cafe-ingress Normal UPDATE 58s ingress-nginx-controller Ingress default/cafe-ingress Check the Ingress Controller Logs $ kubectl get pods -n NAME READY STATUS RESTARTS AGE ingress-nginx-controller-67956bf89d-fv58j 1/1 Running 0 1m $ kubectl logs -n ingress-nginx-controller-67956bf89d-fv58j ------------------------------------------------------------------------------- NGINX Ingress controller Release: 0.14.0 Build: git-734361d Repository: https://github.com/kubernetes/ingress-nginx ------------------------------------------------------------------------------- .... Check the Nginx Configuration $ kubectl get pods -n NAME READY STATUS RESTARTS AGE ingress-nginx-controller-67956bf89d-fv58j 1/1 Running 0 1m $ kubectl exec -it -n ingress-nginx-controller-67956bf89d-fv58j -- cat /etc/nginx/nginx.conf daemon off; worker_processes 2; pid /run/nginx.pid; worker_rlimit_nofile 523264; worker_shutdown_timeout 240s; events { multi_accept on; worker_connections 16384; use epoll; } http { .... Check if used Services Exist $ kubectl get svc --all-namespaces NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default coffee-svc ClusterIP 10.106.154.35 80/TCP 18m default kubernetes ClusterIP 10.96.0.1 443/TCP 30m default tea-svc ClusterIP 10.104.172.12 80/TCP 18m kube-system default-http-backend NodePort 10.108.189.236 80:30001/TCP 30m kube-system kube-dns ClusterIP 10.96.0.10 53/UDP,53/TCP 30m kube-system kubernetes-dashboard NodePort 10.103.128.17 80:30000/TCP 30m Debug Logging \u00b6 Using the flag --v=XX it is possible to increase the level of logging. This is performed by editing the deployment. $ kubectl get deploy -n NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE default-http-backend 1 1 1 1 35m ingress-nginx-controller 1 1 1 1 35m $ kubectl edit deploy -n ingress-nginx-controller # Add --v = X to \"- args\" , where X is an integer --v=2 shows details using diff about the changes in the configuration in nginx --v=3 shows details about the service, Ingress rule, endpoint changes and it dumps the nginx configuration in JSON format --v=5 configures NGINX in debug mode Authentication to the Kubernetes API Server \u00b6 A number of components are involved in the authentication process and the first step is to narrow down the source of the problem, namely whether it is a problem with service authentication or with the kubeconfig file. 
Both authentications must work: +-------------+ service +------------+ | | authentication | | + apiserver +<-------------------+ ingress | | | | controller | +-------------+ +------------+ Service authentication The Ingress controller needs information from apiserver. Therefore, authentication is required, which can be achieved in two different ways: Service Account: This is recommended, because nothing has to be configured. The Ingress controller will use information provided by the system to communicate with the API server. See 'Service Account' section for details. Kubeconfig file: In some Kubernetes environments service accounts are not available. In this case a manual configuration is required. The Ingress controller binary can be started with the --kubeconfig flag. The value of the flag is a path to a file specifying how to connect to the API server. Using the --kubeconfig does not requires the flag --apiserver-host . The format of the file is identical to ~/.kube/config which is used by kubectl to connect to the API server. See 'kubeconfig' section for details. Using the flag --apiserver-host : Using this flag --apiserver-host=http://localhost:8080 it is possible to specify an unsecured API server or reach a remote kubernetes cluster using kubectl proxy . Please do not use this approach in production. In the diagram below you can see the full authentication flow with all options, starting with the browser on the lower left hand side. Kubernetes Workstation +---------------------------------------------------+ +------------------+ | | | | | +-----------+ apiserver +------------+ | | +------------+ | | | | proxy | | | | | | | | | apiserver | | ingress | | | | ingress | | | | | | controller | | | | controller | | | | | | | | | | | | | | | | | | | | | | | | | service account/ | | | | | | | | | | kubeconfig | | | | | | | | | +<-------------------+ | | | | | | | | | | | | | | | | | +------+----+ kubeconfig +------+-----+ | | +------+-----+ | | |<--------------------------------------------------------| | | | | | +---------------------------------------------------+ +------------------+ Service Account \u00b6 If using a service account to connect to the API server, the ingress-controller expects the file /var/run/secrets/kubernetes.io/serviceaccount/token to be present. It provides a secret token that is required to authenticate with the API server. Verify with the following commands: # start a container that contains curl $ kubectl run -it --rm test --image = curlimages/curl --restart = Never -- /bin/sh # check if secret exists / $ ls /var/run/secrets/kubernetes.io/serviceaccount/ ca.crt namespace token / $ # check base connectivity from cluster inside / $ curl -k https://kubernetes.default.svc.cluster.local { \"kind\": \"Status\", \"apiVersion\": \"v1\", \"metadata\": { }, \"status\": \"Failure\", \"message\": \"forbidden: User \\\"system:anonymous\\\" cannot get path \\\"/\\\"\", \"reason\": \"Forbidden\", \"details\": { }, \"code\": 403 }/ $ # connect using tokens }/ $ curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H \"Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\" https://kubernetes.default.svc.cluster.local && echo { \"paths\": [ \"/api\", \"/api/v1\", \"/apis\", \"/apis/\", ... TRUNCATED \"/readyz/shutdown\", \"/version\" ] } / $ # when you type ` exit ` or ` ^D ` the test pod will be deleted. If it is not working, there are two possible reasons: The contents of the tokens are invalid. 
Find the secret name with kubectl get secrets | grep service-account and delete it with kubectl delete secret . It will automatically be recreated. You have a non-standard Kubernetes installation and the file containing the token may not be present. The API server will mount a volume containing this file, but only if the API server is configured to use the ServiceAccount admission controller. If you experience this error, verify that your API server is using the ServiceAccount admission controller. If you are configuring the API server by hand, you can set this with the --admission-control parameter. Note that you should use other admission controllers as well. Before configuring this option, you should read about admission controllers. More information: User Guide: Service Accounts Cluster Administrator Guide: Managing Service Accounts Kube-Config \u00b6 If you want to use a kubeconfig file for authentication, follow the deploy procedure and add the flag --kubeconfig=/etc/kubernetes/kubeconfig.yaml to the args section of the deployment. Using GDB with Nginx \u00b6 Gdb can be used to with nginx to perform a configuration dump. This allows us to see which configuration is being used, as well as older configurations. Note: The below is based on the nginx documentation . SSH into the worker $ ssh user@workerIP Obtain the Docker Container Running nginx $ docker ps | grep ingress-nginx-controller CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES d9e1d243156a k8s.gcr.io/ingress-nginx/controller \"/usr/bin/dumb-init \u2026\" 19 minutes ago Up 19 minutes k8s_ingress-nginx-controller_ingress-nginx-controller-67956bf89d-mqxzt_kube-system_079f31ec-aa37-11e8-ad39-080027a227db_0 Exec into the container $ docker exec -it --user = 0 --privileged d9e1d243156a bash Make sure nginx is running in --with-debug $ nginx -V 2 > & 1 | grep -- '--with-debug' Get list of processes running on container $ ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 0 20:23 ? 00:00:00 /usr/bin/dumb-init /nginx-ingres root 5 1 0 20:23 ? 00:00:05 /ingress-nginx-controller --defa root 21 5 0 20:23 ? 00:00:00 nginx: master process /usr/sbin/ nobody 106 21 0 20:23 ? 00:00:00 nginx: worker process nobody 107 21 0 20:23 ? 00:00:00 nginx: worker process root 172 0 0 20:43 pts/0 00:00:00 bash Attach gdb to the nginx master process $ gdb -p 21 .... Attaching to process 21 Reading symbols from /usr/sbin/nginx...done. .... (gdb) Copy and paste the following: set $cd = ngx_cycle->config_dump set $nelts = $cd.nelts set $elts = (ngx_conf_dump_t*)($cd.elts) while ($nelts-- > 0) set $name = $elts[$nelts]->name.data printf \"Dumping %s to nginx_conf.txt\\n\", $name append memory nginx_conf.txt \\ $ elts [ $nelts ] ->buffer.start $elts [ $nelts ] ->buffer.end end Quit GDB by pressing CTRL+D Open nginx_conf.txt cat nginx_conf.txt","title":"Troubleshooting"},{"location":"troubleshooting/#troubleshooting","text":"","title":"Troubleshooting"},{"location":"troubleshooting/#ingress-controller-logs-and-events","text":"There are many ways to troubleshoot the ingress-controller. The following are basic troubleshooting methods to obtain more information. 
Check the Ingress Resource Events $ kubectl get ing -n NAME HOSTS ADDRESS PORTS AGE cafe-ingress cafe.com 10.0.2.15 80 25s $ kubectl describe ing -n Name: cafe-ingress Namespace: default Address: 10.0.2.15 Default backend: default-http-backend:80 (172.17.0.5:8080) Rules: Host Path Backends ---- ---- -------- cafe.com /tea tea-svc:80 () /coffee coffee-svc:80 () Annotations: kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"networking.k8s.io/v1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{},\"name\":\"cafe-ingress\",\"namespace\":\"default\",\"selfLink\":\"/apis/networking/v1/namespaces/default/ingresses/cafe-ingress\"},\"spec\":{\"rules\":[{\"host\":\"cafe.com\",\"http\":{\"paths\":[{\"backend\":{\"serviceName\":\"tea-svc\",\"servicePort\":80},\"path\":\"/tea\"},{\"backend\":{\"serviceName\":\"coffee-svc\",\"servicePort\":80},\"path\":\"/coffee\"}]}}]},\"status\":{\"loadBalancer\":{\"ingress\":[{\"ip\":\"169.48.142.110\"}]}}} Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal CREATE 1m ingress-nginx-controller Ingress default/cafe-ingress Normal UPDATE 58s ingress-nginx-controller Ingress default/cafe-ingress Check the Ingress Controller Logs $ kubectl get pods -n NAME READY STATUS RESTARTS AGE ingress-nginx-controller-67956bf89d-fv58j 1/1 Running 0 1m $ kubectl logs -n ingress-nginx-controller-67956bf89d-fv58j ------------------------------------------------------------------------------- NGINX Ingress controller Release: 0.14.0 Build: git-734361d Repository: https://github.com/kubernetes/ingress-nginx ------------------------------------------------------------------------------- .... Check the Nginx Configuration $ kubectl get pods -n NAME READY STATUS RESTARTS AGE ingress-nginx-controller-67956bf89d-fv58j 1/1 Running 0 1m $ kubectl exec -it -n ingress-nginx-controller-67956bf89d-fv58j -- cat /etc/nginx/nginx.conf daemon off; worker_processes 2; pid /run/nginx.pid; worker_rlimit_nofile 523264; worker_shutdown_timeout 240s; events { multi_accept on; worker_connections 16384; use epoll; } http { .... Check if used Services Exist $ kubectl get svc --all-namespaces NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default coffee-svc ClusterIP 10.106.154.35 80/TCP 18m default kubernetes ClusterIP 10.96.0.1 443/TCP 30m default tea-svc ClusterIP 10.104.172.12 80/TCP 18m kube-system default-http-backend NodePort 10.108.189.236 80:30001/TCP 30m kube-system kube-dns ClusterIP 10.96.0.10 53/UDP,53/TCP 30m kube-system kubernetes-dashboard NodePort 10.103.128.17 80:30000/TCP 30m","title":"Ingress-Controller Logs and Events"},{"location":"troubleshooting/#debug-logging","text":"Using the flag --v=XX it is possible to increase the level of logging. This is performed by editing the deployment. 
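For reference, here is a minimal sketch of what the relevant part of the edited Deployment could look like after adding the flag (the container name and the existing flags are assumptions based on a standard install; only the added --v entry matters here):
spec:
  template:
    spec:
      containers:
        - name: controller
          args:
            - /nginx-ingress-controller
            # ... keep all existing flags unchanged ...
            - --v=3   # added: logs service/Ingress/endpoint changes and dumps the nginx config as JSON
The commands below show how to locate and open the deployment for editing.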
$ kubectl get deploy -n NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE default-http-backend 1 1 1 1 35m ingress-nginx-controller 1 1 1 1 35m $ kubectl edit deploy -n ingress-nginx-controller # Add --v = X to \"- args\" , where X is an integer --v=2 shows details using diff about the changes in the configuration in nginx --v=3 shows details about the service, Ingress rule, endpoint changes and it dumps the nginx configuration in JSON format --v=5 configures NGINX in debug mode","title":"Debug Logging"},{"location":"troubleshooting/#authentication-to-the-kubernetes-api-server","text":"A number of components are involved in the authentication process and the first step is to narrow down the source of the problem, namely whether it is a problem with service authentication or with the kubeconfig file. Both authentications must work: +-------------+ service +------------+ | | authentication | | + apiserver +<-------------------+ ingress | | | | controller | +-------------+ +------------+ Service authentication The Ingress controller needs information from apiserver. Therefore, authentication is required, which can be achieved in two different ways: Service Account: This is recommended, because nothing has to be configured. The Ingress controller will use information provided by the system to communicate with the API server. See 'Service Account' section for details. Kubeconfig file: In some Kubernetes environments service accounts are not available. In this case a manual configuration is required. The Ingress controller binary can be started with the --kubeconfig flag. The value of the flag is a path to a file specifying how to connect to the API server. Using the --kubeconfig does not requires the flag --apiserver-host . The format of the file is identical to ~/.kube/config which is used by kubectl to connect to the API server. See 'kubeconfig' section for details. Using the flag --apiserver-host : Using this flag --apiserver-host=http://localhost:8080 it is possible to specify an unsecured API server or reach a remote kubernetes cluster using kubectl proxy . Please do not use this approach in production. In the diagram below you can see the full authentication flow with all options, starting with the browser on the lower left hand side. Kubernetes Workstation +---------------------------------------------------+ +------------------+ | | | | | +-----------+ apiserver +------------+ | | +------------+ | | | | proxy | | | | | | | | | apiserver | | ingress | | | | ingress | | | | | | controller | | | | controller | | | | | | | | | | | | | | | | | | | | | | | | | service account/ | | | | | | | | | | kubeconfig | | | | | | | | | +<-------------------+ | | | | | | | | | | | | | | | | | +------+----+ kubeconfig +------+-----+ | | +------+-----+ | | |<--------------------------------------------------------| | | | | | +---------------------------------------------------+ +------------------+","title":"Authentication to the Kubernetes API Server"},{"location":"troubleshooting/#service-account","text":"If using a service account to connect to the API server, the ingress-controller expects the file /var/run/secrets/kubernetes.io/serviceaccount/token to be present. It provides a secret token that is required to authenticate with the API server. 
Verify with the following commands: # start a container that contains curl $ kubectl run -it --rm test --image = curlimages/curl --restart = Never -- /bin/sh # check if secret exists / $ ls /var/run/secrets/kubernetes.io/serviceaccount/ ca.crt namespace token / $ # check base connectivity from cluster inside / $ curl -k https://kubernetes.default.svc.cluster.local { \"kind\": \"Status\", \"apiVersion\": \"v1\", \"metadata\": { }, \"status\": \"Failure\", \"message\": \"forbidden: User \\\"system:anonymous\\\" cannot get path \\\"/\\\"\", \"reason\": \"Forbidden\", \"details\": { }, \"code\": 403 }/ $ # connect using tokens }/ $ curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H \"Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\" https://kubernetes.default.svc.cluster.local && echo { \"paths\": [ \"/api\", \"/api/v1\", \"/apis\", \"/apis/\", ... TRUNCATED \"/readyz/shutdown\", \"/version\" ] } / $ # when you type ` exit ` or ` ^D ` the test pod will be deleted. If it is not working, there are two possible reasons: The contents of the tokens are invalid. Find the secret name with kubectl get secrets | grep service-account and delete it with kubectl delete secret . It will automatically be recreated. You have a non-standard Kubernetes installation and the file containing the token may not be present. The API server will mount a volume containing this file, but only if the API server is configured to use the ServiceAccount admission controller. If you experience this error, verify that your API server is using the ServiceAccount admission controller. If you are configuring the API server by hand, you can set this with the --admission-control parameter. Note that you should use other admission controllers as well. Before configuring this option, you should read about admission controllers. More information: User Guide: Service Accounts Cluster Administrator Guide: Managing Service Accounts","title":"Service Account"},{"location":"troubleshooting/#kube-config","text":"If you want to use a kubeconfig file for authentication, follow the deploy procedure and add the flag --kubeconfig=/etc/kubernetes/kubeconfig.yaml to the args section of the deployment.","title":"Kube-Config"},{"location":"troubleshooting/#using-gdb-with-nginx","text":"Gdb can be used to with nginx to perform a configuration dump. This allows us to see which configuration is being used, as well as older configurations. Note: The below is based on the nginx documentation . SSH into the worker $ ssh user@workerIP Obtain the Docker Container Running nginx $ docker ps | grep ingress-nginx-controller CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES d9e1d243156a k8s.gcr.io/ingress-nginx/controller \"/usr/bin/dumb-init \u2026\" 19 minutes ago Up 19 minutes k8s_ingress-nginx-controller_ingress-nginx-controller-67956bf89d-mqxzt_kube-system_079f31ec-aa37-11e8-ad39-080027a227db_0 Exec into the container $ docker exec -it --user = 0 --privileged d9e1d243156a bash Make sure nginx is running in --with-debug $ nginx -V 2 > & 1 | grep -- '--with-debug' Get list of processes running on container $ ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 0 20:23 ? 00:00:00 /usr/bin/dumb-init /nginx-ingres root 5 1 0 20:23 ? 00:00:05 /ingress-nginx-controller --defa root 21 5 0 20:23 ? 00:00:00 nginx: master process /usr/sbin/ nobody 106 21 0 20:23 ? 00:00:00 nginx: worker process nobody 107 21 0 20:23 ? 
00:00:00 nginx: worker process root 172 0 0 20:43 pts/0 00:00:00 bash Attach gdb to the nginx master process $ gdb -p 21 .... Attaching to process 21 Reading symbols from /usr/sbin/nginx...done. .... (gdb) Copy and paste the following: set $cd = ngx_cycle->config_dump set $nelts = $cd.nelts set $elts = (ngx_conf_dump_t*)($cd.elts) while ($nelts-- > 0) set $name = $elts[$nelts]->name.data printf \"Dumping %s to nginx_conf.txt\\n\", $name append memory nginx_conf.txt \\ $ elts [ $nelts ] ->buffer.start $elts [ $nelts ] ->buffer.end end Quit GDB by pressing CTRL+D Open nginx_conf.txt cat nginx_conf.txt","title":"Using GDB with Nginx"},{"location":"deploy/","text":"Installation Guide \u00b6 There are multiple ways to install the NGINX ingress controller: with Helm , using the project repository chart; with kubectl apply , using YAML manifests; with specific addons (e.g. for minikube or MicroK8s ). On most Kubernetes clusters, the ingress controller will work without requiring any extra configuration. If you want to get started as fast as possible, you can check the quick start instructions. However, in many environments, you can improve the performance or get better logs by enabling extra features. we recommend that you check the environment-specific instructions for details about optimizing the ingress controller for your particular environment or cloud provider. Contents \u00b6 Quick start Environment-specific instructions ... Docker Desktop ... minikube ... MicroK8s ... AWS ... GCE - GKE ... Azure ... Digital Ocean ... Scaleway ... Exoscale ... Oracle Cloud Infrastructure ... Bare-metal Miscellaneous Quick start \u00b6 If you have Helm, you can deploy the ingress controller with the following command: helm upgrade --install ingress-nginx ingress-nginx \\ --repo https://kubernetes.github.io/ingress-nginx \\ --namespace ingress-nginx --create-namespace It will install the controller in the ingress-nginx namespace, creating that namespace if it doesn't already exist. Info This command is idempotent : if the ingress controller is not installed, it will install it, if the ingress controller is already installed, it will upgrade it. If you don't have Helm or if you prefer to use a YAML manifest, you can run the following command instead: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml Info The YAML manifest in the command above was generated with helm template , so you will end up with almost the same resources as if you had used Helm to install the controller. If you are running an old version of Kubernetes (1.18 or earlier), please read this paragraph for specific instructions. Pre-flight check \u00b6 A few pods should start in the ingress-nginx namespace: kubectl get pods --namespace=ingress-nginx After a while, they should all be running. The following command will wait for the ingress controller pod to be up, running, and ready: kubectl wait --namespace ingress-nginx \\ --for=condition=ready pod \\ --selector=app.kubernetes.io/component=controller \\ --timeout=120s Local testing \u00b6 Let's create a simple web server and the associated service: kubectl create deployment demo --image=httpd --port=80 kubectl expose deployment demo Then create an ingress resource. 
The following example uses a host that maps to localhost : kubectl create ingress demo-localhost --class=nginx \\ --rule=demo.localdev.me/*=demo:80 Now, forward a local port to the ingress controller: kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80 At this point, if you access http://demo.localdev.me:8080/, you should see an HTML page telling you \"It works!\". Online testing \u00b6 If your Kubernetes cluster is a \"real\" cluster that supports services of type LoadBalancer , it will have allocated an external IP address or FQDN to the ingress controller. You can see that IP address or FQDN with the following command: kubectl get service ingress-nginx-controller --namespace=ingress-nginx It will be the EXTERNAL-IP field. If that field shows <pending> , this means that your Kubernetes cluster wasn't able to provision the load balancer (generally, this is because it doesn't support services of type LoadBalancer ). Once you have the external IP address (or FQDN), set up a DNS record pointing to it. Then you can create an ingress resource. The following example assumes that you have set up a DNS record for www.demo.io : kubectl create ingress demo --class=nginx \\ --rule=\"www.demo.io/*=demo:80\" Alternatively, the above command can be rewritten with the --rule flag passed as a separate argument and without the wildcard path: kubectl create ingress demo --class=nginx \\ --rule www.demo.io/=demo:80 You should then be able to see the \"It works!\" page when you connect to http://www.demo.io/. Congratulations, you are serving a public web site hosted on a Kubernetes cluster! \ud83c\udf89 Environment-specific instructions \u00b6 Local development clusters \u00b6 minikube \u00b6 The ingress controller can be installed through minikube's addons system: minikube addons enable ingress MicroK8s \u00b6 The ingress controller can be installed through MicroK8s's addons system: microk8s enable ingress Please check the MicroK8s documentation page for details. Docker Desktop \u00b6 Kubernetes is available in Docker Desktop: Mac, from version 18.06.0-ce Windows, from version 18.06.0-ce First, make sure that Kubernetes is enabled in the Docker settings. The command kubectl get nodes should show a single node called docker-desktop . The ingress controller can be installed on Docker Desktop using the default quick start instructions. On most systems, if you don't have any other service of type LoadBalancer bound to port 80, the ingress controller will be assigned the EXTERNAL-IP of localhost , which means that it will be reachable on localhost:80. If that doesn't work, you might have to fall back to the kubectl port-forward method described in the local testing section . Cloud deployments \u00b6 If the load balancers of your cloud provider do active healthchecks on their backends (most do), you can change the externalTrafficPolicy of the ingress controller Service to Local (instead of the default Cluster ) to save an extra hop in some cases. If you're installing with Helm, this can be done by adding --set controller.service.externalTrafficPolicy=Local to the helm install or helm upgrade command. Furthermore, if the load balancers of your cloud provider support the PROXY protocol, you can enable it, and it will let the ingress controller see the real IP address of the clients. Otherwise, it will generally see the IP address of the upstream load balancer. This must be done both in the ingress controller (with e.g.
--set controller.config.use-proxy-protocol=true ) and in the cloud provider's load balancer configuration to function correctly. In the following sections, we provide YAML manifests that enable these options when possible, using the specific options of various cloud providers. AWS \u00b6 In AWS we use a Network load balancer (NLB) to expose the NGINX Ingress controller behind a Service of Type=LoadBalancer . Info The provided templates illustrate the setup for legacy in-tree service load balancer for AWS NLB. AWS provides the documentation on how to use Network load balancing on Amazon EKS with AWS Load Balancer Controller . Network Load Balancer (NLB) \u00b6 kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/aws/deploy.yaml TLS termination in AWS Load Balancer (NLB) \u00b6 By default, TLS is terminated in the ingress controller. But it is also possible to terminate TLS in the Load Balancer. This section explains how to do that on AWS using an NLB. Download the deploy-tls-termination.yaml template: wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/aws/deploy-tls-termination.yaml Edit the file and change the VPC CIDR in use for the Kubernetes cluster: proxy-real-ip-cidr: XXX.XXX.XXX/XX Change the AWS Certificate Manager (ACM) ID as well: arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX Deploy the manifest: kubectl apply -f deploy-tls-termination.yaml NLB Idle Timeouts \u00b6 The idle timeout value for TCP flows is 350 seconds and cannot be modified . For this reason, you need to ensure the keepalive_timeout value is configured to less than 350 seconds to work as expected. By default NGINX keepalive_timeout is set to 75s . More information about timeouts can be found in the official AWS documentation. GCE-GKE \u00b6 First, your user needs to have cluster-admin permissions on the cluster. This can be done with the following command: kubectl create clusterrolebinding cluster-admin-binding \\ --clusterrole cluster-admin \\ --user $(gcloud config get-value account) Then, the ingress controller can be installed like this: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml Warning For private clusters, you will need to either add an additional firewall rule that allows master nodes access to port 8443/tcp on worker nodes, or change the existing rule that allows access to ports 80/tcp , 443/tcp and 10254/tcp to also allow access to port 8443/tcp . See the GKE documentation on adding rules and the Kubernetes issue for more detail. Warning Proxy protocol is not supported in GCE/GKE. Azure \u00b6 kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml More information about Azure annotations for the ingress controller can be found in the official AKS documentation .
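As a hedged illustration of how such provider-specific annotations are typically attached to the controller Service when installing with Helm (the internal-load-balancer annotation shown is a common Azure example and may not apply to your setup), a values snippet might look like this:
controller:
  service:
    externalTrafficPolicy: Local    # optional, see the Cloud deployments note above
    annotations:
      # illustrative Azure annotation: place the controller behind an internal load balancer
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
Pass such a snippet to helm install or helm upgrade with -f, or express the same settings with --set flags as shown earlier.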
Digital Ocean \u00b6 kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/do/deploy.yaml Scaleway \u00b6 kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/scw/deploy.yaml Exoscale \u00b6 kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/exoscale/deploy.yaml The full list of annotations supported by Exoscale is available in the Exoscale Cloud Controller Manager documentation . Oracle Cloud Infrastructure \u00b6 kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml A complete list of available annotations for Oracle Cloud Infrastructure can be found in the OCI Cloud Controller Manager documentation. Bare metal clusters \u00b6 This section is applicable to Kubernetes clusters deployed on bare metal servers, as well as \"raw\" VMs where Kubernetes was installed manually, using generic Linux distros (like CentOS, Ubuntu...) For quick testing, you can use a NodePort . This should work on almost every cluster, but it will typically use a port in the range 30000-32767. kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/baremetal/deploy.yaml For more information about bare metal deployments (and how to use port 80 instead of a random port in the 30000-32767 range), see bare-metal considerations . Miscellaneous \u00b6 Checking ingress controller version \u00b6 Run /nginx-ingress-controller --version within the pod, for instance with kubectl exec : POD_NAMESPACE=ingress-nginx POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx --field-selector=status.phase=Running -o name) kubectl exec $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version Scope \u00b6 By default, the controller watches Ingress objects from all namespaces. If you want to change this behavior, use the flag --watch-namespace or check the Helm chart value controller.scope to limit the controller to a single namespace. See also \u201cHow to easily install multiple instances of the Ingress NGINX controller in the same cluster\u201d for more details. Webhook network access \u00b6 Warning The controller uses an admission webhook to validate Ingress definitions. Make sure that you don't have Network policies or additional firewalls preventing connections from the API server to the ingress-nginx-controller-admission service. Certificate generation \u00b6 Attention The first time the ingress controller starts, two Jobs create the SSL Certificate used by the admission webhook. This can cause an initial delay of up to two minutes until it is possible to create and validate Ingress definitions. You can wait until it is ready before running the next command: kubectl wait --namespace ingress-nginx \\ --for=condition=ready pod \\ --selector=app.kubernetes.io/component=controller \\ --timeout=120s Running on Kubernetes versions older than 1.19 \u00b6 Ingress resources evolved over time. They started with apiVersion: extensions/v1beta1 , then moved to apiVersion: networking.k8s.io/v1beta1 and more recently to apiVersion: networking.k8s.io/v1 .
Here is how these Ingress versions are supported in Kubernetes: - before Kubernetes 1.19, only v1beta1 Ingress resources are supported - from Kubernetes 1.19 to 1.21, both v1beta1 and v1 Ingress resources are supported - in Kubernetes 1.22 and above, only v1 Ingress resources are supported And here is how these Ingress versions are supported in NGINX Ingress Controller: - before version 1.0, only v1beta1 Ingress resources are supported - in version 1.0 and above, only v1 Ingress resources are supported As a result, if you're running Kubernetes 1.19 or later, you should be able to use the latest version of the NGINX Ingress Controller; but if you're using an old version of Kubernetes (1.18 or earlier) you will have to use version 0.X of the NGINX Ingress Controller (e.g. version 0.49). The Helm chart of the NGINX Ingress Controller switched to version 1 in version 4 of the chart. In other words, if you're running Kubernetes 1.19 or earlier, you should use version 3.X of the chart (this can be done by adding --version='<4' to the helm install command).","title":"Installation Guide"},{"location":"deploy/#installation-guide","text":"There are multiple ways to install the NGINX ingress controller: with Helm , using the project repository chart; with kubectl apply , using YAML manifests; with specific addons (e.g. for minikube or MicroK8s ). On most Kubernetes clusters, the ingress controller will work without requiring any extra configuration. If you want to get started as fast as possible, you can check the quick start instructions. However, in many environments, you can improve the performance or get better logs by enabling extra features. We recommend that you check the environment-specific instructions for details about optimizing the ingress controller for your particular environment or cloud provider.","title":"Installation Guide"},{"location":"deploy/#contents","text":"Quick start Environment-specific instructions ... Docker Desktop ... minikube ... MicroK8s ... AWS ... GCE - GKE ... Azure ... Digital Ocean ... Scaleway ... Exoscale ... Oracle Cloud Infrastructure ... Bare-metal Miscellaneous","title":"Contents"},{"location":"deploy/#quick-start","text":"If you have Helm, you can deploy the ingress controller with the following command: helm upgrade --install ingress-nginx ingress-nginx \\ --repo https://kubernetes.github.io/ingress-nginx \\ --namespace ingress-nginx --create-namespace It will install the controller in the ingress-nginx namespace, creating that namespace if it doesn't already exist. Info This command is idempotent : if the ingress controller is not installed, it will install it, if the ingress controller is already installed, it will upgrade it. If you don't have Helm or if you prefer to use a YAML manifest, you can run the following command instead: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml Info The YAML manifest in the command above was generated with helm template , so you will end up with almost the same resources as if you had used Helm to install the controller. If you are running an old version of Kubernetes (1.18 or earlier), please read this paragraph for specific instructions.","title":"Quick start"},{"location":"deploy/#pre-flight-check","text":"A few pods should start in the ingress-nginx namespace: kubectl get pods --namespace=ingress-nginx After a while, they should all be running.
The following command will wait for the ingress controller pod to be up, running, and ready: kubectl wait --namespace ingress-nginx \\ --for=condition=ready pod \\ --selector=app.kubernetes.io/component=controller \\ --timeout=120s","title":"Pre-flight check"},{"location":"deploy/#local-testing","text":"Let's create a simple web server and the associated service: kubectl create deployment demo --image=httpd --port=80 kubectl expose deployment demo Then create an ingress resource. The following example uses an host that maps to localhost : kubectl create ingress demo-localhost --class=nginx \\ --rule=demo.localdev.me/*=demo:80 Now, forward a local port to the ingress controller: kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80 At this point, if you access http://demo.localdev.me:8080/, you should see an HTML page telling you \"It works!\".","title":"Local testing"},{"location":"deploy/#online-testing","text":"If your Kubernetes cluster is a \"real\" cluster that supports services of type LoadBalancer , it will have allocated an external IP address or FQDN to the ingress controller. You can see that IP address or FQDN with the following command: kubectl get service ingress-nginx-controller --namespace=ingress-nginx It will be the EXTERNAL-IP field. If that field shows , this means that your Kubernetes cluster wasn't able to provision the load balancer (generally, this is because it doesn't support services of type LoadBalancer ). Once you have the external IP address (or FQDN), set up a DNS record pointing to it. Then you can create an ingress resource. The following example assumes that you have set up a DNS record for www.demo.io : kubectl create ingress demo --class=nginx \\ --rule=\"www.demo.io/*=demo:80\" Alternatively, the above command can be rewritten as follows for the --rule command and below. kubectl create ingress demo --class=nginx \\ --rule www.demo.io/=demo:80 You should then be able to see the \"It works!\" page when you connect to http://www.demo.io/. Congratulations, you are serving a public web site hosted on a Kubernetes cluster! \ud83c\udf89","title":"Online testing"},{"location":"deploy/#environment-specific-instructions","text":"","title":"Environment-specific instructions"},{"location":"deploy/#local-development-clusters","text":"","title":"Local development clusters"},{"location":"deploy/#minikube","text":"The ingress controller can be installed through minikube's addons system: minikube addons enable ingress","title":"minikube"},{"location":"deploy/#microk8s","text":"The ingress controller can be installed through MicroK8s's addons system: microk8s enable ingress Please check the MicroK8s documentation page for details.","title":"MicroK8s"},{"location":"deploy/#docker-desktop","text":"Kubernetes is available in Docker Desktop: Mac, from version 18.06.0-ce Windows, from version 18.06.0-ce First, make sure that Kubernetes is enabled in the Docker settings. The command kubectl get nodes should show a single node called docker-destkop . The ingress controller can be installed on Docker Desktop using the default quick start instructions. On most systems, if you don't have any other service of type LoadBalancer bound to port 80, the ingress controller will be assigned the EXTERNAL-IP of localhost , which means that it will be reachable on localhost:80. 
If that doesn't work, you might have to fall back to the kubectl port-forward method described in the local testing section .","title":"Docker Desktop"},{"location":"deploy/#cloud-deployments","text":"If the load balancers of your cloud provider do active healthchecks on their backends (most do), you can change the externalTrafficPolicy of the ingress controller Service to Local (instead of the default Cluster ) to save an extra hop in some cases. If you're installing with Helm, this can be done by adding --set controller.service.externalTrafficPolicy=Local to the helm install or helm upgrade command. Furthermore, if the load balancers of your cloud provider support the PROXY protocol, you can enable it, and it will let the ingress controller see the real IP address of the clients. Otherwise, it will generally see the IP address of the upstream load balancer. This must be done both in the ingress controller (with e.g. --set controller.config.use-proxy-protocol=true ) and in the cloud provider's load balancer configuration to function correctly. In the following sections, we provide YAML manifests that enable these options when possible, using the specific options of various cloud providers.","title":"Cloud deployments"},{"location":"deploy/#aws","text":"In AWS we use a Network load balancer (NLB) to expose the NGINX Ingress controller behind a Service of Type=LoadBalancer . Info The provided templates illustrate the setup for legacy in-tree service load balancer for AWS NLB. AWS provides the documentation on how to use Network load balancing on Amazon EKS with AWS Load Balancer Controller .","title":"AWS"},{"location":"deploy/#network-load-balancer-nlb","text":"kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/aws/deploy.yaml","title":"Network Load Balancer (NLB)"},{"location":"deploy/#tls-termination-in-aws-load-balancer-nlb","text":"By default, TLS is terminated in the ingress controller. But it is also possible to terminate TLS in the Load Balancer. This section explains how to do that on AWS with using an NLB. Download the the deploy-tls-termination.yaml template: wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/aws/deploy-tls-termination.yaml Edit the file and change the VPC CIDR in use for the Kubernetes cluster: proxy-real-ip-cidr: XXX.XXX.XXX/XX Change the AWS Certificate Manager (ACM) ID as well: arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX Deploy the manifest: kubectl apply -f deploy-tls-termination.yaml","title":"TLS termination in AWS Load Balancer (NLB)"},{"location":"deploy/#nlb-idle-timeouts","text":"Idle timeout value for TCP flows is 350 seconds and cannot be modified . For this reason, you need to ensure the keepalive_timeout value is configured less than 350 seconds to work as expected. By default NGINX keepalive_timeout is set to 75s . More information with regards to timeouts can be found in the official AWS documentation","title":"NLB Idle Timeouts"},{"location":"deploy/#gce-gke","text":"First, your user needs to have cluster-admin permissions on the cluster. 
This can be done with the following command: kubectl create clusterrolebinding cluster-admin-binding \\ --clusterrole cluster-admin \\ --user $(gcloud config get-value account) Then, the ingress controller can be installed like this: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml Warning For private clusters, you will need to either add an additional firewall rule that allows master nodes access to port 8443/tcp on worker nodes, or change the existing rule that allows access to ports 80/tcp , 443/tcp and 10254/tcp to also allow access to port 8443/tcp . See the GKE documentation on adding rules and the Kubernetes issue for more detail. Warning Proxy protocol is not supported in GCE/GKE.","title":"GCE-GKE"},{"location":"deploy/#azure","text":"kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml More information with regards to Azure annotations for ingress controller can be found in the official AKS documentation .","title":"Azure"},{"location":"deploy/#digital-ocean","text":"kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/do/deploy.yaml","title":"Digital Ocean"},{"location":"deploy/#scaleway","text":"kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/scw/deploy.yaml","title":"Scaleway"},{"location":"deploy/#exoscale","text":"kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/exoscale/deploy.yaml The full list of annotations supported by Exoscale is available in the Exoscale Cloud Controller Manager documentation .","title":"Exoscale"},{"location":"deploy/#oracle-cloud-infrastructure","text":"kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml A complete list of available annotations for Oracle Cloud Infrastructure can be found in the OCI Cloud Controller Manager documentation.","title":"Oracle Cloud Infrastructure"},{"location":"deploy/#bare-metal-clusters","text":"This section is applicable to Kubernetes clusters deployed on bare metal servers, as well as \"raw\" VMs where Kubernetes was installed manually, using generic Linux distros (like CentOS, Ubuntu...) For quick testing, you can use a NodePort . This should work on almost every cluster, but it will typically use a port in the range 30000-32767. kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/baremetal/deploy.yaml For more information about bare metal deployments (and how to use port 80 instead of a random port in the 30000-32767 range), see bare-metal considerations .","title":"Bare metal clusters"},{"location":"deploy/#miscellaneous","text":"","title":"Miscellaneous"},{"location":"deploy/#checking-ingress-controller-version","text":"Run /nginx-ingress-controller --version within the pod, for instance with kubectl exec : POD_NAMESPACE=ingress-nginx POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx --field-selector=status.phase=Running -o name) kubectl exec $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version","title":"Checking ingress controller version"},{"location":"deploy/#scope","text":"By default, the controller watches Ingress objects from all namespaces. 
If you want to change this behavior, use the flag --watch-namespace or check the Helm chart value controller.scope to limit the controller to a single namespace. See also \u201cHow to easily install multiple instances of the Ingress NGINX controller in the same cluster\u201d for more details.","title":"Scope"},{"location":"deploy/#webhook-network-access","text":"Warning The controller uses an admission webhook to validate Ingress definitions. Make sure that you don't have Network policies or additional firewalls preventing connections from the API server to the ingress-nginx-controller-admission service.","title":"Webhook network access"},{"location":"deploy/#certificate-generation","text":"Attention The first time the ingress controller starts, two Jobs create the SSL Certificate used by the admission webhook. This can cause an initial delay of up to two minutes until it is possible to create and validate Ingress definitions. You can wait until it is ready before running the next command: kubectl wait --namespace ingress-nginx \\ --for=condition=ready pod \\ --selector=app.kubernetes.io/component=controller \\ --timeout=120s","title":"Certificate generation"},{"location":"deploy/#running-on-kubernetes-versions-older-than-119","text":"Ingress resources evolved over time. They started with apiVersion: extensions/v1beta1 , then moved to apiVersion: networking.k8s.io/v1beta1 and more recently to apiVersion: networking.k8s.io/v1 . Here is how these Ingress versions are supported in Kubernetes: - before Kubernetes 1.19, only v1beta1 Ingress resources are supported - from Kubernetes 1.19 to 1.21, both v1beta1 and v1 Ingress resources are supported - in Kubernetes 1.22 and above, only v1 Ingress resources are supported And here is how these Ingress versions are supported in NGINX Ingress Controller: - before version 1.0, only v1beta1 Ingress resources are supported - in version 1.0 and above, only v1 Ingress resources are supported As a result, if you're running Kubernetes 1.19 or later, you should be able to use the latest version of the NGINX Ingress Controller; but if you're using an old version of Kubernetes (1.18 or earlier) you will have to use version 0.X of the NGINX Ingress Controller (e.g. version 0.49). The Helm chart of the NGINX Ingress Controller switched to version 1 in version 4 of the chart. In other words, if you're running Kubernetes 1.19 or earlier, you should use version 3.X of the chart (this can be done by adding --version='<4' to the helm install command).","title":"Running on Kubernetes versions older than 1.19"},{"location":"deploy/baremetal/","text":"Bare-metal considerations \u00b6 In traditional cloud environments, where network load balancers are available on-demand, a single Kubernetes manifest suffices to provide a single point of contact to the NGINX Ingress controller to external clients and, indirectly, to any application running inside the cluster. Bare-metal environments lack this commodity, requiring a slightly different setup to offer the same kind of access to external consumers. The rest of this document describes a few recommended approaches to deploying the NGINX Ingress controller inside a Kubernetes cluster running on bare-metal. A pure software solution: MetalLB \u00b6 MetalLB provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster.
This section demonstrates how to use the Layer 2 configuration mode of MetalLB together with the NGINX Ingress controller in a Kubernetes cluster that has publicly accessible nodes . In this mode, one node attracts all the traffic for the ingress-nginx Service IP. See Traffic policies for more details. Note The description of other supported configuration modes is off-scope for this document. Warning MetalLB is currently in beta . Read about the Project maturity and make sure you inform yourself by reading the official documentation thoroughly. MetalLB can be deployed either with a simple Kubernetes manifest or with Helm. The rest of this example assumes MetalLB was deployed following the Installation instructions. MetalLB requires a pool of IP addresses in order to be able to take ownership of the ingress-nginx Service. This pool can be defined in a ConfigMap named config located in the same namespace as the MetalLB controller. This pool of IPs must be dedicated to MetalLB's use, you can't reuse the Kubernetes node IPs or IPs handed out by a DHCP server. Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 After creating the following ConfigMap, MetalLB takes ownership of one of the IP addresses in the pool and updates the loadBalancer IP field of the ingress-nginx Service accordingly. apiVersion : v1 kind : ConfigMap metadata : namespace : metallb-system name : config data : config : | address-pools: - name: default protocol: layer2 addresses: - 203.0.113.10-203.0.113.15 $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) default-http-backend ClusterIP 10.0.64.249 80/TCP ingress-nginx LoadBalancer 10.0.220.217 203.0.113.10 80:30100/TCP,443:30101/TCP As soon as MetalLB sets the external IP address of the ingress-nginx LoadBalancer Service, the corresponding entries are created in the iptables NAT table and the node with the selected IP address starts responding to HTTP requests on the ports configured in the LoadBalancer Service: $ curl -D- http://203.0.113.10 -H 'Host: myapp.example.com' HTTP/1.1 200 OK Server: nginx/1.15.2 Tip In order to preserve the source IP address in HTTP requests sent to NGINX, it is necessary to use the Local traffic policy. Traffic policies are described in more details in Traffic policies as well as in the next section. Over a NodePort Service \u00b6 Due to its simplicity, this is the setup a user will deploy by default when following the steps described in the installation guide . Info A Service of type NodePort exposes, via the kube-proxy component, the same unprivileged port (default: 30000-32767) on every Kubernetes node, masters included. For more information, see Services . In this configuration, the NGINX container remains isolated from the host network. As a result, it can safely bind to any port, including the standard HTTP ports 80 and 443. However, due to the container namespace isolation, a client located outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports 80 and 443. Instead, the external client must append the NodePort allocated to the ingress-nginx Service to HTTP requests. 
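For orientation, here is a trimmed sketch of what such a NodePort Service might look like (the selector labels and the fixed nodePort values are assumptions for illustration; the standard deploy manifests create an equivalent Service for you, usually with auto-allocated node ports):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/component: controller
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30100   # matches the example that follows
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30101
The example below uses exactly these node ports.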
Example Given the NodePort 30100 allocated to the ingress-nginx Service $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP PORT(S) default-http-backend ClusterIP 10.0.64.249 80/TCP ingress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP and a Kubernetes node with the public IP address 203.0.113.2 (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 a client would reach an Ingress with host: myapp.example.com at http://myapp.example.com:30100 , where the myapp.example.com subdomain resolves to the 203.0.113.2 IP address. Impact on the host system While it may sound tempting to reconfigure the NodePort range using the --service-node-port-range API server flag to include unprivileged ports and be able to expose ports 80 and 443, doing so may result in unexpected issues including (but not limited to) the use of ports otherwise reserved to system daemons and the necessity to grant kube-proxy privileges it may otherwise not require. This practice is therefore discouraged . See the other approaches proposed in this page for alternatives. This approach has a few other limitations one ought to be aware of: Source IP address Services of type NodePort perform source address translation by default. This means the source IP of a HTTP request is always the IP address of the Kubernetes node that received the request from the perspective of NGINX. The recommended way to preserve the source IP in a NodePort setup is to set the value of the externalTrafficPolicy field of the ingress-nginx Service spec to Local ( example ). Warning This setting effectively drops packets sent to Kubernetes nodes which are not running any instance of the NGINX Ingress controller. Consider assigning NGINX Pods to specific nodes in order to control on what nodes the NGINX Ingress controller should be scheduled or not scheduled. Example In a Kubernetes cluster composed of 3 nodes (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 with a ingress-nginx-controller Deployment composed of 2 replicas $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 ingress-nginx-controller-cf9ff8c96-8vvf8 1/1 Running 172.17.0.3 host-3 ingress-nginx-controller-cf9ff8c96-pxsds 1/1 Running 172.17.1.4 host-2 Requests sent to host-2 and host-3 would be forwarded to NGINX and original client's IP would be preserved, while requests to host-1 would get dropped because there is no NGINX replica running on that node. Ingress status Because NodePort Services do not get a LoadBalancerIP assigned by definition, the NGINX Ingress controller does not update the status of Ingress objects it manages . $ kubectl get ingress NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 80 Despite the fact there is no load balancer providing a public IP address to the NGINX Ingress controller, it is possible to force the status update of all managed Ingress objects by setting the externalIPs field of the ingress-nginx Service. Warning There is more to setting externalIPs than just enabling the NGINX Ingress controller to update the status of Ingress objects. 
Please read about this option in the Services page of official Kubernetes documentation as well as the section about External IPs in this document for more information. Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 one could edit the ingress-nginx Service and add the following field to the object spec spec : externalIPs : - 203.0.113.1 - 203.0.113.2 - 203.0.113.3 which would in turn be reflected on Ingress objects as follows: $ kubectl get ingress -o wide NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 203.0.113.1,203.0.113.2,203.0.113.3 80 Redirects As NGINX is not aware of the port translation operated by the NodePort Service , backend applications are responsible for generating redirect URLs that take into account the URL used by external clients, including the NodePort. Example Redirects generated by NGINX, for instance HTTP to HTTPS or domain to www.domain , are generated without NodePort: $ curl -D- http://myapp.example.com:30100 ` HTTP/1.1 308 Permanent Redirect Server: nginx/1.15.2 Location: https://myapp.example.com/ #-> missing NodePort in HTTPS redirect Via the host network \u00b6 In a setup where there is no external load balancer available but using NodePorts is not an option, one can configure ingress-nginx Pods to use the network of the host they run on instead of a dedicated network namespace. The benefit of this approach is that the NGINX Ingress controller can bind ports 80 and 443 directly to Kubernetes nodes' network interfaces, without the extra network translation imposed by NodePort Services. Note This approach does not leverage any Service object to expose the NGINX Ingress controller. If the ingress-nginx Service exists in the target cluster, it is recommended to delete it . This can be achieved by enabling the hostNetwork option in the Pods' spec. template : spec : hostNetwork : true Security considerations Enabling this option exposes every system daemon to the NGINX Ingress controller on any network interface, including the host's loopback. Please evaluate the impact this may have on the security of your system carefully. Example Consider this ingress-nginx-controller Deployment composed of 2 replicas, NGINX Pods inherit from the IP address of their host instead of an internal Pod IP. $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 ingress-nginx-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3 ingress-nginx-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2 One major limitation of this deployment approach is that only a single NGINX Ingress controller Pod may be scheduled on each cluster node, because binding the same port multiple times on the same network interface is technically impossible. Pods that are unschedulable due to such situation fail with the following event: $ kubectl -n ingress-nginx describe pod ... Events: Type Reason From Message ---- ------ ---- ------- Warning FailedScheduling default-scheduler 0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports. One way to ensure only schedulable Pods are created is to deploy the NGINX Ingress controller as a DaemonSet instead of a traditional Deployment. 
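Purely as a hedged sketch of that idea (labels, service account, image tag and flags are assumptions copied from a typical controller Deployment, not an official manifest), the skeleton of such a DaemonSet could look like:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/component: controller
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      serviceAccountName: ingress-nginx
      hostNetwork: true                    # bind ports 80/443 directly on each node
      dnsPolicy: ClusterFirstWithHostNet   # see the DNS resolution note below
      containers:
        - name: controller
          image: k8s.gcr.io/ingress-nginx/controller:v1.1.1   # assumed tag; pin your own
          args:
            - /nginx-ingress-controller
            # ... remaining flags identical to the Deployment's args ...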
Info A DaemonSet schedules exactly one type of Pod per cluster node, masters included, unless a node is configured to repel those Pods . For more information, see DaemonSet . Because most properties of DaemonSet objects are identical to Deployment objects, this documentation page leaves the configuration of the corresponding manifest at the user's discretion. Like with NodePorts, this approach has a few quirks it is important to be aware of. DNS resolution Pods configured with hostNetwork: true do not use the internal DNS resolver (i.e. kube-dns or CoreDNS ), unless their dnsPolicy spec field is set to ClusterFirstWithHostNet . Consider using this setting if NGINX is expected to resolve internal names for any reason. Ingress status Because there is no Service exposing the NGINX Ingress controller in a configuration using the host network, the default --publish-service flag used in standard cloud setups does not apply and the status of all Ingress objects remains blank. $ kubectl get ingress NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 80 Instead, and because bare-metal nodes usually don't have an ExternalIP, one has to enable the --report-node-internal-ip-address flag, which sets the status of all Ingress objects to the internal IP address of all nodes running the NGINX Ingress controller. Example Given a ingress-nginx-controller DaemonSet composed of 2 replicas $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 ingress-nginx-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3 ingress-nginx-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2 the controller sets the status of all Ingress objects it manages to the following value: $ kubectl get ingress -o wide NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 203.0.113.2,203.0.113.3 80 Note Alternatively, it is possible to override the address written to Ingress objects using the --publish-status-address flag. See Command line arguments . Using a self-provisioned edge \u00b6 Similarly to cloud environments, this deployment approach requires an edge network component providing a public entrypoint to the Kubernetes cluster. This edge component can be either hardware (e.g. vendor appliance) or software (e.g. HAproxy ) and is usually managed outside of the Kubernetes landscape by operations teams. Such deployment builds upon the NodePort Service described above in Over a NodePort Service , with one significant difference: external clients do not access cluster nodes directly, only the edge component does. This is particularly suitable for private Kubernetes clusters where none of the nodes has a public IP address. On the edge side, the only prerequisite is to dedicate a public IP address that forwards all HTTP traffic to Kubernetes nodes and/or masters. Incoming traffic on TCP ports 80 and 443 is forwarded to the corresponding HTTP and HTTPS NodePort on the target nodes as shown in the diagram below: External IPs \u00b6 Source IP address This method does not allow preserving the source IP of HTTP requests in any manner, it is therefore not recommended to use it despite its apparent simplicity. The externalIPs Service option was previously mentioned in the NodePort section. As per the Services page of the official Kubernetes documentation, the externalIPs option causes kube-proxy to route traffic sent to arbitrary IP addresses and on the Service ports to the endpoints of that Service. 
These IP addresses must belong to the target node . Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 and the following ingress-nginx NodePort Service $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP PORT(S) ingress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP One could set the following external IPs in the Service spec, and NGINX would become available on both the NodePort and the Service port: spec : externalIPs : - 203.0.113.2 - 203.0.113.3 $ curl -D- http://myapp.example.com:30100 HTTP/1.1 200 OK Server: nginx/1.15.2 $ curl -D- http://myapp.example.com HTTP/1.1 200 OK Server: nginx/1.15.2 We assume the myapp.example.com subdomain above resolves to both 203.0.113.2 and 203.0.113.3 IP addresses.","title":"Bare-metal considerations"},{"location":"deploy/baremetal/#bare-metal-considerations","text":"In traditional cloud environments, where network load balancers are available on-demand, a single Kubernetes manifest suffices to provide a single point of contact to the NGINX Ingress controller to external clients and, indirectly, to any application running inside the cluster. Bare-metal environments lack this commodity, requiring a slightly different setup to offer the same kind of access to external consumers. The rest of this document describes a few recommended approaches to deploying the NGINX Ingress controller inside a Kubernetes cluster running on bare-metal.","title":"Bare-metal considerations"},{"location":"deploy/baremetal/#a-pure-software-solution-metallb","text":"MetalLB provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster. This section demonstrates how to use the Layer 2 configuration mode of MetalLB together with the NGINX Ingress controller in a Kubernetes cluster that has publicly accessible nodes . In this mode, one node attracts all the traffic for the ingress-nginx Service IP. See Traffic policies for more details. Note The description of other supported configuration modes is off-scope for this document. Warning MetalLB is currently in beta . Read about the Project maturity and make sure you inform yourself by reading the official documentation thoroughly. MetalLB can be deployed either with a simple Kubernetes manifest or with Helm. The rest of this example assumes MetalLB was deployed following the Installation instructions. MetalLB requires a pool of IP addresses in order to be able to take ownership of the ingress-nginx Service. This pool can be defined in a ConfigMap named config located in the same namespace as the MetalLB controller. This pool of IPs must be dedicated to MetalLB's use, you can't reuse the Kubernetes node IPs or IPs handed out by a DHCP server. Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 After creating the following ConfigMap, MetalLB takes ownership of one of the IP addresses in the pool and updates the loadBalancer IP field of the ingress-nginx Service accordingly. 
apiVersion : v1 kind : ConfigMap metadata : namespace : metallb-system name : config data : config : | address-pools: - name: default protocol: layer2 addresses: - 203.0.113.10-203.0.113.15 $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) default-http-backend ClusterIP 10.0.64.249 80/TCP ingress-nginx LoadBalancer 10.0.220.217 203.0.113.10 80:30100/TCP,443:30101/TCP As soon as MetalLB sets the external IP address of the ingress-nginx LoadBalancer Service, the corresponding entries are created in the iptables NAT table and the node with the selected IP address starts responding to HTTP requests on the ports configured in the LoadBalancer Service: $ curl -D- http://203.0.113.10 -H 'Host: myapp.example.com' HTTP/1.1 200 OK Server: nginx/1.15.2 Tip In order to preserve the source IP address in HTTP requests sent to NGINX, it is necessary to use the Local traffic policy. Traffic policies are described in more details in Traffic policies as well as in the next section.","title":"A pure software solution: MetalLB"},{"location":"deploy/baremetal/#over-a-nodeport-service","text":"Due to its simplicity, this is the setup a user will deploy by default when following the steps described in the installation guide . Info A Service of type NodePort exposes, via the kube-proxy component, the same unprivileged port (default: 30000-32767) on every Kubernetes node, masters included. For more information, see Services . In this configuration, the NGINX container remains isolated from the host network. As a result, it can safely bind to any port, including the standard HTTP ports 80 and 443. However, due to the container namespace isolation, a client located outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports 80 and 443. Instead, the external client must append the NodePort allocated to the ingress-nginx Service to HTTP requests. Example Given the NodePort 30100 allocated to the ingress-nginx Service $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP PORT(S) default-http-backend ClusterIP 10.0.64.249 80/TCP ingress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP and a Kubernetes node with the public IP address 203.0.113.2 (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 a client would reach an Ingress with host: myapp.example.com at http://myapp.example.com:30100 , where the myapp.example.com subdomain resolves to the 203.0.113.2 IP address. Impact on the host system While it may sound tempting to reconfigure the NodePort range using the --service-node-port-range API server flag to include unprivileged ports and be able to expose ports 80 and 443, doing so may result in unexpected issues including (but not limited to) the use of ports otherwise reserved to system daemons and the necessity to grant kube-proxy privileges it may otherwise not require. This practice is therefore discouraged . See the other approaches proposed in this page for alternatives. This approach has a few other limitations one ought to be aware of: Source IP address Services of type NodePort perform source address translation by default. This means the source IP of a HTTP request is always the IP address of the Kubernetes node that received the request from the perspective of NGINX. 
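The remedy, introduced in the next paragraph, is a single field on the Service. As a rough sketch (the Service name, namespace, selector label and NodePort values mirror the examples on this page and may differ in your cluster):

```yaml
# Sketch of the relevant part of the ingress-nginx Service; externalTrafficPolicy
# is the only field of interest here.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  externalTrafficPolicy: Local  # keep the client source IP; traffic to nodes
                                # without a controller Pod is dropped
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      nodePort: 30100
    - name: https
      port: 443
      nodePort: 30101
```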
The recommended way to preserve the source IP in a NodePort setup is to set the value of the externalTrafficPolicy field of the ingress-nginx Service spec to Local ( example ). Warning This setting effectively drops packets sent to Kubernetes nodes which are not running any instance of the NGINX Ingress controller. Consider assigning NGINX Pods to specific nodes in order to control on what nodes the NGINX Ingress controller should be scheduled or not scheduled. Example In a Kubernetes cluster composed of 3 nodes (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 with a ingress-nginx-controller Deployment composed of 2 replicas $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 ingress-nginx-controller-cf9ff8c96-8vvf8 1/1 Running 172.17.0.3 host-3 ingress-nginx-controller-cf9ff8c96-pxsds 1/1 Running 172.17.1.4 host-2 Requests sent to host-2 and host-3 would be forwarded to NGINX and original client's IP would be preserved, while requests to host-1 would get dropped because there is no NGINX replica running on that node. Ingress status Because NodePort Services do not get a LoadBalancerIP assigned by definition, the NGINX Ingress controller does not update the status of Ingress objects it manages . $ kubectl get ingress NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 80 Despite the fact there is no load balancer providing a public IP address to the NGINX Ingress controller, it is possible to force the status update of all managed Ingress objects by setting the externalIPs field of the ingress-nginx Service. Warning There is more to setting externalIPs than just enabling the NGINX Ingress controller to update the status of Ingress objects. Please read about this option in the Services page of official Kubernetes documentation as well as the section about External IPs in this document for more information. Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 one could edit the ingress-nginx Service and add the following field to the object spec spec : externalIPs : - 203.0.113.1 - 203.0.113.2 - 203.0.113.3 which would in turn be reflected on Ingress objects as follows: $ kubectl get ingress -o wide NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 203.0.113.1,203.0.113.2,203.0.113.3 80 Redirects As NGINX is not aware of the port translation operated by the NodePort Service , backend applications are responsible for generating redirect URLs that take into account the URL used by external clients, including the NodePort. 
Example Redirects generated by NGINX, for instance HTTP to HTTPS or domain to www.domain , are generated without NodePort: $ curl -D- http://myapp.example.com:30100 ` HTTP/1.1 308 Permanent Redirect Server: nginx/1.15.2 Location: https://myapp.example.com/ #-> missing NodePort in HTTPS redirect","title":"Over a NodePort Service"},{"location":"deploy/baremetal/#via-the-host-network","text":"In a setup where there is no external load balancer available but using NodePorts is not an option, one can configure ingress-nginx Pods to use the network of the host they run on instead of a dedicated network namespace. The benefit of this approach is that the NGINX Ingress controller can bind ports 80 and 443 directly to Kubernetes nodes' network interfaces, without the extra network translation imposed by NodePort Services. Note This approach does not leverage any Service object to expose the NGINX Ingress controller. If the ingress-nginx Service exists in the target cluster, it is recommended to delete it . This can be achieved by enabling the hostNetwork option in the Pods' spec. template : spec : hostNetwork : true Security considerations Enabling this option exposes every system daemon to the NGINX Ingress controller on any network interface, including the host's loopback. Please evaluate the impact this may have on the security of your system carefully. Example Consider this ingress-nginx-controller Deployment composed of 2 replicas, NGINX Pods inherit from the IP address of their host instead of an internal Pod IP. $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 ingress-nginx-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3 ingress-nginx-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2 One major limitation of this deployment approach is that only a single NGINX Ingress controller Pod may be scheduled on each cluster node, because binding the same port multiple times on the same network interface is technically impossible. Pods that are unschedulable due to such situation fail with the following event: $ kubectl -n ingress-nginx describe pod ... Events: Type Reason From Message ---- ------ ---- ------- Warning FailedScheduling default-scheduler 0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports. One way to ensure only schedulable Pods are created is to deploy the NGINX Ingress controller as a DaemonSet instead of a traditional Deployment. Info A DaemonSet schedules exactly one type of Pod per cluster node, masters included, unless a node is configured to repel those Pods . For more information, see DaemonSet . Because most properties of DaemonSet objects are identical to Deployment objects, this documentation page leaves the configuration of the corresponding manifest at the user's discretion. Like with NodePorts, this approach has a few quirks it is important to be aware of. DNS resolution Pods configured with hostNetwork: true do not use the internal DNS resolver (i.e. kube-dns or CoreDNS ), unless their dnsPolicy spec field is set to ClusterFirstWithHostNet . Consider using this setting if NGINX is expected to resolve internal names for any reason. Ingress status Because there is no Service exposing the NGINX Ingress controller in a configuration using the host network, the default --publish-service flag used in standard cloud setups does not apply and the status of all Ingress objects remains blank. 
$ kubectl get ingress NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 80 Instead, and because bare-metal nodes usually don't have an ExternalIP, one has to enable the --report-node-internal-ip-address flag, which sets the status of all Ingress objects to the internal IP address of all nodes running the NGINX Ingress controller. Example Given a ingress-nginx-controller DaemonSet composed of 2 replicas $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 ingress-nginx-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3 ingress-nginx-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2 the controller sets the status of all Ingress objects it manages to the following value: $ kubectl get ingress -o wide NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 203.0.113.2,203.0.113.3 80 Note Alternatively, it is possible to override the address written to Ingress objects using the --publish-status-address flag. See Command line arguments .","title":"Via the host network"},{"location":"deploy/baremetal/#using-a-self-provisioned-edge","text":"Similarly to cloud environments, this deployment approach requires an edge network component providing a public entrypoint to the Kubernetes cluster. This edge component can be either hardware (e.g. vendor appliance) or software (e.g. HAproxy ) and is usually managed outside of the Kubernetes landscape by operations teams. Such deployment builds upon the NodePort Service described above in Over a NodePort Service , with one significant difference: external clients do not access cluster nodes directly, only the edge component does. This is particularly suitable for private Kubernetes clusters where none of the nodes has a public IP address. On the edge side, the only prerequisite is to dedicate a public IP address that forwards all HTTP traffic to Kubernetes nodes and/or masters. Incoming traffic on TCP ports 80 and 443 is forwarded to the corresponding HTTP and HTTPS NodePort on the target nodes as shown in the diagram below:","title":"Using a self-provisioned edge"},{"location":"deploy/baremetal/#external-ips","text":"Source IP address This method does not allow preserving the source IP of HTTP requests in any manner, it is therefore not recommended to use it despite its apparent simplicity. The externalIPs Service option was previously mentioned in the NodePort section. As per the Services page of the official Kubernetes documentation, the externalIPs option causes kube-proxy to route traffic sent to arbitrary IP addresses and on the Service ports to the endpoints of that Service. These IP addresses must belong to the target node . 
Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 and the following ingress-nginx NodePort Service $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP PORT(S) ingress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP One could set the following external IPs in the Service spec, and NGINX would become available on both the NodePort and the Service port: spec : externalIPs : - 203.0.113.2 - 203.0.113.3 $ curl -D- http://myapp.example.com:30100 HTTP/1.1 200 OK Server: nginx/1.15.2 $ curl -D- http://myapp.example.com HTTP/1.1 200 OK Server: nginx/1.15.2 We assume the myapp.example.com subdomain above resolves to both 203.0.113.2 and 203.0.113.3 IP addresses.","title":"External IPs"},{"location":"deploy/hardening-guide/","text":"Hardening Guide \u00b6 Overview \u00b6 There are several ways to do hardening and securing of nginx. In this documentation two guides are used, the guides are overlapping in some points: nginx CIS Benchmark cipherlist.eu (one of many forks of the now dead project cipherli.st) This guide describes, what of the different configurations described in those guides is already implemented as default in the nginx implementation of kubernetes ingress, what needs to be configured, what is obsolete due to the fact that the nginx is running as container (the CIS benchmark relates to a non-containerized installation) and what is difficult or not possible. Be aware that this is only a guide and you are responsible for your own implementation. Some of the configurations may lead to have specific clients unable to reach your site or similar consequences. This guide refers to chapters in the CIS Benchmark. For full explanation you should refer to the benchmark document itself Configuration Guide \u00b6 Chapter in CIS benchmark Status Default Action to do if not default 1 Initial Setup 1.1 Installation 1.1.1 Ensure NGINX is installed (Scored) OK done through helm charts / following documentation to deploy nginx ingress 1.1.2 Ensure NGINX is installed from source (Not Scored) OK done through helm charts / following documentation to deploy nginx ingress 1.2 Configure Software Updates 1.2.1 Ensure package manager repositories are properly configured (Not Scored) OK done via helm, nginx version could be overwritten, however compatibility is not ensured then 1.2.2 Ensure the latest software package is installed (Not Scored) ACTION NEEDED done via helm, nginx version could be overwritten, however compatibility is not ensured then Plan for periodic updates 2 Basic Configuration 2.1 Minimize NGINX Modules 2.1.1 Ensure only required modules are installed (Not Scored) OK Already only needed modules are installed, however proposals for further reduction are welcome 2.1.2 Ensure HTTP WebDAV module is not installed (Scored) OK 2.1.3 Ensure modules with gzip functionality are disabled (Scored) OK 2.1.4 Ensure the autoindex module is disabled (Scored) OK No autoindex configs so far in ingress defaults 2.2 Account Security 2.2.1 Ensure that NGINX is run using a non-privileged, dedicated service account (Not Scored) OK Pod configured as user www-data: See this line in helm chart values . 
Compiled with user www-data: See this line in build script 2.2.2 Ensure the NGINX service account is locked (Scored) OK Docker design ensures this 2.2.3 Ensure the NGINX service account has an invalid shell (Scored) OK Shell is nologin: see this line in build script 2.3 Permissions and Ownership 2.3.1 Ensure NGINX directories and files are owned by root (Scored) OK Obsolete through docker-design and ingress controller needs to update the configs dynamically 2.3.2 Ensure access to NGINX directories and files is restricted (Scored) OK See previous answer 2.3.3 Ensure the NGINX process ID (PID) file is secured (Scored) OK No PID-File due to docker design 2.3.4 Ensure the core dump directory is secured (Not Scored) OK No working_directory configured by default 2.4 Network Configuration 2.4.1 Ensure NGINX only listens for network connections on authorized ports (Not Scored) OK Ensured by automatic nginx.conf configuration 2.4.2 Ensure requests for unknown host names are rejected (Not Scored) OK They are not rejected but send to the \"default backend\" delivering appropriate errors (mostly 404) 2.4.3 Ensure keepalive_timeout is 10 seconds or less, but not 0 (Scored) ACTION NEEDED Default is 75s configure keep-alive to 10 seconds according to this documentation 2.4.4 Ensure send_timeout is set to 10 seconds or less, but not 0 (Scored) RISK TO BE ACCEPTED Not configured, however the nginx default is 60s Not configurable 2.5 Information Disclosure 2.5.1 Ensure server_tokens directive is set to off (Scored) OK server_tokens is configured to off by default 2.5.2 Ensure default error and index.html pages do not reference NGINX (Scored) ACTION NEEDED 404 shows no version at all, 503 and 403 show \"nginx\", which is hardcoded see this line in nginx source code configure custom error pages at least for 403, 404 and 503 and 500 2.5.3 Ensure hidden file serving is disabled (Not Scored) ACTION NEEDED config not set configure a config.server-snippet Snippet, but beware of .well-known challenges or similar. Refer to the benchmark here please 2.5.4 Ensure the NGINX reverse proxy does not enable information disclosure (Scored) ACTION NEEDED hide not configured configure hide-headers with array of \"X-Powered-By\" and \"Server\": according to this documentation 3 Logging 3.1 Ensure detailed logging is enabled (Not Scored) OK nginx ingress has a very detailed log format by default 3.2 Ensure access logging is enabled (Scored) OK Access log is enabled by default 3.3 Ensure error logging is enabled and set to the info logging level (Scored) OK Error log is configured by default. The log level does not matter, because it is all sent to STDOUT anyway 3.4 Ensure log files are rotated (Scored) OBSOLETE Log file handling is not part of the nginx ingress and should be handled separately 3.5 Ensure error logs are sent to a remote syslog server (Not Scored) OBSOLETE See previous answer 3.6 Ensure access logs are sent to a remote syslog server (Not Scored) OBSOLETE See previous answer 3.7 Ensure proxies pass source IP information (Scored) OK Headers are set by default 4 Encryption 4.1 TLS / SSL Configuration 4.1.1 Ensure HTTP is redirected to HTTPS (Scored) OK Redirect to TLS is default 4.1.2 Ensure a trusted certificate and trust chain is installed (Not Scored) ACTION NEEDED For installing certs there are enough manuals in the web. 
A good way is to use lets encrypt through cert-manager Install proper certificates or use lets encrypt with cert-manager 4.1.3 Ensure private key permissions are restricted (Scored) ACTION NEEDED See previous answer 4.1.4 Ensure only modern TLS protocols are used (Scored) OK/ACTION NEEDED Default is TLS 1.2 + 1.3, while this is okay for CIS Benchmark, cipherlist.eu only recommends 1.3. This may cut off old OS's Set controller.config.ssl-protocols to \"TLSv1.3\" 4.1.5 Disable weak ciphers (Scored) ACTION NEEDED Default ciphers are already good, but cipherlist.eu recommends even stronger ciphers Set controller.config.ssl-ciphers to \"EECDH+AESGCM:EDH+AESGCM\" 4.1.6 Ensure custom Diffie-Hellman parameters are used (Scored) ACTION NEEDED No custom DH parameters are generated Generate dh parameters for each ingress deployment you use - see here for a how to 4.1.7 Ensure Online Certificate Status Protocol (OCSP) stapling is enabled (Scored) ACTION NEEDED Not enabled set via this configuration parameter 4.1.8 Ensure HTTP Strict Transport Security (HSTS) is enabled (Scored) OK HSTS is enabled by default 4.1.9 Ensure HTTP Public Key Pinning is enabled (Not Scored) ACTION NEEDED / RISK TO BE ACCEPTED HKPK not enabled by default If lets encrypt is not used, set correct HPKP header. There are several ways to implement this - with the helm charts it works via controller.add-headers. If lets encrypt is used, this is complicated, a solution here is yet unknown 4.1.10 Ensure upstream server traffic is authenticated with a client certificate (Scored) DEPENDS ON BACKEND Highly dependent on backends, not every backend allows configuring this, can also be mitigated via a service mesh If backend allows it, manual is here 4.1.11 Ensure the upstream traffic server certificate is trusted (Not Scored) DEPENDS ON BACKEND Highly dependent on backends, not every backend allows configuring this, can also be mitigated via a service mesh If backend allows it, see configuration here 4.1.12 Ensure your domain is preloaded (Not Scored) ACTION NEEDED Preload is not active by default Set controller.config.hsts-preload to true 4.1.13 Ensure session resumption is disabled to enable perfect forward security (Scored) OK Session tickets are disabled by default 4.1.14 Ensure HTTP/2.0 is used (Not Scored) OK http2 is set by default 5 Request Filtering and Restrictions 5.1 Access Control 5.1.1 Ensure allow and deny filters limit access to specific IP addresses (Not Scored) OK/ACTION NEEDED Depends on use case, geo ip module is compiled into nginx ingress controller, there are several ways to use it If needed set IP restrictions via annotations or work with config snippets (be careful with lets-encrypt-http-challenge!) 
5.1.2 Ensure only whitelisted HTTP methods are allowed (Not Scored) OK/ACTION NEEDED Depends on use case If required it can be set via config snippet 5.2 Request Limits 5.2.1 Ensure timeout values for reading the client header and body are set correctly (Scored) ACTION NEEDED Default timeout is 60s Set via this configuration parameter and respective body equivalent 5.2.2 Ensure the maximum request body size is set correctly (Scored) ACTION NEEDED Default is 1m set via this configuration parameter 5.2.3 Ensure the maximum buffer size for URIs is defined (Scored) ACTION NEEDED Default is 4 8k Set via this configuration parameter 5.2.4 Ensure the number of connections per IP address is limited (Not Scored) OK/ACTION NEEDED No limit set Depends on use case, limit can be set via these annotations 5.2.5 Ensure rate limits by IP address are set (Not Scored) OK/ACTION NEEDED No limit set Depends on use case, limit can be set via these annotations 5.3 Browser Security 5.3.1 Ensure X-Frame-Options header is configured and enabled (Scored) ACTION NEEDED Header not set by default Several ways to implement this - with the helm charts it works via controller.add-headers 5.3.2 Ensure X-Content-Type-Options header is configured and enabled (Scored) ACTION NEEDED See previous answer See previous answer 5.3.3 Ensure the X-XSS-Protection Header is enabled and configured properly (Scored) ACTION NEEDED See previous answer See previous answer 5.3.4 Ensure that Content Security Policy (CSP) is enabled and configured properly (Not Scored) ACTION NEEDED See previous answer See previous answer 5.3.5 Ensure the Referrer Policy is enabled and configured properly (Not Scored) ACTION NEEDED Depends on application. It should be handled in the applications webserver itself, not in the load balancing ingress check backend webserver 6 Mandatory Access Control n/a too high level, depends on backends @media only screen and (min-width: 768px) { td:nth-child(1){ white-space:normal !important; } .md-typeset table:not([class]) td { padding: .2rem .3rem; } }","title":"Hardening guide"},{"location":"deploy/hardening-guide/#hardening-guide","text":"","title":"Hardening Guide"},{"location":"deploy/hardening-guide/#overview","text":"There are several ways to do hardening and securing of nginx. In this documentation two guides are used, the guides are overlapping in some points: nginx CIS Benchmark cipherlist.eu (one of many forks of the now dead project cipherli.st) This guide describes, what of the different configurations described in those guides is already implemented as default in the nginx implementation of kubernetes ingress, what needs to be configured, what is obsolete due to the fact that the nginx is running as container (the CIS benchmark relates to a non-containerized installation) and what is difficult or not possible. Be aware that this is only a guide and you are responsible for your own implementation. Some of the configurations may lead to have specific clients unable to reach your site or similar consequences. This guide refers to chapters in the CIS Benchmark. 
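Several of the ACTION NEEDED items in the table above are plain ingress-nginx ConfigMap keys. The following is only an illustrative consolidation, assuming a default install where the controller reads the ingress-nginx-controller ConfigMap in the ingress-nginx namespace; validate each value against your own clients and applications before rolling it out:

```yaml
# Sketch: ConfigMap name and namespace assume a default installation,
# and every value mirrors a recommendation from the table above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  keep-alive: "10"                        # 2.4.3
  ssl-protocols: "TLSv1.3"                # 4.1.4 (may lock out older clients)
  ssl-ciphers: "EECDH+AESGCM:EDH+AESGCM"  # 4.1.5
  hsts-preload: "true"                    # 4.1.12
  hide-headers: "X-Powered-By,Server"     # 2.5.4
  client-header-timeout: "10"             # 5.2.1
  client-body-timeout: "10"               # 5.2.1
  proxy-body-size: "8m"                   # 5.2.2 (choose a limit that fits your apps)
```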
For full explanation you should refer to the benchmark document itself","title":"Overview"},{"location":"deploy/hardening-guide/#configuration-guide","text":"Chapter in CIS benchmark Status Default Action to do if not default 1 Initial Setup 1.1 Installation 1.1.1 Ensure NGINX is installed (Scored) OK done through helm charts / following documentation to deploy nginx ingress 1.1.2 Ensure NGINX is installed from source (Not Scored) OK done through helm charts / following documentation to deploy nginx ingress 1.2 Configure Software Updates 1.2.1 Ensure package manager repositories are properly configured (Not Scored) OK done via helm, nginx version could be overwritten, however compatibility is not ensured then 1.2.2 Ensure the latest software package is installed (Not Scored) ACTION NEEDED done via helm, nginx version could be overwritten, however compatibility is not ensured then Plan for periodic updates 2 Basic Configuration 2.1 Minimize NGINX Modules 2.1.1 Ensure only required modules are installed (Not Scored) OK Already only needed modules are installed, however proposals for further reduction are welcome 2.1.2 Ensure HTTP WebDAV module is not installed (Scored) OK 2.1.3 Ensure modules with gzip functionality are disabled (Scored) OK 2.1.4 Ensure the autoindex module is disabled (Scored) OK No autoindex configs so far in ingress defaults 2.2 Account Security 2.2.1 Ensure that NGINX is run using a non-privileged, dedicated service account (Not Scored) OK Pod configured as user www-data: See this line in helm chart values . Compiled with user www-data: See this line in build script 2.2.2 Ensure the NGINX service account is locked (Scored) OK Docker design ensures this 2.2.3 Ensure the NGINX service account has an invalid shell (Scored) OK Shell is nologin: see this line in build script 2.3 Permissions and Ownership 2.3.1 Ensure NGINX directories and files are owned by root (Scored) OK Obsolete through docker-design and ingress controller needs to update the configs dynamically 2.3.2 Ensure access to NGINX directories and files is restricted (Scored) OK See previous answer 2.3.3 Ensure the NGINX process ID (PID) file is secured (Scored) OK No PID-File due to docker design 2.3.4 Ensure the core dump directory is secured (Not Scored) OK No working_directory configured by default 2.4 Network Configuration 2.4.1 Ensure NGINX only listens for network connections on authorized ports (Not Scored) OK Ensured by automatic nginx.conf configuration 2.4.2 Ensure requests for unknown host names are rejected (Not Scored) OK They are not rejected but send to the \"default backend\" delivering appropriate errors (mostly 404) 2.4.3 Ensure keepalive_timeout is 10 seconds or less, but not 0 (Scored) ACTION NEEDED Default is 75s configure keep-alive to 10 seconds according to this documentation 2.4.4 Ensure send_timeout is set to 10 seconds or less, but not 0 (Scored) RISK TO BE ACCEPTED Not configured, however the nginx default is 60s Not configurable 2.5 Information Disclosure 2.5.1 Ensure server_tokens directive is set to off (Scored) OK server_tokens is configured to off by default 2.5.2 Ensure default error and index.html pages do not reference NGINX (Scored) ACTION NEEDED 404 shows no version at all, 503 and 403 show \"nginx\", which is hardcoded see this line in nginx source code configure custom error pages at least for 403, 404 and 503 and 500 2.5.3 Ensure hidden file serving is disabled (Not Scored) ACTION NEEDED config not set configure a config.server-snippet Snippet, but beware of .well-known 
challenges or similar. Refer to the benchmark here please 2.5.4 Ensure the NGINX reverse proxy does not enable information disclosure (Scored) ACTION NEEDED hide not configured configure hide-headers with array of \"X-Powered-By\" and \"Server\": according to this documentation 3 Logging 3.1 Ensure detailed logging is enabled (Not Scored) OK nginx ingress has a very detailed log format by default 3.2 Ensure access logging is enabled (Scored) OK Access log is enabled by default 3.3 Ensure error logging is enabled and set to the info logging level (Scored) OK Error log is configured by default. The log level does not matter, because it is all sent to STDOUT anyway 3.4 Ensure log files are rotated (Scored) OBSOLETE Log file handling is not part of the nginx ingress and should be handled separately 3.5 Ensure error logs are sent to a remote syslog server (Not Scored) OBSOLETE See previous answer 3.6 Ensure access logs are sent to a remote syslog server (Not Scored) OBSOLETE See previous answer 3.7 Ensure proxies pass source IP information (Scored) OK Headers are set by default 4 Encryption 4.1 TLS / SSL Configuration 4.1.1 Ensure HTTP is redirected to HTTPS (Scored) OK Redirect to TLS is default 4.1.2 Ensure a trusted certificate and trust chain is installed (Not Scored) ACTION NEEDED For installing certs there are enough manuals in the web. A good way is to use lets encrypt through cert-manager Install proper certificates or use lets encrypt with cert-manager 4.1.3 Ensure private key permissions are restricted (Scored) ACTION NEEDED See previous answer 4.1.4 Ensure only modern TLS protocols are used (Scored) OK/ACTION NEEDED Default is TLS 1.2 + 1.3, while this is okay for CIS Benchmark, cipherlist.eu only recommends 1.3. This may cut off old OS's Set controller.config.ssl-protocols to \"TLSv1.3\" 4.1.5 Disable weak ciphers (Scored) ACTION NEEDED Default ciphers are already good, but cipherlist.eu recommends even stronger ciphers Set controller.config.ssl-ciphers to \"EECDH+AESGCM:EDH+AESGCM\" 4.1.6 Ensure custom Diffie-Hellman parameters are used (Scored) ACTION NEEDED No custom DH parameters are generated Generate dh parameters for each ingress deployment you use - see here for a how to 4.1.7 Ensure Online Certificate Status Protocol (OCSP) stapling is enabled (Scored) ACTION NEEDED Not enabled set via this configuration parameter 4.1.8 Ensure HTTP Strict Transport Security (HSTS) is enabled (Scored) OK HSTS is enabled by default 4.1.9 Ensure HTTP Public Key Pinning is enabled (Not Scored) ACTION NEEDED / RISK TO BE ACCEPTED HKPK not enabled by default If lets encrypt is not used, set correct HPKP header. There are several ways to implement this - with the helm charts it works via controller.add-headers. 
If lets encrypt is used, this is complicated, a solution here is yet unknown 4.1.10 Ensure upstream server traffic is authenticated with a client certificate (Scored) DEPENDS ON BACKEND Highly dependent on backends, not every backend allows configuring this, can also be mitigated via a service mesh If backend allows it, manual is here 4.1.11 Ensure the upstream traffic server certificate is trusted (Not Scored) DEPENDS ON BACKEND Highly dependent on backends, not every backend allows configuring this, can also be mitigated via a service mesh If backend allows it, see configuration here 4.1.12 Ensure your domain is preloaded (Not Scored) ACTION NEEDED Preload is not active by default Set controller.config.hsts-preload to true 4.1.13 Ensure session resumption is disabled to enable perfect forward security (Scored) OK Session tickets are disabled by default 4.1.14 Ensure HTTP/2.0 is used (Not Scored) OK http2 is set by default 5 Request Filtering and Restrictions 5.1 Access Control 5.1.1 Ensure allow and deny filters limit access to specific IP addresses (Not Scored) OK/ACTION NEEDED Depends on use case, geo ip module is compiled into nginx ingress controller, there are several ways to use it If needed set IP restrictions via annotations or work with config snippets (be careful with lets-encrypt-http-challenge!) 5.1.2 Ensure only whitelisted HTTP methods are allowed (Not Scored) OK/ACTION NEEDED Depends on use case If required it can be set via config snippet 5.2 Request Limits 5.2.1 Ensure timeout values for reading the client header and body are set correctly (Scored) ACTION NEEDED Default timeout is 60s Set via this configuration parameter and respective body equivalent 5.2.2 Ensure the maximum request body size is set correctly (Scored) ACTION NEEDED Default is 1m set via this configuration parameter 5.2.3 Ensure the maximum buffer size for URIs is defined (Scored) ACTION NEEDED Default is 4 8k Set via this configuration parameter 5.2.4 Ensure the number of connections per IP address is limited (Not Scored) OK/ACTION NEEDED No limit set Depends on use case, limit can be set via these annotations 5.2.5 Ensure rate limits by IP address are set (Not Scored) OK/ACTION NEEDED No limit set Depends on use case, limit can be set via these annotations 5.3 Browser Security 5.3.1 Ensure X-Frame-Options header is configured and enabled (Scored) ACTION NEEDED Header not set by default Several ways to implement this - with the helm charts it works via controller.add-headers 5.3.2 Ensure X-Content-Type-Options header is configured and enabled (Scored) ACTION NEEDED See previous answer See previous answer 5.3.3 Ensure the X-XSS-Protection Header is enabled and configured properly (Scored) ACTION NEEDED See previous answer See previous answer 5.3.4 Ensure that Content Security Policy (CSP) is enabled and configured properly (Not Scored) ACTION NEEDED See previous answer See previous answer 5.3.5 Ensure the Referrer Policy is enabled and configured properly (Not Scored) ACTION NEEDED Depends on application. 
It should be handled in the applications webserver itself, not in the load balancing ingress check backend webserver 6 Mandatory Access Control n/a too high level, depends on backends @media only screen and (min-width: 768px) { td:nth-child(1){ white-space:normal !important; } .md-typeset table:not([class]) td { padding: .2rem .3rem; } }","title":"Configuration Guide"},{"location":"deploy/rbac/","text":"Role Based Access Control (RBAC) \u00b6 Overview \u00b6 This example applies to ingress-nginx-controllers being deployed in an environment with RBAC enabled. Role Based Access Control is comprised of four layers: ClusterRole - permissions assigned to a role that apply to an entire cluster ClusterRoleBinding - binding a ClusterRole to a specific account Role - permissions assigned to a role that apply to a specific namespace RoleBinding - binding a Role to a specific account In order for RBAC to be applied to an ingress-nginx-controller, that controller should be assigned to a ServiceAccount . That ServiceAccount should be bound to the Role s and ClusterRole s defined for the ingress-nginx-controller. Service Accounts created in this example \u00b6 One ServiceAccount is created in this example, ingress-nginx . Permissions Granted in this example \u00b6 There are two sets of permissions defined in this example. Cluster-wide permissions defined by the ClusterRole named ingress-nginx , and namespace specific permissions defined by the Role named ingress-nginx . Cluster Permissions \u00b6 These permissions are granted in order for the ingress-nginx-controller to be able to function as an ingress across the cluster. These permissions are granted to the ClusterRole named ingress-nginx configmaps , endpoints , nodes , pods , secrets : list, watch nodes : get services , ingresses : get, list, watch events : create, patch ingresses/status : update Namespace Permissions \u00b6 These permissions are granted specific to the ingress-nginx namespace. These permissions are granted to the Role named ingress-nginx configmaps , pods , secrets : get endpoints : get Furthermore to support leader-election, the ingress-nginx-controller needs to have access to a configmap using the resourceName ingress-controller-leader-nginx Note that resourceNames can NOT be used to limit requests using the \u201ccreate\u201d verb because authorizers only have access to information that can be obtained from the request URL, method, and headers (resource names in a \u201ccreate\u201d request are part of the request body). configmaps : get, update (for resourceName ingress-controller-leader-nginx ) configmaps : create This resourceName is the concatenation of the election-id and the ingress-class as defined by the ingress-controller, which defaults to: election-id : ingress-controller-leader ingress-class : nginx resourceName : - Please adapt accordingly if you overwrite either parameter when launching the ingress-nginx-controller. Bindings \u00b6 The ServiceAccount ingress-nginx is bound to the Role ingress-nginx and the ClusterRole ingress-nginx . The serviceAccountName associated with the containers in the deployment must match the serviceAccount. 
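For orientation, the cluster-wide permissions listed above translate roughly into a ClusterRole such as the following sketch; it is not the project's published RBAC manifest, and the API groups and resource names should be checked against the controller version you deploy:

```yaml
# Rough sketch of the cluster-wide permissions described above.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx
rules:
  - apiGroups: [""]
    resources: ["configmaps", "endpoints", "nodes", "pods", "secrets"]
    verbs: ["list", "watch"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses/status"]
    verbs: ["update"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]
```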
The namespace references in the Deployment metadata, container arguments, and POD_NAMESPACE should be in the ingress-nginx namespace.","title":"Role Based Access Control (RBAC)"},{"location":"deploy/rbac/#role-based-access-control-rbac","text":"","title":"Role Based Access Control (RBAC)"},{"location":"deploy/rbac/#overview","text":"This example applies to ingress-nginx-controllers being deployed in an environment with RBAC enabled. Role Based Access Control is comprised of four layers: ClusterRole - permissions assigned to a role that apply to an entire cluster ClusterRoleBinding - binding a ClusterRole to a specific account Role - permissions assigned to a role that apply to a specific namespace RoleBinding - binding a Role to a specific account In order for RBAC to be applied to an ingress-nginx-controller, that controller should be assigned to a ServiceAccount . That ServiceAccount should be bound to the Role s and ClusterRole s defined for the ingress-nginx-controller.","title":"Overview"},{"location":"deploy/rbac/#service-accounts-created-in-this-example","text":"One ServiceAccount is created in this example, ingress-nginx .","title":"Service Accounts created in this example"},{"location":"deploy/rbac/#permissions-granted-in-this-example","text":"There are two sets of permissions defined in this example. Cluster-wide permissions defined by the ClusterRole named ingress-nginx , and namespace specific permissions defined by the Role named ingress-nginx .","title":"Permissions Granted in this example"},{"location":"deploy/rbac/#cluster-permissions","text":"These permissions are granted in order for the ingress-nginx-controller to be able to function as an ingress across the cluster. These permissions are granted to the ClusterRole named ingress-nginx configmaps , endpoints , nodes , pods , secrets : list, watch nodes : get services , ingresses : get, list, watch events : create, patch ingresses/status : update","title":"Cluster Permissions"},{"location":"deploy/rbac/#namespace-permissions","text":"These permissions are granted specific to the ingress-nginx namespace. These permissions are granted to the Role named ingress-nginx configmaps , pods , secrets : get endpoints : get Furthermore to support leader-election, the ingress-nginx-controller needs to have access to a configmap using the resourceName ingress-controller-leader-nginx Note that resourceNames can NOT be used to limit requests using the \u201ccreate\u201d verb because authorizers only have access to information that can be obtained from the request URL, method, and headers (resource names in a \u201ccreate\u201d request are part of the request body). configmaps : get, update (for resourceName ingress-controller-leader-nginx ) configmaps : create This resourceName is the concatenation of the election-id and the ingress-class as defined by the ingress-controller, which defaults to: election-id : ingress-controller-leader ingress-class : nginx resourceName : - Please adapt accordingly if you overwrite either parameter when launching the ingress-nginx-controller.","title":"Namespace Permissions"},{"location":"deploy/rbac/#bindings","text":"The ServiceAccount ingress-nginx is bound to the Role ingress-nginx and the ClusterRole ingress-nginx . The serviceAccountName associated with the containers in the deployment must match the serviceAccount. 
The namespace references in the Deployment metadata, container arguments, and POD_NAMESPACE should be in the ingress-nginx namespace.

Upgrading

Important No matter the method you use for upgrading, if you use template overrides, make sure your templates are compatible with the new version of ingress-nginx.

Without Helm

To upgrade your ingress-nginx installation, it should be enough to change the version of the image in the controller Deployment. I.e. if your deployment resource looks like (partial example):

    kind: Deployment
    metadata:
      name: ingress-nginx-controller
      namespace: ingress-nginx
    spec:
      replicas: 1
      selector: ...
      template:
        metadata: ...
        spec:
          containers:
            - name: ingress-nginx-controller
              image: k8s.gcr.io/ingress-nginx/controller:v1.0.4@sha256:545cff00370f28363dad31e3b59a94ba377854d3a11f18988f5f9e56841ef9ef
              args: ...

simply change the v1.0.4 tag to the version you wish to upgrade to. The easiest way to do this is e.g.
(do note you may need to change the name parameter according to your installation): kubectl set image deployment/ingress-nginx-controller \\ controller=k8s.gcr.io/ingress-nginx/controller:v1.0.5@sha256:55a1fcda5b7657c372515fe402c3e39ad93aa59f6e4378e82acd99912fe6028d \\ -n ingress-nginx For interactive editing, use kubectl edit deployment ingress-nginx-controller -n ingress-nginx .","title":"Without Helm"},{"location":"deploy/upgrade/#with-helm","text":"If you installed ingress-nginx using the Helm command in the deployment docs so its name is ingress-nginx , you should be able to upgrade using helm upgrade --reuse-values ingress-nginx ingress-nginx/ingress-nginx","title":"With Helm"},{"location":"deploy/upgrade/#migrating-from-stablenginx-ingress","text":"See detailed steps in the upgrading section of the ingress-nginx chart README .","title":"Migrating from stable/nginx-ingress"},{"location":"developer-guide/code-overview/","text":"Ingress NGINX - Code Overview \u00b6 This document provides an overview of Ingress NGINX code. Core Golang code \u00b6 This part of the code is responsible for the main logic of Ingress NGINX. It contains all the logics that parses Ingress Objects , annotations , watches Endpoints and turn them into usable nginx.conf configuration. Core Sync Logics: \u00b6 Ingress-nginx has an internal model of the ingresses, secrets and endpoints in a given cluster. It maintains two copy of that (1) currently running configuration model and (2) the one generated in response to some changes in the cluster. The sync logic diffs the two models and if there's a change it tries to converge the running configuration to the new one. There are static and dynamic configuration changes. All endpoints and certificate changes are handled dynamically by posting the payload to an internal NGINX endpoint that is handled by Lua. The following parts of the code can be found: Entrypoint \u00b6 Is the main package, responsible for starting ingress-nginx program. It can be found in cmd/nginx directory. Version \u00b6 Is the package of the code responsible for adding version subcommand, and can be found in version directory. Internal code \u00b6 This part of the code contains the internal logics that compose Ingress NGINX Controller, and it's split in: Admission Controller \u00b6 Contains the code of Kubernetes Admission Controller which validates the syntax of ingress objects before accepting it. This code can be found in internal/admission/controller directory. File functions \u00b6 Contains auxiliary codes that deal with files, such as generating the SHA1 checksum of a file, or creating required directories. This code can be found in internal/file directory. Ingress functions \u00b6 Contains all the logics from NGINX Ingress Controller, with some examples being: Expected Golang structures that will be used in templates and other parts of the codes - internal/ingress/types.go . supported annotations and its parsing logics - internal/ingress/annotations . reconciliation loops and logics - internal/ingress/controller Defaults - define the default struct. Error interface and types implementation - internal/ingress/errors Metrics collectors for Prometheus exporting - internal/ingress/metric . Resolver - Extracts information from a controller. Ingress Object status publisher - internal/ingress/status . And other parts of the code that will be written in this document in a future. K8s functions \u00b6 Contains helper functions for parsing Kubernetes objects. 
This part of the code can be found in internal/k8s directory. Networking functions \u00b6 Contains helper functions for networking, such as IPv4 and IPv6 parsing, SSL certificate parsing, etc. This part of the code can be found in internal/net directory. NGINX functions \u00b6 Contains helper function to deal with NGINX, such as verify if it's running and reading it's configuration file parts. This part of the code can be found in internal/nginx directory. Tasks / Queue \u00b6 Contains the functions responsible for the sync queue part of the controller. This part of the code can be found in internal/task directory. Other parts of internal \u00b6 Other parts of internal code might not be covered here, like runtime and watch but they can be added in a future. E2E Test \u00b6 The e2e tests code is in test directory. Other programs \u00b6 Describe here kubectl plugin , dbg , waitshutdown and cover the hack scripts. Deploy files \u00b6 This directory contains the yaml deploy files used as examples or references in the docs to deploy Ingress NGINX and other components. Those files are in deploy directory. Helm Chart \u00b6 Used to generate the Helm chart published. Code is in charts/ingress-nginx . Documentation/Website \u00b6 The documentation used to generate the website https://kubernetes.github.io/ingress-nginx/ This code is available in docs and it's main \"language\" is Markdown , used by mkdocs file to generate static pages. Container Images \u00b6 Container images used to run ingress-nginx, or to build the final image. Base Images \u00b6 Contains the Dockerfiles and scripts used to build base images that are used in other parts of the repo. They are present in images repo. Some examples: * nginx - The base NGINX image ingress-nginx uses is not a vanilla NGINX. It bundles many libraries together and it is a job in itself to maintain that and keep things up-to-date. * custom-error-pages - Used on the custom error page examples. There are other images inside this directory. Ingress Controller Image \u00b6 The image used to build the final ingress controller, used in deploy scripts and Helm charts. This is NGINX with some Lua enhancement. We do dynamic certificate, endpoints handling, canary traffic split, custom load balancing etc at this component. One can also add new functionalities using Lua plugin system. The files are in rootfs directory and contains: The Dockerfile Auxiliary scripts Ingress NGINX Lua Scripts \u00b6 Ingress NGINX uses Lua Scripts to enable features like hot reloading, rate limiting and monitoring. Some are written using the OpenResty helper. The directory containing Lua scripts is rootfs/etc/nginx/lua . Nginx Go template file \u00b6 One of the functions of Ingress NGINX is to turn Ingress objects into nginx.conf file. To do so, the final step is to apply those configurations in nginx.tmpl turning it into a final nginx.conf file.","title":"Code Overview"},{"location":"developer-guide/code-overview/#ingress-nginx-code-overview","text":"This document provides an overview of Ingress NGINX code.","title":"Ingress NGINX - Code Overview"},{"location":"developer-guide/code-overview/#core-golang-code","text":"This part of the code is responsible for the main logic of Ingress NGINX. 
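As a companion to the "Ingress functions" section earlier in this overview (the internal/ingress/annotations part), here is a hypothetical, simplified sketch of what an annotation parser does: it reads the raw string annotations from an Ingress object's metadata and produces a typed configuration with defaults. The Config struct and parse function are invented for illustration; the annotation keys and the INGRESSCOOKIE default mirror the session-affinity annotations documented elsewhere in these docs.

```go
// Hypothetical sketch of the annotation-parsing layer: raw string
// annotations from an Ingress object become a typed configuration struct.
package main

import (
	"fmt"
	"strconv"
)

// Config is the typed result an annotation parser would return (invented
// for this example).
type Config struct {
	Affinity   string
	CookieName string
	MaxAge     int
}

// parse extracts the sticky-session related annotations from an Ingress
// object's annotation map, applying defaults when a value is absent.
func parse(annotations map[string]string) Config {
	cfg := Config{
		Affinity:   annotations["nginx.ingress.kubernetes.io/affinity"],
		CookieName: annotations["nginx.ingress.kubernetes.io/session-cookie-name"],
	}
	if cfg.CookieName == "" {
		cfg.CookieName = "INGRESSCOOKIE" // documented default cookie name
	}
	if v, err := strconv.Atoi(annotations["nginx.ingress.kubernetes.io/session-cookie-max-age"]); err == nil {
		cfg.MaxAge = v
	}
	return cfg
}

func main() {
	cfg := parse(map[string]string{
		"nginx.ingress.kubernetes.io/affinity":               "cookie",
		"nginx.ingress.kubernetes.io/session-cookie-max-age": "3600",
	})
	fmt.Printf("%+v\n", cfg)
}
```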
Developing for NGINX Ingress Controller
This document explains how to get started with developing for the NGINX Ingress controller.

Prerequisites
Install Go 1.14 or later. Note: the project uses Go Modules.
Install Docker (v19.03.0 or later, with the experimental features enabled). Important: the majority of make tasks run as Docker containers.

Quick Start
Fork the repository.
Clone the repository to any location on your workstation.
Add a GO111MODULE environment variable with export GO111MODULE=on.
Run go mod download to install dependencies.

Local build
Start a local Kubernetes cluster using kind, then build and deploy the ingress controller with make dev-env. If you are working on the v1.x.x version of this controller and want to create a cluster with Kubernetes version 1.22, please visit the kind documentation and look for how to set a custom image for the kind node (image: kindest/node...) in the kind config file.

Testing
Run the Go unit tests with make test.
Run the unit tests for the Lua code with make lua-test. Lua tests are located in the directory rootfs/etc/nginx/lua/test. Important: test files must follow the naming convention _test.lua or they will be ignored.
Run the e2e test suite with make kind-e2e-test. To limit the scope of the tests to execute, use the environment variable FOCUS: FOCUS="no-auth-locations" make kind-e2e-test. Note: the FOCUS variable defines Ginkgo Focused Specs. Valid values are defined in the Describe definitions of the e2e tests, like Default Backend; the complete list of tests can be found here. (A minimal spec sketch showing how FOCUS matches the Describe text appears at the end of this page.)

Custom docker image
In some cases it can be useful to build a Docker image and publish it to a private or custom registry location. This can be done by setting two environment variables, REGISTRY and TAG:
export TAG="dev"
export REGISTRY="$USER"
make build image
and then publishing that version with docker push $REGISTRY/controller:$TAG
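The following minimal Ginkgo spec is a hypothetical illustration of how FOCUS selects tests: running FOCUS="no-auth-locations" make kind-e2e-test matches specs whose Describe text contains that string. It is not a real test from the suite, it assumes the Ginkgo v1 and Gomega import paths, and the actual e2e tests use the project's own framework helpers.

```go
// Hypothetical example spec (it would live in a *_test.go file). Its only
// purpose is to show that FOCUS matches against the Describe text.
package e2e

import (
	"testing"

	"github.com/onsi/ginkgo"
	"github.com/onsi/gomega"
)

// TestE2E hooks the Ginkgo suite into `go test`.
func TestE2E(t *testing.T) {
	gomega.RegisterFailHandler(ginkgo.Fail)
	ginkgo.RunSpecs(t, "example e2e suite")
}

// FOCUS="no-auth-locations" would focus on this spec because the string
// appears in the Describe text.
var _ = ginkgo.Describe("[Annotations] no-auth-locations", func() {
	ginkgo.It("ignores authentication for the configured location", func() {
		// A real spec would deploy an Ingress and assert on HTTP responses;
		// a placeholder assertion keeps this sketch self-contained.
		gomega.Expect(true).To(gomega.BeTrue())
	})
})
```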